Research into paranormal beliefs and cognitive functioning has expanded considerably since the last review almost 30 years ago, prompting the need for a comprehensive review. The current systematic review aims to identify the reported associations between paranormal beliefs and cognitive functioning, and to assess study quality.
We searched four databases (Scopus, ScienceDirect, SpringerLink, and OpenGrey) from inception until May 2021. Inclusion criteria comprised papers published in English that contained original data assessing paranormal beliefs and cognitive function in healthy adult samples. Study quality and risk of bias were assessed using the Appraisal tool for Cross-Sectional Studies (AXIS) and results were synthesised through narrative review. The review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and was preregistered as part of a larger registration on the Open Science Framework (https://osf.io/uzm5v).
From 475 identified studies, 71 (n = 20,993) met our inclusion criteria. Studies were subsequently divided into the following six categories: perceptual and cognitive biases (k = 19, n = 3,397), reasoning (k = 17, n = 9,661), intelligence, critical thinking, and academic ability (k = 12, n = 2,657), thinking style (k = 13, n = 4,100), executive function and memory (k = 6, n = 810), and other cognitive functions (k = 4, n = 368). Study quality was rated as good-to-strong for 75% of studies and appears to be improving across time. Nonetheless, we identified areas of methodological weakness including: the lack of preregistration, discussion of limitations, a-priori justification of sample size, assessment of nonrespondents, and the failure to adjust for multiple testing. Over 60% of studies have recruited undergraduates and 30% exclusively psychology undergraduates, which raises doubt about external validity. Our narrative synthesis indicates high heterogeneity of study findings. The most consistent associations emerge for paranormal beliefs with increased intuitive thinking and confirmatory bias, and reduced conditional reasoning ability and perception of randomness.
Although study quality is good, areas of methodological weakness exist. In addressing these methodological issues, we propose that authors engage with preregistration of data collection and analysis procedures. At a conceptual level, we argue poorer cognitive performance across seemingly disparate cognitive domains might reflect the influence of an over-arching executive dysfunction.
Citation: Dean CE, Akhtar S, Gale TM, Irvine K, Grohmann D, Laws KR (2022) Paranormal beliefs and cognitive function: A systematic review and assessment of study quality across four decades of research. PLoS ONE 17(5): e0267360. https://doi.org/10.1371/journal.pone.0267360
Editor: José C. Perales, Universidad de Granada, SPAIN
Received: October 12, 2021; Accepted: April 6, 2022; Published: May 4, 2022
Copyright: © 2022 Dean et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All data files relating to the quality assessment are available from the OSF repository (https://osf.io/7bthg/). Data relating to the 71 reviewed studies can be found within the paper's Supporting Information files.
Funding: The authors received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
The term “paranormal” typically refers to phenomena, such as psychokinesis, hauntings, and clairvoyance, which contradict the basic limiting principles of current scientific understanding. Surveys consistently indicate paranormal beliefs are prevalent within the general population. For example, a representative survey of British adults conducted by the market-research company BMG Research found that a third of their sample believed in paranormal phenomena, and a further 21% were ‘unsure’. Of those who either believed in the paranormal or were unsure, 40% indicated they had seen or felt the presence of a supernatural entity. Similarly, Pechey and Halligan found 30% of participants held at least one strong paranormal belief, and 79% held at least one paranormal belief at any strength (weak, moderate, or strong belief). Comparable levels of belief have been documented across various cultures over recent decades [4–7].
The most frequently used scales to measure paranormal beliefs include Tobacyk’s Paranormal Belief Scale in both original (PBS) and revised form (RPBS), and the Australian Sheep-Goat Scale (ASGS). Despite widespread use, some concerns exist about both the content and the factor structures of these measures [11–13]. Nonetheless, both the RPBS and ASGS have demonstrated excellent internal reliability, with Cronbach’s alpha values around .93 for the RPBS [14–16], and around .95 for the ASGS [17, 18].
Scores on paranormal belief measures have been linked to various personal and demographic characteristics. For example, higher belief scores have been noted for individuals high in extraversion and neuroticism [19–21], while lower belief scores have been seen for those with higher levels of education [22–24]. Paranormal belief levels also appear to vary across academic disciplines, with those engaged in hard (or natural) sciences, medicine, and psychology showing significantly lower paranormal belief scores than those in education, theology, or artistic disciplines [25, 26]. Higher levels of paranormal beliefs have been documented in women and younger individuals [27–32], though these sex and age effects are inconsistently reported and have generated substantial debate [34–36].
Paranormal beliefs and cognitive function
The association between cognitive functioning and paranormal beliefs has been researched over several decades. Such functions include memory, attention, language, and executive function (the umbrella term used to describe set-shifting ability, inhibitory control, and working memory updating; for a full description of executive function, see Miyake et al.’s work).
An individual’s belief system is also important for cognitive function. Religious and spiritual beliefs have been associated with slower cognitive decline in older adults [38, 39] but have also shown an inverse relationship with memory performance and intelligence [41, 42]. Similarly, so-called “epistemically unwarranted beliefs”, which include belief in conspiracy theories, have been linked with lower educational attainment and reduced analytical thinking [43, 44]. Conspiracist beliefs are similarly associated with increased illusory pattern perception [45, 46], decreased need for cognition and cognitive reflection [47–49], biases against confirmatory and disconfirmatory evidence, and hindsight bias (for discussions on this topic see [51–53]).
The last published review to examine the relationships between paranormal beliefs and various aspects of cognition was conducted by Irwin in 1993. That non-systematic narrative review of 43 studies is now almost 30 years old and may have introduced bias by “…citing null results only when these form a substantial proportion of the available data on a given relationship” (p.6). At the time of his review, Irwin concluded that, owing to the variable findings, support for the cognitive deficits hypothesis remained uncertain.
Research has grown considerably since Irwin’s review and an updated and systematic review is timely. The current review has two key aims: first, to provide the first assessment of study quality in this area and second, to systematically review and summarise key associations between paranormal beliefs and a range of cognitive functions.
This review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (see S2 Appendix for PRISMA checklist). The systematic review was preregistered at the Open Science Framework (OSF; https://osf.io/uzm5v) as part of a larger study (also assessing the relationships between paranormal beliefs and schizotypal personality traits). Data used for the descriptive and inferential analyses presented in the results section are available at the OSF preregistration. One author (CED) conducted the search strategy, article eligibility assessment, and data extraction.
A systematic literature review was chosen for this area owing to its strength as a method to synthesise relevant evidence from large bodies of research [56, 57]. Our searches included both peer-reviewed articles published in scholarly journals and “grey literature” (concerning unpublished works such as doctoral theses).
We searched the electronic databases Scopus, ScienceDirect, SpringerLink, and OpenGrey from inception to May 2021. Our search terms were: (1) “paranormal belief” AND cogni*, (2) “paranormal belief” AND thinking, and (3) “paranormal belief” AND (memory OR “executive function”). For databases that did not permit wildcard operators (ScienceDirect), the first search term was amended and entered as “paranormal belief” AND (cognition OR cognitive), to best replicate the effect of the wildcard. Following exclusion of duplicate articles across databases, titles and abstracts were assessed to identify studies relevant to the review. Full-text assessment of eligible studies was performed to determine final inclusion. Full-text copies were sought for five studies but could not be retrieved. Finally, we hand-searched the reference lists of each included article to identify any additional relevant articles. The PRISMA flow diagram presented in Fig 1 illustrates the full screening and selection process. The PRISMA checklist for abstracts is presented in S1 Appendix, and the full PRISMA checklist is presented in S2 Appendix.
Studies were eligible for inclusion if they were published in the English language, conducted with a healthy adult sample (aged 18 or over), and presented original data involving both a measure of paranormal belief and a measure of cognitive function. As cognitive functions have been shown to peak at different ages (for a detailed discussion on this topic, see ), we excluded samples that included children and adolescents under the age of 18, as some cognitive functions are still developing in these younger individuals.
We used a detailed data extraction form to collate the following information from included studies: sample sizes and demographic details (including sex, age and education), the measures of self-rated paranormal belief, the aspect of cognition assessed, the tests of cognitive functions used, and findings relating to the relationship between paranormal beliefs and cognitive function. We categorised eligible outcome measures broadly to include both global cognitive function and domain-specific cognitive functions. Any measure of cognitive function was eligible for inclusion (e.g., neuropsychological tests, self-report measures). Results for both paranormal beliefs and cognitive functioning could be reported as an overall test score that provides a composite measure, subscale scores that provide domain-specific measures, or a combination of the two. When multiple cognitive outcomes were investigated, we included all measures. To assess the strength of the relationships between paranormal beliefs and various cognitive functions, we calculated the number of positive, negative, or null findings reported by each study included in the review. Measures of paranormal belief were examined to determine the extent to which established questionnaires have been used.
In line with our preregistered protocol, we synthesised evidence narratively. Meta-analyses could not be undertaken because of the heterogeneity of study designs and outcome measures. We did, however, develop summary tables that include information on sample size, gender composition, mean sample age, cognitive domain, outcome measure, and key findings. Given the range of outcome measures, we attempted to categorise the included studies by common cognitive domains. As the review took an exploratory approach, and did not specify domains of interest, categorisation took place after full-text evaluation of included studies.
Electronic and hand searches identified 902 papers, of which 475 were unique. Most articles (k = 391) were excluded from the review following title and abstract screening, leaving 84 eligible for full-text evaluation. We removed 13 studies that included participants under the age of 18 (see S1 Table for details of these studies). Seventy-one papers met our inclusion criteria (see Fig 1), comprising 70 published between 1980 and 2020 and one unpublished doctoral thesis.
Assessment of study quality and risk of bias
The preregistration for this review specified using a bespoke series of questions to assess study quality, but we subsequently decided to use a better-established and validated measure of study quality, the Appraisal tool for Cross-Sectional Studies (AXIS). Of the 20 AXIS items, seven assess reporting quality (items 1, 4, 10, 11, 12, 16 and 18), seven relate to study design (items 2, 3, 5, 8, 17, 19 and 20), and six to possible biases (items 6, 7, 9, 13, 14 and 15). Two authors (DG and CED) independently rated each study, and the two sets of ratings showed almost-perfect agreement (93%; Kappa = .84).
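For readers unfamiliar with the statistic, Cohen’s kappa corrects raw percentage agreement for the agreement expected by chance. A minimal sketch follows; the yes/no ratings below are hypothetical illustrations, not the actual AXIS data:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items (nominal categories)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions, per category.
    categories = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical 'Yes' (1) / 'No' (0) ratings for the 20 AXIS items of one study:
a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1]
b = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1]
print(round(cohens_kappa(a, b), 2))  # → 0.86
```

Note that a single disagreement out of 20 items yields 95% raw agreement but a lower kappa, because chance agreement is subtracted.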
Following previous research, we classified AXIS quality scores according to the number of "Yes" responses across the 20 items for each study: poor quality for scores <50%, fair quality for scores of 50–69%, good quality for scores of 70–79%, and strong quality for scores of 80% and higher. Three in four studies were rated as either ‘strong’ (26/71: 37%) or ‘good’ (27/71: 39%). By contrast, 17/71 (24%) were rated as ‘fair’ and only 1/71 (1%) as ‘poor’. The mean quality score across all 71 studies fell in the ‘good’ range; however, individual AXIS items are not weighted, so this total score provides a general, but limited, classification that should be interpreted with some caution. The number of papers meeting each AXIS criterion (‘Yes’) is presented in Table 1. The number of papers meeting the criteria for each AXIS domain (reporting quality, study design quality, and potential biases) is presented in Figs 2–4 respectively.
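The banding scheme just described is a simple threshold rule on the proportion of "Yes" responses. A minimal sketch, with hypothetical counts, makes the cut-offs explicit:

```python
def axis_quality_band(yes_count, total_items=20):
    """Map the number of 'Yes' AXIS responses to the quality bands used in the review."""
    pct = 100 * yes_count / total_items
    if pct < 50:
        return "poor"
    if pct < 70:
        return "fair"
    if pct < 80:
        return "good"
    return "strong"

# Hypothetical studies scoring 9, 12, 15 and 16 'Yes' responses out of 20:
print(axis_quality_band(9))   # 45% → poor
print(axis_quality_band(12))  # 60% → fair
print(axis_quality_band(15))  # 75% → good
print(axis_quality_band(16))  # 80% → strong
```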
All studies scored positively for items concerning: clear objectives, appropriate study design, appropriate measurement of outcome variables, internal consistency of presented results, and appropriate conclusions justified by the results. Study quality correlated with year of publication (r = .64, p < .001), and appears to be improving with time (see Fig 5). Nonetheless, three main areas for study quality improvement were highlighted throughout the AXIS assessment: sample size justification, nonrespondents, and discussion of limitations.
Sample size justification, sample representativeness and open science
Only 5 of 71 (7%) papers included a priori power analyses to justify their sample sizes. Although power analyses are rarely conducted in this research area, the mean sample size is large at 211 (median = 124), suggesting that both simple correlational and between-subject comparisons are well powered to detect large (.99 and .98), moderate (.94 and .88), and potentially small (.72 and .72) effect sizes (large, moderate, and small effects being 0.7, 0.5, and 0.2 respectively). Many studies, however, have assessed multiple outcomes and/or multiple metrics derived from the same tests, and so a simple power analysis will mislead. As a rough metric on this issue, we calculated the number of p-values presented in the results section of each of the 71 papers. This revealed a mean of 43 p-values per study (median = 30), ranging from 1 to over 200. So, despite relatively large samples, the possibility of Type I errors remains high, especially when studies fail to adjust alpha levels for high levels of multiple testing. Only 12/71 studies employed some correction: eleven used a Bonferroni correction [15, 25, 64–72], and one used the Newman–Keuls adjustment. Those studies that adjusted alpha levels tended to report more p-values than those that did not (means 57 vs. 40). Adjustment was thus made in fewer than one in five studies, most of them published recently.
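To illustrate why unadjusted multiple testing inflates Type I error: under the simplifying assumption of independent tests at alpha = .05, the chance of at least one false positive across the mean of 43 tests per study approaches 90%, whereas a Bonferroni correction divides the per-test threshold by the number of tests:

```python
def familywise_error(m, alpha=0.05):
    """P(at least one false positive) across m independent tests at level alpha."""
    return 1 - (1 - alpha) ** m

def bonferroni_alpha(m, alpha=0.05):
    """Per-test threshold that caps the family-wise error rate at alpha."""
    return alpha / m

m = 43  # mean number of p-values per study in the reviewed literature
print(f"uncorrected family-wise error: {familywise_error(m):.2f}")  # ~0.89
print(f"Bonferroni-adjusted alpha:     {bonferroni_alpha(m):.5f}")  # ~0.00116
```

Real test statistics within a study are rarely independent, so the uncorrected figure is an upper-bound illustration rather than an exact rate.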
Despite good-to-strong quality ratings, some core features of open science practice, including preregistration, have yet to be embraced in this literature. Admittedly, we are assessing forty years of research and preregistration is a relatively recent innovation in psychology. Nonetheless, the Open Science Framework (OSF) began in 2013 as a repository for preregistrations, so potentially up to half of the 71 studies could have preregistered; yet only 2 (<3%) have done so [71, 74], both published in 2020. The issue of preregistration is fundamental in this area of research. First, studies are characterised by large numbers of analyses, often involving multiple outcome measures and/or multiple metrics derived from smaller numbers of tests. We have also seen that up to one-third of studies (25/71) assessed relationships between cognitive function and paranormal test subscale scores (often with few items). This approach consciously or unconsciously increases the likelihood of reporting bias and HARKing (hypothesising after results are known), often perhaps with little chance of, or interest in, replicating such findings (see Laws for a discussion). Second, the preregistration of future studies will also help to assess whether null results remain unpublished. Third, preregistration would identify both the primary outcome and the sample size required to achieve an acceptable level of statistical power. Ironically, the lack of attention to preregistration and sample size justification contrasts with research on paranormal phenomena, where study registration and a priori power calculations have been employed for many years.
Another issue concerns the sampling frame and its representativeness. Almost two-thirds of all samples are undergraduates (45/71: 63%) and of those, 21 (30%) consisted wholly, or in the majority, of psychology undergraduates. Only one-third of all samples consisted of non-undergraduates (15/71: 21%), mixed undergraduate and general population samples (8/71: 11%), or other non-undergraduate samples (2/71: 3%). One non-undergraduate study, by Blackmore in 1997, was a national newspaper-based study (Daily Telegraph) that recruited an exceptionally large sample (n = 6238). If we exclude this outlier, then 60% of all participants in the 70 remaining studies came from completely (k = 41) or majority undergraduate (k = 5) samples, with 16 involving only psychology undergraduates. The non-undergraduate samples include visitors to a paranormal fair [29, 66], members of the Society for Psychical Research, Mechanical Turk participants, and several recruited via Crowdflower, a crowdsourcing website [64, 80, 81]. So, even the non-undergraduate samples may not necessarily represent the wider population (see Stroebe et al. for a discussion). Studies testing undergraduates and non-undergraduates did not differ in mean sample size (196 vs 215, excluding Blackmore; t(68) = .29, p = .78, d = .08) or in quality ratings (14.73 vs 15.19: t(69) = -.90, p = .37, d = .23). The profile of sampling, however, is pertinent because paranormal beliefs are inversely related to educational levels [22–24], and those studying sciences, medicine, and psychology exhibit lower levels of paranormal beliefs [25, 26]. Such samples are unrepresentative and may bias findings because they may combine lower levels of paranormal beliefs and higher cognitive functioning than occur in the general population.
In addition to samples comprising more highly educated university students, most participants are female (>60%). The importance of this latter aspect of sampling is underscored for at least two reasons. First, some authors have documented greater levels of paranormal beliefs in women [27–32]. Indeed, the last literature review, by Irwin in 1993, stated that “the endorsement of most, but certainly not all, paranormal beliefs is stronger among women than among men” (p.8). Second, gender (and age) effects are not consistently reported and have resulted in substantial debate [34–36]. This debate largely results from differences in psychological test theories (see Dean et al. 2021 for a discussion). Classical test theory—used to develop common paranormal belief measures, such as the RPBS—does not test for the presence of differential item functioning (DIF). DIF arises when individuals with the same latent trait level (e.g., paranormal belief), but from different groups, have an unequal probability of giving a particular response. By contrast, modern test theory, including the use of Rasch scaling, can produce unbiased interval measures focused on the hierarchical properties of questionnaire items. This has resulted in the revision of older paranormal belief measures using modern test theory, to create scales that accurately capture fluctuations in levels of belief rather than differences in item functioning [84, 85]. When these problematic items are removed from scales such as the RPBS and ASGS, paranormal belief scores are no longer associated with sex, although small differences remain for age [84, 85]. Although these effect sizes are small (e.g., 0.15, identified by Cohen as a small effect size), they are more likely to reflect a true and meaningful fluctuation in paranormal belief levels than findings reported using scales developed through classical test theory.
Most studies (52/71) failed to state whether measures were undertaken to address and categorise nonrespondents. As such, response rates and risk of nonresponse bias could not be calculated. Nonresponse bias arises when respondents differ from nonrespondents beyond sampling error and may reduce external validity [86, 87]. Survey-based approaches are at greater risk of nonresponse bias owing to their high nonresponse rates, with those relying on self-administered online surveys suffering higher nonresponse rates than those using face-to-face methods. Most studies have been conducted in face-to-face settings (k = 59); however, the past few years have seen a rise in online data capture (k = 12). Compared to face-to-face studies, online studies rated more highly on study quality (16.50 vs 14.49: t(69) = -3.87, p < .001, d = 1.32) and had larger mean sample sizes (482 vs 155: t(11.83) = -3.12, p = .008, d = -1.69, equal variances not assumed), but also reported larger numbers of statistical comparisons (96.42 vs 31.58: t(12) = -3.47, p = .005, d = 1.33, equal variances not assumed).
Of the 19 papers that did provide nonresponse rates, seven had response rates < 70%, raising concerns about potential nonresponse bias. Only one of the 19 papers presented any information about nonrespondents, reporting that they had marginally lower educational attainment than respondents. Similar findings for nonrespondents have been reported in other research areas [91–94]. Finally, we note that online studies more often keep records of nonrespondents. Guidance on reporting nonresponse in online survey-type studies has been developed, e.g., the Checklist for Reporting Results of Internet E-Surveys (CHERRIES), and should routinely be followed.
Surprisingly, some 40% of the included papers (29 of 71) did not include a discussion of study limitations. Discussion of limitations forms a fundamental part of scientific discourse and is crucial for genuine scientific progress, allowing a reader to contextualise research findings. The failure to discuss limitations might be viewed partly as a failure of the peer review process, but responsibility ultimately resides with authors. Detailing limitations allows other researchers to consider methodological improvements and identify gaps in the literature, and has an ethical element by aiding research transparency. The inclusion of limitations not only helps increase research quality, but facilitates directions for future research and, crucially, replications.
Of the 71 studies published since 1980, three-quarters were rated as ‘good’ or ‘strong’ in quality, and only one received a ‘poor’ rating. Quality ratings also indicate continuous improvement across four decades of research. Despite the high levels of study quality and evidence of improvement, we identified areas of methodological weakness: justifying sample size, providing more detail about nonrespondents, and discussing study limitations.
One issue of note is sampling, where almost two in three studies have relied on exclusively undergraduate samples (46/71: 65%), many of them psychology undergraduates. Future recruitment needs to move beyond the highly educated and address the bias towards female participants. Despite recruiting large samples, studies run large numbers of analyses, with a mean of 43 p-values reported in results sections, and rarely apply appropriate adjustment of significance levels (12/71: 17%). These methodological issues are compounded by the fact that so few studies preregister their primary hypotheses and analyses in advance (2/71: 3%).
The 71 studies were grouped into six sections: (1) perceptual and cognitive biases, (2) reasoning, (3) intelligence, critical thinking, and academic performance, (4) thinking style, (5) executive function, and (6) other cognitive functions. Whenever possible, categories were classified according to the focus identified by the authors in each study. Such classifications are necessarily a simplification and not intended to provide a definitive organisation. Moreover, many studies could receive multiple classifications owing to the breadth of testing conducted (see S9 Table). In this context, S9 Table shows that two in three (48/71) studies might be classified as assessing executive function.
Articles in the first section (perceptual and cognitive biases) included scenarios aimed at measuring cognitive biases towards confirmatory evidence, and the impact of visually degraded stimuli on biases in perceptual decision-making. Examples of tasks used in the second section (reasoning) include the mental dice task, aimed at measuring probabilistic reasoning, and the Reasoning Tasks Questionnaire (RTQ), assessing both probabilistic and conditional reasoning. Studies in the third category (intelligence, critical thinking, and academic performance) included published measures such as the Watson-Glaser Critical Thinking Appraisal (WGCTA) and variations of Raven’s matrices (e.g., the Advanced Progressive Matrices Test; Raven’s Progressive Matrices), and measures of academic achievement such as grade point average. In the fourth section (thinking style), papers used measures such as the Rational Experiential Inventory (REI) and the Cognitive Reflection Test, aimed at assessing intuitive and analytical thinking. Studies in the fifth section (executive function and memory) included tasks such as the Deese-Roediger-McDermott task (DRM) and the Wisconsin Card Sorting Test [105, 106]. The final section (other cognitive functions) included tasks measuring indirect semantic priming (using prime-target word pairs) and implicit sequence learning.
Perceptual and cognitive biases
Nineteen articles (n = 3,397) assessed perceptual and cognitive biases. Perceptual decision-making with high-visual-noise stimuli has produced inconsistent findings (k = 7). For example, in 2014 Simmonds-Moore found believers made more misidentifications of degraded black and white images of objects and animals (e.g., shark, umbrella), despite having faster response latencies than sceptics (suggesting a potential speed-accuracy trade-off, with believers favouring speed over accuracy). By contrast, Van Elk found sceptics miscategorised degraded black and white images of face stimuli as houses more frequently than believers. The findings from both studies, however, contradict those from Blackmore and Moore’s 1994 study, which reported no difference between believers and sceptics in the accurate identification of degraded monochrome images.
Two studies assessed perceptual decision-making relating to faces within degraded and artifact stimuli. Using black and grey images of faces and “nonfaces” (scrambled eyes-nose-mouth configurations), Krummenacher and colleagues found believers made significantly more Type I errors than sceptics, favouring “false alarms” over “misses” (i.e., believers had a lower response criterion when classifying images as faces, with a bias towards “yes” responses). Similarly, Riekki et al. presented participants with 98 artifact face pictures (containing a face-like area where eyes and a mouth could be perceived, e.g., a tree trunk) and 87 theme-matched non-face pictures (e.g., a tree trunk with no face-like areas). Believers rated the non-face pictures as more face-like and assigned more extreme positive and negative emotions to non-faces than sceptics.
A study by Caputo employed the strange-face illusion paradigm, in which pairs of participants are instructed to gaze into each other’s eyes for 10 minutes in a dimly lit room. This paradigm induces the experience of seeing face-related illusions, assessed with a self-report measure (Strange Face Questionnaire; SFQ). No association was found between paranormal beliefs and the experience of strange-face illusions. A final study of perceptual decision-making, by Van Elk, used point-light-walker displays (an animated set of 12 points representing a human walking on a treadmill), randomly scrambling the location of each individual dot across the display; participants had to detect whether a human agent was present. Paranormal believers were more prone to illusory agency detection than sceptics, being biased towards ‘yes’ responses when no agent was present.
Cognitive biases have been assessed in 11 papers. These include reports of significant associations between paranormal belief and illusion of control or differences in causation judgements [65, 112–114] and risk perception. Two studies, however, report no significant relationships [29, 116]. Further work shows that paranormal beliefs correlated positively with biases towards anthropomorphism, dualism, teleology, and mentalising, although beliefs were not predicted by mentalising.
Proneness to jumping to conclusions was assessed by Irwin and colleagues using a computerised beads task. Participants were informed of the proportions of beads in two jars (e.g., 70 black and 30 red beads in jar one, but 30 black and 70 red beads in jar two), then shown a sequence of beads drawn one at a time from one of the jars and asked to identify whether the beads were drawn from jar one or two, and to indicate when they were certain. Those who require fewer draws before being certain of their decision are identified as prone to “jump to conclusions”. A significant negative correlation emerged for jumping to conclusions, but only with the Traditional Religious Beliefs (TRB) subscale of the Rasch-devised RPBS. A significant positive correlation was also found between TRB scores and self-report indices of jumping to conclusions as measured with the Cognitive Biases Questionnaire [118, 119] (e.g., “imagine you hear that a friend is having a party and you have not been invited”; 1 = little or no inclination to jump to a premature conclusion, 2 = inclination to make a cautious inference, 3 = inclination to jump to a dramatic inference).
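The normative benchmark against which “jumping to conclusions” is judged is Bayesian updating over the two jars. A minimal sketch, assuming the 70/30 proportions above and equal prior probability for each jar, shows how certainty should grow with the number of draws:

```python
from fractions import Fraction

def posterior_jar_one(draws,
                      p_black_jar1=Fraction(7, 10),
                      p_black_jar2=Fraction(3, 10)):
    """P(jar one | observed draws), equal priors; 'b' = black bead, 'r' = red."""
    like1 = like2 = Fraction(1)
    for bead in draws:
        like1 *= p_black_jar1 if bead == "b" else 1 - p_black_jar1
        like2 *= p_black_jar2 if bead == "b" else 1 - p_black_jar2
    return like1 / (like1 + like2)

# After one black bead the evidence for jar one is still modest:
print(float(posterior_jar_one("b")))      # 0.7
# After five consecutive black beads it is far stronger (~0.99):
print(round(float(posterior_jar_one("bbbbb")), 3))
```

Deciding after a single draw therefore means committing at roughly 70% certainty, which is why very few draws-to-decision operationalises the jumping-to-conclusions bias.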
Prike et al. assessed proneness to jumping to conclusions using both a neutral (beads task) and an emotional draws-to-decision task (where participants decide whether positive or negative words are more likely to describe “Person A” or “Person B”; for a full description see Dudley et al.’s work). Participants also saw a series of 24 scenarios assessing bias towards confirmatory and disconfirmatory evidence, as well as liberal acceptance. Each scenario consisted of three statements presented one at a time, e.g., (a) “Eric often carries binoculars with him”, (b) “Eric always has an unpredictable schedule”, (c) “Eric tries to solve mysteries”. Participants rated the likelihood of the same four response options after each statement, e.g., (a) “Eric is a private detective”, (b) “Eric is a bird expert”, (c) “Eric is a stalker”, (d) “Eric is an astronaut”. Each scenario presented an absurd interpretation (implausible for all three statements), a neutral lure, an emotional lure, and a true interpretation (less or equally as plausible as the lure options after the first statement, but the most plausible by the third statement). Paranormal beliefs were related to both disconfirmatory and confirmatory biases, but not to jumping to conclusions. Liberal acceptance predicted belief in the paranormal, but not after controlling for delusion proneness (as measured by the Peters et al. Delusions Inventory; PDI). Lesaffre et al. exposed participants to a magic performance and asked whether it was accomplished through (1) paranormal, psychic, or supernatural powers, (2) ordinary magic trickery, or (3) religious miracles. Confirmation bias (i.e., explaining the magic performance in terms of paranormal powers) was associated with higher levels of paranormal belief. Barberia and colleagues demonstrated that educating participants about confirmatory bias reduced scores on the Precognition subscale of the RPBS (but did not reduce global belief scores).
The studies assessing perceptual and cognitive biases are somewhat inconsistent regarding perceptual decision-making errors in response to degraded or ambiguous stimuli. Of the studies exploring perceptual decision-making, four suggest an inverse relationship between paranormal belief and perceptual decision-making, two found no relationship, and one reported more perceptual decision-making errors from sceptics. Results show greater consistency when perceptual decision-making tasks involve identifying a human face/agent (rather than inanimate objects or animals), with believers making significantly more false-positive misidentifications than sceptics. In the 11 studies exploring cognitive biases, paranormal believers show a consistent bias towards both confirmatory and disconfirmatory evidence. The evidence that paranormal belief links to the tendency to “jump to conclusions” is weaker, but only two studies present findings related to this outcome.
Reasoning

Seventeen papers have focussed on reasoning ability (n = 9,661), with the majority (12/17) reporting significant inverse relationships between paranormal beliefs and probabilistic reasoning. Perception of randomness and the conjunction fallacy have also been associated with paranormal beliefs on tasks with both neutral and paranormal content [69, 80, 124–128].
In 2007, Dagnall et al. presented 17 reasoning problems across four categories: perception of randomness, base rate, conjunction fallacy, and probability. Perception of randomness problems required participants to determine the likelihood of obtaining particular strings (e.g., “Imagine a coin was tossed six times. Which pattern of results do you think is most likely? (a) HHHHHH, (b) HHHTTT, (c) HTHHTT, (d) all equally likely”). Performance on these problems significantly predicted paranormal belief, with believers making more errors than sceptics. No significant differences or predictive effects emerged for the three other problem categories. In a later study, Dagnall and colleagues presented 20 reasoning problems across five categories: perception of randomness, base rate, conjunction fallacy, paranormal conjunction fallacy, and probability. The authors again reported perception of randomness to be the sole predictor of paranormal beliefs, with high belief associated with fewer correct responses. While these papers report no effects in relation to the conjunction fallacy, Rogers et al. demonstrated a significant main effect of paranormal belief on conjunction errors, with believers making more errors than sceptics. In later studies, both Prike et al. and Rogers et al. reported an association between paranormal belief and the conjunction fallacy, although in the latter study this association was significant only for scenarios with confirmatory outcomes.
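The normative answer to such randomness problems follows directly from the independence of fair tosses: every specific ordering of six outcomes is equally probable, however "patterned" it looks. A quick check (our illustration, not code from the reviewed studies):

```python
# Each specific ordering of six fair coin tosses has probability
# (1/2)**6 = 1/64, so option (d) "all equally likely" is correct,
# even though HTHHTT looks subjectively more "random" than HHHHHH.
def sequence_probability(seq, p_heads=0.5):
    """Probability of one exact ordering of independent fair tosses."""
    prob = 1.0
    for toss in seq:
        prob *= p_heads if toss == "H" else 1 - p_heads
    return prob

probs = [sequence_probability(s) for s in ("HHHHHH", "HHHTTT", "HTHHTT")]
assert len(set(probs)) == 1  # all three equal 1/64
```

Errors on such items therefore reflect the representativeness of a string, not its actual probability.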
Probabilistic reasoning ability has been consistently associated with paranormal beliefs across five studies. In one paper, participants received a probabilistic reasoning test battery comprising six tasks. For example, one task was a variant of the birthday paradox (from Blackmore and Troscianko), in which participants were asked: “How many people would you need to have at a party to have a 50:50 chance that two of them will have the same birthday (regardless of year of birth)?”. Possible answers for this task were 22 (correct), 43, or 98. Significant positive correlations emerged between paranormal beliefs and errors on three of the six tasks (dice sequences, dice throws, and sample size estimates). In the second study, participants received written descriptions of two hypothetical events: throwing 10 dice once to get 10 sixes, and throwing one die 10 times to get 10 successive sixes; they then had to identify whether one event was more probable or both equally probable. The authors reported that 64% of believers and 80% of sceptics correctly identified that both events were equally probable. Brugger et al. assessed differences in repetition avoidance between believers and sceptics on a mental dice task (where participants imagined throwing a die and had to write down the number they imagined being on top of the die), finding significantly fewer repetitions in believers than sceptics. Similarly, Bressan et al. used a probabilistic reasoning questionnaire with problems concerning the comprehension of sampling issues, sensitivity to sample size, representativeness bias (as applied to sample size or random sequences), and the generation of random sequences.
Believers made more probabilistic errors on two of the four generation-of-random-sequences problems: (1) a simulated coin toss problem, in which participants were asked to fill in 66 empty cells by writing ‘H’ (heads) or ‘T’ (tails) randomly to produce a sequence indistinguishable from that of an actually tossed coin, and (2) an adapted version of Brugger et al.’s mental dice task. Finally, Blackmore asked participants whether a list of 10 statements (as might be produced by a psychic, e.g., “there is someone called Jack in my family”) were true for them, and to estimate the number of these statements that might be true for a stranger in the street. The number of ‘true’ statements was greater for believers than sceptics (significantly so on five of the ten questions); however, no significant differences emerged when estimating the number of statements true for a stranger.
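Two of the probabilistic facts underlying these tasks can be verified directly: the two hypothetical dice events are exactly equally probable, and a genuinely random die sequence should contain about (n − 1)/6 immediate repetitions, the baseline against which repetition avoidance is judged. The sketch below is our own illustration (parameters loosely matching the 66-cell task; no code of this kind appears in the reviewed studies):

```python
import random

# (1) Both hypothetical events consist of ten independent throws, each
# required to land six, so both have probability (1/6)**10 (~1.65e-8).
p_ten_dice_once = (1 / 6) ** 10      # ten dice thrown once, all sixes
p_one_die_ten_times = (1 / 6) ** 10  # one die thrown ten times, all sixes
assert p_ten_dice_once == p_one_die_ten_times

# (2) Repetition baseline: each throw matches its predecessor with
# probability 1/6, so n throws yield about (n - 1)/6 immediate repeats.
def count_repetitions(seq):
    """Positions where an outcome equals its immediate predecessor."""
    return sum(a == b for a, b in zip(seq, seq[1:]))

random.seed(1)
n_throws, n_trials = 66, 10_000
mean_reps = sum(
    count_repetitions([random.randint(1, 6) for _ in range(n_throws)])
    for _ in range(n_trials)
) / n_trials
expected = (n_throws - 1) / 6  # ~10.8; "repetition avoiders" write fewer
```

Producing noticeably fewer repetitions than this expectation is what the repetition-avoidance measures above quantify.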
The final four papers in this section found non-significant correlations between paranormal belief and probabilistic reasoning, but significant correlations with conditional reasoning tasks. Using the Reasoning Tasks Questionnaire (RTQ), one study found that neither probabilistic reasoning nor neutral conditional reasoning was associated with paranormal beliefs. However, conditional reasoning was associated with paranormal beliefs when the conditional reasoning tasks contained paranormal rather than neutral content, with believers making fewer errors on these tasks. The second paper measured reasoning using a test that combined probabilistic reasoning questions (seven in total, four of which were derived from the RTQ), conditional reasoning questions with abstract content (e.g., “if C is true, then D will be observed. D is observed. Therefore, C is true: True or False?”), and conditional reasoning questions with paranormal content (e.g., “if people are aware of hidden objects, then clairvoyance exists. People are aware of hidden objects. Therefore, clairvoyance does exist: True or False?”). Overall, paranormal beliefs correlated negatively with overall reasoning ability and conditional reasoning ability, but not with probabilistic reasoning ability. When comparing the two types of conditional reasoning questions, the authors reported no difference between the correlations for paranormal beliefs and either the abstract or paranormal conditions. Following a similar format, Wierzbicki assessed reasoning ability using 16 conditional reasoning statements with either parapsychological or abstract content, finding that paranormal belief scores and the number of reasoning errors correlated positively. The final paper in this section employed 32 conditional reasoning statements and found that participants with strong paranormal beliefs made more reasoning errors than those with weak paranormal beliefs.
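The abstract item quoted above is an instance of affirming the consequent, which is formally invalid: there is an assignment of truth values under which both premises hold but the conclusion fails. A brute-force truth-table check (our illustration, not material from the reviewed papers) makes this explicit:

```python
from itertools import product

def implies(p, q):
    """Material implication: 'p implies q' is false only when p and not q."""
    return (not p) or q

# Affirming the consequent: from "C implies D" and "D", infer "C".
# The inference is invalid if some truth assignment satisfies both
# premises while the conclusion C is false.
counterexamples = [
    (c, d) for c, d in product([True, False], repeat=2)
    if implies(c, d) and d and not c
]
# C false, D true: both premises hold, yet the conclusion is false.
assert counterexamples == [(False, True)]
```

Marking such items "True" is therefore the reasoning error these tasks count.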
In general, evidence suggests paranormal beliefs are associated with poorer reasoning; however, this line of research is characterised by inconsistent findings. Two studies report that the perception of randomness is a significant predictor of paranormal belief and provide some evidence of replicability [126, 127]. Despite this, evidence regarding the association between paranormal belief and the conjunction fallacy is conflicting, with two studies [127, 128] reporting no effect, and three [80, 128, 129] reporting significant associations. This may be due, in part, to the different statistical techniques used within each study: those reporting no effect [126, 127] used multiple regression analyses with all probabilistic tasks entered as predictor variables, while studies reporting significant associations [80, 128, 129] only included conjunction fallacy tasks in their predictive models. Similar inconsistency emerges for probabilistic reasoning, with nearly equal numbers of studies reporting significant and nonsignificant associations with paranormal beliefs.
Intelligence, critical thinking, and academic performance
Twelve studies explored intelligence, critical thinking, and academic performance (n = 2,657). Seven papers focused on critical thinking ability, with two finding significant reductions in paranormal belief following a course in critical thinking [70, 135]. Alcock and Otis’ 1980 study employed the Watson-Glaser Critical Thinking Appraisal (WGCTA), finding significantly higher levels of critical thinking ability in sceptics than believers. In 1998, Morgan and Morgan conducted a similar study, measuring critical thinking using a revised version of the WGCTA and finding significant negative correlations between critical thinking ability and three subscales of the PBS (Superstition, Traditional Religious Belief, and Spiritualism). No significant correlation between paranormal belief and critical thinking emerged in the remaining three papers [139–141]. One did, however, report significant negative correlations between reasoning ability (measured using the Winer Matrizen-Test) and three subscales of the PBS: Traditional Paranormal Beliefs, Traditional Religiosity, and Superstition.
The links between paranormal beliefs and academic achievement or general intelligence are both mixed and weak. Two papers report significant negative correlations: one between overall paranormal belief scores and mean academic grade, and one between grade point average and the Witchcraft and Superstition subscales of the PBS. Turning to intelligence, Betsch et al. found a significant inverse relationship between IQ and paranormal beliefs, but only when controlling for sex, supporting similar findings from Smith et al.’s 1998 study, which reported a significant negative correlation between paranormal beliefs and intelligence (using the Advanced Progressive Matrices Test, Set 1). Nevertheless, two studies found no association between paranormal beliefs and intelligence. Royalty used the information subtest of the Wechsler Adult Intelligence Scale as an estimate of full-scale IQ, and the vocabulary subtest of the Multidimensional Aptitude Battery as a measure of verbal intelligence. Stuart-Hamilton et al. found no relationship with fluid intelligence using Raven’s Progressive Matrices; however, this sample was older (mean age of 71).
Conflicting findings emerge from studies of intelligence, critical thinking, and academic performance, with an almost equal number of significant and non-significant associations with paranormal beliefs. Some of this heterogeneity, however, appears to reflect whether studies used crystallised or fluid intelligence tasks, as well as the age of the sample (e.g., Stuart-Hamilton et al. failed to find a relationship between fluid IQ and paranormal beliefs in an older sample, but Smith et al. found a significant negative association in a younger sample). The precise relationship of paranormal belief with intelligence requires further investigation, both by considering the age of the sample and by assessing relationships with fluid and crystallised intelligence separately.
Thinking style

Thirteen studies (n = 4,100) examined aspects of thinking style. One consistent finding is a significant association between paranormal belief and an intuitive thinking style, which is characterised as being quick and guided by emotion [148–152]. A further study also reports a significant partial correlation after controlling for sample type (online versus face-to-face recruitment), owing to significantly higher levels of paranormal beliefs and intuitive thinking, and significantly lower rational/analytical thinking, in the online sample than in the face-to-face sample.
Contradictory findings, however, have emerged concerning paranormal beliefs and an analytical thinking style, which is thought to be more effortful and driven by logic. A positive relationship emerged in two studies [149, 150], while two others [72, 152] found no relationship between paranormal beliefs and analytical thinking as assessed by the Rational Experiential Inventory (REI). Four further studies report significant negative relationships between paranormal beliefs and analytical thinking using various measures: two [81, 154] used different versions of the Cognitive Reflection Test; one used the Rational Experiential Multimodal Inventory; and one used both the Argument Evaluation Test and the Actively Open-Minded Thinking scale [156, 157]. A further study reported a significant negative relationship between paranormal beliefs and analytical thinking but could not replicate the finding.
The final two papers in this section document relationships between paranormal belief and other cognitive styles. Gianotti et al. presented participants with 80 word-pairs (40 semantically indirectly related, 40 semantically unrelated), who had to state whether a third noun was semantically related to both words. Believers showed increased verbal creativity, making significantly more rare associations than sceptics for unrelated word-pairs, but not for indirectly related word-pairs. Hergovich used the Gestaltwahrnehmungstest to assess degree of field dependence, presenting participants with figures in which they needed to find an embedded figure in the form of a house, and reported a significant positive relationship between paranormal beliefs and field dependence.
Eight papers report positive associations between an intuitive thinking style and paranormal belief (although it should be noted that one study reported only a partial correlation after controlling for sample type). By contrast, evidence concerning an analytical thinking style is inconsistent, with reports of a negative relationship with belief (k = 4), a positive relationship (k = 2), and no relationship (k = 2). An additional study did report a negative relationship between analytical thinking and paranormal belief, but this was not replicated in a follow-up study. The final two studies in this section suggest positive relationships between paranormal belief and both verbal creativity and field dependence.
Executive function and memory
Six studies (n = 810) assessed memory or executive function. Turning first to memory, the findings are inconsistent. One study showed paranormal belief predicted false memory responses on a questionnaire-based measure, and two others [59, 78] reported associations between belief and behavioural measures of false memories but failed to replicate these in additional samples. Dudley’s 1999 study had participants complete the Paranormal Belief Scale either while rehearsing a five-digit number or without this load, and found significantly higher paranormal belief scores in the group whose working memory was restricted by the rehearsal task. However, a recent study by Gray and Gallo failed to find any differences in working memory, episodic memory, or autobiographical memory between believers and sceptics.
Further inconsistencies can be seen when exploring relationships between paranormal belief and inhibitory control, with Lindeman et al.  noting more errors from believers than sceptics on the Wisconsin Card Sorting Test [105, 106], but not on the Stroop task . Wain and Spinella  explored executive function using a self-report measure and found a negative correlation between paranormal belief and executive functioning, with negative correlations between belief and both inhibition and organisation.
The studies in this section report inconsistent links between paranormal belief and memory. While three of four memory studies report links between paranormal beliefs and an increased tendency to create false memories, two of these studies failed to replicate the finding. The two studies assessing executive functioning both suggest poorer performance is associated with belief, although this association may depend on the measure of executive functioning used.
Other cognitive functions
Finally, four papers (n = 368) explored other aspects of cognitive function not covered by the categories already described. Pizzagalli et al. tested the association between indirect semantic priming and paranormal beliefs using 240 prime-target word pairs, with target words either directly related, indirectly related, or unrelated to the prime word. Compared to sceptics, believers had shorter reaction times for indirectly related target words presented in the left visual field, suggesting a faster appreciation of distant semantic associations, which the authors view as evidence of disordered thought. The final three papers did not find any significant relationships between paranormal beliefs and: implicit sequence learning, cognitive complexity, or central monitoring efficiency.
Discussion

This systematic review provides the first evidence synthesis of the associations between paranormal beliefs and cognitive function since the early 1990s, and the first assessment of study quality. The review identified 71 studies involving 20,993 participants. While most studies achieve good-to-strong quality ratings, specific areas of methodological weakness warrant further attention. First, studies often employ large numbers of measures, metrics, and analyses, with no clearly identified primary outcome or adjustment of probability levels. These factors necessarily constrain any firm conclusions because of the high probability of Type 1 errors. Second, information about nonrespondents was either unreported or reported with insufficient detail to permit an assessment of potential nonresponse bias. Finally, up to a third of studies failed to discuss study limitations.
The cognitive deficits hypothesis is apparent in most papers (55/71), and a simple vote count shows that two-in-three studies (46/71) document that paranormal beliefs are associated with poorer cognitive performance. The most consistent findings across the six cognitive domains emerged between paranormal belief and an intuitive thinking style, with all eight studies confirming a positive association. Consistent findings also emerged for a bias towards confirmatory and disconfirmatory outcomes, as well as for poorer conditional reasoning ability and perception of randomness, though fewer studies were conducted in these areas. The two studies assessing executive functioning identified a negative association with paranormal belief but showed some inconsistency depending upon the type of executive test used. Associations with all other aspects of cognitive functioning (perceptual decision-making, jumping to conclusions and repetition avoidance, the conjunction fallacy, probabilistic reasoning, critical thinking ability, intelligence, analytical thinking style, and memory) have proven inconsistent, with nearly equal numbers of significant and null findings.
Various measurement issues, however, need to be considered. One concerns the large number of paranormal belief measures employed and their varied psychometric properties. The studies reviewed employed 26 different tests of paranormal belief: the most common were the RPBS and its Rasch-scaled variant, followed by 13 bespoke tests created by the study authors. Such variability most likely contributes to heterogeneity across studies and potentially undermines the reliability of reported associations between cognitive functions and paranormal beliefs. For a full summary of the scales used in each study, see S8 Table.
Not only does the range of cognitive measures used within each cognitive domain contribute to heterogeneity across studies, but so does the reliability of those measures. As Hedge et al. note, individual-differences research on cognition and brain function often employs cognitive tasks that have been well-established in experimental research. Such tasks may not be directly adaptable to correlational research, however, for the very reason that they elicit robust experimental effects: they are specifically designed and selected for low between-participant variability. Most studies presented here are correlational and use a combination of established experimental tasks (e.g., the WCST, Raven’s Matrices, Cognitive Reflection Test, Embedded Figures Test) and questionnaire-based methods to assess cognition. This may undermine the reliability of reported associations between cognitive functions and paranormal beliefs if studies use experimentally derived cognitive tasks that are sub-optimal for correlational studies. Hedge et al. offer several suggestions to overcome this, such as the use of alternative statistical techniques (e.g., structural equation modelling), factoring reliability into a-priori power calculations to reduce the risk of bias towards a null effect, or using within-subjects designs when the primary goal of the study is to examine associations between measures rather than individual differences per se. The largely correlational approach of the studies reviewed here also suffers from the standard limitations of questionnaire studies and correlational designs. Although regression approaches can be powerful, they cannot establish causality without longitudinal methods. This correlational approach also means that moderators and mediators of the relationship between paranormal beliefs and cognition remain underspecified.
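Hedge et al.'s point can be expressed with the classical attenuation formula: the correlation observable between two imperfectly reliable measures is the true-score correlation shrunk by the square root of the product of their reliabilities. The sketch below is our own illustration of that formula; the specific numbers are purely hypothetical:

```python
from math import sqrt

def attenuated_r(r_true, rel_x, rel_y):
    """Expected observed correlation between two noisy measures whose
    error-free (true-score) correlation is r_true, under classical
    test theory: r_obs = r_true * sqrt(rel_x * rel_y)."""
    return r_true * sqrt(rel_x * rel_y)

# Hypothetical example: a true belief-cognition correlation of .30,
# measured with a belief scale of reliability .85 and an experimental
# task of reliability .40 (low between-participant variability), is
# expected to appear as only ~.17 in a correlational study.
r_observed = attenuated_r(0.30, rel_x=0.85, rel_y=0.40)
```

This is one reason a task with a robust group-level effect can still yield small, unstable correlations with belief measures.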
Future directions–the fluid-executive model
The general trend of the current review accords with the cognitive deficits hypothesis approach described by Irwin almost 30 years ago–at least insofar as around 60% of published studies document paranormal beliefs to be associated with poorer cognitive performance. Nonetheless, the cognitive deficits hypothesis does not provide an entirely satisfying account of why paranormal believers and sceptics perform differently on such a wide variety of cognitive tasks. This has some key implications. First, people who believe in the paranormal seemingly have a disparate array of cognitive deficits: are these assumed to have occurred independently of each other, or do believers somehow accumulate various cognitive deficits? Second, such an array of cognitive deficits is largely atheoretical, with various researchers pursuing seemingly independent lines of research linking cognitive function to paranormal beliefs with little attention to integration. Hence a somewhat underspecified model pervades the literature, with often limited justification for the specific role played by cognitive function in paranormal beliefs, or for how and why such an array of deficits is identifiable in paranormal believers. Given the almost complete lack of preregistration, accompanied by the large numbers of statistical analyses often conducted without correction, we also cannot exclude concerns about potential publication bias, false positives, and selection bias. Empirical studies presenting significant or favourable findings are, of course, more likely to be published; and crucially, psychologists tend to rate studies as having better quality when they conform to prior expectations. Hergovich et al. demonstrated this bias by presenting psychologists (none of whom believed in astrology) with descriptions of parapsychological studies, finding that they gave higher quality ratings to studies disproving astrological hypotheses.
Participants were less likely to complete the study if they received an abstract confirming astrological hypotheses, with an attrition rate of 38.90%. These issues underscore the importance of pre-registered replications of key findings (see Laws for a discussion). To our knowledge, potential publication bias has not been extensively assessed. A previous meta-analysis of psychokinesis studies indicated the presence of publication bias, but this claim has been challenged. Finally, questions also arise about whether poorer performance by believers on cognitive ability tests even merits the descriptor of ‘deficits’; indeed, the idea has recently been rephrased more neutrally as the cognitive differences hypothesis. The term ‘deficit’ typically implies a permanent lack or loss of cognitive function; however, little to no research has examined the consistency of cognitive performance in paranormal believers across time, or established whether poorer cognitive performance is more trait than state dependent. While paranormal beliefs appear to be largely trait-like, they may have a state component.
While current studies do not necessarily endorse Irwin’s 1993 comment that “…the believer in the paranormal is held variously to be illogical, irrational, credulous, uncritical, and foolish” (p.16), they converge on an underlying non-specific cognitive deficit or collection of deficits. Typically, when an array of cognitive deficits/differences is documented, researchers would want to know whether specific areas of cognitive weakness emerge. Currently, no cognitive area suggests a specific deficit profile in paranormal believers. Although not directly tested, paranormal believers might display heterogeneous cognitive profiles that link to different paranormal belief components. Nonetheless, it is hard to see why or how specific types of paranormal belief content would link to different cognitive deficits.
One possibility is that the failure of any specific area of cognitive dysfunction to emerge (amongst perceptual and cognitive biases, reasoning, intelligence, critical thinking and academic performance, thinking style, and executive functioning) may point to a shared underlying cognitive component. One feasible interpretation is that many of the tasks described across the domains reviewed here do in fact share a common cognitive ability: higher-order executive functions (planning, reasoning and problem-solving, impulse control, initiation, abstract reasoning, and mental flexibility), which in turn may be related to aspects of fluid intelligence.
Human functional brain imaging identifies strikingly similar patterns of prefrontal cortex activity in response to cognitive challenges across various seemingly different domains, including increased perceptual difficulty (high vs low noise degradation), response conflict, working memory, episodic and semantic memory, problem solving, and task novelty [177–179]. This demand-general activity underlies our ability to engage in flexible thought and problem-solving and is closely linked to fluid intelligence. We propose that the broad cognitive-deficit profile linked to paranormal beliefs may overlap with functions of the multiple-demand (MD) system. Part of the function of the MD system concerns its role in the separation and assembly of task components, which may account for its link with fluid intelligence. In this context, we suggest that each of the cognitive domains linked to paranormal beliefs may be subserved by this MD system housed in the fronto-parietal cortex. The section on executive function is self-evidently linked with the frontal system. The section on intelligence similarly highlights links between paranormal beliefs and fluid IQ measures such as Raven’s Matrices [100, 101]. Studies further show the same MD system is recruited when participants confront perceptually difficult tasks (such as those outlined in the section on perceptual and cognitive biases for degraded visual input) [66, 67, 107, 108]. Aside from supporting our problem-solving ability, fluid intelligence and various aspects of executive functioning (e.g., working memory) underpin our ability to reason and to see relations among items, including both inductive and deductive logical reasoning. The section on reasoning shows paranormal beliefs are related to conditional and probabilistic reasoning [69, 77, 80, 124–134].
Thus, many of the cognitive deficit-paranormal belief associations may be reframed as the product of a single underlying fluid intelligence-executive component. Going forward, such a model suggests potential avenues of research. One prediction would be that groups of believers and sceptics matched for fluid IQ would be less likely to differ on a range of cognitive tasks.
Limitations of the present review
The current review is the first to assess the quality of studies examining cognitive function and paranormal beliefs. We report that study quality is good-to-strong, with interrater reliability on AXIS ratings being almost perfect (93%). Individual AXIS items, however, are not weighted, and any simple comparisons between specific studies based on total summed quality scores should be regarded with caution [181–183]. Thus, two studies with the same total quality score, achieved across different items, might not be comparable because some items may be more consequential for quality than others. Hence, we have focused on specific domains of strength or weakness across studies.
We acknowledge substantial limitations regarding the classification of studies into six areas of cognitive function: (1) perceptual and cognitive biases, (2) reasoning, (3) intelligence, critical thinking, and academic performance, (4) thinking style, (5) executive function, and (6) other cognitive functions. S9 Table shows that many of the studies could be re-classified and indeed, two-thirds (48/71) could be re-classified as assessing executive functioning. The latter is consistent with our proposal that a substantial proportion of the published studies may be documenting a relationship between paranormal beliefs and higher-level executive function/fluid intelligence.
Our preregistered protocol had an exclusion criterion concerning samples with individuals aged under 18, and this led to our excluding 11 datasets (see S1 Table for a complete list and details; Aarnio & Lindeman, Saher & Lindeman, and Lindeman & Aarnio used overlapping or identical samples). A key reason for exclusion was that age affects both cognitive functions and paranormal beliefs. Certain cognitive functions, for example executive functions, take until late adolescence or early adulthood to mature. Additionally, younger individuals show higher levels of paranormal beliefs [187; for a discussion see Irwin’s review, 53]. While the exclusion of these studies is a potential limitation, their exclusion does not change our key findings or the conclusions drawn from this review. In the same context, our lack of an upper age limit exclusion criterion could also be considered a limitation. Sixteen papers (23%) reviewed here included participants aged 65+ (though 25/71 (36%) studies did not report the age range of participants). While some cognitive functions do not mature until late adolescence or early adulthood, measurable changes in cognitive function occur with normal ageing. Performance on certain cognitive tasks has been shown to decline with age, such as those requiring executive functioning (including decision-making, working memory, and inhibitory control), visuoperceptual judgement, and fluid intelligence [188, 189]. Such cognitive declines have been associated with age-related reductions of white matter connections in brain regions including the prefrontal cortex [190, 191].
Finally, one limitation is that we were unable to conduct a meta-analysis because of the large variability in outcome measures within and between studies, which makes it challenging to determine the precise outcome being tested. In parallel, the large number of analyses per study also means that conclusions from our systematic review regarding findings for specific cognitive domains must be interpreted with some caution.
Our systematic review identified 71 studies spanning perceptual and cognitive biases, reasoning, intelligence, critical thinking, and academic performance, thinking styles, and executive function. However, the tasks employed to assess performance in each domain often appear to require higher-order executive functions and fluid intelligence. We therefore propose a new, more parsimonious fluid-executive model for future research to consider. Methodological quality is generally good; however, we highlight specific theoretical and methodological weaknesses within the research area. In particular, we recommend that future studies preregister their design and proposed analyses prior to data collection, and address both the heterogeneity linked to paranormal belief measures and the reliability of cognitive tasks. We hope these methodological recommendations, alongside the fluid-executive model, will help to further progress our understanding of the relationship between paranormal beliefs and cognitive function.
S1 Table. Papers excluded from the review (participants < 18).
Note: Ts = Thinking Style, CPb = Cognitive and Perceptual Biases, O = Other Cognitive Functions, REI = Rational and Experiential Inventory (Epstein et al., 1996), SJQ = Scenario Judgements Questionnaire (Rogers et al., 2016; Rogers et al., 2011), IPO-RT = Inventory of Personality Organization (Lenzenweger et al., 2001), RT = reality testing, ASGS = Australian Sheep-Goat Scale (Thalbourne & Delin, 1993), ESP = extrasensory perception, LAD = life after death, PK = psychokinesis, NAP = new age philosophy, TPB = traditional paranormal beliefs, RPBS = Revised Paranormal Belief Scale (Tobacyk, 2004; Lange et al., 2000), CKCS = Core Knowledge Confusions scale (Lindeman & Aarnio, 2007; Lindeman et al., 2008), CRT = Cognitive Reflection Test (Frederick, 2005), BRC = base-rate conflict, BRN = base-rate neutral, SREIT = Self-Report Emotional Intelligence Test (Schutte et al., 1998), WCQ = Ways of Coping Questionnaire (Folkman & Lazarus, 1988), IBI = Irrational Beliefs Inventory (Koopmans et al., 1994).
S2 Table. Studies included in the systematic review concerning perceptual and cognitive biases.
Note: / = information not reported, P = perceptual biases, C = cognitive biases, bl = believers, sc = sceptics, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, ESP = extrasensory perception, BADE = bias against disconfirmatory evidence, BACE = bias against confirmatory evidence, TRB = traditional religious beliefs, ELF = extraordinary lifeforms, PRI = Personal Risk Inventory (Hockey et al., 2000), SFQ = Strange-Face Questionnaire (Caputo, 2015), IDAQ = Individual Differences in Anthropomorphism Quotient (Waytz et al., 2010), DS = Dualism Scale (Stanovich, 1989), EQ = Empathy Quotient (Baron-Cohen & Wheelwright, 2004).
S3 Table. Studies included in the systematic review concerning reasoning.
Note: / = information not reported, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, ESP = extrasensory perception, PK = psychokinesis, LAD = life after death, NAP = new age philosophy, DR = deductive reasoning, RTQ = Reasoning Task Questionnaire (Blackmore & Troscianko, 1985), ASGS = Australian Sheep-Goat Scale (Thalbourne & Delin, 1993), RPBS = Revised Paranormal Belief Scale (Tobacyk, 2004), MMU-N = Manchester Metropolitan University New (Dagnall et al., 2010).
S4 Table. Studies included in the systematic review concerning intelligence, critical thinking, and academic performance.
Note: / = information not reported, C = cognitive ability, I = intelligence, m = males, f = females, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, ATS = Assessment of Thinking Skills (Wesp & Montgomery, 1998), WGCTA-S = Watson-Glaser Critical Thinking Appraisal Form S (Watson & Glaser, 1994), WGCTA = Watson-Glaser Critical Thinking Appraisal (Watson & Glaser, 2002; Watson & Glaser, 1980; Watson & Glaser, 1964), RPM = Raven’s Progressive Matrices (Raven et al., 2000), RPM Rasch Model = Raven’s Progressive Matrices Rasch Model (Rasch, 1960), MHVT = Mill Hill Vocabulary Test (Raven et al., 1998), CCTT = Cornell Critical Thinking Test (Ennis & Millman, 1985), WMT = Wiener Matrizen Test (Formann & Piswanger, 1979), APM = Advanced Progressive Matrices (Raven, 1976), WAIS-IS = Wechsler Adult Intelligence Scale Information Subtest (Wechsler, 1955), GPA = Grade Point Average.
S5 Table. Studies included in the systematic review concerning thinking style.
Note: / = information not reported, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, AOT = Actively Open-Minded Thinking Scale (Stanovich et al., 2016; Stanovich, 1999), CRT = Cognitive Reflection Test (Frederick, 2005), CRT-2 = Cognitive Reflection Test-2 (Thompson & Oppenheimer, 2016), REI = Rational-Experiential Inventory (Pacini & Epstein, 1999), WST = WordSum Test (Huang & Hauser, 1998), RI = Rational/Experiential Inventory (Norris & Epstein, 2011), IPSI-SF = Information-Processing Style Inventory Short Form (Naito et al., 2004), FIS = Faith in Intuition Scale (Pacini & Epstein, 1999), NFC = Need for Cognition scale (Cacioppo et al., 1984), AET = Argument Evaluation Test (Stanovich & West, 1997), 10-Item REI = 10-Item Rational-Experiential Inventory (Epstein et al., 1996), GWT = Gestaltwahrnehmungs Test (Hergovich & Hörndler, 1994), EFT = Embedded Figures Test (Witkin et al., 1971).
S6 Table. Studies included in the systematic review concerning executive function and memory.
Note: / = information not reported, M = memory, EF = executive function, bl = believers, sc = sceptics, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, DRM = Deese-Roediger-McDermott (Roediger & McDermott, 1995), CRT = Criterial Recollection Task (Gallo, 2013), IIT = Imagination Inflation Task (Garry et al., 1996), RSPAN = Reading-Span Task (Daneman & Carpenter, 1980), OSPAN = Operation Span Task (Turner & Engle, 1989), SILS = Shipley Institute of Living Scale (Zachary, 1986), AET = Argument Evaluation Task (Stanovich & West, 1997), RAT = Remote Associations Test (Mednick, 1962), WCST = Wisconsin Card Sorting Test (Berg, 1948; Grant & Berg, 1948), EFI = Executive Function Index (Spinella, 2005), ANP = anomalous natural phenomena, TRB = traditional religious beliefs, NCQ = News Coverage Questionnaire (Wilson & French, 2006), ASGS = Australian Sheep-Goat Scale (Thalbourne, 1995; Thalbourne & Delin, 1993), AEI = Anomalous Experiences Inventory (Kumar et al., 1994).
S7 Table. Studies included in the review concerning other cognitive functions.
Note: / = information not reported, bl = believers, sc = sceptics, f = females, m = males, ISL = implicit sequence learning, ISP = implicit semantic priming, VF = visual field, LVF = left visual field, RVF = right visual field, CME = central monitoring efficiency, RE = reasoning errors, CC = cognitive complexity, + = positive, − = negative, corr. = correlation, Ns. = nonsignificant, SPQ-B = Schizotypal Personality Questionnaire Brief (Raine & Benishay, 1995), RCRG = Role Construct Repertory Grid (Kelly, 1955).
S8 Table. Measures of paranormal beliefs used in the 71 studies included in the review.
Note: † = papers that provided reliability statistics for their novel scales, ‡ = used a translated version of the original scale, * = Musch & Ehrenberg (2002) developed a novel scale that was later named the BPS and was used in two subsequent studies. RPBS = Revised Paranormal Belief Scale (Tobacyk, 1988; 2004), ASGS = Australian Sheep-Goat Scale (Thalbourne & Delin, 1993), PBS = Paranormal Belief Scale (Tobacyk & Milford, 1982), Rasch RPBS = Rasch devised Revised Paranormal Belief Scale (Lange et al., 2000), BPS-O = Belief in the Paranormal Scale (Original; Jones et al., 1977), BPS = Belief in the Paranormal Scale (Musch & Ehrenberg, 2002), MMU-N = Manchester Metropolitan University New (see Dagnall et al., 2010), MMU-PS = Manchester Metropolitan University Paranormal Scale (see Dagnall et al., 2010), SSUB = Survey of Scientifically Unsubstantiated Beliefs (Irwin & Marks, 2013), OS = Occultism Scale (Böttinger, 1976), PS = Paranormal Scale (Orenstein, 2002), AEI = Anomalous Experiences Inventory (Gallagher et al., 1994; includes a ‘belief’ subscale).
- 1. Broad CD. The relevance of psychical research to philosophy. Philosophy. 1949 Oct;24(91):291–309.
- 2. BMG Research. BMG Halloween Poll: A third of Brits believe in ghosts, spirits, or other types of paranormal activity [Internet]. Birmingham (UK): BMG Research; 2017 Oct [cited 2021 Oct]. Available from: https://www.bmgresearch.co.uk/bmg-halloween-poll-third-brits-believe-ghosts-spirits-types-paranormal-activity/
- 3. Pechey R, Halligan P. The prevalence of delusion-like beliefs relative to sociocultural beliefs in the general population. Psychopathology. 2011;44(2):106–15. pmid:21196811
- 4. Pérez Navarro JM, Martínez Guerra X. Personality, cognition, and morbidity in the understanding of paranormal belief. PsyCh journal. 2020 Feb;9(1):118–31. pmid:31183994
- 5. Eder E, Turic K, Milasowszky N, Van Adzin K, Hergovich A. The relationships between paranormal belief, creationism, intelligent design and evolution at secondary schools in Vienna (Austria). Science & Education. 2011 May;20(5):517–34.
- 6. Göritz AS, Schumacher J. The WWW as a research medium: An illustrative survey on paranormal belief. Perceptual and Motor Skills. 2000 Jun;90(3_suppl):1195–206. pmid:10939070
- 7. Clarke D. Belief in the paranormal: A New Zealand survey. Journal of the Society for Psychical Research. 1991 Apr.
- 8. Tobacyk J, Milford G. Belief in paranormal phenomena: assessment instrument development and implications for personality functioning. Journal of personality and social psychology. 1983 May;44(5):1029.
- 9. Tobacyk JJ. A revised paranormal belief scale. The International Journal of Transpersonal Studies. 2004;23(23):94–8.
- 10. Thalbourne MA, Delin PS. A new instrument for measuring the sheep-goat variable: its psychometric properties and factor structure. Journal of the Society for Psychical Research. 1993 Jul.
- 11. Drinkwater K, Denovan A, Dagnall N, Parker A. The Australian sheep-goat scale: an evaluation of factor structure and convergent validity. Frontiers in psychology. 2018 Aug 28;9:1594. pmid:30210415
- 12. Drinkwater K, Denovan A, Dagnall N, Parker A. An assessment of the dimensionality and factorial structure of the revised paranormal belief scale. Frontiers in Psychology. 2017 Sep 26;8:1693. pmid:29018398
- 13. Wiseman R, Watt C. Measuring superstitious belief: Why lucky charms matter. Personality and individual differences. 2004 Dec 1;37(8):1533–41.
- 14. Pennycook G, Cheyne JA, Barr N, Koehler DJ, Fugelsang JA. On the reception and detection of pseudo-profound bullshit. Judgment and Decision making. 2015 Nov 1;10(6):549–63.
- 15. Willard AK, Norenzayan A. Cognitive biases explain religious belief, paranormal belief, and belief in life’s purpose. Cognition. 2013 Nov 1;129(2):379–91. pmid:23974049
- 16. Lindeman M. Biases in intuitive reasoning and belief in complementary and alternative medicine. Psychology and Health. 2011 Mar 1;26(3):371–82. pmid:20419560
- 17. Brotherton R, French CC, Pickering AD. Measuring belief in conspiracy theories: The generic conspiracist beliefs scale. Frontiers in psychology. 2013 May 21;4:279. pmid:23734136
- 18. Cardeña E, Marcusson-Clavertz D, Wasmuth J. Hypnotizability and dissociation as predictors of performance in a precognition task: A pilot study. Journal of Parapsychology. 2009 Mar 1;73(1).
- 19. Lobato E, Mendoza J, Sims V, Chin M. Examining the relationship between conspiracy theories, paranormal beliefs, and pseudoscience acceptance among a university population. Applied Cognitive Psychology. 2014 Sep;28(5):617–25.
- 20. Williams E, Francis LJ, Robbins M. Personality and paranormal belief: A study among adolescents. Pastoral Psychology. 2007 Sep 1;56(1):9–14.
- 21. Peltzer K. Paranormal beliefs and personality among black South African students. Social behavior and personality. 2002 May 20;30(4):391.
- 22. Bader CD, Baker JO, Molle A. Countervailing forces: Religiosity and paranormal belief in Italy. Journal for the Scientific Study of Religion. 2012 Dec;51(4):705–20.
- 23. Van den Bulck J, Custers K. Belief in complementary and alternative medicine is related to age and paranormal beliefs in adults. European Journal of Public Health. 2010 Apr 1;20(2):227–30. pmid:19587233
- 24. Sparks G, Miller W. Investigating the relationship between exposure to television programs that depict paranormal phenomena and beliefs in the paranormal. Communication Monographs. 2001 Mar 1;68(1):98–113.
- 25. Andrews RA, Tyson P. The superstitious scholar: Paranormal belief within a student population and its relationship to academic ability and discipline. Journal of Applied Research in Higher Education. 2019 Jul 1.
- 26. Aarnio K, Lindeman M. Paranormal beliefs, education, and thinking styles. Personality and individual differences. 2005 Nov 1;39(7):1227–36.
- 27. Ward SJ, King LA. Examining the roles of intuition and gender in magical beliefs. Journal of Research in Personality. 2020 Jun 1;86:103956.
- 28. Rogers P, Fisk JE, Lowrie E. Paranormal belief, thinking style preference and susceptibility to confirmatory conjunction errors. Consciousness and Cognition. 2018 Oct 1;65:182–96. pmid:30199770
- 29. van Elk M. The self-attribution bias and paranormal beliefs. Consciousness and cognition. 2017 Mar 1;49:313–21. pmid:28236749
- 30. Rogers P, Qualter P, Wood D. The impact of event vividness, event severity, and prior paranormal belief on attributions towards a depicted remarkable coincidence experience: Two studies examining the misattribution hypothesis. British Journal of Psychology. 2016 Nov;107(4):710–51. pmid:26829906
- 31. Voracek M. Who wants to believe? Associations between digit ratio (2D: 4D) and paranormal and superstitious beliefs. Personality and Individual Differences. 2009 Jul 1;47(2):105–9.
- 32. Watt C, Watson S, Wilson L. Cognitive and psychological mediators of anxiety: Evidence from a study of paranormal belief and perceived childhood control. Personality and Individual Differences. 2007 Jan 1;42(2):335–43.
- 33. Lange R, Irwin HJ, Houran J. Objective measurement of paranormal belief: A rebuttal to Vitulli. Psychological Reports. 2001 Jun;88(3):641–4. pmid:11507996
- 34. Vitulli WF, Tipton SM, Rowe JL. Beliefs in the paranormal: Age and sex differences among elderly persons and undergraduate students. Psychological Reports. 1999 Dec;85(3):847–55. pmid:10672744
- 35. Irwin HJ. Age and sex differences in paranormal beliefs: a response to Vitulli, Tipton, and Rowe (1999). Psychological Reports. 2000 Apr;86(2):595–6. pmid:10840917
- 36. Vitulli WF. Rejoinder to Irwin’s (2000) “Age and Sex Differences in Paranormal Beliefs: A Response to Vitulli, Tipton, and Rowe (1999)”. Psychological Reports. 2000 Oct;87(2):699–700. pmid:11086628
- 37. Miyake A, Friedman NP, Emerson MJ, Witzki AH, Howerter A, Wager TD. The unity and diversity of executive functions and their contributions to complex “frontal lobe” tasks: A latent variable analysis. Cognitive psychology. 2000 Aug 1;41(1):49–100. pmid:10945922
- 38. Hosseini S, Chaurasia A, Oremus M. The effect of religion and spirituality on cognitive function: A systematic review. The Gerontologist. 2019 Mar 14;59(2):e76–85. pmid:28498999
- 39. Kaufman Y, Anaki D, Binns M, Freedman M. Cognitive decline in Alzheimer disease: Impact of spirituality, religiosity, and QOL. Neurology. 2007 May 1;68(18):1509–14. pmid:17470754
- 40. Kraal AZ, Sharifian N, Zaheed AB, Sol K, Zahodne LB. Dimensions of religious involvement represent positive pathways in cognitive aging. Research on aging. 2019 Oct;41(9):868–90. pmid:31303123
- 41. Ritchie SJ, Gow AJ, Deary IJ. Religiosity is negatively associated with later-life intelligence, but not with age-related cognitive decline. Intelligence. 2014 Sep 1;46:9–17. pmid:25278639
- 42. Zuckerman M, Silberman J, Hall JA. The relation between intelligence and religiosity: A meta-analysis and some proposed explanations. Personality and social psychology review. 2013 Nov;17(4):325–54. pmid:23921675
- 43. Georgiou N, Delfabbro P, Balzan R. Conspiracy beliefs in the general population: The importance of psychopathology, cognitive style and educational attainment. Personality and Individual Differences. 2019 Dec 1;151:109521.
- 44. Mikušková EB. Conspiracy beliefs of future teachers. Current Psychology. 2018 Sep;37(3):692–701.
- 45. van der Wal RC, Sutton RM, Lange J, Braga JP. Suspicious binds: Conspiracy thinking and tenuous perceptions of causal connections between co‐occurring and spuriously correlated events. European Journal of Social Psychology. 2018 Dec;48(7):970–89. pmid:30555189
- 46. van Prooijen JW, Douglas KM, De Inocencio C. Connecting the dots: Illusory pattern perception predicts belief in conspiracies and the supernatural. European journal of social psychology. 2018 Apr;48(3):320–35. pmid:29695889
- 47. Su Y, Lee DK, Xiao X, Li W, Shu W. Who endorses conspiracy theories? A moderated mediation model of Chinese and international social media use, media skepticism, need for cognition, and COVID-19 conspiracy theory endorsement in China. Computers in Human Behavior. 2021 Jul 1;120:106760. pmid:34955595
- 48. Denovan A, Dagnall N, Drinkwater K, Parker A, Neave N. Conspiracist beliefs, intuitive thinking, and schizotypal facets: a further evaluation. Applied Cognitive Psychology. 2020 Nov;34(6):1394–405.
- 49. Alper S, Bayrak F, Yilmaz O. Psychological correlates of COVID-19 conspiracy beliefs and preventive measures: Evidence from Turkey. Current psychology. 2020 Jun 29:1–0. pmid:32837129
- 50. Georgiou N, Delfabbro P, Balzan R. Conspiracy-Beliefs and Receptivity to Disconfirmatory Information: A Study Using the BADE Task. SAGE Open. 2021 Mar;11(1):21582440211006131.
- 51. Lamberty PK, Hellmann JH, Oeberst A. The winner knew it all? Conspiracy beliefs and hindsight perspective after the 2016 US general election. Personality and Individual Differences. 2018 Mar 1;123:236–40.
- 52. Collins L. Bullspotting: finding facts in the age of misinformation. Prometheus Books; 2012 Oct 30.
- 53. Irwin HJ. Belief in the paranormal: A review of the empirical literature. Journal of the american society for Psychical research. 1993 Jan 1;87(1):1–39.
- 54. Briner RB, Denyer D. Systematic review and evidence synthesis as a practice and scholarship tool. Handbook of evidence-based management: Companies, classrooms and research. 2012 Nov:112–29.
- 55. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Bmj. 2021 Mar 29;372.
- 56. Mallett R, Hagen-Zanker J, Slater R, Duvendack M. The benefits and challenges of using systematic reviews in international development research. Journal of development effectiveness. 2012 Sep 1;4(3):445–55.
- 57. Harari MB, Parola HR, Hartwell CJ, Riegelman A. Literature searches in systematic reviews and meta-analyses: A review, evaluation, and recommendations. Journal of Vocational Behavior. 2020 Apr 1;118:103377.
- 58. Hartshorne JK, Germine LT. When does cognitive functioning peak? The asynchronous rise and fall of different cognitive abilities across the life span. Psychological science. 2015 Apr;26(4):433–43. pmid:25770099
- 59. Greening EK. The relationship between false memory and paranormal belief [dissertation]. Hertfordshire, UK: University of Hertfordshire; 2002.
- 60. Downes MJ, Brennan ML, Williams HC, Dean RS. Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS). BMJ open. 2016 Dec 1;6(12):e011458. pmid:27932337
- 61. Lannoy S, Duka T, Carbia C, Billieux J, Fontesse S, Dormal V, et al. Emotional processes in binge drinking: A systematic review and perspective. Clinical Psychology Review. 2021 Jan 13:101971. pmid:33497920
- 62. Cohen J. Statistical power analysis for the social sciences. 2nd ed. New York: Lawrence Erlbaum Associates; 1988.
- 63. Brugger P, Regard M, Landis T. Belief in extrasensory perception and illusory control: A replication. Journal of Psychology. 1991 Jul 1;125(4):501–2.
- 64. Prike T, Arnold MM, Williamson P. The relationship between anomalistic belief and biases of evidence integration and jumping to conclusions. Acta psychologica. 2018 Oct 1;190:217–27. pmid:30145485
- 65. Schienle A, Vaitl D, Stark R. Covariation bias and paranormal belief. Psychological reports. 1996 Feb;78(1):291–305. pmid:8839320
- 66. Van Elk M. Perceptual biases in relation to paranormal and conspiracy beliefs. PloS one. 2015 Jun 26;10(6):e0130422. pmid:26114604
- 67. Simmonds-Moore C. Exploring the perceptual biases associated with believing and disbelieving in paranormal phenomena. Consciousness and cognition. 2014 Aug 1;28:30–46. pmid:25036936
- 68. Irwin HJ, Drinkwater K, Dagnall N. Are believers in the paranormal inclined to jump to conclusions?. Australian Journal of Parapsychology. 2014 Jun;14(1):69–82.
- 69. Denovan A, Dagnall N, Drinkwater K, Parker A. Latent profile analysis of schizotypy and paranormal belief: Associations with probabilistic reasoning performance. Frontiers in psychology. 2018 Jan 26;9:35. pmid:29434562
- 70. Wilson JA. Reducing pseudoscientific and paranormal beliefs in university students through a course in science and critical thinking. Science & Education. 2018 Mar;27(1):183–210.
- 71. Betsch T, Aßmann L, Glöckner A. Paranormal beliefs and individual differences: story seeking without reasoned review. Heliyon. 2020 Jun 1;6(6):e04259. pmid:32637687
- 72. Irwin HJ. Thinking style and the making of a paranormal disbelief. Journal of the Society for Psychical Research. 2015 Jul 1;79(920):129–39.
- 73. Krummenacher P, Mohr C, Haker H, Brugger P. Dopamine, paranormal belief, and the detection of meaningful stimuli. Journal of Cognitive Neuroscience. 2010 Aug 1;22(8):1670–81. pmid:19642883
- 74. Ballová Mikušková E, Čavojová V. The effect of analytic cognitive style on credulity. Frontiers in psychology. 2020 Oct 15;11:2682. pmid:33178085
- 75. Laws KR. Negativland-a home for all findings in psychology. BMC psychology. 2013 Dec;1(1):1–8. pmid:25566353
- 76. Cardeña E. The experimental evidence for parapsychological phenomena: A review. American Psychologist. 2018 Jul;73(5):663–77. pmid:29792448
- 77. Blackmore SJ. Probability misjudgment and belief in the paranormal: A newspaper survey. British Journal of Psychology. 1997 Nov;88(4):683–9.
- 78. Lawrence E, Peters E. Reasoning in believers in the paranormal. The Journal of nervous and mental disease. 2004 Nov 1;192(11):727–33. pmid:15505516
- 79. Gray SJ, Gallo DA. Paranormal psychic believers and skeptics: a large-scale test of the cognitive differences hypothesis. Memory & cognition. 2016 Feb;44(2):242–61. pmid:26503412
- 80. Prike T, Arnold MM, Williamson P. Psychics, aliens, or experience? Using the Anomalistic Belief Scale to examine the relationship between type of belief and probabilistic reasoning. Consciousness and cognition. 2017 Aug 1;53:151–64. pmid:28683360
- 81. Ståhl T, Van Prooijen JW. Epistemic rationality: Skepticism toward unfounded beliefs requires sufficient cognitive ability and motivation to be rational. Personality and Individual Differences. 2018 Feb 1;122:155–63.
- 82. Stroebe W, Gadenne V, Nijstad BA. Do our psychological laws apply only to college students?: External validity revisited. Basic and Applied Social Psychology. 2018 Nov 2;40(6):384–95.
- 83. Dean CE, Akhtar S, Gale TM, Irvine K, Wiseman R, Laws KR. Development of the Paranormal and Supernatural Beliefs Scale using classical and modern test theory. BMC Psychology. 2021 Dec;9(1):1–20. pmid:33388086
- 84. Lange R, Thalbourne MA. Rasch scaling paranormal belief and experience: Structure and semantics of Thalbourne’s Australian Sheep-Goat Scale. Psychological Reports. 2002 Dec;91(3_suppl):1065–73. pmid:12585514
- 85. Lange R, Irwin HJ, Houran J. Top-down purification of Tobacyk’s revised paranormal belief scale. Personality and individual differences. 2000 Jul 1;29(1):131–56.
- 86. Werner S, Praxedes M, Kim HG. The reporting of nonresponse analyses in survey research. Organizational Research Methods. 2007 Apr;10(2):287–95.
- 87. Hawkins DF. Estimation of nonresponse bias. Sociological Methods & Research. 1975 May;3(4):461–88.
- 88. Tobacyk J. Cognitive complexity and paranormal beliefs. Psychological Reports. 1983 Feb;52(1):101–2.
- 89. Prince M. Epidemiology. In Wright P, Stern J, Phelan M, editors. Core Psychiatry. Elsevier Ltd; 2012. p. 115–29.
- 90. Lindeman M, Svedholm‐Häkkinen AM. Does poor understanding of physical world predict religious and paranormal beliefs?. Applied Cognitive Psychology. 2016 Sep;30(5):736–42.
- 91. Kontto J, Tolonen H, Salonen AH. What are we missing? The profile of non-respondents in the Finnish gambling 2015 survey. Scandinavian journal of public health. 2020 Feb;48(1):80–7. pmid:31096858
- 92. Tolonen H, Dobson A, Kulathinal S, WHO Monica Project. Effect on trend estimates of the difference between survey respondents and non-respondents: results from 27 populations in the WHO MONICA Project. European journal of epidemiology. 2005 Nov 1;20(11):887–98. pmid:16284866
- 93. Dalecki MG, Whitehead JC, Blomquist GC. Sample non-response bias and aggregate benefits in contingent valuation: an examination of early, late and non-respondents. Journal of Environmental Management. 1993 Jun 1;38(2):133–43.
- 94. Gannon MJ, Nothern JC, Carroll SJ. Characteristics of nonrespondents among workers. Journal of Applied Psychology. 1971 Dec;55(6):586–8.
- 95. Eysenbach G. Improving the quality of Web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). Journal of medical Internet research. 2004 Sep 29;6(3):e132.
- 96. Ioannidis JP. Limitations are not properly acknowledged in the scientific literature. Journal of clinical epidemiology. 2007 Apr 1;60(4):324–9. pmid:17346604
- 97. Horton R. The hidden research paper. Jama. 2002 Jun 5;287(21):2775–8. pmid:12038909
- 98. Blackmore S, Trościanko T. Belief in the paranormal: Probability judgements, illusory control, and the ‘chance baseline shift’. British journal of Psychology. 1985 Nov;76(4):459–68.
- 99. Watson G. Watson-Glaser critical thinking appraisal. San Antonio, TX: Psychological Corporation; 1980.
- 100. Raven JS. Raven’s Matrices: The Advanced Progressive Matrices Set 1. Oxford, England: Oxford Psychologists Press; 1976.
- 101. Raven J, Raven JC, Court JH. Manual for standard progressive matrices 2000 edition. Oxford, England: Oxford Psychologists Press; 2000.
- 102. Pacini R, Epstein S. The relation of rational and experiential information processing styles to personality, basic beliefs, and the ratio-bias phenomenon. Journal of personality and social psychology. 1999 Jun;76(6):972–87. pmid:10402681
- 103. Frederick S. Cognitive reflection and decision making. Journal of Economic perspectives. 2005 Dec;19(4):25–42.
- 104. Roediger HL, McDermott KB. Creating false memories: Remembering words not presented in lists. Journal of experimental psychology: Learning, Memory, and Cognition. 1995 Jul;21(4):803–14.
- 105. Berg EA. A simple objective technique for measuring flexibility in thinking. The Journal of general psychology. 1948 Jul 1;39(1):15–22. pmid:18889466
- 106. Grant DA, Berg E. A behavioral analysis of degree of reinforcement and ease of shifting to new responses in a Weigl-type card-sorting problem. Journal of experimental psychology. 1948 Aug;38(4):404–11. pmid:18874598
- 107. Blackmore S, Moore R. Seeing things: Visual recognition and belief in the paranormal. European Journal of Parapsychology. 1994;10(1994):91–103.
- 108. Riekki T, Lindeman M, Aleneff M, Halme A, Nuortimo A. Paranormal and religious believers are more prone to illusory face perception than skeptics and non‐believers. Applied Cognitive Psychology. 2013 Mar;27(2):150–5.
- 109. Caputo GB. Strange-face illusions during interpersonal-gazing and personality differences of spirituality. Explore. 2017 Nov 1;13(6):379–85. pmid:28964712
- 110. Caputo GB. Dissociation and hallucinations in dyads engaged through interpersonal gazing. Psychiatry research. 2015 Aug 30;228(3):659–63. pmid:26112448
- 111. Van Elk M. Paranormal believers are more prone to illusory agency detection than skeptics. Consciousness and cognition. 2013 Sep 1;22(3):1041–6. pmid:23933505
- 112. Griffiths O, Shehabi N, Murphy RA, Le Pelley ME. Superstition predicts perception of illusory control. British Journal of Psychology. 2019 Aug;110(3):499–518. pmid:30144046
- 113. Blanco F, Barberia I, Matute H. Individuals who believe in the paranormal expose themselves to biased information and develop more causal illusions than nonbelievers in the laboratory. PloS one. 2015 Jul 15;10(7):e0131378. pmid:26177025
- 114. Rudski J. The illusion of control, superstitious belief, and optimism. Current Psychology. 2004 Dec 1;22(4):306–15.
- 115. Drinkwater K, Dagnall N, Denovan A, Parker A. The moderating effect of mental toughness: perception of risk and belief in the paranormal. Psychological reports. 2019 Feb;122(1):268–87. pmid:29402179
- 116. Gagné H, McKelvie SJ. Effects of paranormal beliefs on response bias and self-assessment of performance in a signal detection task. Australian journal of psychology. 1990 Aug 1;42(2):187–95.
- 117. Dudley RE, John CH, Young AW, Over DE. Normal and abnormal reasoning in people with delusions. British Journal of Clinical Psychology. 1997 May;36(2):243–58. pmid:9167864
- 118. Peters E, Joseph S, Day S, Garety P. Measuring delusional ideation: the 21-item Peters et al. Delusions Inventory (PDI). Schizophrenia bulletin. 2004 Jan 1;30(4):1005–22. pmid:15954204
- 119. Lesaffre L, Kuhn G, Jopp DS, Mantzouranis G, Diouf CN, Rochat D, et al. Talking to the Dead in the Classroom: How a Supposedly Psychic Event Impacts Beliefs and Feelings. Psychological Reports. 2020 Sep 5:0033294120961068.
- 120. Barberia I, Tubau E, Matute H, Rodríguez-Ferreiro J. A short educational intervention diminishes causal illusions and specific paranormal beliefs in undergraduates. PLoS One. 2018 Jan 31;13(1):e0191907. pmid:29385184
- 121. Garety PA, Freeman D, Jolley S, Dunn G, Bebbington PE, Fowler DG, et al. Reasoning, emotions, and delusional conviction in psychosis. Journal of abnormal psychology. 2005 Aug;114(3):373. pmid:16117574
- 122. Peters E, Moritz S, Wiseman Z, Greenwood K, Kuipers E, Schwannauer M, et al. The cognitive biases questionnaire for psychosis (CBQP). Schizophrenia Research [Internet]. 2010 [cited 2022 Mar] Apr;117(2–3):413. Available from: https://www.sciencedirect.com/science/article/pii/S0920996410008443?via%3Dihub
- 123. Peters ER, Moritz S, Schwannauer M, Wiseman Z, Greenwood KE, Scott J, et al. Cognitive biases questionnaire for psychosis. Schizophrenia bulletin. 2014 Mar 1;40(2):300–13. pmid:23413104
- 124. Dagnall N, Denovan A, Drinkwater K, Parker A, Clough P. Toward a better understanding of the relationship between belief in the paranormal and statistical bias: the potential role of schizotypy. Frontiers in psychology. 2016 Jul 14;7:1045. pmid:27471481
- 125. Dagnall N, Drinkwater K, Denovan A, Parker A, Rowley K. Misperception of chance, conjunction, framing effects and belief in the paranormal: a further evaluation. Applied Cognitive Psychology. 2016 May;30(3):409–19.
- 126. Dagnall N, Parker A, Munley G. Paranormal belief and reasoning. Personality and Individual Differences. 2007 Oct 1;43(6):1406–15.
- 127. Dagnall N, Drinkwater K, Parker A, Rowley K. Misperception of chance, conjunction, belief in the paranormal and reality testing: a reappraisal. Applied Cognitive Psychology. 2014 Sep;28(5):711–9.
- 128. Rogers P, Davis T, Fisk J. Paranormal belief and susceptibility to the conjunction fallacy. Applied Cognitive Psychology: The Official Journal of the Society for Applied Research in Memory and Cognition. 2009 May;23(4):524–42
- 129. Rogers P, Fisk JE, Lowrie E. Paranormal believers’ susceptibility to confirmatory versus disconfirmatory conjunctions. Applied Cognitive Psychology. 2016 Jul;30(4):628–34.
- 130. Musch J, Ehrenberg K. Probability misjudgment, cognitive ability, and belief in the paranormal. British Journal of Psychology. 2002 May;93(2):169–77.
- 131. Brugger P, Landis T, Regard M. A ‘sheep-goat effect’ in repetition avoidance: Extra-sensory perception as an effect of subjective probability? British Journal of Psychology. 1990 Nov;81(4):455–68.
- 132. Bressan P. The connection between random sequences, everyday coincidences, and belief in the paranormal. Applied Cognitive Psychology: The Official Journal of the Society for Applied Research in Memory and Cognition. 2002 Jan;16(1):17–34.
- 133. Roberts MJ, Seager PB. Predicting belief in paranormal phenomena: A comparison of conditional and probabilistic reasoning. Applied Cognitive Psychology: The Official Journal of the Society for Applied Research in Memory and Cognition. 1999 Oct;13(5):443–50.
- 134. Wierzbicki M. Reasoning errors and belief in the paranormal. The Journal of Social Psychology. 1985 Aug 1;125(4):489–94.
- 135. McLean CP, Miller NA. Changes in critical thinking skills following a course on science and pseudoscience: A quasi-experimental study. Teaching of Psychology. 2010 Apr;37(2):85–90.
- 136. Alcock JE, Otis LP. Critical thinking and belief in the paranormal. Psychological Reports. 1980 Apr;46(2):479–82.
- 137. Watson G, Glaser EM. Critical Thinking Appraisal. New York: Harcourt, Brace & World; 1964.
- 138. Morgan RK, Morgan DL. Critical thinking and belief in the paranormal. College Student Journal. 1998;32(1):135–9.
- 139. Hergovich A, Arendasy M. Critical thinking ability and belief in the paranormal. Personality and Individual Differences. 2005 Jun 1;38(8):1805–12.
- 140. Roe CA. Critical thinking and belief in the paranormal: A re‐evaluation. British Journal of Psychology. 1999 Feb;90(1):85–98. pmid:10085547
- 141. Royalty J. The generalizability of critical thinking: Paranormal beliefs versus statistical reasoning. The Journal of Genetic Psychology. 1995 Dec 1;156(4):477–88.
- 142. Formann AK, Piswanger K. WMT—Wiener Matrizentest. Weinheim: Beltz Test; 1979.
- 143. Tobacyk J. Paranormal belief and college grade point average. Psychological Reports. 1984 Feb;54(1):217–8.
- 144. Smith MD, Foster CL, Stovin G. Intelligence and paranormal belief: Examining the role of context. The Journal of Parapsychology. 1998 Mar 1;62(1):65–78.
- 145. Wechsler D. Wechsler Adult Intelligence Scale. New York: Psychological Corporation; 1955. pmid:13263050
- 146. Jackson DN. Manual for the Multidimensional Aptitude Battery. Port Huron, MI: Research Psychologists Press; 1985.
- 147. Stuart-Hamilton I, Nayak L, Priest L. Intelligence, belief in the paranormal, knowledge of probability and aging. Educational Gerontology. 2006 Mar 1;32(3):173–84.
- 148. Branković M. Who believes in ESP: Cognitive and motivational determinants of the belief in extra-sensory perception. Europe’s Journal of Psychology. 2019 Feb;15(1):120–39. pmid:30915177
- 149. Lasikiewicz N. Perceived stress, thinking style, and paranormal belief. Imagination, Cognition and Personality. 2016 Mar;35(3):306–20.
- 150. Majima Y. Belief in pseudoscience, cognitive style and science literacy. Applied Cognitive Psychology. 2015 Jul;29(4):552–9.
- 151. Svedholm AM, Lindeman M. The separate roles of the reflective mind and involuntary inhibitory control in gatekeeping paranormal beliefs and the underlying intuitive confusions. British Journal of Psychology. 2013 Aug;104(3):303–19. pmid:23848383
- 152. Genovese JE. Paranormal beliefs, schizotypy, and thinking styles among teachers and future teachers. Personality and Individual Differences. 2005 Jul 1;39(1):93–102.
- 153. Rogers P, Hattersley M, French CC. Gender role orientation, thinking style preference and facets of adult paranormality: A mediation analysis. Consciousness and Cognition. 2019 Nov 1;76:102821. pmid:31590056
- 154. Rizeq J, Flora DB, Toplak ME. An examination of the underlying dimensional structure of three domains of contaminated mindware: paranormal beliefs, conspiracy beliefs, and anti-science attitudes. Thinking & Reasoning. 2021 Apr 3;27(2):187–211.
- 155. Norris P, Epstein S. An experiential thinking style: Its facets and relations with objective and subjective criterion measures. Journal of Personality. 2011 Oct;79(5):1043–80. pmid:21241307
- 156. Stanovich KE, West RF. Reasoning independently of prior belief and individual differences in actively open-minded thinking. Journal of Educational Psychology. 1997 Jun;89(2):342–57.
- 157. Sá WC, West RF, Stanovich KE. The domain specificity and generality of belief bias: Searching for a generalizable critical thinking skill. Journal of Educational Psychology. 1999 Sep;91(3):497–510.
- 158. Gianotti LR, Mohr C, Pizzagalli D, Lehmann D, Brugger P. Associative processing and paranormal belief. Psychiatry and Clinical Neurosciences. 2001 Dec;55(6):595–603. pmid:11737792
- 159. Hergovich A. Field dependence, suggestibility and belief in paranormal phenomena. Personality and Individual Differences. 2003 Feb 1;34(2):195–209.
- 160. Hergovich A, Hörndler H. Gestaltwahrnehmungstest: ein computerbasiertes Verfahren zur Messung der Feldartikulation [Gestalt perception test: a computer-based method for measuring field articulation]; Manual. Swets Test Services; 1994.
- 161. Wilson K, French CC. The relationship between susceptibility to false memories, dissociativity, and paranormal belief and experience. Personality and Individual Differences. 2006 Dec 1;41(8):1493–502.
- 162. Dudley RT. Effect of restriction of working memory on reported paranormal belief. Psychological Reports. 1999 Feb;84(1):313–6. pmid:10203968
- 163. Lindeman M, Riekki T, Hood BM. Is weaker inhibition associated with supernatural beliefs?. Journal of Cognition and Culture. 2011 Jan 1;11(1–2):231–9.
- 164. Stroop JR. Studies of interference in serial verbal reactions. Journal of Experimental Psychology. 1935 Dec;18(6):643–62.
- 165. Wain O, Spinella M. Executive functions in morality, religion, and paranormal beliefs. International Journal of Neuroscience. 2007 Jan 1;117(1):135–46.
- 166. Pizzagalli D, Lehmann D, Brugger P. Lateralized direct and indirect semantic priming effects in subjects with paranormal experiences and beliefs. Psychopathology. 2001;34(2):75–80. pmid:11244378
- 167. Palmer J, Mohr C, Krummenacher P, Brugger P. Implicit learning of sequential bias in a guessing task: Failure to demonstrate effects of dopamine administration and paranormal belief. Consciousness and Cognition. 2007 Jun 1;16(2):498–506. pmid:17329128
- 168. Irwin HJ, Green MJ. Schizotypal processes and belief in the paranormal: A multidimensional study. European Journal of Parapsychology. 1999;14:1–5.
- 169. Hedge C, Powell G, Sumner P. The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences. Behavior Research Methods. 2018 Jun;50(3):1166–86. pmid:28726177
- 170. Song F, Parekh-Bhurke S, Hooper L, Loke YK, Ryder JJ, Sutton AJ, et al. Extent of publication bias in different categories of research cohorts: a meta-analysis of empirical studies. BMC Medical Research Methodology. 2009 Dec;9(1):1–4. pmid:19941636
- 171. Hergovich A, Schott R, Burger C. Biased evaluation of abstracts depending on topic and conclusion: Further evidence of a confirmation bias within scientific psychology. Current Psychology. 2010 Sep;29(3):188–209.
- 172. Laws KR. Psychology, replication & beyond. BMC Psychology. 2016 Dec;4(1):1–8.
- 173. Bösch H, Steinkamp F, Boller E. Examining psychokinesis: The interaction of human intention with random number generators—A meta-analysis. Psychological Bulletin. 2006 Jul;132(4):497–523. pmid:16822162
- 174. Radin D, Nelson R, Dobyns Y, Houtkooper J. Reexamining psychokinesis: comment on Bösch, Steinkamp, and Boller (2006). Psychological Bulletin. 2006 Aug;132(4):529–32. pmid:16822164
- 175. Irwin HJ, Marks AD, Geiser C. Belief in the Paranormal: A State, or a Trait?. Journal of Parapsychology. 2018 Mar 1;82(1):24–40.
- 176. Diamond A. Executive functions. Annual Review of Psychology. 2013 Jan 3;64:135–68. pmid:23020641
- 177. Duncan J. EPS Mid-Career Award 2004: brain mechanisms of attention. The Quarterly Journal of Experimental Psychology. 2006 Jan 1;59(1):2–7. pmid:16556554
- 178. Duncan J, Owen AM. Common regions of the human frontal lobe recruited by diverse cognitive demands. Trends in Neurosciences. 2000 Oct 1;23(10):475–83. pmid:11006464
- 179. Nyberg L, Marklund P, Persson J, Cabeza R, Forkstam C, Petersson KM, et al. Common prefrontal activations during working memory, episodic memory, and semantic memory. Neuropsychologia. 2003 Jan 1;41(3):371–7. pmid:12457761
- 180. Duncan J. The multiple-demand (MD) system of the primate brain: mental programs for intelligent behaviour. Trends in Cognitive Sciences. 2010 Apr 1;14(4):172–9. pmid:20171926
- 181. Greenland S, Robins J. Invited commentary: ecologic studies—biases, misconceptions, and counterexamples. American Journal of Epidemiology. 1994 Apr 15;139(8):747–60. pmid:8178788
- 182. Jüni P, Witschi A, Bloch R, Egger M. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA. 1999 Sep 15;282(11):1054–60. pmid:10493204
- 183. Greenland S, O’Rourke K. On the bias produced by quality scores in meta-analysis, and a hierarchical view of proposed solutions. Biostatistics. 2001 Dec 1;2(4):463–71. pmid:12933636
- 184. Saher M, Lindeman M. Alternative medicine: A psychological perspective. Personality and Individual Differences. 2005 Oct 1;39(6):1169–78.
- 185. Lindeman M, Aarnio K. Paranormal beliefs: Their dimensionality and correlates. European Journal of Personality. 2006 Nov;20(7):585–602.
- 186. Ferguson HJ, Brunsdon VE, Bradford EE. The developmental trajectories of executive function from adolescence to old age. Scientific Reports. 2021 Jan 14;11(1):1–7. pmid:33414495
- 187. Emmons CF, Sobal J. Paranormal beliefs: Testing the marginality hypothesis. Sociological Focus. 1981 Jan 1;14(1):49–56.
- 188. Murman DL. The impact of age on cognition. Seminars in Hearing. 2015 Aug;36(3):111–21. Thieme Medical Publishers. pmid:27516712
- 189. Salthouse TA, Atkinson TM, Berish DE. Executive functioning as a potential mediator of age-related cognitive decline in normal adults. Journal of Experimental Psychology: General. 2003 Dec;132(4):566–94. pmid:14640849
- 190. Kennedy KM, Raz N. Aging white matter and cognition: differential effects of regional variations in diffusion properties on memory, executive functions, and speed. Neuropsychologia. 2009 Feb 1;47(3):916–27. pmid:19166865
- 191. Tisserand DJ, Jolles J. On the involvement of prefrontal networks in cognitive ageing. Cortex. 2003 Jan 1;39(4–5):1107–28. pmid:14584569