
Leave the world(view) behind, but keep the words: The effect of conspiracism on writing

  • Alessandro Miani,

    Roles Data curation, Formal analysis, Investigation, Methodology, Resources, Validation, Writing – original draft

    alessandro.miani@bristol.ac.uk

    Affiliations School of Psychological Science, University of Bristol, Bristol, United Kingdom, Department of Psychology, University of Fribourg, Fribourg, Switzerland

  • Ines Adornetti,

    Roles Conceptualization, Investigation, Methodology, Project administration, Resources, Supervision, Writing – original draft

    Affiliation Cosmic Lab, Department of Philosophy, Communication and Performing Arts, Roma Tre University, Rome, Italy

  • Daniela Altavilla,

    Roles Data curation, Formal analysis, Investigation, Methodology, Resources

    Affiliation Cosmic Lab, Department of Philosophy, Communication and Performing Arts, Roma Tre University, Rome, Italy

  • Valentina Deriu,

    Roles Investigation, Resources

    Affiliation Cosmic Lab, Department of Philosophy, Communication and Performing Arts, Roma Tre University, Rome, Italy

  • Alessandra Chiera,

    Roles Investigation, Resources

    Affiliation Cosmic Lab, Department of Philosophy, Communication and Performing Arts, Roma Tre University, Rome, Italy

  • Francesco Ferretti

    Roles Conceptualization, Investigation, Methodology, Project administration, Resources, Supervision

    Affiliation Cosmic Lab, Department of Philosophy, Communication and Performing Arts, Roma Tre University, Rome, Italy

Abstract

We investigate whether a stable predisposition to interpret events as the result of conspiracies—conspiracism—is associated with the spontaneous generation of conspiratorial content in writing when interpreting ambiguous information. Across two studies (N = 385), participants watched the apocalyptic thriller Leave the World Behind and wrote an essay interpreting its meaning. Each essay was rated by a Large Language Model for its conspiratorial narrative content, that is, the degree to which the text contains claims that the public is being pervasively lied to about aspects of reality, enabling some groups to enact a harmful, self-serving agenda. Contrary to our preregistered hypothesis, we did not find an association between participants’ conspiracism and their essays’ level of conspiratorial narrative content. Exploratory linguistic analyses revealed that conspiracism was associated with greater use of conspiracy-related vocabulary (e.g., deception, government), a disproportionate use of sophisticated words, and increased syntactic complexity. These results suggest that conspiracism may emerge more readily at the lexical level rather than through fully structured narratives. We discuss potential methodological and theoretical factors contributing to these unexpected results, including the roles of context, perceived relevance, motivation, and collective social dynamics. We also consider the possibility that conspiracism may not directly translate into conspiratorial narratives. If so, we recommend comparative research on online vs offline conspiratorial writing to clarify whether conspiracy theories emerge spontaneously from genuine beliefs or are constructed strategically, detached from genuinely held beliefs.

Introduction

Conspiracy theories (CTs) are narratives that interpret events and aspects of reality as covert operations orchestrated by powerful actors with malevolent intentions toward an unwitting population [1–3]. Such narratives are widely endorsed [4,5], persist across historical periods [6,7], and are relatively stable [8,9]. Far from trivial beliefs, endorsement of these alternative theories has mostly negative societal impacts, including decreased support for vaccination [10,11], reduced pro-environmental behavior [12,13], lower adherence to social norms [14,15], as well as increased endorsement of violence [15–17] and radicalization [18,19] (but see also the potential benefits [2] and the error of dismissing CTs a priori [20]). To limit their impact, researchers need to understand the psychological underpinnings and motivations behind belief in CTs, as well as the mechanisms underlying their emergence. Yet, although substantial research has explored cognitive and social factors contributing to their endorsement [21–25], relatively little attention has been devoted to examining the content of these narratives, and virtually no work has investigated how they emerge or how individual psychological differences influence their construction.

Here, we tackle the question of how conspiracy narratives emerge. We focus on conspiracism, a worldview marked by pervasive distrust of authorities and powerful actors [24,26–28] that predisposes individuals to interpret and believe events to be the outcome of a conspiracy [27–29]. Note, however, that this is a deliberate simplification. Conspiracism has been described and operationalized in multiple ways, including conspiracist worldview [29], monological belief system [30], generic conspiracist ideation [31], conspiracy mentality [27], conspiracy thinking [7], and conspiracist mindset [32]. Given the general and exploratory nature of this work, we use the term conspiracism (see, e.g., [33–35]) to indicate both the general tendency to believe in CTs and the possession of a conspiracy worldview. This distinction, however, is important and will be discussed in the limitations.

Reliance on conspiratorial explanations has been associated with fundamental psychological motives, including the epistemic need for understanding and certainty, the existential drive for control and safety, and the social desire to maintain positive personal or group identities [1,2,21,22]. Note that the need-based approach, advanced by [1,2], was initially related to belief in CTs; however, research has also linked these needs to conspiracism (meta-analyzed in [22]), although the strength of the association varies slightly with how conspiracy belief is measured. Individuals confronting uncertainty or threat frequently adopt conspiratorial explanations to mitigate feelings of unpredictability and vulnerability [36,37]. Individuals high in conspiracism are generally characterized by low tolerance for ambiguity [22,38], a pronounced tendency to jump to conclusions [39], and heightened sensitivity to perceiving meaningful connections among random events [40–42]. As such, they are prone to actively seek hidden motives and ultimate truths to make sense of what happens around them [43,44]. They also frequently attribute intentionality [45] and agency to their environment [46,47]. Consequently, they are prone to build implausible associations and inferences, identifying conspiracies even where none exist [48].

The Language of Conspiracy Theories

Conspiracism might influence not only information processing, where uncertain or ambiguous information is interpreted through existing conspiratorial schemas, but also the way in which information is communicated. A way to explore how conspiracism emerges in communication is to focus on the linguistic characteristics of conspiracy narratives. Research in this area is limited, but some efforts have been made. Note that narratological theories traditionally distinguish between story (the content of events), discourse or plot (their textual organization), and narration (the enunciative act that produces them; see, e.g., [49,50]). However, these distinctions are not central to the present investigation, because our analyses focus on the presence of conspiracy-related content and linguistic features rather than on formal narrative structure. Therefore, the terms conspiratorial narratives and conspiratorial discourse are used largely interchangeably throughout this article for the sake of clarity and concision.

Content analyses show that conspiratorial rhetoric is typically characterized by mistrust and refutation of mainstream perspectives [51,52], frequently targeting political opponents in positions of power [53]. On social media platforms like Reddit and Twitter, conspiratorial narratives often use words associated with crime, death, power, dominance, aggression, and deception [54–56]. Similar linguistic patterns have been found in conspiracy-related webpages compared to mainstream ones [57], suggesting a linguistic convergence between online comments and disseminated conspiracy narratives. Importantly, the use of such vocabulary predicts higher engagement and increased sharing behavior on social media [56,57]. Note, however, that these lexical patterns also increase in mainstream documents that mention conspiracies (real or theorized) in their text [57], meaning that they may mark a genre rather than a psychological underpinning of conspiracism.

Network analyses on webpages show higher interconnectedness in conspiracy (vs mainstream) narratives via unrelated themes (e.g., Covid, 5G, Soros) [58]. A similarly high thematic heterogeneity was also found on social media such as Reddit and Twitter [59–61], where conspiracy discussions are not focused on specific events or topics and users are instead organized around diverse interests with overlapping concerns. Unlike actual conspiracies (e.g., Bridgegate), conspiracy theories (e.g., Pizzagate) exhibit narrative elements that span multiple domains and rely on loosely connected relationships [62]. Given such heterogeneity of co-occurring themes, it is not surprising that conspiratorial (vs mainstream) texts tend to show loose semantic associations [63], lower topic specificity and lexical cohesion [58], as well as lower explanatory coherence [64].

The persuasive dimension of conspiracy narratives is also crucial [65]. Experimental evidence indicates that exposure to CTs influences belief formation and behavioral intentions [13,66–68], highlighting their capacity to shape attitudes and actions. One explanation for their persuasiveness is that CTs are creatively engaging narratives [69,70], featuring spectacular, counterintuitive, and attention-grabbing rhetorical elements that appeal particularly to conspiracy believers (but see [71]), who tend to be high in sensation-seeking [70] and need for uniqueness [72]. A large-scale lexical analysis confirmed that conspiratorial language employs creativity-related linguistic features such as novelty, originality, divergence, metaphoricity, and sophistication [63], features that have been associated with persuasion. Likewise, conspiracy narratives often mimic academic discourse through specialized jargon and pseudo-demonstrations [73–77], perhaps to create an illusion of rigorous analysis and expertise, thereby increasing the perceived veracity and truthfulness of the message content. Finally, conspiracy-related (vs mainstream) content on websites tends to be longer [57,78–80], a characteristic generally associated with persuasive intent [81] (note that apart from LOCO, whose paper explicitly reports differences in word count between conspiracy and mainstream texts, the other corpora do not explicitly report this information; here we report the correlation between word count and document-level assessment of conspiracism obtained via ChatGPT ratings in 2024: LOCO [57]: r = .26; DONALD [78]: r = .11; IRMA [79]: r = .16; and GERMA [80]: r = .15). A similar pattern was observed on social media, where conspiracy posts on Reddit are longer [55] and tweets by conspiracy promoters are more frequent [61].

Our contribution

Although the features of conspiracy language reviewed above consistently appear in online conspiracy discourse, a critical question remains: to what extent can we map these linguistic features to individual psychological profiles? Do individual psychological differences influence the construction of conspiracy narratives? To date, no systematic study has investigated how individual differences specifically influence the emergence of conspiratorial narratives. Existing research examining contextual factors shows that exposure to conspiratorial statements decreases reliance on official information and promotes conspiratorial storylines [82,83]. However, these studies did not address the psychological precursors, leaving this crucial link unexplored.

Moving beyond website-level linguistic analyses to individual-level text production is essential to clarify the psychological underpinnings of conspiratorial language. Here, we examine if conspiracism is associated with the interpretation and communication of narratives. In general, individuals selectively retain and transmit information that aligns with their pre-existing concerns, beliefs, and emotional states [84,85], and reinterpret complex or abstract information in ways congruent with familiar social schemas [86]. Thus, it is plausible that conspiracism may likewise lead individuals to reconstruct and communicate reality according to a conspiratorial schema, potentially fostering the emergence of conspiratorial narratives.

We hypothesize a link between conspiracism and the tendency to interpret and reconstruct ambiguous information as a result of a conspiracy. Specifically, we propose that conspiracy narratives result from top-down cognitive processes, where pre-existing beliefs and expectations guide how individuals interpret ambiguous events. We argue that cognitive biases typically associated with conspiracism lead individuals to impose conspiratorial interpretations onto ambiguous situations, particularly when epistemic needs are salient (e.g., interpreting an ambiguous situation). As a consequence, these individuals reconstruct complex or uncertain information to align it with their overarching conspiratorial frameworks.

To test this hypothesis, participants watched the apocalyptic psychological thriller Leave the World Behind [87] and subsequently were prompted to write an essay interpreting its meaning. This film was chosen specifically for its psychological ambiguity and the open-ended conclusion that could potentially trigger interpretations consistent with conspiracism [37]. Therefore, we measured participants’ conspiracism, their essays’ level of conspiracy narrative, and tested whether the two are positively associated. Our research comprises two studies: a preregistered primary study and a follow-up replication. Additionally, we performed a third exploratory analysis to examine whether participants’ conspiracism level is associated with linguistic markers previously identified in online conspiratorial discourse extracted from their essays.

Study 1

Methods

Procedure and participants.

The study was preregistered (https://aspredicted.org/y2kz-s7zk.pdf) and ethical approval was obtained from the Ethical Committee of Roma Tre University (Italy) in February 2024. No deviations from the approved study protocol occurred after approval was obtained (additional information regarding the ethical, cultural, and scientific considerations specific to inclusivity in global research is included in the Supporting Information S2 File). The data were collected in a single day on campus (20/05/2024). Participation was voluntary and anonymous: participants were given an identification code when they entered the classroom and received course credit as compensation. All participants provided written informed consent. Data were collected using pencil-and-paper questionnaires without computerized components, and participants’ responses were later transcribed by three native Italian speakers.

In accordance with our preregistered plan to collect at least 250 essays, questionnaires were distributed to over 300 students to account for potential dropouts, exclusions, and missing data. A total of 328 Italian-speaking students participated. After applying exclusion criteria—removing participants who did not consent to use their data (N = 15), failed attention checks (N = 26), or did not complete the essay task (N = 2)—the final sample consisted of 285 participants (221 women and 64 men; mean age = 22.25, range = 19–50).

To control for potential order effects, participants were randomly assigned to one of two groups, counterbalanced such that the questionnaire to assess participants’ conspiracism was completed either before or after watching the film. Overall, the experiment lasted about 3 hours.

The film.

Participants watched the film Leave the World Behind [87] and later completed the essay-writing task in which they were prompted to interpret the film by providing their personal understanding and meaning (see Prompt S1 in S1 File). We chose Leave the World Behind because it is not overtly conspiratorial. A conspiratorial reading is only one among several possible interpretations afforded by its narrative. We deliberately chose a non-explicitly conspiratorial film because our aim was not to test the persuasiveness of conspiratorial argumentation (see, e.g., [68]), but to examine whether people high in conspiracy mentality “fill the gaps” of an ambiguous situation, in this case, a fictional narrative.

The film is deliberately open-ended: it features unexplained phenomena, a fractured narrative timeline, and an unresolved ending that invite viewers to infer what is happening “behind the scenes”. Prompting an interpretation of an overtly conspiratorial film would likely have constrained essays to a similar conspiratorial level. What interested us instead were the additional narrative and interpretative elements emerging from top-down cognitive processes, where pre-existing beliefs and expectations guide how individuals interpret ambiguous events.

In fact, our hypothesis is that individuals with a conspiratorial predisposition would be more likely to fill these narrative gaps with conspiratorial explanations, partly because of their higher need for structure and certainty and lower tolerance for ambiguity [22,36–38]. In this sense, the film serves as a useful tool for observing how people project conspiratorial meanings onto an ambiguous story even in the absence of an explicitly conspiratorial storyline.

Participants watched the film dubbed in Italian without subtitles. Italian dubbing is professionally standardized and ubiquitous. For most Italians a dubbed Hollywood film feels as natural as a domestic one [88]. Italians have been steeped in US screen culture since the 1950s, and recent streaming has only deepened that familiarity. Themes such as institutional breakdown, technological fragility, and post-pandemic anxiety resonate well beyond the US context. Decades of media-globalization research [89] show that such cultural proximity allows viewers to appropriate foreign narratives without losing interpretive nuance.

Psychological measures.

To assess individual differences in conspiracism, we employed the Italian version of the 11-item Conspiracy Mentality Scale (CMS) [90], originally developed by [91]. The CMS was chosen because, like the Generic Conspiracist Beliefs scale [31], it measures a generic form of conspiracism, but it is even more generic, being free of items involving specific conspiratorial themes such as the government concealing information about UFOs and aliens. Note that we also searched for a validated Italian translation of the Conspiracy Mentality Questionnaire [27], but we could not find one. In our sample, the CMS demonstrated good internal consistency. The CMS was originally developed to assess two specific dimensions: healthy skepticism (e.g., “Many things happen without the public’s knowledge”) and conspiratorial ideation (e.g., “The truth is known only to a secret powerful group that actively disseminates false information or misleads the public”).

To examine the factorial structure of the CMS in our sample, we conducted a confirmatory factor analysis (CFA) using the R package lavaan [92]. We compared a one-factor model—treating conspiracy mentality as a single construct—with a two-factor model distinguishing skepticism from conspiratorial ideation. Model fit indices indicated that the two-factor model provided an excellent and significantly better fit compared to the one-factor model (Δχ² = 119.88, p < .001). Although the CMS is best represented by a two-factor structure, this distinction did not substantially alter our main findings regarding the relationship between conspiracy mentality and linguistic measures. Therefore, to simplify the presentation, we report results using the single-dimension structure. Results based on the two-factor model are provided in the Appendix for completeness and transparency (as indicated in our preregistration, we also collected additional psychological measures for exploratory purposes, namely attachment style [93], translated into Italian by [94], and narcissism [95], translated into Italian by [96]; their correlations with conspiracy-related variables are reported in Table S4, see S1 File).
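The p-value of this model comparison can be recovered from the reported statistic alone. A minimal sketch, assuming the two nested models differ by a single free parameter (the factor correlation), so the chi-square difference has df = 1 and its tail probability has a closed form via the complementary error function:

```python
import math

def chi2_diff_pvalue_df1(delta_chi2: float) -> float:
    """p-value of a chi-square statistic with df = 1.

    For df = 1, a chi-square variable is the square of a standard
    normal, so P(X > x) = erfc(sqrt(x / 2)).
    """
    return math.erfc(math.sqrt(delta_chi2 / 2))

# Reported model comparison: delta chi-square = 119.88
p = chi2_diff_pvalue_df1(119.88)
print(p < .001)  # → True: the two-factor model fits significantly better
```

The same closed form reproduces the conventional critical value: a chi-square of about 3.84 with df = 1 yields p ≈ .05.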

Assessing conspiratorial narrative using LLMs.

As preregistered, we used Large Language Models (LLMs), specifically ChatGPT, to evaluate the conspiratorial narrative content of participants’ essays. LLMs have become increasingly popular in psychological research [97,98], demonstrating their value for text classification by outperforming crowd workers [99] even without task-specific training [98]. Indeed, LLMs have shown superior performance compared to traditional dictionary-based analyses when classifying short English texts such as tweets, Reddit comments, and news headlines [100], despite some limitations in differentiating genuine inquiries and sarcastic content from authentic conspiratorial beliefs [101]. Given these considerations, we implemented a two-step validation procedure: first, we assessed LLMs’ reliability in identifying conspiracy-related content in English texts and, second, we validated their performance specifically on Italian-language texts.

To evaluate LLMs’ effectiveness at labeling English texts as conspiratorial, we relied on textual material gathered from two corpora: LOCO (the Language Of COnspiracy corpus [57]) and DONALD (the 2M-document Dataset Of News Articles for studying the Language of Dubious information [78]). LOCO is an 88-million-word corpus comprising 23,937 conspiracy and 72,806 mainstream texts focused on 47 well-known themes that have generated CTs (e.g., Princess Diana’s death, the Moon landing, the 9/11 terrorist attacks). DONALD consists of approximately 2 billion words across 2,173,172 news articles—217,703 of which originated from conspiracy websites—covering 172 politically polarized topics (e.g., abortion, immigration, civil rights, climate change).

We prompted ChatGPT by OpenAI—one of the most popular and cost-effective LLMs at the time of writing (in July 2025, ChatGPT’s GPT-4o-mini model cost $0.15 + $0.60 (input + output) per one million tokens, compared to Anthropic’s claude-3-5-sonnet at $3 + $15 per one million tokens)—to assign a fine-grained conspiratorial language score to each document on a 0–10 Likert scale (from non-conspiratorial language [0] to highly conspiratorial language [10]). Results showed that conspiracy-related documents received significantly higher scores (M = 6.93, SD = 2.61) than mainstream texts (M = 1.58, SD = 1.80; SE = .005, p < .001). Approximately 77% of conspiracy-related documents were rated by ChatGPT above 5, whereas 95% of mainstream documents were rated below 5.

To examine generalizability beyond LOCO’s prominent conspiracy-related themes, we selected from DONALD a subset of 144,000 documents (12,000 per ideological domain, each between 500 and 1,500 words). We then validated ChatGPT’s ratings by computing their correlation with website-level reliability scores obtained from [102]. This analysis revealed a strong negative correlation (r = −.67 [−.693, −.637], p < .001), indicating that texts from less reliable sources received higher conspiratorial scores.

While ChatGPT demonstrated reliability for English texts, its accuracy with Italian-language texts required further validation. We therefore tested its performance using IRMA [79], a corpus of over 600,000 Italian news articles (approximately 335 million words) from 56 websites categorized as unreliable by professional fact-checkers. We selected a subset of 2,714 documents from IRMA (up to 50 per website where available, each under 500 words), which likely included a mix of conspiratorial and non-conspiratorial texts. Each document was scored by ChatGPT for conspiratorial content (again using a 0–10 Likert scale).

To establish a human-coded benchmark, we randomly selected 180 documents spanning the entire range of ChatGPT scores, which were independently evaluated by three native Italian speakers (60 documents per rater) using the same scoring method. We compared ChatGPT ratings (using the gpt-4o-mini model) with those of another LLM, Claude by Anthropic (using the claude-3-5-sonnet model). The correlation between human ratings and the LLMs’ ratings was overall robust, indicating strong agreement, although Claude (r176 = .701, [.617, .769], p < .001) performed slightly better than ChatGPT (r178 = .676, [.588, .748], p < .001; but note that the 95% confidence intervals overlap).
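The confidence intervals reported for these correlations follow from the standard Fisher z transformation. A minimal sketch, assuming a two-sided 95% interval, using the ChatGPT comparison values (r = .676, n = 180 human-rated documents):

```python
import math

def pearson_ci(r: float, n: int, z_crit: float = 1.96):
    """95% confidence interval for a Pearson correlation
    via the Fisher z transform."""
    z = math.atanh(r)              # Fisher z of the observed r
    se = 1 / math.sqrt(n - 3)      # standard error of z
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to r scale

lo, hi = pearson_ci(0.676, 180)
print(round(lo, 3), round(hi, 3))  # → 0.588 0.748, matching the reported CI
```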

Based on these validation steps, we concluded that LLMs provide reliable evaluations of conspiracism and that Claude performed slightly better than ChatGPT for Italian texts. Thus, we employed Prompt S2 (see S1 File) to score participants’ essays via Claude (note that the use of Claude deviates from our preregistered use of ChatGPT), adopting the definition of CTs provided by [3]. Note that before adopting Claude, we used ChatGPT to evaluate 108 items previously rated by human coders—MTurk workers and master’s students—for conspiratorial content. The definition of CTs provided by [3], used as prompt for ChatGPT, yielded the highest predictive accuracy (R2 = .644) compared to three alternative definitions (ChatGPT’s own definition: R2 = .639; [2]: R2 = .595; and [21]: R2 = .637).

Results

We started by assessing whether prior exposure to the film influenced key variables in our study. No significant differences emerged in conspiracism levels between participants who had previously watched the film (N = 43; t53.79 = 1.43, p = .158) and those who had not. Similarly, prior exposure did not significantly influence the conspiratorial content of participants’ narratives (t52.49 = −.59, p = .561). Next, we examined whether completing the questionnaire before (N = 161) or after (N = 124) watching the film affected conspiracism or the conspiratorial content of essays. Again, there were no significant differences between the two groups, neither in terms of conspiracism (t259.52 = .06, p = .949) nor regarding the conspiratorial content of essays (t277.91 = −.20, p = .845).
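The fractional degrees of freedom reported above (e.g., t53.79) indicate unequal-variance (Welch) t-tests. A minimal sketch of the statistic and the Welch–Satterthwaite degrees of freedom, illustrated on synthetic data rather than the study's:

```python
import math
import statistics as st

def welch_t(x, y):
    """Welch's t statistic and Welch-Satterthwaite df for two samples."""
    n1, n2 = len(x), len(y)
    v1, v2 = st.variance(x), st.variance(y)   # sample variances
    se2 = v1 / n1 + v2 / n2                   # squared standard error
    t = (st.mean(x) - st.mean(y)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation: typically non-integer
    df = se2**2 / ((v1 / n1)**2 / (n1 - 1) + (v2 / n2)**2 / (n2 - 1))
    return t, df

# Synthetic example: unequal group sizes and variances give fractional df
a = [4.1, 3.8, 5.0, 4.4, 3.9, 4.6, 4.2, 4.8]
b = [3.2, 4.9, 2.8, 5.1, 3.7]
t, df = welch_t(a, b)
print(round(t, 2), round(df, 2))
```

The Welch df always falls between min(n1, n2) − 1 and n1 + n2 − 2, which is why values such as 53.79 can appear with group sizes like 43 and 242.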

We had preregistered the hypothesis that participants with higher conspiracism would interpret ambiguous information in a conspiratorial manner, as reflected in the content of their essays. Contrary to our expectations, however, conspiracism was not significantly correlated with the conspiratorial content of essays (r283 = .06, [−.06, .18], p = .305). Since the CMS has a two-factor structure, we also ran an additional regression treating the two CMS subdimensions (conspiratorial ideation and healthy skepticism) as separate predictors of the conspiratorial narrative scores. Again, there were no significant effects of either subscale (ideation: [−.08, .19], SE = .07, t = .76, p = .446; skepticism: [−.12, .15], SE = .07, t = .23, p = .818), indicating that the null finding observed in our original analysis does not depend on the aggregation of the two subcomponents of the CMS.

Discussion

One possible explanation for these unexpected results may relate to our measurement of conspiracism (i.e., the Conspiracy Mentality Scale, CMS) and the sociopolitical context in which it was administered (Italy). Specifically, the CMS might not fully capture conspiracism per se, but instead measure related constructs such as political knowledge or institutional skepticism.

In reconsidering the CMS items within the Italian sociopolitical context, several items do not appear inherently conspiratorial. Given Italy’s record on press freedom and political transparency [103,104], some statements included in the scale may reflect political awareness or legitimate skepticism rather than conspiracism (e.g., “There are people who don’t want the truth to come out” or “Many things happen without the public’s knowledge”). This interpretation aligns with cross-national research conducted in 26 countries by [105], which identified excessive cross-country variability in one item from the Conspiracy Mentality Questionnaire (CMQ; [27]): “I think that politicians usually do not tell us the true motives for their decisions”. Such variability likely reflects differing levels of institutional trust rather than actual conspiratorial mentality. Consequently, items that appear conspiratorial in transparent democracies might simply reflect legitimate political skepticism in less transparent environments.

Moreover, the Italian version of the CMS employed in this study [90] was validated within the Italian-speaking region of Switzerland—a context politically and culturally different from Italy. Although validation included assessments related to COVID-19 measures, it did not involve comprehensive convergent validation against well-established conspiracy belief scales such as the CMQ [27] or the Generic Conspiracist Beliefs scale (GCB; [31]). Thus, it remains uncertain whether the CMS effectively captures conspiracism within the Italian (as opposed to Swiss) context.

To address these methodological concerns, we conducted a follow-up replication study employing the Generic Conspiracist Beliefs scale (GCB), a measure specifically designed to capture generalized conspiratorial beliefs.

Study 2

To address methodological concerns identified in Study 1, we conducted a replication with a single modification: instead of the CMS [90], we used the Italian version of the 15-item Generic Conspiracist Beliefs scale (GCB; translated by [106]), originally developed by [31]. The GCB assesses generalized conspiratorial beliefs by explicitly referencing specific conspirators, such as governments or scientists (e.g., “The government is involved in the murder of innocent citizens and/or well-known public figures, and keeps this a secret”). Given its explicit identification of conspirators, the GCB might offer improved sensitivity to conspiracism in our Italian sample and better predictive validity for the emergence of conspiratorial narratives relative to the CMS.

Methods

All aspects of Study 2—including measures (apart from the conspiracy scale), recruitment strategy, and experimental tasks—remained identical to those in Study 1. The study took place over four days (between 30/10/2024 and 06/11/2024). A total of 117 Italian-speaking undergraduate and graduate students participated. After exclusions—lack of consent to use data (N = 3), failing attention checks (N = 14), or incomplete essays (N = 6)—the final sample consisted of 100 participants (66 women and 34 men; mean age = 24.56, range = 18–59). As in Study 1, we also collected exploratory measures of attachment style and narcissism; their correlations with conspiracy-related variables are reported in Supporting Information (see Table S5 in S1 File).

Results

Consistent with Study 1, prior exposure to the film did not significantly influence key variables. Participants who had already seen the film (N = 16) did not differ significantly in the GCB scores from those who had not (t24.01 = −1.06, p = .301), and prior watching similarly did not affect the conspiratorial content of their narratives (t22.41 = .23, p = .822). Additionally, no significant order effects emerged regarding whether participants completed the questionnaire before (N = 48) or after (N = 52) watching the film, neither for the GCB scores (t94.63 = 1.33, p = .186) nor for conspiratorial content in the essays (t97.71 = −1.11, p = .271).

Despite employing a different conspiracism scale (GCB rather than CMS), we again observed no significant correlation between participants’ conspiracism and the conspiratorial content of their narratives (r98 = .11, [−.09, .30], p = .262). As in Study 1, we ran an additional regression treating the five subdimensions of the GCB as separate predictors of the conspiratorial narrative scores. Only Government malfeasance showed a significant positive effect (β = .31, [.04, .58], SE = .14, t = 2.25, p = .027), alongside a small negative effect of Extraterrestrial cover-up (β = −.24, [−.47, −.02], SE = .11, t = −2.15, p = .034); the remaining subscales were not significant (Malevolent global conspiracies: β = −.04, [−.29, .21], SE = .13, t = −.31, p = .754; Personal wellbeing: β = .10, [−.18, .39], SE = .14, t = .72, p = .473; Control of information: β = −.05, [−.31, .21], SE = .13, t = −.38, p = .705).

Discussion

In Study 2, we attempted to address potential limitations of Study 1 by implementing an alternative and more explicit measure of conspiracy beliefs (GCB). However, contrary to our hypotheses, we still did not find a significant association between conspiracism and the production of conspiratorial narratives, although the effect sizes were somewhat larger. These findings alleviated our concerns about the CMS’s appropriateness in the Italian context, as the anticipated relationship similarly failed to emerge with the GCB. It seems unlikely that both scales systematically failed to capture conspiracism, suggesting that the lack of association is not simply due to inadequate measurement. One interpretation is that the hypothesized relationship between conspiracism and conspiratorial narrative level genuinely does not exist, at least under the conditions of this study. If so, the LLM may have accurately detected the absence of explicitly conspiratorial narratives, reflecting a real absence of this link in spontaneous text production.

However, if this relationship does indeed exist but went undetected, other sources of measurement error should be considered. For instance, it is possible that the LLM reliably identifies conspiratorial narratives only when these narratives are clearly structured and cohesive. Participants holding conspiratorial beliefs might have expressed their ideas implicitly, fragmentarily, or incoherently—forms of expression that the LLM may have failed to classify as conspiratorial. The reliance of our textual measure on narrative cohesion may therefore represent a critical limitation. Alternative methods, such as dictionary-based (bag-of-words) analyses, might capture conspiratorial elements in less structured texts without requiring narrative coherence.

Study 3: Exploring the individual language of conspiracism

Building upon the limitations discussed in Studies 1 and 2, we examined whether participants’ essays exhibited linguistic patterns previously associated with conspiratorial online discourse as reviewed above (see The Language of Conspiracy Theories). Specifically, we investigated linguistic features that provide an alternative assessment of textual conspiracism—i.e., the conspiratorial lexicon—along with measures related to textual quality, such as linguistic (lexical and syntactic) sophistication and textual cohesion.

These exploratory analyses served two primary aims. First, given that this study is—to our knowledge—the first to examine individual-level linguistic markers of conspiracism, we explored whether these theoretically grounded linguistic measures correlate with participants’ self-reported level of conspiracism. This allowed us to preliminarily assess whether linguistic patterns observed in online conspiracy discourse also manifest in offline, individual text production as a function of conspiracism. Second, because there is some evidence that LLMs rely on narrative coherence and stylistic cues [107,108], these measures also allow us to assess whether the LLM’s evaluations are sensitive to textual quality.

Additional exploratory analyses using the Linguistic Inquiry and Word Count software (LIWC; [109], translated into Italian by [110]) are reported in S1 File (Table S6), including the correlation coefficients between conspiracism and conspiratorial content (lexical and narrative) identified in participants’ essays.

Material

Because participants from Studies 1 and 2 were provided the same essay prompt (i.e., interpreting the film Leave the World Behind, see Prompt S1 in S1 File), and data collection procedures were consistent across both studies, we aggregated essays from both studies into a single corpus to increase statistical power (total N = 385).

To create a unified measure of conspiracism, we combined the Conspiracy Mentality Scale (CMS) used in Study 1 and the Generic Conspiracist Beliefs scale (GCB) from Study 2 into a single composite index. This approach follows prior research practices where measures from separate studies, employing Likert scales with different response ranges, are standardized and aggregated [33]. Responses from each scale were rescaled to a common 0–1 range (0 = strongly disagree, 1 = strongly agree, with an unsure midpoint at .50) using the formula z = (x − min_L) / (max_L − min_L), where z and x denote the rescaled and original response values on scale L, respectively, and min_L and max_L represent the theoretical minimum and maximum response values of the Likert scale.
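The rescaling step can be sketched in a few lines; the function name and the 5- and 7-point example scales below are illustrative, not taken from the study materials:

```python
def rescale(x: float, scale_min: float, scale_max: float) -> float:
    """Map a raw Likert response x onto the common 0-1 range
    (0 = strongly disagree, 1 = strongly agree, .50 = unsure)."""
    return (x - scale_min) / (scale_max - scale_min)

# A 5-point scale (1-5) and a 7-point scale (1-7) both map their
# midpoints to .50, making scores comparable across instruments.
midpoint_5 = rescale(3, 1, 5)
midpoint_7 = rescale(4, 1, 7)
```

Applied to the CMS and GCB responses, this puts both scales on the same 0–1 metric before aggregation into the composite index.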

No significant difference in conspiracism emerged between participants in Study 1 and Study 2 (t139.48 = 1.49, p = .14). Participants in Study 2, however, were older on average (t160.84 = −4.51, p < .001) and had higher educational levels (t163.39 = −8.34, p < .001) compared to those in Study 1.

Linguistic indices

Conspiratorial lexicon (dictionary).

Given the lexical convergence of conspiratorial discourse observed across social media [54,55] and online texts [57], we explored whether participants’ essays similarly reflected a conspiratorial lexicon corresponding to their level of conspiracism. To this end, as an alternative to the LLM, we employed a dictionary-based approach for assessing lexical conspiracism. A dictionary is a predefined list of words associated with a specific psychological construct or phenomenon. Widely used in psychological research [98,109,111], this method consists of identifying and counting occurrences of words in a text that match entries in the dictionary. To ensure comparability across documents of varying lengths, the frequency of matches is expressed as a ratio ranging from 0 (no dictionary matches) to 1 (all words matched), allowing for standardized comparisons between texts.
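A minimal sketch of dictionary scoring, assuming a plain token list as input; the three example entries are hypothetical stand-ins, not drawn from the actual 300-lemma dictionary:

```python
def dictionary_score(tokens, dictionary):
    """Proportion of tokens matching dictionary entries:
    0 = no matches, 1 = every token matched."""
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t.lower() in dictionary)
    return hits / len(tokens)

# Toy lexicon and tokenized text for illustration only.
conspiracy_lexicon = {"deception", "government", "cover-up"}
text = ["the", "government", "practices", "deception", "daily"]
score = dictionary_score(text, conspiracy_lexicon)  # 2 of 5 tokens match
```

Because the score is a proportion, it is directly comparable across essays of different lengths, which is the property the authors rely on.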

We are currently developing a data-driven dictionary of conspiracism in English [112]. Unlike LLMs—which may depend on narrative structure and textual coherence—the dictionary method quantifies conspiratorial content solely based on word frequency, independent of word order or narrative cohesion [98]. The dictionary used in this study comprises 300 English lemmas derived from the topic-matched structure of LOCO [57]. Specifically, by contrasting conspiracy with mainstream texts within each seed topic in LOCO (e.g., the death of Princess Diana), we employed a term frequency–inverse document frequency (TF-IDF) approach to isolate conspiracy-specific terms while filtering out topic-specific terms treated as stop words (e.g., car crash and Paris in the context of the death of Princess Diana). As such, this method effectively identifies lexical markers of conspiracism that generalize across diverse topics. To validate our dictionary, we applied it to the subset of 144,000 documents from DONALD previously scored for conspiratorial content using ChatGPT (as described above). The dictionary-derived scores significantly correlated with ChatGPT ratings (r = .60, [.597, .605], p < .001).

For this work, the dictionary was translated into Italian using ChatGPT (see Prompt S3 in S1 File). The complete list of Italian dictionary terms is provided in the S1 File (see Section S2). The Italian dictionary was then applied to the lemmatized version of our combined corpus from Studies 1 and 2. Lemmatization was performed using the R package spacyr [113], a wrapper for the Python library spaCy [114], which provides accurate extraction of grammatical structure and prevents potential mapping errors (e.g., distinguishing between the Italian noun mondo [world] and the verb mondare [cleanse]). We selected the pretrained model it_core_news_lg due to its superior performance among available Italian models in spaCy.

After lemmatization, each text was reconstructed by concatenating its lemmatized tokens. Tokenization was subsequently performed using the R package quanteda [115], removing punctuation, symbols, and numbers. Two punctuation marks (? and !) were retained, as they appeared in the conspiratorial dictionary. Compound words were also combined (e.g., sito web became sito_web [website]). Finally, the dictionary was applied to the processed texts, computing the proportion of conspiratorial words in each document. The resulting scores ranged theoretically from 0 (no matches) to 1 (all words matched).
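The post-lemmatization steps described above (joining compounds, dropping punctuation, symbols, and numbers while retaining ? and !) might look roughly like the following; the compound table and helper name are illustrative, and the `isalpha` filter is a simplification that would also drop hyphenated forms:

```python
COMPOUNDS = {("sito", "web"): "sito_web"}  # illustrative compound list
KEEP = {"?", "!"}  # retained because they appear in the dictionary

def preprocess(lemmas):
    """Join known compounds, then keep only alphabetic tokens
    plus the two retained punctuation marks."""
    out, i = [], 0
    while i < len(lemmas):
        pair = tuple(lemmas[i:i + 2])
        if pair in COMPOUNDS:
            out.append(COMPOUNDS[pair])
            i += 2
            continue
        tok = lemmas[i]
        if tok.isalpha() or tok in KEEP:
            out.append(tok)
        i += 1
    return out
```

The resulting token list is what the dictionary is then matched against when computing the proportion of conspiratorial words.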

Sophistication.

Conspiracy narratives often feature high levels of linguistic sophistication, characterized by complex syntax and specialized vocabulary. Linguistic sophistication can serve persuasive goals [116,117], strategically creating impressions of truthfulness, competence, and expertise within conspiratorial discourse [73]. Empirical research suggests that conspiracy-related texts generally exhibit greater lexical and syntactic complexity compared to non-conspiracy texts [63,118,119]. It is thus plausible to expect similar linguistic sophistication in offline contexts, such as in the essays analyzed here, as a function of individual levels of conspiracism. However, considering that conspiracy believers typically demonstrate lower verbal intelligence scores [120] and tend to have less formal education [121], linguistic sophistication may not straightforwardly correlate with conspiratorial belief. In the following, we describe how we operationalize linguistic sophistication using two complementary dimensions: syntactic and lexical sophistication.

Note that beyond syntactic and lexical sophistication, previous research has also examined how semantic and pragmatic features (e.g., counterfactuals, hypotheticals, and evaluative language) contribute to conspiracy discourse, particularly in relation to refutational reasoning [122]. While these aspects fall outside the scope of our structurally focused analysis, we acknowledge their relevance and encourage future work to incorporate verb semantics and discourse-level patterns for a more comprehensive understanding of conspiratorial rhetoric.

Syntactic Sophistication.

To measure syntactic sophistication, we extracted three key metrics from the parsed texts of participants’ essays. First, we calculated the average sentence length by counting the number of tokens per sentence and computing the mean sentence length for each participant, with longer sentences indicating greater syntactic complexity. Second, we measured the average clauses per sentence, defined by counting all verbs (including auxiliaries) as proxies for clauses and computing the average number of clauses per sentence for each participant; a higher average suggests more complex clausal embedding. Third, we determined the average dependency tree depth by calculating the maximum depth of dependency structures in each sentence (i.e., the absolute positional difference between each token and its syntactic head) and averaging these maximum depths across all sentences per participant. Greater tree depth means greater hierarchical syntactic complexity.
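The three metrics can be computed from parsed output as sketched below. Sentences are represented here as lists of (POS tag, head index) pairs, a simplified stand-in for a full spaCy parse; the function names are ours, not the authors':

```python
# Each sentence: list of (pos_tag, head_index) pairs, where
# head_index is the position of the token's syntactic head
# (conventionally the token's own index for the root).

def avg_sentence_length(sents):
    """Mean number of tokens per sentence."""
    return sum(len(s) for s in sents) / len(sents)

def avg_clauses_per_sentence(sents, verb_tags=("VERB", "AUX")):
    """Verbs (incl. auxiliaries) per sentence, as a clause proxy."""
    return sum(sum(1 for pos, _ in s if pos in verb_tags)
               for s in sents) / len(sents)

def avg_tree_depth(sents):
    """Max |token position - head position| per sentence (the
    paper's positional proxy for tree depth), averaged."""
    return sum(max(abs(i - h) for i, (_, h) in enumerate(s))
               for s in sents) / len(sents)
```

Each function returns one value per participant once their essay's sentences are pooled, yielding the three inputs to the composite score.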

To create a unified measure of syntactic sophistication, we combined these three metrics using Principal Component Analysis. The first principal component accounted for approximately 91% of the variance and served as our composite measure of syntactic sophistication, effectively capturing multiple aspects of syntactic complexity in a single score. We validated this measure using a synthetic corpus created by ChatGPT (N = 50 documents) containing texts of explicitly high or low syntactic complexity (see Prompt S4 in S1 File). As expected, the high-complexity texts scored higher in syntactic sophistication than the low-complexity texts (t44.61 = 9.18, p < .001, Cohen’s d = 2.60).
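A compact way to obtain such a composite is to standardize the metrics and project them onto the leading eigenvector of their covariance matrix; this NumPy sketch is a generic PCA, not the authors' exact pipeline:

```python
import numpy as np

def first_pc(X):
    """Standardize columns of an (n x k) metric matrix, then
    project onto the first principal component; returns
    (per-observation scores, proportion of variance explained)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))  # ascending
    return Z @ vecs[:, -1], vals[-1] / vals.sum()
```

With three highly correlated syntactic metrics, the first component typically absorbs most of the variance, mirroring the ~91% reported above.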

We applied our syntactic sophistication index to the LOCO corpus. Consistent with prior findings on social media [118,119], conspiracy documents showed greater syntactic complexity than mainstream texts (β, 95% CI [.05, .08], SE = .01, t = 8.34, p < .001; multilevel regression via the lme4 and lmerTest packages in R [123,124], with random intercepts for the k = 200 topics to ensure within-topic comparisons).

Lexical Sophistication.

To our knowledge, no existing word norms directly quantify lexical sophistication. Instead, researchers have often inferred lexical sophistication indirectly via metrics known to correlate with processing time, such as age of acquisition (AoA; [125]), word frequency [126], and word length [127]. To address this gap, we generated lexical sophistication norms using LLMs, which have shown promising results in approximating human psycholinguistic judgments [128]. Specifically, we extracted a list of Italian lemmas and their part-of-speech tags from the IRMA corpus [79], and prompted ChatGPT to estimate the complexity of each word analogously to AoA ratings, assessing how sophisticated a word would appear to a native Italian speaker (see Prompt S5 in S1 File). We anchored the ratings in AoA norms because they reflect the developmental trajectory of lexical acquisition: early-acquired words (e.g., mum, food) are typically perceived as less complex, whereas later-acquired or specialized terms (e.g., syphilis, kerosene, neurotic) are typically perceived as more complex. AoA norms have previously been used as a proxy for linguistic sophistication [129] and have captured sophistication in nominal compounds in conspiracy texts [63].

We obtained sophistication ratings for 190,507 Italian lemmas. These ratings showed strong convergent validity with the existing Italian AoA dataset [130], yielding a correlation of r = .64 [.611, .665], p < .001 across the 1,912 overlapping lemmas. Using the same prompt, we also generated complexity norms for English and German (each with N > 100,000 lemmas). These norms similarly correlated well with established AoA datasets (in English: r = .74, based on 28,000 words from [125]; in German: r = .71, based on 2,958 words from [131]). Note that while these correlations are encouraging, the cognitive mechanisms underlying LLM-generated norms remain opaque. Future work should explore whether these norms exhibit the same cross-linguistic consistency as human AoA ratings [132]. Applying the English LLM-based norms to the LOCO corpus, we found that conspiracy texts exhibited significantly higher lexical sophistication than mainstream texts (β, 95% CI [.06, .09], SE = .01, t = 9.76, p < .001; multilevel regression with a random intercept for k = 200 topics).

Here, for participant essays, we computed lexical sophistication by mapping the generated Italian AoA ratings onto each lemmatized text and calculating the mean per participant.
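Operationally, this per-participant score reduces to a dictionary lookup plus averaging; the norm values below are invented for illustration (the real norms cover 190,507 LLM-rated Italian lemmas):

```python
def lexical_sophistication(lemmas, aoa_norms, default=None):
    """Mean sophistication (AoA-like) rating over the lemmas
    that appear in the norms; out-of-norm lemmas are skipped."""
    rated = [aoa_norms[l] for l in lemmas if l in aoa_norms]
    return sum(rated) / len(rated) if rated else default

# Hypothetical norms on an arbitrary scale, for illustration only:
# early-acquired words low, specialized terms high.
norms = {"mamma": 1.2, "cibo": 1.5, "cherosene": 6.8}
```

Skipping out-of-norm lemmas (rather than imputing) is one reasonable design choice here; the paper does not specify how uncovered lemmas were handled.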

Megalalia.

It is possible that linguistic sophistication in conspiratorial discourse is not uniformly distributed. Rather, conspiracy authors might selectively employ specialized terminology across diverse and unrelated contexts [63], even within texts that are otherwise linguistically simple. In this view, conspiratorial linguistic sophistication could be defined by an uneven distribution of sophisticated terms within the discourse.

We are currently developing a metric designed specifically to quantify the disproportionate use of sophisticated vocabulary [77]. We label this phenomenon as megalalia (from Greek mega = “big” and lalia = “speech”), referring to the inappropriate use of highly sophisticated vocabulary without regard for context or necessity. This linguistic behavior often serves the strategic purpose of creating an illusion of intelligence, credibility, or authority [73,117].

The concept of megalalia relies on the assumption that sophisticated words, when used out of context or unnecessarily, should stand out as disproportionately complex relative to the surrounding lexical context. To quantify megalalia, we applied the Gini coefficient to the distribution of lexical sophistication scores within each text. Originally developed to measure economic inequality, the Gini coefficient quantifies disparity within a distribution. Applied to lexical sophistication, a Gini coefficient of zero would reflect complete uniformity, indicating equal lexical sophistication across the entire text (e.g., pizza, pasta, table), whereas values approaching one would suggest high inequality, i.e., a small subset of sophisticated words contrasting sharply with otherwise simpler vocabulary (e.g., pizza, pasta, orbitofrontal cortex). This pattern of selectively inserting sophisticated terminology could reflect attempts to exaggerate perceived expertise or credibility, characteristic of conspiratorial arguments [73]. Testing this measure on the LOCO corpus revealed significantly higher levels of megalalia in conspiracy texts compared to mainstream texts (β, 95% CI [.25, .28], SE = .01, t = 32.78, p < .001; using the same multilevel model structure as described previously).
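The Gini coefficient over per-word sophistication scores can be computed directly from its mean-absolute-difference definition; a minimal sketch, with the O(n²) pairwise form chosen for clarity over speed:

```python
def gini(values):
    """Gini coefficient of non-negative scores:
    0 = perfectly uniform, approaching 1 = highly unequal."""
    n = len(values)
    total = sum(values)
    if n == 0 or total == 0:
        return 0.0
    pairwise = sum(abs(a - b) for a in values for b in values)
    return pairwise / (2 * n * total)

# Uniform sophistication scores -> 0; one standout "big word"
# among otherwise simple vocabulary -> a high coefficient.
```

In the megalalia analysis, `values` would be the AoA-like sophistication ratings of the words in one essay, yielding one inequality score per text.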

Cohesion.

A large-scale network analysis of conspiracy texts using LOCO revealed that conspiratorial discourse exhibits lower textual cohesion compared to mainstream texts, characterized by frequent co-occurrences of unrelated topics [58]. Similar fragmented and associative thought patterns have been observed among individuals on the schizophrenia spectrum [133–135], which partially overlaps with conspiracism (schizotypy is the strongest psychopathological predictor of conspiracism [24]). Based on these observations, we examined whether textual cohesion in participants’ essays negatively correlated with their level of conspiracism. Because previous assessments of cohesion in conspiracy-related texts were conducted exclusively in English [58], we developed a novel approach to measure textual cohesion in languages other than English.

We quantified textual cohesion as the semantic similarity between consecutive sentences within each document. To this end, we employed word embeddings from the Italian pre-trained fastText model [136,137] (300 dimensions; https://fasttext.cc/docs/en/crawl-vectors.html), which offers distinct advantages for morphologically rich languages like Italian by leveraging subword information [138]. Each sentence was represented as the mean vector of its constituent word embeddings, after removing stopwords (a list of 279 stopwords from quanteda; [115]) and punctuation. Sequential textual cohesion scores were then computed as the cosine similarity between embeddings of adjacent sentences within each text. Higher sequential cohesion values indicate greater semantic similarity between consecutive sentences, reflecting more coherent and structured discourse.
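Sequential cohesion reduces to averaging word vectors per sentence and taking cosine similarities between neighbors. This standard-library sketch uses a toy two-dimensional embedding table in place of the 300-dimensional fastText vectors; all names are ours:

```python
def mean_vector(words, emb):
    """Average the embeddings of in-vocabulary words."""
    vecs = [emb[w] for w in words if w in emb]
    dim = len(next(iter(emb.values())))
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def cosine(u, v):
    """Cosine similarity; 0 if either vector is all zeros."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def sequential_cohesion(sentences, emb):
    """Mean cosine similarity between adjacent sentence vectors."""
    vecs = [mean_vector(s, emb) for s in sentences]
    sims = [cosine(a, b) for a, b in zip(vecs, vecs[1:])]
    return sum(sims) / len(sims) if sims else 0.0

# Toy embeddings: two near-synonyms and one unrelated word.
emb = {"cane": [1.0, 0.0], "gatto": [0.9, 0.1], "auto": [0.0, 1.0]}
```

Adjacent sentences about related topics (cane/gatto) score near 1, while a topic jump (cane/auto) scores near 0, which is the contrast the cohesion measure exploits.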

To validate our cohesion measure, we adopted a procedure similar to that used in previous research [58]. We compared cohesion scores of original texts to their scrambled counterparts, in which sentence order was randomized while preserving word order within sentences. Specifically, we selected a random sample of 100 documents, each containing at least 20 sentences, from the IRMA corpus [79]. As expected, the scrambled documents exhibited significantly lower sequential cohesion than the original documents (t186.51 = 5.63, p < .001, d = .80), confirming our metric’s sensitivity to disruptions in textual continuity. Further validation using the English fastText model was performed on a synthetic dataset (N = 770) from prior work (see section Testing cohesion metrics in [58]), yielding similar results (t723.91 = 16.19, p < .001, d = 1.17).

Results

A correlation matrix summarizing the relationships among participants’ education level, conspiracism, and textual variables extracted from their essays is presented in Table 1. There were no significant differences between Study 1 and Study 2 in either conspiratorial narratives (assessed via the LLM; t172.39 = −.02, p = .982) or conspiratorial lexicon (assessed via the dictionary; t213.86 = 1.66, p = .098). Participants’ conspiracism scores were not correlated with conspiratorial narrative scores (r383 = .08, [−.02, .18], p = .134; Study 1: r283 = .06, [−.06, .18], p = .305, Study 2: r98 = .11, [−.09, .30], p = .262), but they were positively correlated with the use of conspiratorial lexicon (r383 = .19, [.09, .29], p < .001; Study 1: r283 = .18, [.06, .29], p = .003, Study 2: r98 = .23, [.04, .41], p = .02). This pattern suggests that individuals with stronger conspiracism tend to use more conspiracy-related vocabulary, even when they do not produce clearly identifiable conspiratorial narratives. Scatterplots illustrating these relationships are provided in Fig 1.

Table 1. Correlation matrix of exploratory textual variables (Studies 1 and 2 aggregated).

https://doi.org/10.1371/journal.pone.0346496.t001

Fig 1. Scatterplots and distributions of conspiracy-related variables.

Participants’ conspiracism was assessed using the CMS and GCB. Essays’ conspiratorial narrative scores were evaluated via LLM, while conspiratorial lexicon scores were computed using the dictionary method. Data from Study 1 (■) and Study 2 (■) are combined (N = 385).

https://doi.org/10.1371/journal.pone.0346496.g001

As mentioned above, reliance on Claude (vs ChatGPT) represents a deviation from our preregistration. For transparency, we report the correlation between ChatGPT’s assessments and conspiracism: r383 = .01, [−.09, .11], p = .821. We also tested an alternative prompt for gathering conspiracy scores from LLMs (see Section S3 in S1 File for details). Instead of using the definition of [3], we prompted Claude to rate each essay on different dimensions theoretically associated with conspiracism (e.g., deception, social relevance; [2,139]). Scores from this new prompt were positively associated with those from Prompt S2 (in S1 File), which is based on the definition of [3] (r383 = .67, [.62, .72], p < .001), but were not associated with conspiracism (r383 = −.01, [−.11, .09], p = .9).

Participants’ level of conspiracism was also positively—albeit weakly—associated with megalalia (r383 = .10, [.00, .20], p = .045), indicating that individuals with stronger conspiracism were more likely to employ sophisticated vocabulary disproportionately within otherwise simple texts, possibly reflecting strategic impression management. Megalalia was additionally correlated with conspiratorial lexicon (r383 = .13, [.03, .23], p = .011) but was not related to conspiratorial narratives (r383 = −.06, [−.16, .04], p = .254).

We next examined whether textual quality predicted conspiratorial narratives or lexicon. The level of conspiratorial narrative assessed via LLM correlated positively and significantly with lexical sophistication (r383 = .10, [.00, .20], p = .045), but not with syntactic sophistication (r383 = .06, [−.04, .16], p = .255), textual cohesion (r380 = .06, [−.04, .16], p = .258), or word count (r383 = .08, [−.02, .18], p = .124), although these relationships were all positive. Conversely, the conspiratorial lexicon assessed via dictionary was negatively correlated with lexical sophistication (r383 = −.15, [−.25, −.05], p = .003) and word count (r383 = −.12, [−.22, −.02], p = .019), and was not associated with syntactic sophistication (r383 = .01, [−.09, .11], p = .833) or textual cohesion (r380 = .03, [−.08, .13], p = .621).

Participants’ educational levels correlated positively with lexical sophistication (r381 = .17, [.07, .27], p < .001) and word count (r381 = .13, [.03, .23], p = .01), but, surprisingly, not with syntactic sophistication (r381 = −.03, [−.13, .07], p = .569) or cohesion (r378 = .02, [−.09, .12], p = .764). Thus, higher education levels are associated with richer vocabulary usage and longer essays, but not necessarily with more complex syntax or greater textual cohesion. Education was negatively associated with both conspiracism (r381 = −.23, [−.32, −.13], p < .001) and conspiratorial lexicon use (r381 = −.17, [−.27, −.07], p < .001), but not with conspiratorial narrative (r381 = .02, [−.08, .12], p = .675). This indicates that more educated individuals reported lower conspiracism and used fewer conspiracy-related terms, but were not more or less likely to produce a conspiratorial narrative.

To evaluate whether the textual quality of essays could affect the LLM’s assessments of conspiratorial levels (see, e.g., [107,108]), we conducted a series of regressions predicting essays’ conspiratorial scores from participants’ conspiracism, with textual quality added as a moderator (for a full description of methods and results, see Section S4 in S1 File). A significant interaction would indicate that the relationship between participants’ conspiracism and their essays is moderated by the quality of the texts. Because we preregistered the use of ChatGPT but diverged from the preregistration after finding that Claude yielded slightly better results, we performed these tests for both ChatGPT and Claude, along with our dictionary. We thus ran nine regression models predicting the essays’ conspiratorial scores (evaluated with the dictionary, ChatGPT, and Claude) from conspiracism interacting with each measure of textual quality (syntactic sophistication, lexical sophistication, and cohesion). Overall, conspiracy narratives evaluated via LLMs were consistently and positively associated with textual quality (βs > .10, ps < .056) but not with conspiracism (|βs| < .04, ps > .05). When the essays’ conspiracy level was assessed via the dictionary, the reverse pattern was found: a positive and consistent association with conspiracism (βs > .17, ps < .001) but not with textual quality, which in fact showed a negative association (β < 0, p < .024). As for the interactions, the absolute coefficients of the LLM-based models were, on average, larger than those of the dictionary-based model, although none (except one, see Model D in Table S3 and Figure S1 in S1 File) reached significance at p < .05. Overall, this suggests that the dictionary-based scoring was less affected by textual quality than the LLM-based scores.

General Discussion

In this work, we examined how individual worldviews shape the interpretation and communication of narratives. Specifically, we hypothesized that conspiracism would impact the interpretation of ambiguous information in a conspiratorial manner. To test this hypothesis, we conducted a preregistered study in which participants viewed the ambiguous, apocalyptic psychological thriller Leave the World Behind and subsequently wrote essays interpreting its meaning. Participants’ conspiracism was assessed using the Conspiracy Mentality Scale (CMS), and the level of conspiratorial narrative within their essays was evaluated using the LLM Claude (by Anthropic). Contrary to our preregistered hypothesis, we found no evidence that conspiracism predicted a conspiratorial interpretation of the film. We thus conducted a second (non-preregistered) study, replacing the CMS with the Generic Conspiracist Beliefs scale (GCB); again, we did not find the expected association.

In a third exploratory analysis, we aggregated data from the two studies to investigate potential explanations for these unexpected results. We employed an alternative, dictionary-based measure, less dependent on textual quality, to quantify the use of conspiratorial lexicon in participants’ essays. Additionally, we applied various indices to assess the textual sophistication and coherence of the essays. We found that the conspiratorial lexicon—but not the narrative assessments—correlated significantly with participants’ conspiracism, and that LLM-assessed conspiratorial narratives were positively associated with measures of textual quality.

In the following sections, we provide an interpretation of why we did not find an association between conspiracism and the emergence of conspiratorial narratives and, in contrast, why the dictionary-based assessment successfully captured associations with conspiracism. We further discuss how the exploratory linguistic markers relate to existing knowledge of conspiratorial language patterns observed online.

Leave the worldview behind.

Across both Studies 1 and 2, as well as in the aggregated dataset, we found no evidence that participants’ conspiracism predicted the narrative conspiratorial level of their essays. In other words, their conspiratorial worldview seemed to be left behind, failing to emerge explicitly in their narrative interpretations. Below, we discuss possible explanations for this null result, considering methodological, motivational, and theoretical factors. We also address the possibility that the expected relationship genuinely does not exist, and we consider broader implications for understanding how CTs may originate.

The stimulus.

We observed no differences in conspiracism between participants who completed the questionnaire before versus after watching the film. This result aligns with previous research, which found that exposure to an episode of The X-Files did not increase participants’ conspiracy mentality [140]. As discussed by the authors, the lack of effect might reflect the perceived plausibility or factual grounding of the film (e.g., in contrast to JFK [141] or the Chernobyl nuclear power plant explosion [68], which did successfully influence conspiratorial beliefs). Our goal, however, was not to influence or measure participants’ attitudes, but rather to observe their sense-making activity. Fictional narratives are routinely used as vehicles for commenting on real-world institutions and power relations [142,143]. As such, regardless of the perceived realism of the film, conspiratorial interpretations can arise at a literal or allegorical level, or through analogies with what participants perceive as the real world. Thus, if conspiracism shapes interpretation, it should remain detectable in essays about a fictional scenario, even though the openness of the task also allowed for non-conspiratorial readings and may therefore have lowered the base rate of explicitly conspiratorial narratives. Nevertheless, it is also possible that participants interpreted the film as a work of art rather than an account of actual or possible events. In this case, they may have understood the instructions as an invitation to provide a film-criticism-style response, focusing on the director’s intentions and overarching message instead of the narrative content per se.

Participants’ motivation.

A lack of participant motivation might also explain our null findings. Although we did not directly measure motivational factors, we indirectly examined participants’ motivation through essay length. We did not observe a positive relationship between conspiracism and essays’ word count. This is not trivial because prior research consistently shows that conspiracy-related content on websites [57,78–80] and social media platforms (e.g., Reddit; [55]) tends to be longer. Longer texts are generally associated with greater persuasive intent [81].

One plausible explanation for this discrepancy relates to the context of our study. Unlike naturalistic online environments, where individuals are intrinsically motivated to persuade, our participants may have lacked a similar motivation. Online conspiracy promoters typically engage in persuasive communication: even when their personal belief in conspiracies might differ from their spreading behavior, their motivation to disseminate conspiratorial narratives remains strong [144,145]. In contrast, participants in our study likely viewed the task primarily as a requirement for earning course credit, rather than as an opportunity to persuade. Without the social context or intrinsic desire to persuade others, participants may have been less motivated to elaborate their thoughts or write extensively about conspiratorial themes.

This motivational limitation is consistent with broader challenges in eliciting conspiratorial texts within controlled laboratory environments. Settings such as classrooms or online research platforms (e.g., Prolific, MTurk) typically lack the social dynamics crucial to the formation and propagation of CTs [1,2,146]. In naturalistic settings, such as social media, personal blogs, or conspiracy websites, individuals are strongly motivated to construct and disseminate elaborate conspiratorial content, especially on topics they perceive as personally meaningful. For example, anti-vaccination advocates often create detailed narratives emphasizing perceived dangers, systemic corruption, or threats to personal freedom, driven by the desire to persuade, connect with like-minded individuals, or influence public opinion. In contrast, when participants are asked, as in our lab-based study, to interpret a film for course credit, their motivation likely shifts toward minimal task completion rather than engaging deeply or persuasively with conspiratorial content—even if they hold conspiratorial beliefs.

Finally, it cannot be ruled out that the fear of social stigma often associated with conspiracy beliefs [147] may have played a role among the motivational factors. Our participants were students who knew that their texts would eventually be read by others. Although the data collection process was anonymous and designed to avert such risks, the possibility of self-censorship cannot be discounted [148].

The prompt.

Beyond motivational factors, another potential explanation for our results relates to the nature of the prompt used to elicit participants’ essays (see Prompt S1 in S1 File). The prompt was intentionally designed to trigger cognitive biases associated with conspiracy mentality, known to thrive under conditions of ambiguity and uncertainty [36,37]. However, CTs also fulfill broader epistemic, existential, and social needs [1,2]. Notably, CTs frequently provide explanatory frameworks helping individuals make sense of complex, uncertain, or threatening situations, thus satisfying an epistemic need. It is possible that our prompt, which merely asked participants to “interpret” the film, did not effectively activate this explanatory need. Would our results have differed if the prompt had explicitly engaged participants’ epistemic motivations? For instance, rather than asking participants to interpret the film, prompting them to causally explain the depicted events (e.g., “Why did these events occur?” or “Who or what is responsible for these events?”) might have shifted their responses from subjective interpretation toward structured explanations, potentially eliciting narratives more consistent with conspiracism.

...but keep the words

Although participants’ conspiratorial worldview did not significantly shape their narratives, we did observe an association between conspiracism and the use of conspiratorial lexicon. In other words, while participants might have left their worldview behind, they retained conspiratorial vocabulary. This finding suggests that conspiracism might emerge more readily at the lexical level rather than through fully structured narratives.

Cognitive demand.

One plausible explanation involves the cognitive demands associated with narrative construction. The use of familiar or emotionally charged words—particularly those with ideological significance—can occur relatively automatically. In contrast, crafting a coherent narrative involves higher-order cognitive processes such as causal reasoning, temporal sequencing, and thematic integration, all of which are more cognitively demanding [149]. Constructing a narrative necessitates satisfying multiple constraints, including maintaining cohesion and coherence and providing plausible causal explanations for events. Moreover, a high conspiracy mentality does not necessarily imply the ability or motivation to produce fully elaborated conspiracy narratives. Creating such narratives requires advanced causal reasoning and narrative-construction skills that individuals high in conspiracism may not consistently possess or be motivated to apply [120,121]. Thus, participants with higher conspiracy mentality in our study may have used conspiratorial vocabulary (e.g., deception, cover-up, elite) without integrating these words into complete conspiratorial storylines.

Exposure.

Alternatively, our dictionary-based approach may have picked up on participants’ habitual exposure to conspiratorial discourse. This interpretation aligns with prior evidence indicating higher lexical similarity among conspiratorial webpages across widely disparate topics [58]. Participants might have used recurring idiomatic expressions or lexical patterns (e.g., “The truth is hidden”, “Pulling the strings” [118]) that, despite not being sufficient individually to form a coherent conspiratorial narrative recognizable by the LLM, still contributed to the conspiratorial lexicon score. Therefore, individuals high in conspiracism may naturally adopt linguistic patterns regularly encountered in online conspiracy communities. Indeed, lexical overlap exists between conspiratorial discourse observed on social media [54,55] and conspiracy-focused websites [57]. Thus, it is reasonable to expect that these lexical patterns could transfer into offline, spontaneous writing, even when participants are neither addressing an audience nor actively trying to persuade.

Exploratory textual measures

We developed several indices related to textual quality, including lexical and syntactic sophistication, cohesion, and megalalia (the disproportionate use of sophisticated vocabulary). These measures not only served to potentially clarify the LLM’s classification outcomes but also provided exploratory insights into whether linguistic patterns found in online conspiratorial discourse manifest in offline texts produced by participants varying in conspiracism. Below, we discuss our findings regarding associations between these linguistic markers and conspiracism in participants’ essays, contextualizing our results within the existing evidence on online conspiratorial language. In Table 2, we summarize our findings in relation to the existing literature.

Table 2. Summary of associations between conspiracism and language features identified in participants’ essays in relation to other types of textual sources.

https://doi.org/10.1371/journal.pone.0346496.t002

Cohesion.

Previous research has found lower cohesion in conspiratorial webpages compared to mainstream ones [58]. Here, however, we observed no such association in participants’ short essays. One explanation for lower cohesion in online conspiracy narratives is that they typically jump between multiple unrelated themes, creating highly interconnected but topically scattered narratives [58]. Some scholars have characterized this accumulation of disparate “evidence” (also known as the mille-feuille argumentative style or the Gish gallop) as a rhetorical strategy designed to overwhelm readers by rapidly presenting multiple arguments without concern for their accuracy or logical coherence [73,150–152]. Therefore, persuasive motivation likely plays a key role in reducing textual cohesion online: to convince others, online conspiratorial narratives strategically connect diverse and unrelated topics, which results in decreased cohesion.

As previously suggested, our participants may have lacked similar persuasive motivations. The task (writing essays prompted by an ambiguous film), the context (lab study rather than online), or the prompt itself may not have triggered sufficient motivation to persuade. Consequently, participants were unlikely to employ the wide-ranging rhetorical strategies characteristic of online conspiratorial discourse, resulting in similar cohesion levels irrespective of their conspiracy beliefs.

Alternatively, the absence of association may be language-specific. Italian is less prevalent online than English, potentially resulting in fewer resources for pre-training Italian embeddings and possibly impacting embedding quality. Additionally, Italian’s highly inflectional nature could have influenced the sensitivity of our cohesion metric. Consistent with this interpretation, our validation procedure yielded a larger effect size (Cohen’s d) for cohesion in English than in Italian, suggesting potential language-specific limitations.

Lexical sophistication.

In previous analyses on LOCO, conspiracy-related texts employed more lexically sophisticated terms compared to mainstream documents [63]. However, in our study, lexical sophistication negatively correlated with participants’ conspiracism. This unexpected result might relate to prior research indicating that higher conspiracism often correlates with lower levels of education [121], potentially leading individuals to use less sophisticated vocabulary. However, our participants typically had at least some university-level education, and cohesion was not associated with education levels within our sample, suggesting that this explanation might not fully apply here. An alternative interpretation is that in online conspiracy discourse, sophisticated vocabulary might be employed strategically to signal expertise and authority [73], reflecting persuasive intent. As mentioned above, such motivation was possibly absent among our participants. Without persuasive intent, our participants might not have felt compelled to use sophisticated terminology. This could tentatively explain the observed negative correlation.

Megalalia.

Megalalia—the uneven distribution of sophisticated vocabulary—was positively associated with higher conspiracism in both our participants’ essays and in LOCO. This preliminary finding suggests that megalalia might be a consistent linguistic feature of conspiratorial discourse, appearing across both online and offline contexts. However, it is somewhat puzzling to observe megalalia in our sample, given participants’ presumed lack of clear persuasive motivation. One potential explanation, beyond persuasion, taps into the need for uniqueness, a trait consistently associated with conspiracism (e.g., [72]) and closely linked to personal and social identity [18]. Individuals may thus selectively employ unnecessarily sophisticated vocabulary not only to persuade but also to project an image of exceptional intelligence, competence, or uniqueness [153]. Megalalia might similarly serve to evoke impressions of depth or hidden knowledge, paralleling the pseudo-profound style associated with bullshit receptivity and conspiracism [154]. Consequently, even without overt persuasive intent or an explicitly conspiratorial narrative, individuals with higher conspiracism might still exhibit megalalia as a form of identity signaling or uniqueness-seeking behavior.

Syntactic sophistication.

Syntactic sophistication correlated positively with conspiracism in our sample, aligning with previous findings that conspiracy-related accounts on social media [118,119] and conspiracy documents in LOCO also exhibit greater syntactic complexity. Given that a lack of motivation could plausibly explain other unexpected findings (e.g., cohesion), it is intriguing that syntactic sophistication was higher among participants with greater conspiracism. One possibility is that higher syntactic complexity emerges from longer, convoluted sentences characteristic of a stream-of-consciousness style. This interpretation could be consistent with evidence linking conspiracism to thought disorders typically associated with schizophrenia [24,133135]. However, this interpretation contrasts with findings that individuals with schizophrenia generally exhibit reduced syntactic complexity [155]. Thus, even assuming limited motivation among participants, it remains unclear why syntactic complexity would positively associate with conspiracism and which cognitive or motivational mechanisms might drive this relationship. Impression management, as we discussed for megalalia, could be a tentative explanation.

Limitations and future directions

In the discussion, we provided an interpretation of why, in our study, conspiracism was associated with conspiratorial lexicon but not with conspiratorial narrative content. In the following, we broaden the focus to additional constraints that were only partially discussed above. In particular, we examine limitations related to our materials, including potential biases in LLM-based classification and the construction of the LOCO and DONALD corpora. Finally, we outline several open questions intended to guide future research directions. Specifically, we highlight plausible predictors not captured in our design and consider whether conspiratorial narratives may emerge cumulatively.

LLMs’ classification.

A first limitation concerns the prompt we used to elicit LLM judgments, which did not include explicit anchors. Unlike our lexical sophistication norms, where we provided concrete exemplars (e.g., mum vs syphilis), here neither the models nor human raters were given examples illustrating different levels of conspiratorial strength. In the absence of such anchors, it is difficult to calibrate what counts as a “very strong conspiratorial narrative”. Future work relying on LLMs’ classification could test whether adding explicit anchors or benchmark texts improves the reliability and validity of these assessments.

A second limitation is inherent to the classification method itself. LLMs may reliably detect conspiratorial narratives primarily when they appear in clearly structured and coherent texts [107,108]. If models are more effective when narratives are cohesive, they may fail to identify conspiratorial content expressed in fragmented, tangential, or loosely organized ways. This interpretation is indirectly supported by our exploratory analyses, which indicated that LLM-based assessments were moderated by indices of textual quality. By contrast, a simpler dictionary measure based on word count successfully captured the relationship between conspiracism and conspiratorial vocabulary use, suggesting that LLM classifications may be more sensitive to narrative coherence than to conspiratorial content alone.

A third concern is that LLMs can themselves exhibit a degree of conspiratorial bias [156]. If a model treats conspiratorial claims as relatively plausible, an essay containing such content might receive a lower conspiratorial rating than warranted, resulting in a biased assessment. Such bias, however, should be evenly distributed across the rating continuum and thus systematically decrease conspiratorial ratings for all texts. This would not be problematic unless an artificial floor effect is detected.

Finally, there is a broader concern regarding how LLMs have been trained. Depending on the training material, representations of topics and genres may be skewed or biased (see, e.g., differences between the Google Books corpus and British periodicals [157]). This is a well-known problem of representativeness in corpus construction [158], but it poses a particular challenge for LLMs, which are often trained on large proprietary corpora whose exact composition is undisclosed, complicating replication and comparability across studies [159].

LOCO and DONALD’s limitations.

Because the conspiracy dictionary used to extract the conspiratorial lexicon was built on LOCO, which also served as a validation corpus for our measures, some aspects of LOCO’s construction need to be clarified to flag potential limitations. First, the conspiracy dictionary is not fully independent of the LLM-based assessment. Although it was constructed in a data-driven fashion from LOCO by extracting conspiracy-related terms (e.g., deception) and stripping event-based ones (e.g., Paris, Diana), its validation relied on individual documents from DONALD, whose ground truth labels were defined by ChatGPT. This partial overlap may introduce a degree of circularity between feature construction (the dictionary) and the evaluation criterion (LLM judgments).

Second, LOCO was built around events that are known to have generated CTs, gathered from surveys measuring belief in specific CTs [160,161]. However, it could be argued that not all of these events can be unequivocally considered CTs. For example, there is nothing inherently harmful about Bigfoot itself [162], or about the (alleged) faked deaths of popular figures such as Elvis Presley and Princess Diana (or Paul McCartney). Although these events can be considered as facts allegedly known by authorities but hidden from the public (consistent with the definition proposed by [163]), they seem to lack the clear malicious intent that is often taken to characterize CTs [2,3]. This ambiguity in the seed events may limit the content validity of LOCO as a corpus of genuine conspiratorial documents and, by extension, of the dictionary derived from it.

Third, beyond content validity, not all documents in LOCO actually contain CTs, with a portion of documents that mention the event’s keyword (e.g., “Sandy Hook”) without any conspiratorial discussion of the event itself (the Sandy Hook school shooting) [164]. This reflects a well-known problem in corpus collection [101], namely a trade-off between sample size and accuracy. On the one hand, gathering all documents from a source known to deliver CTs ensures a large sample size but increases the likelihood of false positives, as it also includes documents that do not mention CTs [164]. On the other hand, false positives are minimized if documents are carefully selected to include CTs, but this results in a costly and time-consuming endeavour, with limited sample size and heterogeneity, and may introduce sample bias if documents are selected based on the same lexical features that are later used to analyze their lexical content. These corpus-level constraints should be kept in mind when interpreting the performance and generalizability of both the dictionary and the measures derived from it.

Other psychological contributors.

In this work, we focused on conspiracism as a unique predictor of the emergence of conspiratorial narratives. However, it must be acknowledged that other psychological traits may also contribute to the production of such narratives. Below, we highlight several candidates that future research could explicitly incorporate alongside conspiracism.

Schizotypy—a subclinical form of schizophrenia characterized by paranoia, magical ideation, and unconventional thinking—has been identified as one of the strongest psychopathological predictors of conspiracism [24]. Language on the schizophrenia spectrum, including both schizophrenia and schizotypy, has been described as vague, circumstantial, metaphorical, overelaborate, and stereotyped, with loose semantic associations [165,166], heightened sensitivity to associative and irrelevant stimuli [167,168], reduced semantic cohesion, and deficits in temporal, causal-motivational, and thematic coherence [133–135]. Schizotypy could therefore help explain some narrative and linguistic features frequently observed in CTs, such as high thematic heterogeneity, hyper-interconnectedness, loose semantic associations, and lower topic specificity and lexical cohesion [58–61,63].

Magical thinking, which overlaps with both schizotypy and conspiracism [169], is particularly connected to causality and agency—two core ingredients of conspiracy theorizing [4547]. This trait may facilitate the endorsement of highly improbable or counterintuitive CTs, such as the belief in omnipotent and omniscient agents (e.g., Reptilian shape-shifters [43]). In the mind of magical thinkers, conspirators may be perceived not only as malevolent but also as virtually flawless in their activity. Even socially marginalized or low-status groups such as immigrants are sometimes portrayed in conspiratorial narratives as possessing disproportionate power to destabilize entire nations, “impose their rules”, or orchestrate demographic takeovers. This perceived omnipotence, applied to groups typically seen as relatively powerless, exemplifies the magical and counterintuitive logic underlying many CTs and coherently sustains a narrative of a vast and hidden agenda.

Conspiracy believers are also more susceptible to probabilistic reasoning errors and have a biased conception of randomness in which coincidences are seen as suspicious rather than expected [33,42,170–172]. As such, they may disregard the probabilistic reality that large-scale conspiracies are prone to failure due to human error and increasing exposure risks as the number of participants grows [173]. One probabilistic error in particular, the conjunction fallacy (the overestimation of the likelihood of co-occurring events), may be especially relevant for the generation of conspiratorial narratives, as it could help explain the construction of hyperconnected and thematically heterogeneous narratives that bind together otherwise unrelated events.

Another potential predictor of the emergence of conspiratorial narratives is belief in specific CTs. Although conceptually related, correlated, and often used interchangeably by researchers [174], belief in CTs and conspiracy mentality are distinct psychological constructs [22,174–177]. Conspiracy mentality reflects a general tendency to distrust authority and suspect hidden motives, typically assessed with abstract or non-specific statements, and may be more closely associated with general skepticism, political cynicism, and information-seeking motives. Conversely, belief in specific CTs involves endorsement of concrete and often epistemically risky claims and is more strongly linked to pathological cognitive tendencies, such as magical thinking, schizotypy, or reduced analytic reasoning [177,178]. Given their more concrete and assertive nature, specific CTs may represent a stronger predictor than conspiracy mentality for the generation of conspiratorial narratives, which typically require elaboration, detail, and a degree of belief commitment that aligns more naturally with endorsement of specific, risky claims than with generalized skepticism alone. Our effect sizes support this conjecture: the relationship between the essays’ level of conspiratorial narrative and conspiracism was stronger for the more epistemically risky generic conspiracist scale [31] than for the conspiracy mentality scale [91].

Finally, motivated cognition, partisanship, and affective polarization may also be important contributors. CTs frequently denigrate political or social opponents [53,179,180]; this applies, for example, to QAnon [181184] and to partisan-oriented CTs about climate change endorsed on both the left and the right [185]. Importantly, CTs can be endorsed or propagated for reasons of convenience [179]. They are not necessarily generated only by sincere believers but can be deliberately constructed and disseminated to achieve strategic goals, such as influencing political behavior or generating profit [144,145]. For instance, many of Donald Trump’s conspiratorial claims lack evidentiary support and may represent a type of reasoning that is irrational, internally inconsistent, or performative rather than sincerely believed [186,187]. An important avenue for future research is thus to determine the extent to which CTs are produced intentionally, independent of genuinely held beliefs, versus emerging organically from cognitive biases and dispositional tendencies such as conspiracism, schizotypy, magical thinking, and probabilistic reasoning errors.

Cumulative effect.

As our data appear to suggest, the actual impact of conspiracism on the emergence of conspiratorial narratives may be relatively small and, as discussed, contingent on other traits and motivations. Given that CTs do exist, how do they originate and propagate? It is possible that individual contributions to conspiratorial discourse might be modest, yet fully fledged CTs could emerge cumulatively [188]. Initial ideas may gradually develop into elaborated narratives through repeated social interactions among multiple individuals over time [59,62,189,190]. The QAnon movement illustrates this process, encapsulating many specific beliefs—even conflicting ones—without a single official version [181,182]; narratives are generated, recombined, and personalized, where individual believers assemble their own conspiratorial “package”. Even before QAnon, analyses of the subreddit r/conspiracy similarly suggested a decentralized, cumulative pattern of narrative construction [59]. Future research could investigate this cumulative hypothesis by employing paradigms inspired by cultural transmission studies [86,191], allowing researchers to examine how narratives evolve and transform across repeated social exchanges.

Future directions.

Although our work paves the way for understanding the potential link between conspiracism and language production, several questions remain open, particularly in light of the methodological limitations of our current approach. A recurrent issue in our discussion concerns the extent to which conspiracism overlaps with, or diverges from, persuasive intent. Future experimental studies could systematically manipulate participants’ motivation to persuade (e.g., explicitly instructing participants to convince an audience) and examine how persuasive intent interacts with conspiracism. Such studies would help clarify whether the linguistic markers identified here (e.g., megalalia, sophistication) emerge specifically due to persuasive motives, conspiratorial cognition, or a combination of both.

A related open question is whether certain linguistic features are context-dependent—activated only by relevant topics or motivational factors—or whether they reflect stable traits of individuals high in conspiracism across diverse settings. Experimental manipulations of topic relevance or incentives to persuade could shed light on whether conspiratorial language emerges only in specific contexts or consistently characterizes the language production of highly conspiratorial individuals, regardless of situational factors.

More comparative research is also necessary to examine how conspiratorial discourse emerges across different venues (e.g., online webpages, social media platforms, lab-based tasks). Applying consistent analytic approaches across these various contexts—as we did by comparing LOCO documents with our participants’ essays—may reveal whether linguistic markers of conspiracism remain stable or differ systematically across communication environments.

Although we did not find evidence that conspiracism leads to overt conspiracy narratives, we observed that participants high in conspiracism used a greater amount of conspiratorial lexicon. This suggests that conspiracism might manifest primarily at the lexical level, rather than through well-structured narratives. If explicit conspiracy narratives do not strongly originate from individuals with high conspiracy beliefs, then who is actually authoring the CTs widely encountered online?

It remains plausible that at least some CTs are strategically constructed—often for political, ideological, or economic gain—rather than arising solely from genuine believers’ worldviews and subsequently amplified via social interactions. Future studies employing paradigms such as cultural transmission experiments, collaborative storytelling approaches, or longitudinal analyses could help determine whether conspiratorial narratives emerge cumulatively from individuals’ cognitive biases or whether they are predominantly products of intentional persuasion strategies by strategically motivated actors.

Conclusions

We investigated how conspiracism shapes narrative interpretation and communication by analyzing participants’ essays interpreting an ambiguous movie. Contrary to our expectations, individuals with higher conspiratorial beliefs did not produce more explicitly conspiratorial narratives, but they did show greater use of conspiracy-related vocabulary, suggesting that conspiratorial cognition might manifest more readily at a lexical rather than narrative level. Exploratory linguistic analyses revealed that conspiracism was associated with selective use of sophisticated vocabulary (megalalia) and higher syntactic complexity, highlighting parallels between offline and online conspiracy discourse. Future studies should investigate the contexts and motives driving the emergence of conspiratorial narratives, as well as the extent to which conspiracy theories are produced intentionally for strategic purposes, independently of genuine conspiratorial beliefs.

Supporting information

S1 File. Prompts used to elicit film interpretations, evaluate Italian essays for conspiratorial content, translate the conspiracy dictionary, generate syntactic complexity corpora, and extract lexical sophistication norms; complete list of Italian dictionary terms used for lexical conspiracism assessment; alternative three-dimensional prompt for evaluating conspiratorial content with correlation matrix comparing metrics; moderated regression analyses examining whether textual quality affects conspiracy content detection; correlation matrices between conspiracy-related variables and narcissism and attachment style measures for Studies 1 and 2; and correlation coefficients between conspiracism, conspiratorial content, and LIWC dictionaries.

https://doi.org/10.1371/journal.pone.0346496.s001

(PDF)

S2 File. Inclusivity in global research questionnaire.

https://doi.org/10.1371/journal.pone.0346496.s002

(PDF)

Acknowledgments

We would like to thank Elisa Valiante, Lorenzo Eugeni, and Lorenzo Picca for their contribution to data collection and coding.

References

  1. 1. Douglas KM, Sutton RM, Cichocka A. The psychology of conspiracy theories. Curr Dir Psychol Sci. 2017;26(6):538–42. pmid:29276345
  2. Douglas KM, Uscinski JE, Sutton RM, Cichocka A, Nefes T, Ang CS, et al. Understanding conspiracy theories. Polit Psychol. 2019;40(S1):3–35.
  3. Nera K, Schöpfer C. What is so special about conspiracy theories? Conceptually distinguishing beliefs in conspiracy theories from conspiracy beliefs in psychological research. Theory Psychol. 2023;33(3):287–305.
  4. Oliver JE, Wood T. Medical conspiracy theories and health behaviors in the United States. JAMA Intern Med. 2014;174(5):817–8. pmid:24638266
  5. Klofstad C, Christley O, Diekman A, Enders A, Funchion J, Hemm A, et al. The New Satanic Panic. Polit Sci Q. 2024;140(2):249–68.
  6. Brotherton R. Suspicious Minds: Why We Believe Conspiracy Theories. Paperback ed. New York; London: Bloomsbury Sigma; 2016.
  7. Uscinski JE, Parent JM. American Conspiracy Theories. Oxford University Press; 2014. https://doi.org/10.1093/acprof:oso/9780199351800.001.0001
  8. Uscinski J, Enders A, Klofstad C, Seelig M, Drochon H, Premaratne K, et al. Have beliefs in conspiracy theories increased over time? PLoS One. 2022;17(7):e0270429. pmid:35857743
  9. Williams MN, Ling M, Kerr JR, Hill SR, Marques MD, Mawson H, et al. People do change their beliefs about conspiracy theories-but not often. Sci Rep. 2024;14(1):3836. pmid:38360799
  10. Jolley D, Douglas KM. The effects of anti-vaccine conspiracy theories on vaccination intentions. PLoS One. 2014;9(2):e89177. pmid:24586574
  11. Taubert F, Meyer-Hoeven G, Schmid P, Gerdes P, Betsch C. Conspiracy narratives and vaccine hesitancy: a scoping review of prevalence, impact, and interventions. BMC Public Health. 2024;24(1):3325. pmid:39609773
  12. Biddlestone M, Azevedo F, van der Linden S. Climate of conspiracy: a meta-analysis of the consequences of belief in conspiracy theories about climate change. Curr Opin Psychol. 2022;46:101390. pmid:35802986
  13. Jolley D, Douglas KM. The social consequences of conspiracism: Exposure to conspiracy theories decreases intentions to engage in politics and to reduce one’s carbon footprint. Br J Psychol. 2014;105(1):35–56. pmid:24387095
  14. Bierwiaczonek K, Kunst JR, Pich O. Belief in COVID-19 conspiracy theories reduces social distancing over time. Appl Psychol Health Well-Being. 2020;12(4):1270–85.
  15. Imhoff R, Dieterle L, Lamberty P. Resolving the puzzle of conspiracy worldview and political activism: belief in secret plots decreases normative but increases nonnormative political engagement. Soc Psychol Personal Sci. 2020;12(1):71–9.
  16. Jolley D, Paterson JL. Pylons ablaze: examining the role of 5G COVID-19 conspiracy beliefs and support for violence. Br J Soc Psychol. 2020;59(3):628–40. pmid:32564418
  17. Bilewicz M, Winiewski M, Kofta M, Wójcik A. Harmful ideas, the structure and consequences of anti-Semitic beliefs in Poland. Polit Psychol. 2013;34(6):821–39.
  18. Sternisko A, Cichocka A, Van Bavel JJ. The dark side of social movements: social identity, non-conformity, and the lure of conspiracy theories. Curr Opin Psychol. 2020;35:1–6. pmid:32163899
  19. Di Cicco G, Molinario E, Contu F, Pierro A, Douglas KM, Kruglanski AW. From conspiracies to insurgence: understanding the path from conspiracy beliefs to political engagement through the 3N model of radicalization. Curr Psychol. 2025;44(10):8339–57.
  20. Vermeulen N. Seeing conspiracy theorists everywhere as a conspiracy paradox. Commun Psychol. 2025;3(1):115. pmid:40721488
  21. Douglas KM, Sutton RM. What are conspiracy theories? A definitional approach to their correlates, consequences, and communication. Annu Rev Psychol. 2023;74:271–98. pmid:36170672
  22. Biddlestone M, Green R, Douglas KM, Azevedo F, Sutton RM, Cichocka A. Reasons to believe: a systematic review and meta-analytic synthesis of the motives associated with conspiracy beliefs. Psychol Bull. 2025;151(1):48–87. pmid:39913483
  23. Pilch I, Turska-Kawa A, Wardawy P, Olszanecka-Marmola A, Smołkowska-Jędo W. Contemporary trends in psychological research on conspiracy beliefs. A systematic review. Front Psychol. 2023;14:1075779. pmid:36844318
  24. Bowes SM, Costello TH, Tasimi A. The conspiratorial mind: A meta-analytic review of motivational and personological correlates. Psychol Bull. 2023;149(5–6):259–93. pmid:37358543
  25. Goreis A, Voracek M. A systematic review and meta-analysis of psychological research on conspiracy beliefs: field characteristics, measurement instruments, and associations with personality traits. Front Psychol. 2019;10:205. pmid:30853921
  26. Imhoff R, Lamberty P. How paranoid are conspiracy believers? Toward a more fine-grained understanding of the connect and disconnect between paranoia and belief in conspiracy theories. Eur J Soc Psychol. 2018;48(7):909–26.
  27. Bruder M, Haffke P, Neave N, Nouripanah N, Imhoff R. Measuring individual differences in generic beliefs in conspiracy theories across cultures: conspiracy mentality questionnaire. Front Psychol. 2013;4:225. pmid:23641227
  28. Imhoff R, Bruder M. Speaking (Un-)Truth to Power: Conspiracy Mentality as a Generalised Political Attitude. Eur J Pers. 2013;28(1):25–43.
  29. Dagnall N, Drinkwater K, Parker A, Denovan A, Parton M. Conspiracy theory and cognitive style: a worldview. Front Psychol. 2015;6:206. pmid:25762969
  30. Goertzel T. Belief in conspiracy theories. Polit Psychol. 1994;15(4):731.
  31. Brotherton R, French CC, Pickering AD. Measuring belief in conspiracy theories: the generic conspiracist beliefs scale. Front Psychol. 2013;4:279. pmid:23734136
  32. Nera K, Wagner-Egger P, Bertin P, Douglas KM, Klein O. A power-challenging theory of society, or a conservative mindset? Upward and downward conspiracy theories as ideologically distinct beliefs. Eur J Soc Psychol. 2021;51(4–5):740–57.
  33. Miani A, Cruz N, Lewandowsky S. Still very much dead and alive: Incoherence in conspiracism as a departure from Bayesian rationality. 2025. https://doi.org/10.31219/osf.io/t6a54_v3
  34. Enders A, Klofstad C, Littrell S, Miller J, Theocharis Y, Uscinski J, et al. Left–right political orientations are not systematically related to conspiracism. Polit Psychol. 2024;46(S1):56–79.
  35. Barkun M. A Culture of Conspiracy: Apocalyptic Visions in Contemporary America. 2nd ed. University of California Press; 2013.
  36. van Prooijen J-W. An existential threat model of conspiracy theories. Eur Psychol. 2020;25(1):16–25.
  37. van Prooijen J-W, Jostmann NB. Belief in conspiracy theories: the influence of uncertainty and perceived morality. Eur J Soc Psychol. 2012;43(1):109–15.
  38. Marchlewska M, Cichocka A, Kossowska M. Addicted to answers: need for cognitive closure and the endorsement of conspiracy beliefs. Eur J Soc Psychol. 2017;48(2):109–17.
  39. Kuhn SAK, Lieb R, Freeman D, Andreou C, Zander-Schellenberg T. Coronavirus conspiracy beliefs in the German-speaking general population: endorsement rates and links to reasoning biases and paranoia. Psychol Med. 2022;52(16):4162–76. pmid:33722315
  40. van Prooijen J-W, Douglas KM, De Inocencio C. Connecting the dots: Illusory pattern perception predicts belief in conspiracies and the supernatural. Eur J Soc Psychol. 2018;48(3):320–35. pmid:29695889
  41. Müller P, Hartmann M. Linking paranormal and conspiracy beliefs to illusory pattern perception through signal detection theory. Sci Rep. 2023;13(1):9739. pmid:37328598
  42. van der Wal RC, Sutton RM, Lange J, Braga JPN. Suspicious binds: Conspiracy thinking and tenuous perceptions of causal connections between co-occurring and spuriously correlated events. Eur J Soc Psychol. 2018;48(7):970–89. pmid:30555189
  43. Franks B, Bangerter A, Bauer MW. Conspiracy theories as quasi-religious mentality: an integrated account from cognitive science, social representations theory, and frame theory. Front Psychol. 2013;4:424. pmid:23882235
  44. Bangerter A, Wagner-Egger P, Delouvée S. How conspiracy theories spread. In: Butter M, Knight P, editors. Routledge Handbook of Conspiracy Theories. New York, NY: Routledge; 2020. pp. 206–18. https://doi.org/10.4324/9780429452734-2_5
  45. Brotherton R, French CC. Intention seekers: conspiracist ideation and biased attributions of intentionality. PLoS One. 2015;10(5):e0124125. pmid:25970175
  46. Douglas KM, Sutton RM, Callan MJ, Dawtry RJ, Harvey AJ. Someone is pulling the strings: hypersensitive agency detection and belief in conspiracy theories. Think Reason. 2015;22(1):57–77.
  47. van der Tempel J, Alcock JE. Relationships between conspiracy mentality, hyperactive agency detection, and schizotypy: Supernatural forces at work? Pers Individ Dif. 2015;82:136–41.
  48. van Prooijen J-W, van Vugt M. Conspiracy theories: evolved functions and psychological mechanisms. Perspect Psychol Sci. 2018;13(6):770–88.
  49. Genette G. Figures III. Editions du Seuil; 1972.
  50. Chatman SB. Story and Discourse. Cornell University Press; 1980.
  51. Wood MJ, Douglas KM. “What about building 7?” A social psychological study of online discussion of 9/11 conspiracy theories. Front Psychol. 2013;4:409. pmid:23847577
  52. Wood MJ, Douglas KM. Online communication as a window to conspiracist worldviews. Front Psychol. 2015;6:836. pmid:26136717
  53. Uscinski JE, Parent JM. Conspiracy theories are for losers. In: American Conspiracy Theories. Oxford University Press; 2014. pp. 130–53. https://doi.org/10.1093/acprof:oso/9780199351800.003.0006
  54. Fong A, Roozenbeek J, Goldwert D, Rathje S, van der Linden S. The language of conspiracy: A psychological analysis of speech used by conspiracy theorists and their followers on Twitter. Group Process Intergroup Relat. 2021;24(4):606–23.
  55. Klein C, Clutton P, Dunn AG. Pathways to conspiracy: The social and linguistic precursors of involvement in Reddit’s conspiracy theory forum. PLoS One. 2019;14(11):e0225098. pmid:31738787
  56. Cosgrove T, Bahr M. The language of conspiracy theories: negative emotions and themes facilitate diffusion online. Sage Open. 2024;14(4).
  57. Miani A, Hills T, Bangerter A. LOCO: The 88-million-word language of conspiracy corpus. Behav Res Methods. 2022;54(4):1794–817. pmid:34697754
  58. Miani A, Hills T, Bangerter A. Interconnectedness and (in)coherence as a signature of conspiracy worldviews. Sci Adv. 2022;8(43):eabq3668. pmid:36288312
  59. Klein C, Clutton P, Polito V. Topic modeling reveals distinct interests within an online conspiracy forum. Front Psychol. 2018;9:189. pmid:29515501
  60. Samory M, Mitra T. Conspiracies online: user discussions in a conspiracy community following dramatic events. In: Proceedings of the 12th International AAAI Conference on Web and Social Media (ICWSM). 2018.
  61. Batzdorfer V, Steinmetz H, Biella M, Alizadeh M. Conspiracy theories on Twitter: emerging motifs and temporal dynamics during the COVID-19 pandemic. Int J Data Sci Anal. 2022;13(4):315–33. pmid:34977334
  62. Tangherlini TR, Shahsavari S, Shahbazi B, Ebrahimzadeh E, Roychowdhury V. An automated pipeline for the discovery of conspiracy and conspiracy theory narrative frameworks: Bridgegate, Pizzagate and storytelling on the web. PLoS One. 2020;15(6):e0233879. pmid:32544200
  63. Miani A, van der Plas L, Bangerter A. Loose and tight: creative formation but rigid use of nominal compounds in conspiracist texts. J Creat Behav. 2024;58(1):114–27.
  64. Meuer M, Oeberst A, Imhoff R. How do conspiratorial explanations differ from non-conspiratorial explanations? A content analysis of real-world online articles. Eur J Soc Psychol. 2022;53(2):288–306.
  65. Adornetti I. Investigating conspiracy theories in the light of narrative persuasion. Front Psychol. 2023;14:1288125. pmid:38022962
  66. Greene CM, Murphy G. Quantifying the effects of fake news on behavior: Evidence from a study of COVID-19 misinformation. J Exp Psychol Appl. 2021;27(4):773–84. pmid:34110860
  67. Lyons B, Merola V, Reifler J. Not just asking questions: effects of implicit and explicit conspiracy information about vaccines and genetic modification. Health Commun. 2019;34(14):1741–50. pmid:30307753
  68. Adornetti I, Altavilla D, Chiera A, Deriu V, Gerna A, Picca L, et al. Testing the persuasiveness of conspiracy theories: a comparison of narrative and argumentative strategies. Cogn Process. 2025;26(4):903–20. pmid:40481918
  69. Bonetto E, Arciszewski T. The creativity of conspiracy theories. J Creat Behav. 2021;55(4):916–24.
  70. van Prooijen J-W, Ligthart J, Rosema S, Xu Y. The entertainment value of conspiracy theories. Br J Psychol. 2022;113(1):25–48. pmid:34260744
  71. Klofstad C, Uscinski J. Does rhetoric drive conspiracy theory beliefs? Genealogy. 2024;8(4):149.
  72. Imhoff R, Lamberty PK. Too special to be duped: Need for uniqueness motivates conspiracy beliefs. Eur J Soc Psychol. 2017.
  73. Oswald S. Conspiracy and bias: Argumentative features and persuasiveness of conspiracy theories. In: OSSA Conference Archive. 2016. pp. 1–16.
  74. Landrum AR, Olshansky A. The role of conspiracy mentality in denial of science and susceptibility to viral deception about science. Polit Life Sci. 2019;38(2):193–209. pmid:32412208
  75. Schmid P, Altay S, Scherer LD. The psychological impacts and message features of health misinformation: a systematic review of randomized controlled trials. Eur Psychol. 2023;28(3):162–72.
  76. Grant L, Hausman BL, Cashion M, Lucchesi N, Patel K, Roberts J. Vaccination persuasion online: a qualitative study of two provaccine and two vaccine-skeptical websites. J Med Internet Res. 2015;17(5):e133. pmid:26024907
  77. Miani A, Lewandowsky S. Megalalia: an empirical inquiry into the disproportionate lexical sophistication utilized in conspiracy texts in the absence of necessity.
  78. Miani A, Carrella F, Lewandowsky S. DONALD: the 2M-document Dataset Of News Articles for studying the Language of Dubious information. 2024. https://doi.org/10.31219/osf.io/j3ma9
  79. Carrella F, Miani A, Lewandowsky S. IRMA: the 335-million-word Italian coRpus for studying MisinformAtion. In: Vlachos A, Augenstein I, editors. Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. 2023. pp. 2339–49. https://doi.org/10.18653/v1/2023.eacl-main.171
  80. Carrella F, Miani A. GERMA: a comprehensive corpus of untrustworthy German news. Linguist Vanguard. 2025.
  81. Ta VP, Boyd RL, Seraj S, Keller A, Griffith C, Loggarakis A, et al. An inclusive, real-world investigation of persuasion in language and verbal behavior. J Comput Soc Sci. 2022;5(1):883–903. pmid:34869936
  82. Raab MH, Auer N, Ortlieb SA, Carbon C-C. The Sarrazin effect: The presence of absurd statements in conspiracy theories makes canonical information less plausible. Front Psychol. 2013;4:1–8.
  83. Raab MH, Ortlieb SA, Auer N, Guthmann K, Carbon C-C. Thirty shades of truth: conspiracy theories as stories of individuation, not of pathological delusion. Front Psychol. 2013;4:406. pmid:23847576
  84. Jagiello RD, Hills TT. Bad news has wings: dread risk mediates social amplification in risk communication. Risk Anal. 2018;38(10):2193–207. pmid:29813185
  85. Hills T. The dark side of information proliferation. Perspect Psychol Sci. 2019;14(3):323–30.
  86. Bangerter A. Transformation between scientific and social representations of conception: the method of serial reproduction. Br J Soc Psychol. 2000;39(Pt 4):521–35. pmid:11190683
  87. Esmail S. Leave the World Behind. Netflix; 2023.
  88. Antonini R, Chiaro D. The perception of dubbing by Italian audiences. Palgrave Macmillan UK; 2009. pp. 97–114. https://doi.org/10.1057/9780230234581_8
  89. Straubhaar JD. Beyond media imperialism: Asymmetrical interdependence and cultural proximity. Crit Stud Mass Commun. 1991;8(1):39–59.
  90. Stojanov A, Hannawa A. Toward French and Italian language validations of the Conspiracy Mentality Scale (CMS). 2023. https://doi.org/10.23668/PSYCHARCHIVES.13270
  91. Stojanov A, Halberstadt J. The Conspiracy Mentality Scale: distinguishing between irrational and rational suspicion. Soc Psychol. 2019;50(4):215–32.
  92. Rosseel Y. lavaan: An R package for structural equation modeling. J Stat Softw. 2012;48(2).
  93. Feeney JA, Noller P, Hanrahan M. Assessing adult attachment. Adv Pers Relatsh. 1994;5:128–52.
  94. Fossati A, Feeney JA, Donati D, Donini M, Novella L, Bagnato M, et al. On the dimensionality of the Attachment Style Questionnaire in Italian clinical and nonclinical participants. J Soc Pers Relat. 2003;20(1):55–79.
  95. Pincus AL, Ansell EB, Pimentel CA, Cain NM, Wright AGC, Levy KN. Initial construction and validation of the Pathological Narcissism Inventory. Psychol Assess. 2009;21(3):365–79. pmid:19719348
  96. Fossati A, Somma A, Borroni S, Markon KE, Krueger RF. The Personality Inventory for DSM-5 Brief Form: evidence for reliability and construct validity in a sample of community-dwelling Italian adolescents. Assessment. 2017;24(5):615–31. pmid:26676917
  97. Demszky D, Yang D, Yeager DS, Bryan CJ, Clapper M, Chandhok S, et al. Using large language models in psychology. Nat Rev Psychol. 2023.
  98. Feuerriegel S, Maarouf A, Bär D, Geissler D, Schweisthal J, Pröllochs N, et al. Using natural language processing to analyse text data in behavioural science. Nat Rev Psychol. 2025;4(2):96–111.
  99. Gilardi F, Alizadeh M, Kubli M. ChatGPT outperforms crowd workers for text-annotation tasks. Proc Natl Acad Sci U S A. 2023;120(30):e2305016120. pmid:37463210
  100. Rathje S, Mirea D-M, Sucholutsky I, Marjieh R, Robertson CE, Van Bavel JJ. GPT is an effective tool for multilingual psychological text analysis. Proc Natl Acad Sci U S A. 2024;121(34):e2308950121. pmid:39133853
  101. Diab A, Nefriana Rr, Lin Y-R. Classifying conspiratorial narratives at scale: False alarms and erroneous connections. Proceedings of the International AAAI Conference on Web and Social Media. 2024;18:340–53.
  102. Lin H, Lasser J, Lewandowsky S, Cole R, Gully A, Rand DG, et al. High level of correspondence across different news domain quality rating sets. PNAS Nexus. 2023;2(9):pgad286. pmid:37719749
  103. Adami M. Under a far-right government, journalists fear press freedom in Italy is heading down a slippery slope. Reuters Institute for the Study of Journalism. 2024. https://reutersinstitute.politics.ox.ac.uk/news/under-far-right-government-journalists-fear-press-freedom-italy-heading-down-slippery-slope
  104. Transparency International. Italy – Corruption Perceptions Index. Transparency International; 2025. https://www.transparency.org/en/countries/italy
  105. Imhoff R, Zimmer F, Klein O, António JHC, Babinska M, Bangerter A, et al. Conspiracy mentality and political orientation across 26 countries. Nat Hum Behav. 2022;6(3):392–403. pmid:35039654
  106. Antichi L, Olcese M, Prestia D, Barbagallo G, Migliorini L, Giannini M. Italian validation of the Generic Conspiracist Beliefs Scale (GCBS). BPA Appl Psychol Bull. 2023;81:99–114.
  107. Wu M, Aji AF. Style over substance: Evaluation biases for large language models. In: Rambow O, Wanner L, Apidianaki M, Al-Khalifa H, Di Eugenio B, Schockaert S, editors. Proceedings of the 31st International Conference on Computational Linguistics. Abu Dhabi, UAE: Association for Computational Linguistics; 2025. pp. 297–312. https://aclanthology.org/2025.coling-main.21/
  108. Su J, Zhuo TY, Mansurov J, Wang D, Nakov P. Fake news detectors are biased against texts generated by large language models. 2023.
  109. Tausczik YR, Pennebaker JW. The psychological meaning of words: LIWC and computerized text analysis methods. J Lang Soc Psychol. 2010;29(1):24–54.
  110. Agosti A, Rellini A. The Italian LIWC dictionary. Austin, TX: LIWC.net; 2007.
  111. Hills T, Miani A. A short primer on historical natural language processing. In: Hills T, Pogrebna G, editors. The Cambridge Handbook of Behavioural Data Science. Cambridge University Press; 2026.
  112. Miani A, Lewandowsky S. A lexicon-based method for the automatic detection of conspiratorial language in texts. Talk presented at the British Association for Applied Linguistics symposium “The language of fake news”. York, UK; 2023.
  113. Benoit K, Matsuo A. spacyr: Wrapper to the ’spaCy’ ’NLP’ Library. R package version 1.3.0. 2023. https://CRAN.R-project.org/package=spacyr
  114. Montani I, Honnibal M, Van Landeghem S, Boyd A, Peters H, et al. Explosion/spaCy: v3.2.1: doc_cleaner component, new matcher attributes, bug fixes and more. 2021.
  115. Benoit K, Watanabe K, Wang H, Nulty P, Obeng A, Müller S, et al. quanteda: An R package for the quantitative analysis of textual data. J Open Source Softw. 2018;3(30):774.
  116. Bennett EM, McLaughlin PJ. Neuroscience explanations really do satisfy: a systematic review and meta-analysis of the seductive allure of neuroscience. Public Underst Sci. 2024;33(3):290–307. pmid:37906516
  117. Brown ZC, Anicich EM, Galinsky AD. Compensatory conspicuous communication: Low status increases jargon use. Organ Behav Hum Decis Processes. 2020;161:274–90.
  118. Recordare A, Cola G, Fagni T, Tesconi M. Unveiling online conspiracy theorists: a text-based approach and characterization. 2024.
  119. Gambini M, Tardelli S, Tesconi M. The anatomy of conspiracy theorists: unveiling traits using a comprehensive Twitter dataset. Comput Commun. 2024;217:25–40.
  120. Fiagbenu ME. The stock market is rigged? Conspiracy beliefs and distrust predict lower stock market participation. Appl Cogn Psychol. 2022;36(5):978–95.
  121. van Prooijen J-W. Why education predicts decreased belief in conspiracy theories. Appl Cogn Psychol. 2017;31(1):50–8.
  122. Catenaccio P. A corpus-driven exploration of conspiracy theorising as a discourse type: Lexical indicators of argumentative patterning. John Benjamins Publishing Company; 2022. pp. 25–48. https://doi.org/10.1075/dapsac.98.02cat
  123. Bates D, Mächler M, Bolker B, Walker S. Fitting linear mixed-effects models using lme4. J Stat Softw. 2015;67(1).
  124. Kuznetsova A, Brockhoff PB, Christensen RHB. lmerTest package: tests in linear mixed effects models. J Stat Softw. 2017;82(13):1–26.
  125. Kuperman V, Stadthagen-Gonzalez H, Brysbaert M. Age-of-acquisition ratings for 30,000 English words. Behav Res Methods. 2012;44(4):978–90. pmid:22581493
  126. Brysbaert M, New B. Moving beyond Kucera and Francis: a critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English. Behav Res Methods. 2009;41(4):977–90. pmid:19897807
  127. Lewis ML, Frank MC. The length of words reflects their conceptual complexity. Cognition. 2016;153:182–95. pmid:27232162
  128. Green C, Kong APH, Brysbaert M, Keogh K. Crowdsourced and AI-generated age of acquisition (AoA) norms for vocabulary in print: Extending the Kuperman et al. (2012) norms. 2025. https://doi.org/10.31234/osf.io/698mw_v2
  129. Kim M, Crossley SA, Kyle K. Lexical sophistication as a multidimensional phenomenon: relations to second language lexical proficiency, development, and writing quality. Mod Lang J. 2017;102(1):120–41.
  130. Montefinese M, Vinson D, Vigliocco G, Ambrosini E. Italian age of acquisition norms for a large set of words (ItAoA). Front Psychol. 2019;10:278. pmid:30814969
  131. Birchenough JMH, Davies R, Connelly V. Rated age-of-acquisition norms for over 3,200 German words. Behav Res Methods. 2017;49(2):484–501. pmid:26944578
  132. Łuniewska M, Haman E, Armon-Lotem S, Etenkowski B, Southwood F, Anđelković D, et al. Ratings of age of acquisition of 299 words across 25 languages: Is there a cross-linguistic order of words? Behav Res Methods. 2016;48(3):1154–77. pmid:26276517
  133. Elvevåg B, Foltz PW, Weinberger DR, Goldberg TE. Quantifying incoherence in speech: an automated methodology and novel application to schizophrenia. Schizophr Res. 2007;93(1–3):304–16. pmid:17433866
  134. Willits JA, Rubin T, Jones MN, Minor KS, Lysaker PH. Evidence of disturbances of deep levels of semantic cohesion within personal narratives in schizophrenia. Schizophr Res. 2018;197:365–9. pmid:29153448
  135. Paulsen JS, Romero R, Chan A, Davis AV, Heaton RK, Jeste DV. Impairment of the semantic network in schizophrenia. Psychiatry Res. 1996;63(2–3):109–21. pmid:8878307
  136. Joulin A, Grave E, Bojanowski P, Mikolov T. Bag of tricks for efficient text classification. In: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers. Valencia, Spain; 2017. pp. 427–31. https://doi.org/10.18653/v1/e17-2068
  137. Grave E, Bojanowski P, Gupta P, Joulin A, Mikolov T. Learning word vectors for 157 languages. In: Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018). 2018.
  138. Teitelbaum L, Simchon A. Neural text embeddings in psychological research: A guide with examples in R. Psychol Methods. 2025. pmid:40504661
  139. Deschrijver C. On the metapragmatics of ‘conspiracy theory’: Scepticism and epistemological debates in online conspiracy comments. J Pragmatics. 2021;182:310–21.
  140. Nera K, Pantazi M, Klein O. “These Are Just Stories, Mulder”: exposure to conspiracist fiction does not produce narrative persuasion. Front Psychol. 2018;9:684. pmid:29875710
  141. Butler LD, Koopman C, Zimbardo PG. The psychological impact of viewing the film “JFK”: emotions, beliefs, and political behavioral intentions. Polit Psychol. 1995;16(2):237.
  142. Yazell B, Petersen K, Marx P, Fessenbecker P. The role of literary fiction in facilitating social science research. Humanit Soc Sci Commun. 2021;8(1).
  143. Martin CJ. Imagine all the people: Literature, society, and cross-national variation in education systems. World Polit. 2018;70(3):398–442.
  144. Bohr J. The ‘climatism’ cartel: why climate change deniers oppose market-based mitigation policy. Environ Polit. 2016;25(5):812–30.
  145. Van den Bulck H, Hyzen A. Of lizards and ideological entrepreneurs: Alex Jones and Infowars in the relationship between populist nationalism and the post-global media ecology. Int Commun Gazette. 2019;82(1):42–59.
  146. Albarracín D. Processes of persuasion and social influence in conspiracy beliefs. Curr Opin Psychol. 2022;48:101463. pmid:36215908
  147. Lantian A, Muller D, Nurra C, Klein O, Berjot S, Pantazi M. Stigmatized beliefs: Conspiracy theories, anticipated negative evaluation of the self, and fear of social exclusion. Eur J Soc Psychol. 2018;48(7):939–54.
  148. Smallpage SM, Enders AM, Drochon H, Uscinski JE. The impact of social desirability bias on conspiracy belief measurement across cultures. Polit Sci Res Methods. 2022;11(3):555–69.
  149. Graesser AC, Singer M, Trabasso T. Constructing inferences during narrative text comprehension. Psychol Rev. 1994;101(3):371–95. pmid:7938337
  150. Břízová L, Gerbec K, Šauer J, Šlégr J. Flat Earth theory: an exercise in critical thinking. Phys Educ. 2018;53(4):045014.
  151. Goodwin J. Sophistical refutations in the climate change debates. J Argument Context. 2019;8(1):40–64.
  152. Wagner-Egger P, Bronner G, Delouvée S, Dieguez S, Gauvrit N. Why ‘healthy conspiracy theories’ are (oxy)morons. Soc Epistemol Rev Reply Collect. 2019;8(3):50–67.
  153. Oppenheimer DM. Consequences of erudite vernacular utilized irrespective of necessity: problems with using long words needlessly. Appl Cogn Psychol. 2006;20(2):139–56.
  154. Pennycook G, Cheyne JA, Barr N, Koehler DJ, Fugelsang JA. On the reception and detection of pseudo-profound bullshit. Judgm Decis Mak. 2015;10(6):549–63.
  155. Silva AM, Limongi R, MacKinley M, Ford SD, Alonso-Sánchez MF, Palaniyappan L. Syntactic complexity of spoken language in the diagnosis of schizophrenia: a probabilistic Bayes network model. Schizophr Res. 2023;259:88–96. pmid:35752547
  156. Corso F, Pierri F, De Francisci Morales G. Do androids dream of unseen puppeteers? Probing for a conspiracy mindset in large language models. 2025. https://doi.org/10.48550/ARXIV.2511.03699
  157. Lansdall-Welfare T, Sudhahar S, Thompson J, Lewis J, FindMyPast Newspaper Team, Cristianini N. Content analysis of 150 years of British periodicals. Proc Natl Acad Sci U S A. 2017;114(4):E457–65. pmid:28069962
  158. Atkins S. Corpus Design Criteria. Literary Linguist Comput. 1992;7(1):1–16.
  159. Bandy J, Vincent N. Addressing “documentation debt” in machine learning: A retrospective datasheet for BookCorpus. In: Vanschoren J, Yeung S, editors. Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks. 2021. https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/54229abfcfa5649e7003b83dd4755294-Paper-round1.pdf
  160. Jensen T. Democrats and Republicans differ on conspiracy theory beliefs. Public Policy Polling. 2013. https://www.publicpolicypolling.com/polls/democrats-and-republicans-differ-on-conspiracy-theory-beliefs/
  161. Douglas KM, Sutton RM. Does it take one to know one? Endorsement of conspiracy theories is influenced by personal willingness to conspire. Br J Soc Psychol. 2011;50(3):544–52. pmid:21486312
  162. Uscinski JE. Defining conspiracy theory and related terms. New York, NY: Oxford University Press; 2025. pp. 139–55. https://doi.org/10.1093/9780197760222.003.0008
  163. Lantian A, Muller D, Nurra C, Douglas KM. Measuring belief in conspiracy theories: validation of a French and English single-item scale. Int Rev Soc Psychol. 2016;29(1):1.
  164. Mompelat L, Tian Z, Kessler A, Luettgen M, Rajanala A, Kübler S, et al. How “loco” is the LOCO corpus? Annotating the language of conspiracy theories. In: Proceedings of the 16th Linguistic Annotation Workshop (LAW-XVI) within LREC2022. 2022.
  165. Mohr C, Graves RE, Gianotti LR, Pizzagalli D, Brugger P. Loose but normal: a semantic association study. J Psycholinguist Res. 2001;30(5):475–83. pmid:11529423
  166. Kiang M. Schizotypy and language: a review. J Neurolinguist. 2010;23(3):193–203.
  167. Acar S, Sen S. A multilevel meta-analysis of the relationship between creativity and schizotypy. Psychol Aesthet Creat Arts. 2013;7(3):214–28.
  168. Wang L, Long H, Plucker JA, Wang Q, Xu X, Pang W. High schizotypal individuals are more creative? The mediation roles of overinclusive thinking and cognitive inhibition. Front Psychol. 2018;9.
  169. Barron D, Furnham A, Weis L, Morgan KD, Towell T, Swami V. The relationship between schizotypal facets and conspiracist beliefs via cognitive processes. Psychiatry Res. 2018;259:15–20. pmid:29024855
  170. Brotherton R, French CC. Belief in conspiracy theories and susceptibility to the conjunction fallacy. Appl Cogn Psychol. 2014;28(2):238–48.
  171. Dagnall N, Denovan A, Drinkwater K, Parker A, Clough P. Statistical bias and endorsement of conspiracy theories. Appl Cogn Psychol. 2017;31(4):368–78.
  172. Wabnegger A, Gremsl A, Schienle A. The association between the belief in coronavirus conspiracy theories, miracles, and the susceptibility to conjunction fallacy. Appl Cogn Psychol. 2021;35(5):1344–8. pmid:34518736
  173. Grimes DR. On the viability of conspiratorial beliefs. PLoS One. 2016;11(1):e0147905. pmid:26812482
  174. Nera K. Analyzing the causation between conspiracy mentality and belief in conspiracy theories: potential pitfalls and leads to address them. Zeitschrift für Psychologie. 2024;232(1):44–9.
  175. Imhoff R, Bertlich T, Frenken M. Tearing apart the “evil” twins: A general conspiracy mentality is not the same as specific conspiracy beliefs. Curr Opin Psychol. 2022;46:101349. pmid:35537265
  176. Sutton RM, Douglas KM, Trella C. Conspiracy mentality versus belief in conspiracy theories: response to Nera and some recommendations for researchers. Zeitschrift für Psychologie. 2024;232(1):50–4.
  177. Trella C, Sutton RM, Douglas KM. Semantic and causal relations between the conspiracy mentality and belief in conspiracy theories. Zeitschrift für Psychologie. 2024;232(1):7–17.
  178. Sutton RM, Douglas KM. Conspiracy theories and the conspiracy mindset: implications for political ideology. Curr Opin Behav Sci. 2020;34:118–22.
  179. Lewandowsky S. Conspiracist cognition: chaos, convenience, and cause for concern. J Cult Res. 2021;25(1):12–35.
  180. Wagner-Egger P, Bangerter A, Delouvée S, Dieguez S. Awake together: Sociopsychological processes of engagement in conspiracist communities. Curr Opin Psychol. 2022;47:101417. pmid:35970097
  181. Enders AM, Uscinski JE, Klofstad CA, Wuchty S, Seelig MI, Funchion JR, et al. Who supports QAnon? A case study in political extremism. J Polit. 2022;84(3):1844–9.
  182. Bromley DG, Richardson JT. The QAnon conspiracy narrative: Understanding the social construction of danger. Cambridge University Press; 2023. pp. 159–75. https://doi.org/10.1017/9781009052061.014
  183. Joseph P. Down the conspiracy theory rabbit hole: How does one become a follower of QAnon? Cambridge University Press; 2023. pp. 17–32. https://doi.org/10.1017/9781009052061.005
  184. Enders A, Klofstad C, Stoler J, Uscinski JE. How anti-social personality traits and anti-establishment views promote beliefs in election fraud, QAnon, and COVID-19 conspiracy theories and misinformation. Am Polit Res. 2023;51(2):247–59. pmid:38603388
  185. de Gourville D, Douglas KM, Sutton RM. Denialist vs. warmist climate change conspiracy beliefs: Ideological roots, psychological correlates and environmental implications. Br J Psychol. 2025. pmid:41084221
  186. Lewandowsky S, Lloyd E, Brophy S. When THUNCing Trumps thinking: What distant alternative worlds can tell us about the real world. Argumenta. 2018;3:217–31.
  187. Lewandowsky S, Ecker UKH, Cook J, van der Linden S, Roozenbeek J, Oreskes N, et al. Liars know they are lying: differentiating disinformation from disagreement. Humanit Soc Sci Commun. 2024;11(1).
  188. Koper AJ. More than a theory: conspiracy theorising as a practical tradition. Polit Stud Rev. 2024;23(4):994–1010.
  189. Weigmann K. The genesis of a conspiracy theory: Why do people believe in scientific conspiracy theories and how do they spread? EMBO Rep. 2018;19(4).
  190. Schatto-Eckrodt T, Clever L, Frischlich L. The seed of doubt: examining the role of alternative social and news media for the birth of a conspiracy theory. Soc Sci Comput Rev. 2024;42(5):1160–80.
  191. Stubbersfield J, Tehrani J, Flynn E. Faking the news: intentional guided variation reflects cognitive biases in transmission chains without recall. Cult Sci J. 2018;10(1):54–65.