Many individuals who engage with conspiracy theories come to do so through a combination of individual and social factors. The interaction between these factors is challenging to study using traditional experimental designs. Reddit.com is a large connected set of online discussion forums, including one (r/conspiracy) devoted to wide-ranging discussion of conspiracy theories. The availability of large datasets of user comments from Reddit gives a unique opportunity to observe human behavior in social spaces and at scale. Using a retrospective case control study design, we analyzed how Reddit users who would go on to engage with a conspiracy-related forum differed from other users in the language they used, the social environments where they posted, and potential interactions between the two factors. Together, the analyses provide evidence for self-selection into communities with a shared set of interests which can feed into a conspiratorial world-view, and show that these differences are detectable relative to controls even before users begin to post in r/conspiracy. We also suggest that survey-based and experimental studies may benefit from differentiating between passive private endorsement by individuals and active engagement with conspiracy theories in social spaces.
Citation: Klein C, Clutton P, Dunn AG (2019) Pathways to conspiracy: The social and linguistic precursors of involvement in Reddit’s conspiracy theory forum. PLoS ONE 14(11): e0225098. https://doi.org/10.1371/journal.pone.0225098
Editor: Mikolaj Morzy, Poznan University of Technology, POLAND
Received: May 20, 2019; Accepted: October 26, 2019; Published: November 18, 2019
Copyright: © 2019 Klein et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The analysis uses a publicly available dataset; access instructions are at: (https://www.reddit.com/r/datasets/comments/3bxlg7/i_have_every_publicly_available_reddit_comment/). A torrent of the dataset can also be found and downloaded here: (http://academictorrents.com/details/7690f71ea949b868080401c749e878f98de34d3d). The authors confirm that they had no special access privileges for the dataset.
Funding: This work was supported by Australian Research Council Grant DP190101507 (C.K.) and by an Australian Government Research Training Program (RTP) Scholarship (P.C.). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Conspiracy theories—beliefs attributing agency over important world events to the secret plotting of powerful, malevolent groups—have long been common in the general population [1–5]. Conspiracy beliefs have the potential to cause harm both to the individual and the community. Conspiracy endorsement is associated with lowered intention to participate in social and political causes, unwillingness to follow authoritative medical advice, increased willingness to seek alternative medicine [7–8], and a tendency to reject important scientific findings [9–10].
There are a variety of attitudes individuals might have towards conspiracy theories. Many people passively endorse conspiracy theories, in the sense that they will assent to one or more conspiracy-related beliefs if asked. Conspiracy endorsement can be a relatively weak attitude, reflecting a general suspicion of the powerful. Measurement of assent also appears to be strongly influenced by contextual and partisan cues. A subset of individuals who endorse conspiracy theories also actively engage with conspiracy theories by, for example, discussing and spreading them online.
Many researchers take people who endorse or engage with conspiracy theories to depart from the ordinary norms of belief formation. As such, there has been a search for psychological factors which explain why particular individuals find conspiracy theories attractive. The psychological literature offers two types of explanation. Cognitive explanations cite mental processes that are adaptive in some contexts (including evolutionary ones), but which go awry in contemporary political situations. Such processes might include Bayesian or abductive [15–16] inference about hidden causes, or over-enthusiastic pattern completion. Such processes are in some sense universal, but exaggerated instances play an especially important role in explaining conspiracy endorsement in individuals. Trait explanations, by contrast, focus on factors which explain individual differences in endorsement of conspiracy theories. While both are important for understanding why people engage with conspiracy beliefs, trait explanations have received more attention: the wide variation in acceptance of conspiracy theories among individuals, combined with the negative consequences of that acceptance, is a natural explanatory target.
An early and influential set of trait theories focused on the attraction of conspiracy theories to the powerless. Hofstadter wrote of the attraction of conspiracy theories to those who “…see only the consequences of power—and this through distorting lenses—and have no chance to observe its actual machinery” [18, p. 86]. More recent literature has focused on a positive relationship with measures of powerlessness and external locus of control, the relationship between feelings of powerlessness and cognitive factors such as illusory pattern perception, and the role of stressful life events. These accounts are not always critical of conspiracy endorsement: some assign it an important role in institutional critique by the politically disadvantaged or emphasize the role that conspiracy theorizing may play in masking more salient social tensions.
A general role for distrust of and defiance towards authority has also been posited. Goertzel identified lack of interpersonal trust as a key predictor of conspiratorial belief. Goertzel also noted a close relationship between endorsing conspiracy theories and being a member of a racial minority. The relevant conspiracy theories often resemble legitimate reasons for distrust by a minority community—for example, the theory that human immunodeficiency virus (HIV) was engineered to decimate African-American communities appears to be more popular among those aware of the Tuskegee Syphilis experiments and other historical medical abuses of African-Americans [25–27]. In such theories, the driving role is often played by standing negative emotions directed towards the powerful: anger, disgust, or paranoia.
A further cluster of theories focuses on the role of factors such as individual self-esteem in the face of difficult life circumstances [31,32] or in positive individuation from others, emphasizing the role that conspiracy endorsement can play in these processes. Conspiracy theories can provide exculpatory narratives for individual hardship. The narrative of how one came to believe in conspiracy theories can also be a powerful anchor for identity, functioning for the individual as a kind of “transformative experience”. Belief in conspiracy theories appears to correlate with a need for uniqueness. Drawing on qualitative work, Franks, Bangerter, Bauer, Hall, & Noort suggest that conspiracy engagement may be part of an optimistic worldview that focuses on personal and social growth.
Recent literature also suggests that conspiracy endorsement may constitute a distinct construct which correlates with a variety of more traditional personality traits [37,38]. One popular way of cashing out this construct is in terms of a conspiratorial worldview, in which people who endorse one conspiracy theory are more likely to endorse others. Goertzel suggested that conspiracy endorsers tend towards a “monological belief system” in which beliefs in any two conspiracy theories tend to be incorporated under a common umbrella [39–42].
Psychological theories of conspiracy endorsement tend to focus on the individual abstracted from their social context. While it is clear that social context plays a role in shaping conspiracy belief endorsement in individuals, studies examining social factors associated with conspiracy belief are comparatively rare. Yet social effects undoubtedly exist. Social groups affect whether ambiguous information is interpreted in a conspiratorial manner [45–46]. Studies examining the structure of communication patterns within social networks have considered how homophily can affect the way beliefs spread and persist [47–48], how beliefs can be distorted through collective memory, and how homophily can exacerbate the spread of misinformation in particular.
These studies focus on the effect of network structure rather than individual differences in personality and psychology. This should not be construed as a process whereby individuals are passively embedded in a social space. Individuals who endorse conspiracy beliefs are known to seek out others with shared beliefs, which means that active social self-selection may provide a plausible mechanism for how people assimilate multiple conspiracy theories within a conspiratorial worldview.
The relationship between self-selection and stable traits taps into an old debate in both personality and social psychology. An individual’s behavior depends both on their intrinsic dispositions and on the situations in which they find themselves. At short timescales, the interaction between personality and situation is widely accepted. Longer timescales present opportunities for more complex interactions. As Allport noted, personality determines which situations people will embrace and which they will avoid. In Funder’s [54, p. 575] pithy formulation, “…while a certain kind of bar may tend to generate a situation that creates fights around closing time, only a certain kind of person will choose to go to that kind of bar in the first place.”
Buss distinguished three processes at work in long-term interactions. Individuals select a social milieu, which in turn evokes certain responses from them given their traits, and over the long run they manipulate their social surroundings to create and reinforce a niche. Emmons, Diener, and Larson similarly distinguished choice mechanisms and affect mechanisms. In the former, individuals’ personalities lead them to consciously seek or avoid certain kinds of situations, while in the latter, people merely prefer situations that fit with their personalities and so are reinforced for choosing appropriately.
While these longer-term interactions are important, studying them presents unique challenges. Continuous recording of reactions is only possible over relatively short time periods in the lab. The study of longer-term interactions has primarily been approached by intermittent experience sampling [58,59] or varieties of retrospective self-report [60,61]. Both techniques provide valuable evidence but face well-known methodological challenges. Individuals who actively engage with conspiracy theories in social spaces are also challenging to study using experimental designs. Conspiracy engagement often comes with skepticism about official motives, making it difficult to recruit participants. There is also a risk of selection bias in recruitment, as a small subset of conspiracy engagers tend to be disproportionately visible.
Methodological innovation: Online datasets
The availability of large datasets from online social media offers a unique opportunity to observe longitudinal interactions between social groups and individual traits. Participation in online forums is typically open and voluntary, allowing individuals considerable latitude in selecting their social environment.
In addition, online forums provide a much larger source of data for analysis, providing enough power to examine a larger number of factors at once. The sheer size of some corpora allows for effective unsupervised analyses, avoiding the coding issues present in traditional survey designs. While they are restricted to studying associations rather than experimentally manipulated effects, large observational datasets can be used to generate new hypotheses and guide future research designs.
Studies examining or simulating the behaviors of people expressing conspiracy beliefs online have primarily focused on how the spread of conspiracy beliefs is facilitated by network structure [65–67]. Social reinforcement and homophily play an important role in this spread, a fact that has been demonstrated both by modeling and observational studies [69,70] of social networks. Community feedback and reinforcement also play an important role in shaping users’ actions in online forums [71,72].
One important source of online conspiracy theorizing is the website Reddit.com (or ‘Reddit’). Reddit is a network of around 1.2 million online forums (known as subreddits), with around 330 million monthly active users. Reddit data have been used to examine the structure of conversations and propagation of information, and hateful and offensive speech [74–76], in addition to general linguistic analyses [77,78].
Reddit has also been identified as a key part of the “propaganda pipeline” that amplifies conspiracy theories on their way to more visible websites (such as Facebook) and mainstream media. Reddit includes a dedicated subreddit (r/conspiracy) for discussing conspiracy theories. Examination of the comments of Reddit users who post in r/conspiracy therefore provides a unique window into a socially significant subset of individuals who actively engage with conspiracy theories in a social space.
The present research
There is at least one study about how Reddit users interact within the r/conspiracy subreddit after salient events, and one which examines the diversity of interests among Reddit users who posted to an online forum for conspiracy beliefs. However, we know of no studies examining their behavior over time and before they first post in r/conspiracy. Our aim was to examine what makes Reddit users who would go on to engage with conspiracy theories different from other Reddit users.
We undertook an exploratory analysis using a case control study design, examining the language use and posting patterns of Reddit users who would go on to post in r/conspiracy (the r/conspiracy group). We analyzed where and what they posted in the period preceding their first post in r/conspiracy to understand how personal traits and social environment combine as potential risk factors for engaging with conspiracy beliefs.
Our goal was to identify distinctive traits of the r/conspiracy group, and the social pathways through which they travel to get there. We compared the r/conspiracy group to matched controls who began by posting in the same subreddits at the same time, but who never posted in the r/conspiracy subreddit. We conducted three analyses. First, we examined whether r/conspiracy users differed from other users in terms of what they said. Our hypothesis was that users eventually posting in r/conspiracy would exhibit differences in language use compared to those who do not post in r/conspiracy, suggesting differences in traits important for individual variation. Second, we examined whether the same set of users differed from other users in terms of where they posted. We hypothesized that engagement with certain subreddits is associated with a higher risk of eventually posting in r/conspiracy, suggesting that social environments play a role in the risk of engagement with conspiracy beliefs. Third, we examined language differences after accounting for the social norms of where they posted. We hypothesized that some differences in language use would remain after accounting for language use differences across groups of similar subreddits, suggesting that some differences are not only a reflection of the social environment but represent intrinsic differences in those users.
Materials and methods
Users and ethics approval.
Our study participants were Reddit users who posted comments to online forums between 2007 and 2015. They were selected from a publicly available dataset comprising 1.10 billion comments from 1,419,406 users posted to 224,625 subreddits between October 2007 and May 2015 (see S1 Appendix).
Participants whose data was used were not contacted. The data were originally collected and made available under the terms permitted by the Reddit Terms of Service. As we were using publicly available data examined and reported in aggregate, ethics exemptions were granted by both Macquarie University and the Australian National University.
Selection of users.
Reddit allows posts by automated programs known as bots, which post comments in ways that can skew descriptive statistics. To remove accounts associated with bots, we first looked at each poster in a target set of subreddits (including r/conspiracy) and calculated the number of other subreddits in which they posted (their forum diversity). A list was compiled of usernames whose forum diversity was more than 15 standard deviations above the mean. Manual inspection revealed that every member of this list was probably a bot, whereas more aggressive cuts also included posters who were clearly human. This was combined with a list of usernames corresponding to known bots posted on Reddit itself (See S1 Appendix). This process identified 466 bots, which were excluded from subsequent analyses.
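As an illustration, the forum-diversity cut can be sketched as follows. This is a minimal sketch, not the study's actual pipeline; the input mapping from usernames to distinct-subreddit counts is hypothetical:

```python
from statistics import mean, stdev

def flag_probable_bots(forum_counts, z_cutoff=15):
    """Flag usernames whose forum diversity (number of distinct
    subreddits posted in) lies more than z_cutoff standard
    deviations above the mean diversity across all users."""
    diversities = list(forum_counts.values())
    mu, sigma = mean(diversities), stdev(diversities)
    return {user for user, n in forum_counts.items()
            if n > mu + z_cutoff * sigma}
```

Because extreme outliers themselves inflate the standard deviation, a one-sided threshold this far from the mean flags only the most anomalously prolific accounts, consistent with the conservative cut described above.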
The r/conspiracy group was defined as the set of users posting at least 3 comments in r/conspiracy, and at least 4 times in each of the six contiguous 30-day periods immediately prior to their first post in r/conspiracy. Users who posted in r/conspiracy but did not meet both criteria were excluded from the analysis. All comments made by included users before their first post to r/conspiracy were included in the subsequent analysis, and these were used to characterize their language use and topics of interest. Posts by this group after their first post to r/conspiracy were not used in the analysis.
Reddit has its own culture with a complex set of norms. Some users post in r/conspiracy because they want to debunk conspiracy theories, while others enjoy “trolling” by deliberately provoking conspiracy theorists. Previous work on r/conspiracy suggested that between 4% and 12% of posters in r/conspiracy might fall into one of these categories. We manually examined the posts of 100 identified r/conspiracy users and determined that at most 9 of them were consistently either skeptical or non-serious. We took this to be an acceptable noise rate. Posters in r/conspiracy who do not engage seriously with conspiracy theories should be expected to be more like other posters on Reddit; at most, then, the presence of such individuals in our dataset would only reduce sensitivity, rather than create false positives.
Reddit is a diverse community. Many differences between the average Reddit user and a user posting in r/conspiracy may simply reflect that diversity. To minimize spurious differences, we constructed a control group by matching target users to posters whose first post was in the same forum at nearly the same time. Each user’s matched control thus “enters” our dataset at the same place and time but ends up on a different trajectory. To construct the matched control group, we first created a candidate control group by identifying users who never posted in r/conspiracy, and who had posted at least 4 times in any 6 contiguous 30-day periods. From the candidate control group, we then constructed matched pairs for each user in the target group. For each r/conspiracy poster, we identified their first post on Reddit. We then identified users from the candidate group whose first post was in the same subreddit within 24 hours of the r/conspiracy poster. We then iteratively assigned the user whose first post was closest in time to the first post of the r/conspiracy poster, under the constraint that matches had to be unique. We examined only comments between their first post and the final post of their matched control. To ensure matched controls had enough posts to reliably compare, we eliminated 480 matched pairs in which the control user had not posted at least 35 comments in that restricted timespan. This process identified 15,370 users in each group (30,740 total), which were used for all subsequent analyses.
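The closest-in-time pairing can be sketched as follows. This is a simplified sketch under the assumption that ties are resolved globally by temporal distance; the tuple input format is hypothetical:

```python
def match_controls(cases, candidates, window=24 * 3600):
    """Greedily pair each case with the unique candidate control whose
    first post was in the same subreddit and closest in time.

    cases, candidates: lists of (user, first_subreddit, first_post_ts),
    with timestamps in seconds. Returns {case_user: control_user}.
    """
    # Enumerate all admissible pairings within the time window,
    # then assign closest pairs first, keeping matches unique.
    pairs = []
    for cu, cs, ct in cases:
        for ku, ks, kt in candidates:
            if cs == ks and abs(ct - kt) <= window:
                pairs.append((abs(ct - kt), cu, ku))
    pairs.sort()
    matched, used = {}, set()
    for _, cu, ku in pairs:
        if cu not in matched and ku not in used:
            matched[cu] = ku
            used.add(ku)
    return matched
```

Sorting all admissible pairs by temporal distance and assigning greedily guarantees that each control is used at most once, approximating the iterative assignment described above.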
Reddit posts consist of an initial post followed by nested comments underneath. The dataset includes only the nested sets of comments that follow the original (“link”) posts, not the link posts themselves. These comments and their associated metadata were the basis for analysis.
When analyzing the r/conspiracy users, we examined only comments from their first post to their final post before posting in r/conspiracy. When analyzing matched controls, only comments posted in the period between their first post and their matched partner’s first post in r/conspiracy were considered.
We pre-processed comments posted by included users to remove escape characters, URLs, and any lines which began with a ‘>‘ (which is typically used to mark text quoted from another author). Comments with fewer than 3 words after processing were omitted. We then concatenated each user’s comments, with subsequent analyses performed on a per-user basis.
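A minimal version of this pre-processing step might look like the following sketch; the exact escape-character handling used in the study is not specified, so the backslash replacement here is an assumption:

```python
import re

def clean_comment(text, min_words=3):
    """Strip quoted ('>') lines, URLs, and escape characters, then
    drop comments with fewer than min_words words (returns None)."""
    # Drop lines beginning with '>', which mark quoted text.
    lines = [ln for ln in text.splitlines() if not ln.startswith(">")]
    text = "\n".join(lines)
    text = re.sub(r"https?://\S+", " ", text)  # remove URLs
    text = text.replace("\\", " ")             # remove escape characters
    words = text.split()
    return " ".join(words) if len(words) >= min_words else None
```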
Psychological traits shape the language that individuals use. Computational analysis of language usage has been successfully used to investigate personality traits [82,83] as well as individual differences in emotionality and social relationships. Linguistic features are a good marker of whether a discussion will be constructive, both experimentally and on Reddit. Computational analysis of word use within the r/conspiracy forum has provided evidence about common narrative structures of conspiracy engagement and individual differences in users’ interests.
To measure word use across particular language categories we used Empath [88,89], an open-source Python package which extracts linguistic characteristics from written text. Empath categories are built in a multi-stage process. First, categories and corresponding seed words are derived from pre-existing semantic knowledge bases such as ConceptNet. A vector space model is then trained on a large corpus of text, including Reddit comment data, and each category is expanded to include terms which occur near seed terms in the vector space model, indicating semantic similarity. Finally, categories are pruned via human inspection to eliminate intruders [82,83]. Although we used pre-defined Empath categories, the library allows for expansion to other user-defined categories via the same procedure, making it a flexible tool for examining textual data.
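At its core, an Empath score is a (normalized) count of tokens falling within a category's word list. The following is a simplified stand-in for that computation, not the Empath library itself; the category and its word list are hypothetical:

```python
def category_scores(text, categories, normalize=True):
    """Score text against lexical categories in the way Empath does at
    its core: count tokens belonging to each category's word list,
    optionally normalizing by total token count.

    categories: {category_name: set of category words}.
    """
    tokens = text.lower().split()
    total = len(tokens) or 1
    scores = {}
    for name, words in categories.items():
        hits = sum(1 for t in tokens if t in words)
        scores[name] = hits / total if normalize else hits
    return scores
```

The real library is invoked analogously (constructing an `Empath` object and calling its `analyze` method on the text), with the category word lists built by the seed-and-expand procedure described above.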
Empath scores themselves are weighted word frequencies across members of the category. Empath normalizes for aggregate comment length, returning frequency counts per category as the primary data. Empath scores are highly correlated (r = 0.91) with those of the more widely known Linguistic Inquiry and Word Count (LIWC) package where the two overlap. Empath has a broader range of empirically derived lexical categories than LIWC. Further, its categorization scheme was partly trained on Reddit data, so it has found significant use in linguistic analysis of online discussion, including the spread of hate speech on Twitter [90–92] and YouTube, and the interface between media and technology [94,95]. Empath has also been used specifically to study Reddit, including community growth and self-expressions of mental illness.
Empath evaluates a large number (194) of lexical categories, many of which are irrelevant to the present study. We focused on a subset of 85 categories corresponding to 6 different psychological theories about the antecedents of conspiracy belief (Table 1). To determine which lexical categories were included for each factor, the three authors each independently chose candidates; categories chosen by at least 2 of the three raters were included. Each of the proposed theories could themselves be operationalized in a variety of ways, and so the procedure was designed to err on the side of inclusion.
For each category, we examined whether there was a significant difference in the posting frequency of terms in that category between the r/conspiracy group and the group of matched controls, using Welch’s t-test with an alpha of 0.01 (corrected for multiple comparisons). To estimate the magnitude of the difference, we calculated Cohen’s |d| (hereafter ‘d’). For this and subsequent analyses, we considered only terms which showed a significant difference and had d>0.2. These high-d factors were the basis for subsequent analysis.
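The two statistics can be computed directly. The sketch below gives the t statistic and effect size only; in practice p-values for Welch's t would be taken from the t distribution with Welch-Satterthwaite degrees of freedom (e.g. via scipy.stats.ttest_ind with equal_var=False):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(x, y):
    """Welch's t statistic for two samples with unequal variances."""
    vx, vy = variance(x), variance(y)
    return (mean(x) - mean(y)) / sqrt(vx / len(x) + vy / len(y))

def cohens_d(x, y):
    """Cohen's d: the mean difference scaled by the pooled
    standard deviation of the two samples."""
    nx, ny = len(x), len(y)
    pooled = sqrt(((nx - 1) * variance(x) + (ny - 1) * variance(y))
                  / (nx + ny - 2))
    return (mean(x) - mean(y)) / pooled
```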
To determine which subreddits might represent important pathways through which users travel to reach r/conspiracy, we looked for over-representation of r/conspiracy users in the subreddits relative to the control group. To do this we examined each subreddit and counted the number of r/conspiracy users and matched control group users that had posted at least one comment. To avoid spurious results and potential re-identification of individual users, we analyzed the set of subreddits in which r/conspiracy and control group users had posted at least once during matched timespans, and a minimum of 100 users across both groups had posted at least once.
To be able to examine how language use differed within certain communities on Reddit, we grouped similar subreddits by theme community. We constructed a similarity network based on simple co-posting behaviors, without considering the chronology of the posts. Each subreddit was represented by a node in the network, with undirected edges between the nodes weighted by the number of shared users (users who posted in both subreddits at least once) divided by the total number of users who posted in either subreddit (i.e., the Jaccard similarity).
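Constructing the Jaccard-weighted edges can be sketched as follows; the input mapping from subreddits to poster sets is hypothetical:

```python
from itertools import combinations

def jaccard_edges(posters, min_weight=0.0):
    """Build weighted edges between subreddits from co-posting:
    weight = |users in both| / |users in either| (Jaccard similarity).

    posters: {subreddit: set of users who posted there at least once}.
    Returns {(subreddit_a, subreddit_b): weight}, omitting edges at
    or below min_weight.
    """
    edges = {}
    for a, b in combinations(posters, 2):
        union = len(posters[a] | posters[b])
        w = len(posters[a] & posters[b]) / union if union else 0.0
        if w > min_weight:
            edges[(a, b)] = w
    return edges
```

The resulting weighted network is the input to the community detection step described next.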
We then applied a community detection algorithm to the similarity network to group subreddits by theme based on the number of users they shared. Community detection algorithms are used to identify clusters of well-connected nodes in a network. Most algorithms aim to identify clusters by maximizing the number of connections within each community compared to the number of connections between communities. We applied the greedy modularity optimization method , which is commonly used for large networks. In this application, the algorithm starts with all subreddits in separate communities and then merges according to a gain in modularity—a measure of the density of connections within versus between communities. The number of communities is not specified in advance; rather, the algorithm stops when no further merging of communities improves modularity.
For each group of subreddits within a theme community we calculated two measures characterizing the differences between r/conspiracy users and the matched control group users. The user-count ratio was defined as the number of r/conspiracy users posting at least once in the constituent subreddits relative to the number of matched controls doing so. The post-count ratio was defined as the total number of posts from r/conspiracy users in the constituent subreddits of the theme community relative to the number of posts from the matched control group users. Each value can be interpreted as a signal of the risk associated with posting in that theme community.
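Given per-user post counts, both ratios reduce to simple tallies. The data shapes in this sketch are hypothetical:

```python
def community_ratios(case_posts, control_posts, subreddits):
    """User-count and post-count ratios for one theme community.

    case_posts / control_posts: {user: {subreddit: n_posts}}.
    subreddits: the community's constituent subreddits.
    Returns (user_count_ratio, post_count_ratio).
    """
    def users_and_posts(group):
        users = sum(1 for posts in group.values()
                    if any(s in subreddits for s in posts))
        total = sum(n for posts in group.values()
                    for s, n in posts.items() if s in subreddits)
        return users, total

    cu, cp = users_and_posts(case_posts)      # r/conspiracy group
    ku, kp = users_and_posts(control_posts)   # matched controls
    return (cu / ku if ku else float("inf"),
            cp / kp if kp else float("inf"))
```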
Interactions between language use and social environment.
For each high-d Empath factor identified in the language use experiment, and each theme community identified in the social environment experiment, we examined the contrast between language use by control and r/conspiracy users but restricted just to posts made in any of the subreddits in that theme community. As before, differences were tested by Welch’s t-test with an alpha of 0.01 (corrected), and Cohen’s d was used as an estimate of the magnitude of effect, where we again used 0.2 as a threshold for indicating a substantial difference.
The 15,370 r/conspiracy users and the 15,370 matched controls posted to a set of 38,797 unique subreddits, and of these 1,834 met our inclusion criteria for analysis. Within this set of subreddits, r/conspiracy users posted a median of 743 (interquartile range 333 to 1,749) comments, with a median of 20,599 (IQR 8,302 to 52,321) total postprocessed words of comments. Their matched controls posted a median of 300 (IQR 142 to 707) comments, and a median of 8,082 (IQR 3,407 to 21,068) total postprocessed words of comments. The results indicate that r/conspiracy users posted at a substantially higher rate. We were unable to measure whether the groups were posting at different times of day and we did not measure whether comments were spread across a broad set of posts or concentrated within longer conversations on individual posts.
From the set of 91 Empath categories included in the analysis, 75 exhibited a significant difference between r/conspiracy users and their matched controls. Among those with significant differences, 26 differences had d>0.2 (Fig 1a).
Differences between the r/conspiracy group and the matched control group by: (a) language use, including all Empath categories with significant positive (red) and negative (blue) differences and Cohen’s d>0.2; (b) posting differences by theme community given by user-count ratio and post-count ratio; and (c) differences in language use accounting for theme community. Positive (red) and negative (blue) differences are colored where Cohen’s d>0.2. Grey circles indicate significance which did not reach the effect size threshold.
Where r/conspiracy group users posted certain terms more frequently than the matched control group users, the most prominent differences were in the Empath categories ‘crime’ (d = 0.45), ‘stealing’ (d = 0.43), and ‘law’ (d = 0.38). The categories of ‘dispute’, ‘dominant_hierarchical’, ‘power’, ‘government’, and ‘terrorism’ each produced d>0.35.
There were relatively few significant negative differences: the only categories where matched control group users posted certain terms substantially more frequently were the Empath categories ‘friends’ (d = -0.31), ‘optimism’ (d = -0.22) and ‘affection’ (d = -0.21). These terms were drawn from the “Maintenance of self-esteem” and “Personal values and individuation” categories (Table 1), though the higher frequencies among the matched control group users are suggestive of alienation rather than positive bonding among r/conspiracy group users.
Other categories suggested by theories in Table 1 did not make the threshold for inclusion, typically because their effect size was too low. Notably absent are Empath categories encompassing specific negative affects like ‘fear’ (n.s.), ‘sadness’ (d = 0.09), and ‘nervousness’ (d = -0.1), for which the differences are either small or in the wrong direction from what would be expected given the theory.
When applied to the similarity network of 1,834 subreddits, the community detection algorithm identified 16 theme communities. The communities ranged in size from 44 subreddits (the “Basic Reddit” theme community) to 334 subreddits (the “Gaming & Television” theme community). The communities were characterized and named by examining their top 10 subreddits and choosing names that indicated the typical contents of those subreddits (Table 2), revealing differences in the topics of interest. Some of the theme communities covered more than one area of interest (for example, the “Guns & Cars” theme community), while others were relatively closely related, which is apparent in a visualization of the complete network (S1 Fig).
The user-count ratios varied by theme community (Fig 1b). The highest user-count ratio was in the “Politics” theme community, where there were 2.4 times as many r/conspiracy users as control users that posted in at least one subreddit in the group. The lowest user-count ratio was for the “Basic Reddit” theme community, which included several of the most popular subreddits including “r/AskReddit”, “r/pics”, and “r/funny”, where nearly equal numbers of users from each group had posted at least once. For some theme communities, r/conspiracy users were consistently over-represented across most of the constituent subreddits, whereas other theme communities included smaller clusters of over-representation of r/conspiracy users among a larger set where there was less over-representation (S1 Fig).
The r/conspiracy group users tended to post more frequently across all theme communities (Fig 1b), but there were notable differences relative to the user-count ratios. For example, r/conspiracy users were over-represented in the “Pornography”, “Tech Culture”, and “Music” theme communities (posting across many of the subreddits at least once) but had relatively low post-count ratios, suggesting they may have been less engaged with those communities. Conversely, in the “Internet Culture” and “Toxic Reddit” theme communities, r/conspiracy users were not only over-represented in the constituent subreddits but also had relatively high post-count ratios, suggesting ongoing and stronger engagement with the themes and with other users in the subreddits in those theme communities.
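The two ratios used here can be computed per theme community from per-group comment records. A minimal sketch, assuming each comment is stored as a (user, subreddit) pair; the group labels, users, and counts are hypothetical:

```python
def count_ratios(posts_by_group, theme_subreddits):
    """User-count and post-count ratios for one theme community.

    `posts_by_group` maps a group label to its list of
    (user, subreddit) comment records; the ratios compare the
    r/conspiracy group against the matched control group.
    """
    totals = {}
    for group, posts in posts_by_group.items():
        in_theme = [(u, s) for u, s in posts if s in theme_subreddits]
        # (distinct posters in the theme, total posts in the theme)
        totals[group] = (len({u for u, _ in in_theme}), len(in_theme))
    c_users, c_posts = totals["conspiracy"]
    m_users, m_posts = totals["control"]
    return c_users / m_users, c_posts / m_posts

# Hypothetical comment records
posts_by_group = {
    "conspiracy": [("a", "r/politics"), ("a", "r/politics"),
                   ("b", "r/progressive"), ("c", "r/funny")],
    "control": [("x", "r/politics"), ("y", "r/funny")],
}
theme = {"r/politics", "r/progressive"}
user_ratio, post_ratio = count_ratios(posts_by_group, theme)
```

A user-count ratio above 1 with a markedly higher post-count ratio corresponds to the “stronger engagement” pattern described above.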
In the “Basic Reddit” theme community there were clear differences between the language use of the r/conspiracy group relative to the matched controls, and for 16 of the 26 Empath categories the difference was substantial (d>0.2). For example, users who would go on to post in r/conspiracy were much more likely to use “aggression” terms in the constituent subreddits in the “Basic Reddit” theme community.
Examining differences in language use within each of the theme communities indicates that there were significant differences between r/conspiracy users and matched control group users within most of the theme communities (Fig 1c), but these differences were not as clear as the overall differences in language use. The results suggest that many of the clear differences observed between the two groups in the overall language use analysis were likely due to where the r/conspiracy users were posting rather than what they were posting.
For example, in the “Politics” and “Pornography” theme communities, where r/conspiracy users were heavily over-represented relative to their matched controls, there were no significant differences in any of the 26 Empath categories. The results were similar in the “Tech Culture” and “Social Support” theme communities, where r/conspiracy users tended to closely match the control group in language use.
Language and social factors
The r/conspiracy group users exhibited clear differences from other similar Reddit users in terms of both where they posted and what they posted. These differences in language use and social environment provide support for some of the theories of conspiracy belief.
First, there were clear differences in overall language use between r/conspiracy group and the matched control group. Most of the Empath factors exhibiting strong differences were associated with prior literature suggesting that a “conspiratorial mindset” leads to endorsement of conspiracy theories (Table 1). In general, Empath factors for which we observed a clear positive difference were aligned with issues of hierarchy and abuses of power. Also notable were Empath categories like “deception” and “terrorism”, which can be linked to an idea that is central to many conspiracy theories: that of hidden enemies among us.
Some have argued that the key psychological feature of conspiracy theorists is a “monological belief system” in which everything connects to everything else [3,39,42]. Recent work on r/conspiracy suggests that users with monological belief systems are responsible for the majority of posts but make up only a small percentage of users. However, these results are not necessarily incompatible. A monological belief system may simply be the most extreme, and most salient, version of a more general conspiratorial mindset. Further, one lesson from these results might be the need to distinguish the factors which lead people to engage with conspiracy theories in the first place from the factors which distinguish more and less extreme engagement with conspiracy theories. This would fit well with recent work emphasizing the multidimensionality of conspiracy constructs.
We did not find evidence to support previous literature observing differences in personality traits or varieties of compensation or psychopathology. Where previous literature focused on negative emotional states as drivers of conspiracy theory endorsement, we found evidence only for the non-specific ‘negative_emotion’ Empath category (d = 0.24). Equally striking was the lack of difference in use of language related to anger, disaffection, or other compensatory emotions. This contradicts some of the accounts that focus on the hostility of conspiracy endorsers, but concords with more recent work that highlights the lack of hostility in comments from conspiracy endorsers. For example, Wood and Douglas carried out a study of conspiracy-related comments on a news website. Comments were divided into "conspiracist" (those arguing for a conspiratorial explanation of events) and "conventionalist" (those arguing for a conventional account of events) comments, with a focus on comments judged to be aimed at persuading others. These comments were then rated for tone. Interestingly, comments from conspiracists were rated as less hostile than the comments from "conventionalists". Our findings lend support to this conclusion. We found evidence that r/conspiracy users were less likely than the control group to use terms from the Empath categories “affection”, “optimism”, and “friends”, which might be suggestive of alienation or social isolation [18,22].
Some of the divergence from previous findings may come from the use of matched controls. Our study compared Reddit users who would go on to post in r/conspiracy with users who began posting on Reddit at the same time and in the same subreddits. People who endorse conspiracy theories may appear angrier or more disaffected compared to a general population, but this may be more common across online discourse and Reddit users in general.
Importantly, Wood and Douglas point out the need to distinguish the target and type of hostility: to whom and regarding what features is a comment hostile? Conspiracy theorists might often be hostile towards others for being "dupes" of the system; non-conspiracy believers might be hostile towards the perceived paranoia of conspiracists, or their propensity for creative, ad hoc additions to shore up their theories, and so on. This is a potential confound in our study. Whereas Wood and Douglas first selected comments as either conspiracist or conventionalist, our study of conspiracy posters includes those who go on to argue against conspiracy theorists as well as for them. If non-conspiracists tend to be angrier towards those who forward conspiracy theories, this may affect what we found in the tone of users who ended up in the conspiracy forum versus those who did not. Moreover, as we have suggested, there may be a greater effect of anger in general on Reddit, which could make communities look more similar on this variable. That said, we looked at hostile language across a variety of subreddits, not just conspiracy-focused ones, suggesting that hostility is not being driven solely by conspiracy-related factors.
There may also be important differences between the phenomena we have focused on and those that have been the focus of previous studies. As we discussed, we examined people who have sought out a forum dedicated to conspiracy theories and who actively discuss and share thoughts on the topic. This might be a different phenomenon to simply passively endorsing conspiracy theories when questioned about them. This might be relevant to our findings on powerlessness. One possibility is that the type of sharing and active engagement seen in the forum is itself a type of reclaiming of power: a place to put forward one’s thoughts, to help one’s peers and the wider community see the truth, and so on. Passive engagement, by contrast, may stem from or promote powerlessness (and would be difficult for this method to detect). This might potentially be a source of difference when it comes to results regarding feelings of powerlessness.
There was also a clear difference in the risk profiles of different theme communities. The highest risk by far was in the “Politics” theme community, where there were 2.4 times as many r/conspiracy users posting in the subreddits compared to the control group, and they posted 5 times as many comments overall. Though the subreddits included in the “Politics” theme community appeared to skew to the political right, the group also includes subreddits such as r/progressive, as well as relatively neutral subreddits such as r/PoliticalDiscussion and debate-oriented subreddits like r/DebateReligion, which cater to a wide variety of political leanings. Some of the spread is likely due to the vigorous debate across political positions that characterizes Reddit, but it appears that political debate (broadly construed) is especially attractive to users who would go on to post in r/conspiracy.
A useful framework that encompasses both the language use and social environments was suggested by Douglas and Wood [103,102,40], who note that endorsement of certain first-order conspiracy beliefs seems to be mediated by higher-order beliefs about the existence of cover-ups. Similarly, McCauley and Jacques suggest that individuals believe, on Bayesian grounds, that conspiracies are more likely to be successful. As has been emphasized in the past (including by members of r/conspiracy), some conspiracy theories have proven to be true. As we noted above, for example, there is a relationship between conspiracy endorsement about medical experimentation among the African American community and awareness of actual abuses and cover-ups around the same issue. The conspiratorial mindset need not be read as wholly irrational: it may instead reflect awareness of actual past abuses of power. This is consistent with the “conspiratorial mindset” markers noted in the broad language analysis.
Notable over-representation by both user-count and post-count also occurred in the theme communities we labeled “Drugs and Bitcoin” and “Toxic Reddit”. The former includes a heterogeneous set of topics (including UFO and paranormal speculation) but can be characterized by a willingness to engage with socially “fringe” ideas of many kinds. The “Toxic Reddit” theme community also represents fringe engagement, but instead on the edges of acceptable taste. The most popular subreddits appear comparatively innocuous, but include r/KotakuInAction, which is a known hotbed of sexism and racism. Further, the subreddits in which r/conspiracy posters are also most over-represented include several that have since been banned for questionable content, such as r/WhiteRights and r/fatpeoplehate.
The inclusion of these subreddits suggests that the “conspiratorial mindset” tag may be in need of further refinement. On the one hand, it skirts tautology if read literally: claiming that people find a particular conspiracy attractive because they find conspiracy theories generally attractive carries relatively little explanatory power. On the other hand, the label may be overly restrictive. A more general consideration may be that conspiracy theories sit outside the mainstream of ordinary thinking, and that some people are attracted to a range of non-mainstream beliefs. That would assimilate conspiracy endorsement to a broader range of endorsements, which may in turn suggest novel lines of research.
Some of the discrepancies between our results and previous experimental studies may be due to differences in the population under study. In our analyses, we observed conspiracy engagement—users who were actively posting comments on stories in the r/conspiracy subreddit. Most experimental studies focus on willingness to endorse conspiracy theories, which appears to be more prevalent. General powerlessness may make acceptance of conspiracy theories more attractive—but it requires a conspiratorial mindset to engage with and spread conspiracy theories in a social context. Taxonomizing individuals by the contents of their belief (i.e. by discussing “conspiracy theorists”) may thus be too coarse a cut for scientific purposes, and more fine-grained categorizations may be needed to capture the full dynamics of conspiracy endorsement. Our results suggest that people who are willing to discuss conspiracy theories in a social context are different from, or a special subset of, the relatively broad populations who would endorse conspiracy theories when asked in isolation.
In the first two analyses, we identified the personal traits and social factors associated with future engagement with conspiracy beliefs. But these analyses are unable to shed light on whether, for example, r/conspiracy users appear angrier because they happen to be posting in subreddits which host particularly vigorous debates, or whether they exhibit anger in their posts even in the context of the social environments they inhabit.
A primary goal of the study was to disentangle self-selection effects from other cohort effects. Users from the r/conspiracy group differed from their matched cohort both in where they post and in the language they use in their posts. We interpreted the observed interaction between language and social factors as showing that this difference is primarily due to self-selection, rather than to the effect of either invariant traits or situational evocation.
Several patterns of interaction are theoretically possible. A complete lack of significant differences, especially in high-risk theme communities, would suggest that people self-select: that language use by r/conspiracy subjects is different because they tend to post in communities where that language finds a welcoming home. Conversely, consistent differences in the same linguistic factors across themes that vary in their association with eventual r/conspiracy posting (that is, in their user-count or post-count ratios) would suggest the importance of traits regardless of social communities. Differences in certain theme communities across factors would suggest that certain theme communities selectively enhance traits, possibly as part of a “radicalization” phenomenon. Finally, more complex patterns would suggest a more complicated causal story incorporating multiple processes.
The evidence for self-selection is twofold. First, across nearly all theme communities, there were relatively few significant differences in language use, and even fewer that met the pre-specified criterion we used in the first language use analysis. The lack of effect was most striking in the highest-risk theme communities like “Politics”. For example, r/conspiracy users discussed terrorism more than the control group and were more likely (and more frequent) posters in political subreddits, but within those subreddits their focus on terrorism was unremarkable. If the differences between the groups were due solely to extreme individual traits, we would expect to see language use differences persist when analyzed within the theme community.
Second, the exception to this general pattern is what we have dubbed the “Basic Reddit” theme. These are subreddits in which nearly everyone posts (i.e. the user-count ratio is close to 1.0). They are among the most popular subreddits (such as r/AskReddit, r/funny, and r/pics), and are generally innocuous in nature. Within this group, the language differences observed in the first analysis remained significant and strong in the third analysis. This suggests that the differences in the first analysis cannot be explained by situational factors because the differences between r/conspiracy users and the matched controls are still apparent within the subreddits that are most general.
The picture that best fits these observations is situational self-selection. In situational self-selection, individuals with a conspiratorial mindset select and post in subreddits where they appear relatively unremarkable. Further, this appears to be a process closer to the “choice mechanism” of Emmons et al.: individuals appear to post just as stridently in subreddits where this would not necessarily be reinforced.
Of course, to say that the language within the politically themed subreddits is generally aligned with the social setting does not rule out the possibility that what is asserted is more conspiratorial in nature. Consider the following (deliberately obfuscated) comments from around the same time in politics-themed subreddits from an r/conspiracy user and their matched control:
r/conspiracy user: “Do you really deny that a politician might make decisions, after winning the race, that would help people who funded their campaign (or even to hurt people who funded their opponents?) I’m not saying that only rich people win elections, I’m saying that money can corrupt political decision-making.”
matched control user: “The Tea Party movement took off when Glenn Beck began endorsing them. I am sure that MSNBC would cover a thoughtful left-wing counter-movement. What we would really need is enough push to make the movement credible, and then have some attractive faces in the media to promote it.”
Both quotes are concerned with questions of power, influence, and public perception. But the former is intuitively more suggestive of a conspiratorial mindset than the latter.
Finally, we note that the social environment analysis was relatively coarse-grained. Within each theme, there are certain subreddits where r/conspiracy posters are substantially over-represented when measured by user-count ratio. For example, the “Geek culture” theme includes subreddits such as r/collapse (devoted to discussion of “Resource depletion and ecological breakdown leading to the end of civilization”), r/WikiLeaks, and r/Anarchism. There were 8 times as many r/conspiracy group users posting in these subreddits as there were users from the matched control group. While the topics of the subreddits are aligned with traditional geek culture, they are more amenable to discussions related to conspiracy theories. The “Basic Reddit” theme community included subreddits such as r/Libertarian and r/MensRights, where there were 5 times as many r/conspiracy users as matched controls. The overall grouping still makes sense (as these are popular subreddits), but topics of discussion were also more likely to align with known conspiracy theories.
This raises the intriguing possibility of more fine-grained “gradients” within theme communities. Even when agents’ affiliations are driven entirely by self-selection, they face a discovery problem: it is not always obvious, especially in a crowded field, which groups will be most welcoming. To continue Funder’s analogy: even if I like seedy bars, I might have trouble finding appropriately violent ones when I move to a new city. A good solution is word of mouth: I seek out the roughest bar I can find and listen to what the patrons already there say about other bars in the city. If there’s one that sounds more exciting, I try it. By iterating this process, I can eventually find action sufficient to my tastes.
Alfano, Carter, and Cheong have dubbed this process “self-radicalization”. We suspect that a similar self-radicalization process may be at work in online forums. There is considerable traffic between, and discussion about, different subreddits. This word of mouth should aid the discovery process of new subreddits. Consider Fig 2, which shows chronological pathways of both r/conspiracy users and control group users through selected subreddits in the “Guns and Cars” theme community. Although the numbers are small, movement through increasingly risky subreddits towards r/conspiracy occurs more often in the r/conspiracy group compared to the control group.
Examples of chronological posting patterns among r/conspiracy (orange, above) and control group (cyan, below) users for selected subreddits in the “Guns & Cars” theme community. Numbers within semi-circles are the total number of users posting in the subreddit; numbers above and below arcs are the number of users with contiguous chronological posts in two subreddits (from left to right).
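Counting “contiguous chronological posts in two subreddits”, as in Fig 2, amounts to collapsing each user’s time-ordered posts into a path of subreddit visits and tallying the moves between them. A minimal sketch; the posting histories below are hypothetical:

```python
from collections import Counter

def transitions(user_posts):
    """Count contiguous subreddit-to-subreddit moves across users.

    `user_posts` maps a user to a list of (timestamp, subreddit)
    records; consecutive posts in the same subreddit collapse into
    a single visit, so each arc marks a move between subreddits.
    """
    counts = Counter()
    for posts in user_posts.values():
        path = []
        for _, sub in sorted(posts):
            if not path or path[-1] != sub:
                path.append(sub)
        counts.update(zip(path, path[1:]))
    return counts

# Hypothetical posting histories
history = {
    "u1": [(1, "r/guns"), (2, "r/guns"), (3, "r/conspiracy")],
    "u2": [(1, "r/cars"), (5, "r/guns"), (9, "r/conspiracy")],
}
arcs = transitions(history)
```

The resulting arc counts, computed separately for the r/conspiracy and control groups, are the numbers shown above and below the arcs in Fig 2.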
Limitations and future directions
Our study was subject to several methodological limitations. The dataset on which this study was based may have gaps in the availability of some of the comments from users, with a per-user risk of approximately 4% that one or more comments are missing (the rate of missing comments was highest in 2009). However, since the study design relies on identifying differences between groups of users, missing data of this structure and magnitude would be unlikely to affect the results.
The dataset tracks Reddit users rather than individuals, and individuals can have multiple user accounts. To minimize this limitation, we constrained our set of participants to include only those users who posted with a minimum frequency over an extended period. Similarly, we only used information about users who were active participants in subreddits and could not determine whether users were reading forums without commenting (“lurking”). We think it is reasonable to take commenting to indicate active participation. However, we cannot rule out the possibility that some of our user accounts represent “alts”: that is, accounts made by individuals specifically to hide their less socially acceptable activities from searches. The presence of alts is a plausible explanation for the pattern observed in the “Pornography” theme community, which has a relatively high user-count ratio and a relatively low post-count ratio. However, we think the presence of alts would not fundamentally change our conclusions. Indeed, the maintenance of multiple social identities online might serve to aid self-selection of social groups, by reducing the need to mediate conflicts [106,107]. Large-scale online work may thus represent a valuable tool for looking at the negotiation of social identities.
Differences in language are useful but noisy proxies for psychological states. We note that other linguistic analyses have been used to study straightforwardly psychological phenomena. There are a range of other studies that use language markers from social media users to predict behavior changes relative to mental health conditions [108–110]. We think that the same logic readily extends to other, non-pathological psychological states. The robustness of our findings suggests that the results are a reasonable signal of differences in psychology.
Lexical analyses using a bag-of-words approach omit important context and can overlook subtle differences in how topics are discussed. The emotion-based Empath categories serve as something of a proxy for sentiment analysis, but proper sentiment analysis might give further distinguishing information. More powerful unsupervised methods such as topic modelling can also pick up differences in rhetorical and narrative style which differentiate different attitudes. Previous work on r/conspiracy suggests that skeptics could be differentiated from conspiracy endorsers by such means. Developing more principled means of manual analysis of identified posters and comments might similarly aid interpretability.
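The bag-of-words limitation can be made concrete: a lexical category counter scores an endorsing comment and a skeptical one identically, because it ignores stance and word order. A minimal sketch; the category terms and example comments are hypothetical, not drawn from Empath or the study data:

```python
import re

# Hypothetical terms for an Empath-style 'terrorism' category
TERRORISM_TERMS = {"attack", "bomb", "terror"}

def category_count(text, terms):
    """Bag-of-words count of category terms, ignoring word order."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(1 for t in tokens if t in terms)

endorsing = "The bomb attack was clearly an inside job."
skeptical = "There is no evidence the bomb attack was an inside job."

# Opposite attitudes, identical category scores
a = category_count(endorsing, TERRORISM_TERMS)
b = category_count(skeptical, TERRORISM_TERMS)
```

Distinguishing such stances is exactly what sentiment analysis or topic modelling, as suggested above, could add over category counts.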
Reddit is a global phenomenon with around 330 million monthly active users. As such, the sample is likely more diverse than many smaller experimental studies. The demographics of Reddit are skewed, however. Around 63% of Reddit users self-identify as male; 80% are between 18 and 35 years old; 82% are white; and 59% are single (See S1 Appendix). As such, while our findings are a reliable characterization of the population of Reddit users, we may not have reliably characterized engagement with conspiracy theories among minority populations. However, further diversity would likely support rather than undermine our findings. In addition, Reddit is known to have played a role in spreading and amplifying misinformation from other parts of the web, suggesting that the importance of studying Reddit goes beyond the community itself.
We note again that because we studied Reddit users, we examined only people who have engaged with conspiracy theories in a social space, rather than the broader set of people who endorse or accept conspiracy theories. Both populations are important. We have suggested that some of the divergence between our findings and experimental results might reflect differences between these two groups. Experimental studies may be able to incorporate some of these insights by focusing on willingness to disseminate and discuss, rather than merely endorse, conspiratorial theories.
Large-scale data analyses of online forums can shed light on how and why people engage with conspiracy theories. Results from analyses of what Reddit users post and where show that there are consistent language use differences between users who will eventually become engaged in a conspiracy theory forum compared to similar users who do not. The results also suggest that many of these differences in language are related to users actively selecting to engage with social groups whose interests and motives tend to fit with an incipient conspiratorial mindset. This does not rule out the possibility that further engagement with those groups ultimately helps to enhance conspiratorial leanings, but this would suggest amplification of existing biases rather than a de novo radicalization process. Further research would benefit from better understanding of the differences between people who endorse or accept conspiracy theories relative to those who engage with conspiracy theories in social spaces, as well as a deeper understanding of the confluence of personal traits and social circumstance that precedes engagement with conspiracy theories.
S1 Appendix. Appendix: Dataset availability and supporting details.
S1 Fig. The complete co-posting network.
Node color represents degree of over-representation (blue = less, red = more). Node size represents absolute numbers of posters in the subreddit from the matched samples. Subreddits are organized via heuristic so that subreddits are closer to other subreddits with whom they are connected by larger overlap of users.
- 1. Byford J. Beyond belief: The social psychology of conspiracy theories and the study of ideology. In: Antaki C, Condor S, editors. Rhetoric Ideology and Social Psychology: Essays in Honour of Michael Billig. London: Routledge; 2014. pp. 83–93.
- 2. Byford J, Billig M. The emergence of antisemitic conspiracy theories in Yugoslavia during the war with NATO. Patterns of Prejudice. 2001; 35(4): 50–63.
- 3. Goertzel T. Belief in conspiracy theories. Political Psychology. 1994; 15: 731–742.
- 4. Ross MW, Essien EJ, Torres I. Conspiracy beliefs about the origin of HIV/AIDS in four racial/ethnic groups. Journal of Acquired Immune Deficiency Syndromes. 2006; 41(3): 342–344. pmid:16540935
- 5. Williams J. PC wars: Politics and theory in the academy. New York, NY: Routledge; 2013.
- 6. Jolley D, Douglas KM. The social consequences of conspiracism: Exposure to conspiracy theories decreases intentions to engage in politics and to reduce one’s carbon footprint. British Journal of Psychology. 2014; 105(1): 35–56. pmid:24387095
- 7. Jolley D, Douglas KM. The effects of anti-vaccine conspiracy theories on vaccination intentions. PLoS ONE. 2014; 9(2): e89177. pmid:24586574
- 8. Oliver JE, Wood T. Medical conspiracy theories and health behaviors in the United States. JAMA Internal Medicine. 2014; 174(5): 817–818. pmid:24638266
- 9. Lewandowsky S, Gignac GE, Oberauer K. The Role of Conspiracist Ideation and Worldviews in Predicting Rejection of Science. PLoS ONE. 2013; 8(10): e75637. pmid:24098391
- 10. Simmons WP, Parsons S. Beliefs in conspiracy theories among African Americans: A comparison of elites and masses. Social Science Quarterly. 2005; 86(3): 582–598.
- 11. Wood MJ. Conspiracy suspicions as a proxy for beliefs in conspiracy theories: Implications for theory and measurement. British Journal of Psychology. 2017; 108(3): 507–527. pmid:28677916
- 12. Enders AM, Smallpage SM. On the measurement of conspiracy beliefs. Research & Politics. 2018; 5(1): 2053168018763596.
- 13. van Prooijen JW, Van Vugt M. Conspiracy theories: Evolved functions and psychological mechanisms. Perspectives on psychological science. 2018; 13(6): 770–788. pmid:30231213
- 14. McCauley C, Jacques S. The popularity of conspiracy theories of presidential assassination: A Bayesian analysis. Journal of Personality and Social Psychology. 1979; 37(5): 637–644.
- 15. Keeley BL. Of conspiracy theories. Journal of Philosophy. 1999; 96: 109–126.
- 16. Vitriol JA, Marsh JK. The illusion of explanatory depth and endorsement of conspiracy beliefs. European Journal of Social Psychology. 2018; 48(7): 955–969.
- 17. van Prooijen JW, Douglas KM, De Inocencio C. Connecting the dots: Illusory pattern perception predicts belief in conspiracies and the supernatural. European journal of social psychology. 2018; 48(3): 320–335. pmid:29695889
- 18. Hofstadter R. The paranoid style in American politics. Harper’s magazine. 1964; 229(1374): 77–86.
- 19. Abalakina-Paap M, Stephan WG, Craig T, Gregory WL. Beliefs in Conspiracies. Political Psychology. 1999; 20(3): 637–647.
- 20. Whitson JA, Galinsky AD. Lacking control increases illusory pattern perception. Science. 2008; 322(5898): 115–117. pmid:18832647
- 21. Swami V, Furnham A, Smyth N, Weis L, Lay A, Clow A. Putting the stress on conspiracy theories: Examining associations between psychological stress, anxiety, and belief in conspiracy theories. Personality and Individual Differences. 2016; 99: 72–76.
- 22. Miller S. Conspiracy theories: public arguments as coded social critiques: a rhetorical analysis of the TWA flight 800 conspiracy theories. Argumentation and Advocacy. 2002; 39(1): 40–56.
- 23. Swami V. Social psychological origins of conspiracy theories: the case of the Jewish conspiracy theory in Malaysia. Frontiers in Psychology. 2012; 3 (280).
- 24. Swami V, Chamorro-Premuzic T, Furnham A. Unanswered questions: A preliminary investigation of personality and individual difference predictors of 9/11 conspiracist beliefs. Applied Cognitive Psychology. 2010; 24(6): 749–761.
- 25. Thomas SB, Quinn SC. The Tuskegee syphilis study, 1932 to 1972: implications for HIV education and AIDS risk education programs in the black community. American Journal of Public Health. 1991; 81(11): 1498–1505. pmid:1951814
- 26. Jones JH. Bad blood: The Tuskegee syphilis experiment. NY: Simon & Schuster; 1993.
- 27. Bogart LM, Thorburn S. Are HIV/AIDS conspiracy beliefs a barrier to HIV prevention among African Americans? Journal of Acquired Immune Deficiency Syndromes. 2005; 38(2): 213–218. pmid:15671808
- 28. Barron D, Morgan K, Towell T, Altemeyer B, Swami V. Associations between schizotypy and belief in conspiracist ideation. Personality and Individual Differences. 2014; 70: 156–159.
- 29. Darwin H, Neave N, Holmes J. Belief in conspiracy theories. The role of paranormal belief, paranoid ideation and schizotypy. Personality and Individual Differences. 2011; 50(8): 1289–1293.
- 30. Freeman D, Bentall RP. The concomitants of conspiracy concerns. Social psychiatry and psychiatric epidemiology. 2017; 52(5): 595–604. pmid:28352955
- 31. Robins RS, Post JM. Political paranoia. New Haven, CT: Yale University Press; 1999.
- 32. Young TJ. Cult violence and the identity movement. Cultic Studies Journal. 1990; 7: 150–159.
- 33. Raab MH, Ortlieb SA, Auer N, Guthmann K, Carbon CC. Thirty shades of truth: Conspiracy theories as stories of individuation, not of pathological delusion. Frontiers in Psychology. 2013; 4(406): 1–9.
- 34. Paul LA. Transformative experience. Oxford: Oxford University Press; 2014.
- 35. Lantian A, Muller D, Nurra C, Douglas KM. I know things they don’t know! Social Psychology. 2017; 48(3): 160–173.
- 36. Franks B, Bangerter A, Bauer MW, Hall M, Noort MC. Beyond “monologicality”? Exploring conspiracist worldviews. Frontiers in Psychology. 2017; 8: 861. pmid:28676768
- 37. Brotherton R, French CC, Pickering AD. Measuring belief in conspiracy theories: The generic conspiracist beliefs scale. Frontiers in Psychology. 2013; 4: 279. pmid:23734136
- 38. Bruder M, Haffke P, Neave N, Nouripanah N, Imhoff R. Measuring individual differences in generic beliefs in conspiracy theories across cultures: Conspiracy Mentality Questionnaire. Frontiers in Psychology. 2013; 4: 225. pmid:23641227
- 39. Swami V, Coles R, Stieger S, Pietschnig J, Furnham A, Rehim S, Voracek M. Conspiracist ideation in Britain and Austria: Evidence of a monological belief system and associations between individual psychological differences and real-world and fictitious conspiracy theories. British Journal of Psychology. 2011; 102(3): 443–463. pmid:21751999
- 40. Wood MJ, Douglas KM, Sutton RM. Dead and alive: Beliefs in contradictory conspiracy theories. Social Psychological and Personality Science. 2012; 3(6): 767–773.
- 41. Sutton RM, Douglas KM. Examining the monological nature of conspiracy theories. In: van Prooijen JW, van Lange PAM, editors. Power, Politics, and Paranoia: Why People are Suspicious of their Leaders. Cambridge: Cambridge University Press; 2014. pp. 254–272.
- 42. van der Linden S. The conspiracy-effect: Exposure to conspiracy theories (about global warming) decreases pro-social behavior and science acceptance. Personality and Individual Differences. 2015; 87: 171–173.
- 43. Backstrom L, Huttenlocher D, Kleinberg J, Lan X. Group formation in large social networks. In: Ungar L, Craven M, Gunopulos D, Eliassi-Rad T, editors. Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining—KDD’06, New York, NY: ACM; 2006. pp. 44–54.
- 44. Douglas KM, Sutton RM, Cichocka A. The psychology of conspiracy theories. Current Directions in Psychological Science. 2017; 26(6): 538–542. pmid:29276345
- 45. Binning KR, Sherman DK. Categorization and communication in the face of prejudice: When describing perceptions changes what is perceived. Journal of Personality and Social Psychology. 2011; 101(2): 321. pmid:21463076
- 46. Bantimaroudis P. “Chemtrails” in the sky: Toward a group-mediated delusion theory. Studies in Media and Communication. 2016; 4(2): 23–31.
- 47. Centola D, González-Avella JC, Eguíluz VM, San Miguel M. Homophily, cultural drift, and the co-evolution of cultural groups. Journal of Conflict Resolution. 2007; 51(6): 905–929.
- 48. Holme P, Newman ME. Nonequilibrium phase transition in the coevolution of networks and opinions. Physical Review E. 2006; 74(5): 056108.
- 49. Coman A, Momennejad I, Drach RD, Geana A. Mnemonic convergence in social networks: The emergent properties of cognition at a collective level. Proceedings of the National Academy of Sciences. 2016; 113(29): 8171–8176.
- 50. Vosoughi S, Roy D, Aral S. The spread of true and false news online. Science. 2018; 359(6380): 1146–1151. pmid:29590045
- 51. Sunstein CR, Vermeule A. Conspiracy theories: Causes and cures. Journal of Political Philosophy. 2009; 17: 202–227.
- 52. Doris JM. Lack of character: Personality and moral behavior. Cambridge: Cambridge University Press; 2002.
- 53. Allport GW. Pattern and growth in personality. Oxford, England: Holt, Rinehart and Winston; 1961.
- 54. Funder DC. Persons, situations, and person-situation interactions. In: John OP, Robins RW, Pervin LA, editors. Handbook of Personality. New York, NY: Guilford Press; 2008. pp. 568–580.
- 55. Buss DM. Selection, evocation, and manipulation. Journal of Personality and Social Psychology. 1987; 53(6): 1214–1221. pmid:3320336
- 56. Emmons RA, Diener E, Larsen RJ. Choice and avoidance of everyday situations and affect congruence: Two models of reciprocal interactionism. Journal of Personality and Social Psychology. 1986; 51(4): 815–826.
- 57. Bakeman R, Gottman JM. Observing interaction: An introduction to sequential analysis. Cambridge: Cambridge University Press; 1997.
- 58. Gundogdu D, Finnerty AN, Staiano J, Teso S, Passerini A, Pianesi F, et al. Investigating the association between social interactions and personality states dynamics. Royal Society Open Science. 2017; 4(9): 170194. pmid:28989732
- 59. Sherman RA, Rauthmann JF, Brown NA, Serfass DG, Jones AB. The independent effects of personality and situations on real-time expressions of behavior and emotion. Journal of Personality and Social Psychology. 2015; 109(5): 872–888. pmid:25915131
- 60. Frederickx S, Hofmans J. The role of personality in the initiation of communication situations. Journal of Individual Differences. 2014; 35(1): 30–37.
- 61. Rauthmann JF, Sherman RA, Nave CS, Funder DC. Personality-driven situation experience, contact, and construal: How people’s personality traits predict characteristics of their situations in daily life. Journal of Research in Personality. 2015; 55: 98–111.
- 62. Klein C, Clutton P, Polito V. Topic modeling reveals distinct interests within an online conspiracy forum. Frontiers in Psychology. 2018; 9: 189. pmid:29515501
- 63. Gosling SD, Mason W. Internet research in psychology. Annual Review of Psychology. 2015; 66(1): 877–902.
- 64. Halevy A, Norvig P, Pereira F. The unreasonable effectiveness of data. IEEE Intelligent Systems. 2009; 24(2): 8–12.
- 65. Del Vicario M, Bessi A, Zollo F, Petroni F, Scala A, Caldarelli G, Stanley HE, Quattrociocchi W. The spreading of misinformation online. Proceedings of the National Academy of Sciences. 2016; 113(3): 554–559.
- 66. Del Vicario M, Vivaldo G, Bessi A, Zollo F, Scala A, Caldarelli G, Quattrociocchi W. Echo chambers: Emotional contagion and group polarization on Facebook. Scientific Reports. 2016; 6: 37825.
- 67. Quattrociocchi W, Caldarelli G, Scala A. Opinion dynamics on interacting networks: media competition and social influence. Scientific Reports. 2014; 4: 4938. pmid:24861995
- 68. Centola D. The spread of behavior in an online social network experiment. Science. 2010; 329(5996): 1194–1197. pmid:20813952
- 69. Weng L, Menczer F, Ahn YY. Virality prediction and community structure in social networks. Scientific Reports. 2013; 3: 2522. pmid:23982106
- 70. Zhang J, Hamilton W, Danescu-Niculescu-Mizil C, Jurafsky D, Leskovec J. Community identity and user engagement in a multi-community landscape. In: International AAAI Conference on Web and Social Media; 2017. Retrieved from https://aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/view/15706
- 71. Cheng J, Danescu-Niculescu-Mizil C, Leskovec J. How community feedback shapes user behavior. arXiv preprint arXiv:1405.1429; 2014.
- 72. Zhang J, Danescu-Niculescu-Mizil C, Sauper C, Taylor SJ. Characterizing online public discussions through patterns of participant interactions. In: Karahalios K, Monroy-Hernandez A, Lampinen A, Fitzpatrick G, editors. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW); 2018. 198.
- 73. Choi D, Han J, Chung T, Ahn YY, Chun BG, Kwon T. Characterizing conversation patterns in Reddit: From the perspectives of content properties and user participation behaviors. In: Sharma A, editor. Proceedings of the 2015 ACM on Conference on Online Social Networks. New York, NY: ACM; 2015. pp. 233–243.
- 74. Mohan S, Guha A, Harris M, Popowich F, Schuster A, Priebe C. The impact of toxic language on the health of Reddit communities. In: Mouhoub M, Langlais P, editors. Advances in Artificial Intelligence, 30th Canadian Conference on Artificial Intelligence, LNAI 10233. Cham: Springer International Publishing; 2017. pp. 51–56.
- 75. Nithyanand R, Schaffner B, Gill P. Online political discourse in the Trump era. arXiv preprint arXiv:1711.05303; 2017.
- 76. Saleem HM, Dillon KP, Benesch S, Ruths D. A web of hate: Tackling hateful speech in online social spaces. arXiv preprint arXiv:1709.10159; 2017.
- 77. Cole JR, Ghafurian M, Reitter D. Is word adoption a grassroots process? An analysis of Reddit communities. In: Lee F, Lin Y-R, Osgood N, Thomson R, editors. Social, Cultural, and Behavioral Modeling (10th International Conference on Social Computing, Behavioral-Cultural Modeling and Prediction and Behavior Representation in Modeling and Simulation), Springer International Publishing; 2017. pp. 236–241.
- 78. Völske M, Potthast M, Syed S, Stein B. Mining Reddit to learn automatic summarization. In: Proceedings of the Workshop on New Frontiers in Summarization. Stroudsburg, PA: Association for Computational Linguistics; 2017. pp. 59–63.
- 79. Benkler Y, Faris R, Roberts H. Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford: Oxford University Press; 2018.
- 80. Zannettou S, Caulfield T, De Cristofaro E, Kourtellis N, Leontiadis I, Sirivianos M, et al. The web centipede: understanding how web communities influence each other through the lens of mainstream and alternative news sources. In: Uhlig S, Maennel O, editors. Proceedings of the 2017 Internet Measurement Conference. New York, NY: ACM; 2017. pp. 405–417.
- 81. Samory M, Mitra T. Conspiracies online: User discussions in a conspiracy community following dramatic events. In: International AAAI Conference on Web and Social Media; 2018. Retrieved from https://www.aaai.org/ocs/index.php/ICWSM/ICWSM18/paper/view/17907
- 82. Yarkoni T. Personality in 100,000 words: A large-scale analysis of personality and word use among bloggers. Journal of Research in Personality. 2010; 44(3): 363–373. pmid:20563301
- 83. Haber EM. On the stability of online language features: How much text do you need to know a person? arXiv preprint arXiv:1504.06391; 2015.
- 84. Tausczik YR, Pennebaker JW. The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology. 2010; 29(1): 24–54.
- 85. Niculae V, Danescu-Niculescu-Mizil C. Conversational markers of constructive discussions. arXiv preprint arXiv:1604.07407; 2016.
- 86. Tan C, Niculae V, Danescu-Niculescu-Mizil C, Lee L. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. In: Proceedings of the 25th International Conference on World Wide Web. Switzerland: International World Wide Web Conferences Steering Committee; 2016. pp. 613–624.
- 87. Samory M, Mitra T. ‘The government spies using our webcams’: The language of conspiracy theories in online discussions. In: Karahalios K, Monroy-Hernandez A, Lampinen A, Fitzpatrick G, editors. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 152; 2018.
- 88. Fast E, Chen B, Bernstein MS. Empath: Understanding topic signals in large-scale text. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. New York, NY: ACM; 2016. pp. 4647–4657.
- 89. Fast E, Chen B, Bernstein MS. Lexicons on demand: neural word embeddings for large-scale text analysis. In: Sierra S, editor. Proceedings of the 26th International Joint Conference on Artificial Intelligence, ijcai.org; 2017. pp. 4836–4840.
- 90. Mathew B, Kumar N, Goyal P, Mukherjee A, et al. Analyzing the hate and counter speech accounts on Twitter. arXiv preprint arXiv:1812.02712; 2018.
- 91. Ribeiro MH, Calais PH, Santos YA, Almeida VA, Meira Jr W. “Like Sheep Among Wolves”: Characterizing hateful users on Twitter. arXiv preprint arXiv:1801.00317; 2017.
- 92. Ribeiro MH, Calais PH, Santos YA, Almeida VA, Meira Jr W. Characterizing and detecting hateful users on Twitter. arXiv preprint arXiv:1803.08977; 2018.
- 93. Ottoni R, Cunha E, Magno G, Bernadina P, Meira Jr W, Almeida V. Analyzing right-wing YouTube channels: Hate, violence and discrimination. In: Proceedings of the 10th ACM Conference on Web Science. New York, NY: ACM; 2018. pp. 323–332.
- 94. Caetano JA, Magno G, Cunha E, Meira Jr W, Marques-Neto HT, Almeida V. Characterizing the public perception of WhatsApp through the lens of media. arXiv preprint arXiv:1808.05927; 2018.
- 95. Cunha E, Magno G, Caetano J, Teixeira D, Almeida V. Fake news as we feel it: Perception and conceptualization of the term “fake news” in the media. In: Staab S, Koltsova O, Ignatov DI, editors. International Conference on Social Informatics. Springer International Publishing; 2018. pp. 151–166.
- 96. Lin Z, Salehi N, Yao B, Chen Y, Bernstein MS. Better when it was smaller? Community content and behavior after massive growth. In: ICWSM; 2017. pp. 132–141.
- 97. Sekulić I, Gjurković M, Šnajder J. Not just depressed: Bipolar disorder prediction on Reddit. In: Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. Association for Computational Linguistics; 2018.
- 98. Levandowsky M, Winter D. Distance between sets. Nature. 1971; 234(5323): 34–35.
- 99. Blondel VD, Guillaume JL, Lambiotte R, Lefebvre E. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment. 2008; 2008(10): P10008.
- 100. Swami V, Barron D, Weis L, Voracek M, Stieger S, Furnham A. An examination of the factorial and convergent validity of four measures of conspiracist ideation, with recommendations for researchers. PLoS ONE. 2017; 12(2): e0172617. pmid:28231266
- 101. Byford J. Conspiracy theory and antisemitism. In: Conspiracy Theories. London, UK: Palgrave Macmillan; 2011. pp. 95–119.
- 102. Wood MJ, Douglas KM. “What about building 7?” A social psychological study of online discussion of 9/11 conspiracy theories. Frontiers in Psychology. 2013; 4: 409. pmid:23847577
- 103. Wood MJ, Douglas KM. Online communication as a window to conspiracist worldviews. Frontiers in psychology. 2015; 6: 836. pmid:26136717
- 104. Alfano M, Carter JA, Cheong M. Technological seduction and self-radicalization. Journal of the American Philosophical Association. 2018; 4(3): 298–322.
- 105. Gaffney D, Matias JN. Caveat emptor, computational social science: Large-scale missing data in a widely-published Reddit corpus. PLoS ONE. 2018; 13(7): e0200162. pmid:29979741
- 106. Gilbert R, Thadani V, Handy C, Andrews H, Sguigna T, Sasso A, et al. The psychological functions of avatars and alt(s): A qualitative study. Computers in Human Behavior. 2014; 32: 1–8.
- 107. Haimson OL, Brubaker JR, Dombrowski L, Hayes GR. Digital footprints and changing networks during online identity transitions. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. New York, NY: ACM; 2016. pp. 2895–2907.
- 108. De Choudhury M, Kiciman E, Dredze M, Coppersmith G, Kumar M. Discovering shifts to suicidal ideation from mental health content in social media. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems—CHI ’16. New York, NY: ACM; 2016. pp. 2098–2110.
- 109. De Choudhury M, Counts S, Horvitz E. Predicting postpartum changes in emotion and behavior via social media. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, NY: ACM; 2013. pp. 3267–3276.
- 110. De Choudhury M, Gamon M, Counts S, Horvitz E. Predicting depression via social media. ICWSM. 2013; 13: 1–10.