Abstract
Misinformation is a growing concern worldwide, particularly in public health following the COVID-19 pandemic, during which misinformation has been attributed to tens of thousands of unnecessary deaths. A search for effective interventions against misinformation is therefore underway, with widely varying proposed interventions, measures of efficacy, and groups targeted for intervention. This realist systematic review of proposed interventions against COVID-19 misinformation assesses the studies themselves, the characteristics and effectiveness of the interventions proposed, the durability of their effects, and the circumstances and contexts within which these interventions function. We searched several databases for studies testing interventions published from 2020 onwards. The search results were screened for eligibility, with eligible studies then coded by theme and assessed for quality. Thirty-five studies were included, representing eight types of intervention. The results are promising regarding game-type interventions, with other types scoring poorly on either scalability or impact. Backfire effects and effects on subgroups were reported only intermittently in the included studies, showing the advantages of certain interventions for particular subgroups or contexts. No single intervention appears sufficient by itself; this study therefore recommends that policymakers create packages of interventions, tailored to contexts and targeted groups. There was high heterogeneity in outcome measures and methods, making comparisons between studies difficult; this should be a focus of future studies. Additionally, the theoretical and intervention literatures need connecting for a greater understanding of the mechanisms at work in the interventions. Lastly, there is a need for work more explicitly addressing political polarisation and its role in the belief and spread of misinformation.
This study contributes toward the expansion of realist review approaches, understandings of COVID-19 misinformation interventions, and broader debates around the nature of politicisation in contemporary misinformation.
Citation: Dickinson R, Makowski D, van Marwijk H, Ford E (2025) Interventions for combating COVID-19 misinformation: A systematic realist review. PLoS ONE 20(4): e0321818. https://doi.org/10.1371/journal.pone.0321818
Editor: Osmond Ekwebelem, University of Wyoming, UNITED STATES OF AMERICA
Received: November 12, 2024; Accepted: March 12, 2025; Published: April 24, 2025
Copyright: © 2025 Dickinson et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting Information files.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Misinformation has been a societal issue throughout history [1]. This phenomenon can be seen in many areas, but perhaps most clearly in public health, where the introduction of many major advances in medicine was accompanied by movements of resistance and misinformation [2]. In the contemporary era, systemic misinformation is a well-established by-product of increasing reliance on the internet and social media for the dissemination of news and information [3,4]. Public concern about misinformation appeared to reach a new height in 2016 in relation to the US Presidential Election, particularly around perceptions of misinformation campaigns supporting Donald Trump’s bid for the presidency [5,6]. By many accounts, this period resulted in the development of an infrastructure of misinformation, accelerated by social media algorithms to reach new and greater audiences [7,8]. When the COVID-19 pandemic began in 2020, misinformation quickly came to punctuate public understanding of the pandemic and the public health response to it [9].
For the purposes of this study, misinformation is broadly defined as misleading news, media content, or other information which directs viewers towards understandings of the world that do not align with a socio-politically derived consensus of legitimacy, which may be intentional or unintentional, consciously or unconsciously biased, and may or may not contain mechanisms of psychological manipulation. This definition arises out of a growing literary shift towards viewing misinformation as “that which contradicts the best expert evidence available at the time” [10].
The pre-COVID misinformation intervention landscape appears to focus on fact-checking [11–15]. This is not to suggest that this literature focused exclusively on fact-checking: a rich body of work examined in depth the effectiveness and appeal of misinformation in terms of cognitive ability, emotional appeal, partisanship, sensationalism, and fear-mongering [14–20]. Fact-checking can be described as a form of debunking in which information is retroactively checked for veracity and, if found to be inaccurate, corrected. Importantly, an additional step in fact-checking and other forms of debunking is attempting to reach the audience initially exposed to the misinformation and retroactively change their internalised understanding of the information [21]. Recently, accuracy nudges have been championed as a new, primary intervention type [14]. Accuracy nudges refer to a variety of interventions that ‘nudge’ people to consider the veracity of the information they are seeing or are about to see. These can include prompts that appear on-screen next to links to news articles, or fact-checks that appear alongside social media posts, but they can also take a wide variety of other forms. Below is an example from X/Twitter, highlighted by a red circle (Fig 1), showing crowdsourced fact-checking appearing next to suspected misinformation [22].
Although championed in seminal studies such as [23], accuracy nudge interventions have since garnered significant criticism regarding their effectiveness and the potential impact of partisan bias among participants [24–26], including replication studies that did not reproduce the initial findings [27].
In the theoretical literature, discussion of misinformation interventions focused on inoculation, backfire effects, and the importance of worldview in intervention effectiveness [14,15,17,19]. Inoculation refers to the idea of priming people before they might encounter misinformation to make them more aware of it, with the goal of building resilience against it. The ‘backfire’ effect is widely theorised about in the misinformation literature, typically centred on the idea that an intervention seeking to combat misinformation might end up reinforcing ‘in-group’ thinking among those most conspiracy-minded or most politically polarised [12,14,28]. For these people, it is speculated that an intervention (e.g., labelling their favoured sources as false or untrustworthy) could further entrench them in their distrust of legitimate public health messaging, making the intervention not only less effective but potentially negative in impact. Backfire effects will be evaluated in the included studies. A concept arising from the policy and psychology disciplines that could help address potential backfire effects is framing [29,30]. Framing refers to strategic messaging created with the intent of aligning with the extant worldview of the target audience, making new ideas or information as congruent as possible. In practice, framing has been found to improve fact-checking and accuracy nudge interventions [31].
There are studies testing interventions, and many reviews of the theory surrounding misinformation, but as yet no reviews attempting a broader overview and evaluation of the various interventions that emerged in the COVID-19 context. This project aims to contribute to the expansion and application of realist review approaches while simultaneously contributing to a better understanding of interventions against COVID-19 misinformation. As COVID-19 continues to spread and the possibility of a new pandemic remains an ever-present threat, developing the best possible understanding of interventions to combat COVID-19 misinformation will help prepare policymakers and public health apparatuses for the next pandemic.
Research Question: Which interventions are most effective in combating spread of and belief in COVID misinformation?
Sub-questions:
- RQ1: Which types of interventions work best?
- RQ2: Which groups of people do they work for?
- RQ3: Under which circumstances are the interventions most effective?
- RQ4: What is the quality of studies testing interventions to combat spread of and belief in misinformation?
Theoretical framework
This study takes the theoretical framing of a realist review, focusing on uncovering the mechanisms that explain how interventions produce effects in specific contexts, working to ensure that findings can be generalized or applied to similar circumstances. These mechanisms function as the underlying processes that drive change in an intervention. They could be cognitive, as is the case for accuracy nudge interventions, where a brief reminder provides a cognitive ‘nudge’ to temporarily boost cognition when clicking on a link or news article. They could be social, as is the case for community engagement interventions, where extended and deep interaction with a community works to identify and slowly change misinformed group narratives. They could also be behavioural, as is the case for game interventions, where the act of playing a game itself is the intervention, and the interaction between player and game functions to educate and inoculate against misinformation. Table 1a in the results section provides a description of the mechanism at work within each included study for this review.
The other key piece of the realist review puzzle is context: the environmental, cultural, economic, or other situational factors that influence how, why, where, and when interventions are effective. For some interventions like accuracy nudges, use is limited to digital spaces, fitting contexts like social media platforms. For others, like educational or message framing interventions, fitting contexts might include difficult-to-reach or distrustful communities. For debunking interventions, context becomes a question of time and popularity, best fitting situations where misinformation has already emerged and spread. Table 2 in the results section, alongside the wider ‘Context and Generalisability’ subsection of the results, explains in greater detail the contextual nature of the interventions included in this review, as well as ranking them by generalisability.
By clarifying both the mechanisms and contexts of the included interventions, this review provides a deeper understanding of how the interventions operate and offers a theoretical contribution to realist review methods in public health and psychology. By utilising a realist review approach in the study of misinformation, where little such examination currently exists, this study contributes to the expansion of realist review approaches. Similarly, as both realist reviews and misinformation are quickly growing and relatively new areas of literature, this approach offers new insights into the study of misinformation interventions, particularly in the case of COVID-19. This review also highlights the importance of context in countering misinformation. By demonstrating the limited generalisability of many interventions, it contributes to the ongoing refinement of misinformation interventions, allowing bespoke policy packages to be developed that best fit the needs of the communities involved.
Methodology
This review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [32] checklist, available in the supporting information. The review followed a pre-registered protocol submitted to PROSPERO before the study began (registration number CRD42023440580; record title: “Realist review: assessing intervention effectiveness in combating COVID misinformation”), available on the PROSPERO website. Amendments to the protocol centred on the elimination of an initially planned research question on the intersection of the theory and intervention literatures within the reviewed studies. This research question was removed after data extraction and analysis revealed a dearth of theoretical investigation in the reviewed studies. Instead, this lack of theoretical engagement is noted in the discussion.
Search strategy
This review included a systematic search of Web of Science, Scopus, ASSIA, PsycINFO, and PubMed to identify English-language articles published between January 1, 2020 and June 22, 2023, performed following a pre-registered protocol conforming to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [32]. A secondary, updating search using the same search strategy and methodology brought this study up to date through 09 October 2024. The search strategy followed the protocol using pre-determined search terms, with results imported into Excel sheets for ease of deselection. Duplicates were removed, then an initial title-based screening was performed, followed by screening based on abstract and then full-text review. Additional searching of the references of the included studies followed. Duplicate screening was performed by a team member (K.G.) on ~15% of studies through all screening stages, with any disagreements resolved via discussion. Inter-rater agreement was very high (~92%).
The full search-string chosen for this review, which was only applied to Titles and Abstracts, is as follows: (conspirac* OR anti-vax* OR anti-vaccine OR ‘anti vaccine’ OR misinform* OR fake OR fals*) AND (messag* OR rumor* OR argu* OR rhetoric OR spread*) AND (COVID OR COVID-19 OR coronavirus OR ‘corona virus’ OR pandemic*) AND interven*
Eligibility
Trials or experimental studies were eligible if they focused on reducing the spread of and vulnerability to COVID-19 misinformation in their participants, and tested an intervention meant to combat COVID-19 misinformation. Studies were required to be in English and to have been published between 2020 and 2024, as searching before the COVID-19 pandemic began was unnecessary.
Quality assessment
The methodological quality of each study chosen for inclusion was assessed via Kennedy et al.’s [33] risk of bias tool for assessing study rigor. It includes eight items for appraisal: (1) cohort, (2) control or comparison group, (3) pre-post intervention data, (4) random assignment of participants to the intervention, (5) random selection of participants for assessment, (6) follow-up rate of 80% or more, (7) comparison groups equivalent on sociodemographics, and (8) comparison groups equivalent at baseline on outcome measures. This assessment tool was used for its flexibility regarding the types of methods and interventions in the studies being assessed. Although this analysis was performed, no studies were excluded on quality grounds, as realist reviews explicitly reject exclusion on the basis of quality concerns, as explained below.
Data extraction and analyses
The following information from included studies was extracted into a table to highlight study characteristics, as can be seen in the next section: study, intervention-type, ‘working ingredient’, ‘delivery method’, country of origin, methodology, number of participants, and whether the intervention was found to be successful. Additionally, a variety of other information was extracted to inform the other tables and charts found in the results section. All text from the eligible studies was imported into NVivo Pro 14, and the methods, results, and discussion sections underwent qualitative coding. Coding was done iteratively to categorise the findings, and a framework developed through this iterative process as the coding took place. For instance, if an intervention was identified in an article during coding, the coder attempted to assign it to a category within the emerging intervention framework. New subcategories were created where the current categories were insufficient, until all interventions were categorised. As coding progressed, the intervention framework came to be populated through the included studies. The heterogeneity of the included studies and their respective measures precluded quantitative meta-analysis.
Regarding effectiveness, impact per participant and scalability were the primary variables analysed. Impact per participant refers to the level of individual behavioural change experienced by the participants of each intervention reviewed, as all were centred on individual behaviour. Scalability is a more complex variable consisting of several combined factors, including generalisability (how effectively results can be replicated in other contexts and with other groups), resource-intensity (how expensive the intervention is in terms of time, money, and overall resource expenditure), and capacity for upscaling (how many people it could reach). With impact and scalability thus defined, effectiveness can then be analysed by how many people could be impacted and to what extent per person. A sub-analysis of context was undertaken by comparing context by intervention type and laying out which participant groups were targeted by the interventions. Additional analysis was performed to investigate context beyond the community of participants within the intervention.
It is important to define ‘circumstances’ as used in RQ3. Here, circumstances refers to the context within which an intervention takes place, such as the geographic location, the identities and wealth of the targeted community, and the structural and institutional factors surrounding the community and intervention. Additionally, circumstances refers to the experience, resources, and capacity of the research team or implementing body performing the intervention.
Results
Study characteristics
Thirty-five papers met the inclusion criteria, including 6 identified by reviewing the bibliographies of the 29 studies found through the search strategy described above. The searches initially returned 636 results; 230 duplicates were removed, 341 papers were deselected by title, and 45 were deselected by full-text review, resulting in 26 eligible papers (Fig 1). A new search with the same parameters and strategy took place on 09 October 2024 to update the study, and the resulting papers are incorporated into this results section. These new papers build on the existing themes and reinforce the conclusions of this study. All new papers fit into the categories established in the original search, and their conclusions on impact, scalability, and the contextual groups and circumstances in which the interventions would excel are in line with and supportive of the studies from the original search. The proposed interventions in the most recent studies remain heterogeneous and highlight the need for flexible reviews that can analyse this variance.
Eligible papers were published between 2020 and 2024, with a variety of national, regional, and international participant groups and study origination countries. The reviewed papers utilised participant groups drawn mainly from the USA through private research-participant companies such as MTurk, Lucid, Prolific, Pollfish, and YouGov, but also targeted specific audiences within the US, such as essential workers [34] and ‘Latinx’ communities [35]. Beyond the US, participant groups from Germany, the UK, Hong Kong, China, Canada, the Netherlands, Brazil, Kyrgyzstan, India, and international groups were included in the reviewed studies. These studies were sorted into the intervention framework developed during data extraction as follows: 8 studies using accuracy nudges; 8 using education; 3 using prebunking; 4 using games; 6 using message framing; 4 using community engagement; and 2 using debunking (Table 1, Table 2).
Full details of the studies can be seen in the supporting information.
All eligible studies underwent quality assessment using Kennedy et al.’s [33] risk of bias tool for assessing study rigor; results are shown in S1 Table. Many studies lost several points for lacking follow-up elements or for not reporting whether comparison groups were equivalent on demographics or baseline outcome measures. Iles et al. [36] and Maertens et al. [37] were the only perfect-scoring studies in the initial search, while Veletsianos et al. [38] scored lowest at 1/8. In the updating search, the new studies generally scored higher than the initial batch – a promising sign for the literature. When sorted into intervention-types, the average quality scores are relatively similar for each group, indicating a similar level of quality across the intervention-types. Additionally, the division of studies by intervention type allows us to mitigate the impact of any one study’s limitations by synthesising across multiple studies within each intervention category. Further details on quality for each included study are available in S1 Table.
Intervention characteristics
The studies in this review tested interventions with far greater heterogeneity than the dominant interventions proposed before the COVID-19 pandemic (accuracy nudges and fact-checking). As can be seen above in the study characteristics table, the studies were iteratively sorted into intervention-types as laid out in the methodology section. These intervention-types included: accuracy nudges, community engagement, debunking, prebunking, education, games, and message framing. This section will briefly introduce these intervention types and their defining characteristics.
Accuracy nudges in the reviewed studies consisted of mechanisms including: stimulating attention [39], source credibility labels [40], logo banners to help identify trustworthiness of sources [41], accuracy reminders [23], and tags that mark information as false [42]. These various intervention mechanisms fit under accuracy nudges due to their common characteristics as simple, fast, attention-grabbing labels or reminders that ‘nudge’ the participant to consider information veracity and bring that consideration into the forefront of their minds immediately before reading the information.
Community engagement is difficult to characterise by intervention mechanism because the defining aspect of community engagement occurs before intervention mechanism is determined in the research design. Instead of pre-determining intervention mechanisms and delivery methods, community engagement involves co-creation of the intervention alongside and in collaboration with the targeted community, to be bespoke to the unique context and circumstances of the community [34,35,43].
Debunking refers broadly to reactive interventions (e.g., fact-checking) that seek to ‘debunk’ existing misinformation and help people exposed to it rethink their belief and formulate new understandings of the relevant information [44,45]. In contrast, ‘prebunking’ interventions seek to build resilience to misinformation in people preemptively before exposure has occurred, and potentially even before the piece of misinformation has been created/spread [46]. This typically takes the form of inoculation messages administered to participants before exposure to potential misinformation. In this way they are similar to accuracy nudges – the key difference being that prebunking is more extensive than accuracy nudges. The inoculation messages are more significant, take longer to process, and are intended to take the full attention of the participant for the duration of the message, whereas accuracy nudges are fast and often involve the periphery of a participant’s attention. In the reviewed studies characterised as prebunking, all three involve inoculation messages as their intervention mechanism [28,47,48].
Education is the most heterogeneous of the intervention-types and can be difficult to categorise as educating the participant is essential to all interventions working to address misinformation. In the reviewed studies, this intervention-type involved mechanisms such as: videos [49], comics [38], infographics [50,51], and a multimodal intervention using authentic social media messaging [52]. The defining characteristic of the reviewed studies in this intervention-type is the primacy and exclusivity of education as the goal of the intervention. For instance, in Vandormael et al. [49], an educational video was released and distributed internationally with the goal of maximising viewership, but with no additional features of the intervention beyond watching the video.
Game intervention-types are characterised by the inclusion of a computer game for participants to play as the primary intervention mechanism, as seen in each of the included studies under this categorisation. These games inform players (participants) about the tactics and manipulation used to create and spread misinformation, with the goal of creating an inoculation effect and bolstering veracity-judgment in participants. For example, Bad News, the game used in Maertens et al. [37], is a popular game used in many studies outside the purview of this review as well. In this game, players take on the role of an antagonist, creating misinformation and working to spread it through social media and the internet.
Message framing as an intervention-type is characterised by the use of psychological framing in the development of the language used in the intervention. Whether presenting information via video or written information, what distinguishes these studies as message framing is their strategic use of language to attempt to make their information transfer to participants as congruent with their extant worldview as possible. This then helps participants internalise that information effectively and can address intervention design concerns around potential backfire effects.
Intervention effectiveness
The two variables most central to answering which interventions work ‘best’ appear to be scalability and impact. If impact is too low, the intervention might not actually engender sufficient behavioural change in participants to combat the misinformation. Similarly, if an intervention cannot be upscaled, it has no capacity to address COVID-19 misinformation at a systemic level. The ‘ladder’ visuals below represent the intervention-types relative to one another across these two variables (Figs 3–4). Relative impact is determined by the measured impact on participants in each study. These measures are not consistent across studies, yet with the authors’ interpretation, relative comparisons are possible. These relative measurements focus on impact per participant, with no regard to number of participants or scalability. Conversely, the scalability ladder visual focuses on scalability with no regard to impact per participant. These visuals are meant to simplify and ease understanding of the results, and are purely relative.
On the bottom rung of impact per participant is accuracy nudges, whose impact is heavily debated. Some authors in this review, such as Pennycook et al. [23], champion this intervention type and claim significant impact in their results. Gavin et al. [27], who replicated Pennycook et al. [23], found mixed results that stood at odds with the original study. Amin et al. [39] found impact on decision behaviour and tendency to share misinformation, but the remaining studies in this intervention group found either minimal impact [42], impact only for certain groups [40], or no impact [41], with the last even noting potential counterproductivity.
Framing of public health messages is next along the impact ladder. Studies testing interventions using different framings of public health messaging found significant impact [36,53,54], although not as high as other intervention-types included in this review. This impact was largely confined to the heaviest consumers of misinformation and the most vaccine-hesitant [53].
Debunking by its nature must occur retroactively, which limits impact as the initial exposure must be overcome. In this way, debunking has two independent goals: to disprove internalised misinformation and to convince the participant of the veracity of legitimate information. This barrier to impact is noted by both Vijaykumar et al. [44] and Yousuf et al. [45]. Vijaykumar et al. [44] found no impact on perception of or willingness to share misinformation, yet found enhanced credibility and readiness to share accurate information as a result of their intervention. However, Yousuf et al. [45] found that exposure to their intervention did result in enhanced trust in government and significantly stronger rejection of vaccination misconceptions.
Prebunking, as the preventative version of debunking, scores better on impact. Prevention is found to be more powerful in a variety of aspects than reactive debunking. All three included studies [28,47,48] found significant impact among participants, although in the case of Amazeen et al. [47] this significance was limited to those with preexisting ‘healthy’ attitudes. Impact on participants was found to include generating resilience against misinformation, less willingness to share misinformation, and greater willingness to receive a vaccine.
Education is the most varied type of intervention with a range of impact between the individual tested interventions (the relative score here is an aggregate). At best, educational interventions have the potential to be a form of systematic prebunking with great effectiveness. In the reviewed studies, they were found to improve knowledge and increase resilience to misinformation at significant levels, particularly among populations with low preexisting knowledge levels [49,52]. However, Van Stekelenburg et al. [51] found no significant impact, highlighting the variability of this intervention-type.
Games were consistently found to be highly impactful across the various populations who played them, with high levels of durability and longevity compared to other intervention-types reviewed and significant impact levels for all kinds of preexisting attitudes towards vaccination and COVID-19 [37,46,55]. Every reviewed study found significant impact, which corresponds with a high relative impact score, although still below the bespoke and prolonged interventions within community engagement.
Community engagement is the single most impactful intervention-type. It involves sustained interaction and interventions bespoke to specifically targeted communities, who are themselves brought into the intervention process and invited to participate, make their voices heard, and have their concerns addressed in a personal and trusted manner. All reviewed studies found significant and extensive impact among their participants.
Community engagement is essentially impossible to scale upwards. It inherently requires small numbers of participants and high levels of resource and time investment from those implementing the intervention. The interventions themselves are not even intended to be generalisable, but rather bespoke to the contextual needs of the community involved. Community engagement can only be done effectively at a small scale over long periods: building trust with the community, proactively co-creating interventions and implementation strategies with the community itself, and carrying out those strategies can take years to accomplish [34,43].
Debunking scores quite low in scalability. Debunking at a large scale is extremely difficult, as it is inherently reactive to preexisting misinformation and cannot effectively prevent additional misinformation. Further, it must always attempt to reach the specific populations initially exposed to the misinformation being ‘debunked’, which is difficult and resource-intensive.
Prebunking does not need to find and target those who have already seen misinformation, as the intervention occurs before exposure. For this reason, prebunking is easier to scale upwards than debunking and is relatively low in resource cost. Implementation involves the development of ‘inoculation messages’ [28,47], written messages or video content intended to raise participants’ resilience against misinformation.
Without being built into the public education system, educational interventions may struggle to scale upwards, relying on peer educational champions [56] or social media ‘virality’ [49] to spread. Adjusting anything within the public education system is highly resource-intensive, even though such changes are then highly impactful and wide-reaching. However, when performed at a smaller scale, as in the included interventions, educational interventions can be substantially less resource-intensive [49].
Although not as easily scalable as message framing or accuracy nudges, games are nonetheless highly scalable compared to the other intervention-types in this review. Once a game has been developed, introducing it to new populations is relatively simple, inexpensive, and quick.
Message framing has high scalability, requiring only the addition of language strategising and purposeful narrative framings to extant and new public health messaging. It is only slightly more resource-intensive than accuracy nudges in that it must be bespoke to particular narratives, communities, and groups; even so, in each bespoke circumstance the resource intensity remains low.
Accuracy nudges are undeniably the most scalable intervention-type. The core reason is the extremely low resource intensity needed to implement them: they require only the insertion of nudges into social media feeds and news articles. This would be easy and inexpensive for social media corporations and newspapers to implement, even at the extreme scale of interaction and users involved in contemporary social media.
Durability of effect
In the studies that did test for longevity/durability of impact, consistently low levels were found, with findings indicating a high reliance on intervention repetition and regular testing of misinformation resilience over a sustained (and potentially indefinite) period to reach functional durability of effect. The study that looked most closely at this was Maertens et al. [37], one of the only longitudinal studies included in this review to explicitly investigate longevity of impact, using the ‘Bad News’ game as its chosen intervention. They found that their intervention resulted in a significant increase in the ability to discern misinformation, with lasting effects if regular misinformation resilience testing occurred over time. Without regular testing, they found significant decay over a two-month period, ending in a loss of the inoculation effect [37].
Context and generalisability
There appears to be a significant distinction in how these interventions work between those with preexisting ‘healthy’ understandings of public health information and those who are the heaviest consumers of misinformation. This was noted in several studies [40–42,46,47,50,54], in ways that do not initially appear congruent with one another. It is clear this subgroup of heaviest misinformation consumers is impacted differently by many of the interventions included in this review, but that change in impact is not a consistent factor; instead it is an ephemeral variable, difficult to spot and even harder to plan for in study design.
The table below lays out the contexts in which each relevant intervention-type was found to be most effective, alongside the groups covered by the included studies, relevant findings from the authors regarding context and their intervention, and an overall level of generalisability (Table 3).
Eight reviewed studies found insignificant trends in intervention impact between baseline participants and special groups, with several more looking for such trends and finding none. This indicates the specificity of these intervention-types: although context and social group could be determinants of intervention effectiveness, such effects are likely to be small. For example, Bender et al. [54] noted that their intervention framing worked best on those already strongly anti-vaccine. Conversely, Johnson et al. [52] found their intervention worked best on those with less vaccine hesitancy, and that those with higher sociopolitical conservatism performed worse on knowledge scores. The insignificant trends found in these studies were typically tied to age, ethnic group, or political ideology as core identities shaping perceptions and experiences of COVID-19 and the public health responses to it. Political (right-wing/conservative) ideology was noted in many studies as a subgroup of particular importance and was found to coincide with less accurate pre-intervention beliefs [46,50].
Accuracy nudges were tested with participants from the USA, Kyrgyzstan, Kenya, Nigeria, and India, and in a comprehensive study across 16 countries, with findings suggesting that their impact is difficult to predict and changes depending on the context [27]. Dias et al. [41] noted the potential for a ‘backfire’ effect among those people most bought-in to misinformation, whereas Kreps et al. [42] found no evidence of this effect. Aslett et al. [40] found that their intervention only worked on those who consumed the highest levels of misinformation in their participant group and had minimal effect on anyone else, which conflicts with concerns about backfire effects.
Prebunking was tested with participants recruited online in the USA and with undergraduates in Hong Kong. Amazeen et al. [47] found that the intervention only worked on those with preexisting ‘healthy’ attitudes, meaning those whose beliefs already coincided most closely with legitimate public health messaging. Because this intervention-type is intended to inoculate the ‘average’ person against misinformation, working only on those with preexisting ‘healthy’ attitudes does not reduce the usefulness of prebunking.
Games were tested in the USA, Ghana, and China with proportionally representative online groups. Basol et al. [46] and Ma et al. [55] found that the interventions worked across the political spectrum and among the public in general, respectively. This indicates high generalisability, particularly given the proportionally representative and relatively large participant cohorts in these studies. However, by the nature of a digital intervention-type like games, older people and those with low levels of digital literacy (who are among those most desirable to target) may have less desire or ability to play the game.
Debunking was tested in the UK and Brazil among WhatsApp users, and in the Netherlands among the elderly. Interestingly, Vijaykumar et al. [44] found that their intervention was most effective on older people, indicating that this type of intervention might be most useful among elderly populations and communities. Vijaykumar et al. [44] and Yousuf et al. [45] speculate that older people may have higher baseline trust in governmental messaging and are therefore more open to changing their internalised beliefs based on new information from legitimate sources. By its nature, debunking can only be applied reactively to widely believed misinformation, which significantly limits its generalisability.
Education was tested in the USA, Canada, Denmark, Hong Kong, and internationally through social media sharing. Johnson et al. [52] found their intervention worked best on elderly people and those with less hesitancy around COVID-19 vaccination. Conversely, Veletsianos et al. [38] found that their intervention caused a noteworthy ‘backfire’ effect among conservative US Republicans (as the group most vaccine-hesitant and already ‘bought-in’ to misinformation). Vandormael et al. [49] suggested educational interventions might be most effective among populations with a low baseline knowledge level, as their own participant group had relatively high baseline knowledge (although the intervention nonetheless successfully boosted knowledge of COVID-19 prevention). Taken together, these findings indicate that the groups most ideal for this type of intervention are communities with low baseline knowledge of public health information, or communities distrustful of government where peer and individual study might be able to penetrate that distrust.
Message framing was tested in Germany [54], the US [36], Mozambique, and the UK [53] all through online interventions testing framed messaging against traditional extant public health informative messaging. Bender et al. [54] found that extant framing (which typically focuses on collective benefits and informing about vaccination side-effects) worked best for those anxious about vaccination, whereas the intervention framing worked best for those strongly anti-vaccine. Similarly, Freeman et al. [53] found that emphasising personal benefit (the intervention framing) was more effective on the most vaccine-hesitant, whereas emphasising collective benefit (the control/extant framing) was far less effective and even resulted in ‘backfire’ effects. Together these findings make a strong case for message framing interventions to effectively target those communities most distrustful of government messaging, those most ‘bought in’ to conspiracy and misinformation already, and the most politically radicalised.
Community engagement was tested in the US among ‘Latinx’ communities [35], young Black adults [43], and ‘essential workers’ [34]. By its nature, community engagement has very low generalisability, as it is more contextually specific, resource-intensive, and time-consuming than any other intervention-type. DeGarmo et al. [35] found their intervention successfully mitigated health disparities in the communities they engaged. This suggests community engagement would be most effectively utilised in deprived or vulnerable communities, and in those areas most difficult to reach for any reason.
Discussion
The research questions in this study do not have explicit ranked answers, as impact and scalability differ widely across the interventions included in this review. There are tradeoffs in play: between impact and scalability, and between generalisability and targeted intervention against subgroups of particular importance. The key finding from this review is therefore the insufficiency of any one intervention to address the widely varying needs of the many contexts and groups in which misinformation can spread. For further details on contextual fit for different interventions, please refer to the Results sub-section titled “Context and generalisability”. The core policy recommendation is the development of comprehensive packages, each containing multiple interventions. These packages can draw on the different strengths of each intervention-type reviewed to best fit the needs of the relevant communities and contexts within which they will be developed. This package approach appears to be gaining momentum, with a higher proportion of the newest studies incorporating multiple intervention techniques. When such a package of multiple interventions is impossible, game-type interventions appear to be an outlier in being highly scalable, impactful, low in resource intensity, and highly generalisable relative to the other intervention-types reviewed. Games are engaging and interactive, which could explain the significance found in their effect as an intervention. This interaction element could also lead to durable effects over time, although this is insufficiently studied in the current literature. Furthermore, games have the unique distinction of being fun to play, and hold the potential to encourage public engagement with the intervention outside of experimental contexts or policies through the simple spread of an enjoyable game.
For this same reason, games hold the potential to be a very effective subset of educational interventions: children in school, for example, could play the game as part of their curriculum, building inoculation to misinformation from an early age.
Politics and partisan bias
Both the theoretical and intervention literatures around COVID-19 misinformation hint at its politically polarising elements yet fail to address this influence head-on. It is important to note that this failing is limited to the narrowly confined studies relevant to this review: the public health literature investigating COVID-19 misinformation interventions. The wider literature around misinformation has well-established links to polarisation and politics, particularly in the US. Dispersed throughout the findings and discussions of the included studies are the political elements of COVID-19 misinformation. It is consistently found that political conservatives, particularly in the US, are uniquely vulnerable and bought-in to misinformation and conspiracism [26,57]. This group was found to have its own unique interactions with many of the tested interventions in this review. Where this happened, the authors mention the difference and speculate as to why it might be the case, but do not investigate the finding further or draw on explanations in the wider literature to support their findings (see [51] for the most comprehensive discussion of this issue in the eligible studies). Additionally, there has been very little work that explicitly begins from this starting point and investigates in depth why this might be the case and how interventions might most effectively impact this group. This presents a significant obstacle to reaching the stated goal of these interventions: effectively combatting COVID-19 misinformation.
Pennycook et al. [23] is the most influential study included in this review in terms of citation count, references throughout the reviewed studies, and the extent to which it has been replicated and critiqued within both the studies under review and the wider literature. They champion the theory that COVID-19 misinformation is shared so systematically in our society “because [people] simply fail to think sufficiently about whether or not the content is accurate when deciding what to share” [23]. Pennycook et al. claim that their findings and this theory indicate that accuracy nudges are not only simple and effective, but the only intervention needed against COVID-19 misinformation. In doing so, they negate the claims of many of the other studies included in this review, and this core idea of what causes vulnerability to COVID-19 misinformation has attracted significant criticism. If the only issue is a lack of thinking, then accuracy nudges are the obvious intervention. Yet although the findings of Pennycook et al. [23] do suggest the effectiveness of accuracy nudges and the need for interventions that make people think more about their sharing decisions, this ‘theory’ is insufficiently supported when used to negate the findings of other studies. Their findings suggest the effectiveness of accuracy nudges, not the ineffectiveness of other interventions. The alternative proposed answer to what causes vulnerability to misinformation is partisan bias. This explanation posits that it is not insufficient thinking or lower cognitive ability that leads to vulnerability to misinformation, but rather the inherent bias that arises from adherence to political ideology in a context of intense political division and polarisation, a context affecting the contemporary United States especially deeply but also many other countries today [58].
This debate on partisan bias vs insufficient thinking punctuates the literature on misinformation, including many of the studies included in this review. This debate continues into the present, with even some of the newest studies promoting accuracy nudges and dismissing other interventions for the same reasons initially proposed by Pennycook et al. [23].
Limitations
A primary limitation of this review comes from the heterogeneity of the studies and interventions, which disallows meta-analysis and other forms of traditional systematic review analysis that rely on similar outcome measures and methodologies across eligible studies. This limitation is accentuated by the potential for interpretation bias: the interpretation of the data herein is shaped by the perspective and worldview of the authors. Additionally, there is limited consistency between realist reviews, and few standards and assessments are available to apply to this review. This does not necessarily limit the rigour of the review but makes assessing that rigour and validity more difficult. The development of more consistent direction and assessments for realist reviews would address this limitation of the method. Lastly, the limited engagement of the intervention literature with theory limits the extent to which theoretical insights can be drawn from this study.
Future research directions
Although a variety of interventions tested in the studies herein found success in the short term, in the long term it is impossible to avoid the urgent need for mass-scale education on digital literacy if the goal is to make a population as resilient as possible against misinformation. Given the variability in reporting backfire effects and subgroup-specific outcomes, future interventions should prioritise the development of tailored packages that account for these factors. This should be informed by emerging literature, including ongoing studies (such as the author’s upcoming work), which will further investigate the role of backfire effects and subgroup-specific outcomes. Future research in this direction is pivotal, with experimental groups in classrooms a clear next step. Additionally, future research is required on how to address the political difficulties in implementing such a wide-scale intervention.
Out of all intervention-types reviewed, games appear to create the highest impact while still being highly scalable and resource-inexpensive, with the potential for longevity in the right conditions [37]. Relative to the other intervention-types, games score maximally in terms of impact on participants while remaining relatively high on scalability. Future research is needed to refine and test these results; longitudinal testing is an obvious follow-up to gain insight into the durability of the inoculation effect.
Additional areas for future research include: 1) theoretical research into how to build a resilient population and how to address vulnerability to misinformation systemically versus individually; 2) the role of politics and partisan bias in the functioning of these interventions; 3) where misinformation comes from and who gains from it; 4) the role of political polarisation and radicalisation in vulnerability to and the spread of misinformation, both in the United States and globally; 5) standardisation measures or frameworks for consistent evaluation of interventions to allow for meta-analysis and quantitative comparison; and 6) policy-focused principles for the development of intervention packages to guide policymakers.
Conclusions
This review included 35 studies of interventions combatting COVID-19 misinformation. The interventions reviewed varied widely in terms of scalability, resource intensity, impact on participants, the contexts within which each works best, the people on whom the interventions have greatest effect, and research quality. The tests performed in the included studies offer rich contributions toward a better understanding of how misinformation functions, how veracity judgements are made by individuals and communities, and which interventions work best in which contexts and for whom. COVID-19 showed precisely how harmful and deadly misinformation can be, and what a public health threat it can represent. In this fight against systemic misinformation in our society, a final takeaway from this review is the need to acknowledge misinformation as a societal and systemic issue that requires significant investment and time to resolve, if resolution is possible.
Supporting information
S2 Table. Article eligibility tracking table.
https://doi.org/10.1371/journal.pone.0321818.s004
(XLSX)
Acknowledgments
We want to thank Katie Goddard from the Primary Care and Public Health department of the Brighton and Sussex Medical School for her help in duplicating and affirming the deselection and qualitative coding in this study.
This study took place as part of the PhD candidacy of Robert Dickinson at the University of Sussex. Non-financial support came from project supervisors Dominique Mackowski, Harm Van Marwijk, and Elizabeth Ford. Additionally, Katie Goddard performed the role of deselection replication as laid out in the methodology.
References
- 1. Posetti J, Matthews A. A short guide to the history of ‘fake news’ and disinformation. 2018.
- 2. Jin SL, Kolis J, Parker J, Proctor DA, Prybylski D, Wardle C, et al. Social histories of public health misinformation and infodemics: case studies of four pandemics. Lancet Infect Dis. 2024;24(10):e638–46. pmid:38648811
- 3. Suarez-Lledo V, Alvarez-Galvez J. Prevalence of health misinformation on social media: Systematic review. J Med Internet Res. 2021;23(1):e17187. pmid:33470931
- 4. Aïmeur E, Amri S, Brassard G. Fake news, disinformation and misinformation in social media: a review. Soc Netw Anal Min. 2023;13(1):30. pmid:36789378
- 5. Swire B, Berinsky AJ, Lewandowsky S, Ecker UKH. Processing political misinformation: comprehending the Trump phenomenon. R Soc Open Sci. 2017;4(3):160802. pmid:28405366
- 6. Kolbe M, Torres Alavez JA, Mottram R, Bintanja R, van der Linden EC, Stendel M. Model performance and surface impacts of atmospheric river events in Antarctica. Discov Atmos. 2025;3(1):4. pmid:40130261
- 7. Farrell J, McConnell K, Brulle R. Evidence-based strategies to combat scientific misinformation. Nat Clim Chang. 2019;9(3):191–5.
- 8. Booth E, Lee J, Rizoiu M-A, Farid H. Conspiracy, misinformation, radicalisation: Understanding the online pathway to indoctrination and opportunities for intervention. J Sociol. 2024;60(2):440–57.
- 9. Roozenbeek J, Schneider CR, Dryhurst S, Kerr J, Freeman ALJ, Recchia G, et al. Susceptibility to misinformation about COVID-19 around the world. R Soc Open Sci. 2020;7(10):201199. pmid:33204475
- 10. Vraga EK, Bode L. Defining misinformation and understanding its bounded nature: using expertise and evidence for describing misinformation. Polit Commun. 2020;37(1):136–44.
- 11. Loftus EF. Planting misinformation in the human mind: a 30-year investigation of the malleability of memory. Learn Mem. 2005;12(4):361–6. pmid:16027179
- 12. Lewandowsky S, Stritzke WGK, Freund AM, Oberauer K, Krueger JI. Misinformation, disinformation, and violent conflict: from Iraq and the “War on Terror” to future threats to peace. Am Psychol. 2013;68(7):487–501. pmid:24128313
- 13. Blank H, Launay C. How to protect eyewitness memory against the misinformation effect: a meta-analysis of post-warning studies. J Appl Res Mem Cogn. 2014;3(2):77–88.
- 14. Cook J, Ecker U, Lewandowsky S. Misinformation and how to correct it. Emerg Trends Soc Behav Sci. 2015:1–17.
- 15. Tandoc EC Jr. The facts of fake news: a research review. Sociol Compass. 2019;13(9).
- 16. Del Vicario M, Bessi A, Zollo F, Petroni F, Scala A, Caldarelli G, et al. The spreading of misinformation online. Proc Natl Acad Sci U S A. 2016;113(3):554–9. pmid:26729863
- 17. Vosoughi S, Roy D, Aral S. The spread of true and false news online. Science. 2018;359(6380):1146–51. pmid:29590045
- 18. Mourão RR, Robertson CT. Fake news as discursive integration: An analysis of sites that publish false, misleading, hyperpartisan and sensational information. Journalism Stud. 2019;20(14):2077–95.
- 19. Wang Y, McKee M, Torbica A, Stuckler D. Systematic literature review on the spread of health-related misinformation on social media. Soc Sci Med. 2019;240:112552. pmid:31561111
- 20. McDougall J, Brites M-J, Couto M-J, Lucas C. Digital literacy, fake news and education / Alfabetización digital, fake news y educación. Cultura Educ. 2019;31(2):203–12.
- 21. Chan M-PS, Jones CR, Hall Jamieson K, Albarracín D. Debunking: A Meta-Analysis of the psychological efficacy of messages countering misinformation. Psychol Sci. 2017;28(11):1531–46. pmid:28895452
- 22. Cohen D. Twitter extends community notes to quote tweets. Adweek. 19 Jan 2023 [cited 1 Dec 2023]. https://www.adweek.com/media/twitter-extends-community-notes-to-quote-tweets/
- 23. Pennycook G, McPhetres J, Zhang Y, Lu JG, Rand DG. Fighting COVID-19 misinformation on social media: experimental evidence for a scalable accuracy-Nudge intervention. Psychol Sci. 2020;31(7):770–80. pmid:32603243
- 24. Rathje S, Roozenbeek J, Traberg C, Van Bavel J, van der Linden S. Letter to the editors of Psychological Science: meta-analysis reveals that accuracy nudges have little to no effect for US conservatives: regarding Pennycook et al. (2020). Psychol Sci. 2020;
- 25. Lees J, McCarter A, Sarno DM. Twitter’s disputed tags may be ineffective at reducing belief in fake news and only reduce intentions to share fake news among Democrats and independents. J Online Trust Saf. 2022;1(3).
- 26. Gawronski B, Ng NL, Luke DM. Truth sensitivity and partisan bias in responses to misinformation. J Exp Psychol Gen. 2023;152(8):2205–36. pmid:36972099
- 27. Gavin L, McChesney J, Tong A, Sherlock J, Foster L, Tomsa S. Fighting the spread of COVID-19 misinformation in Kyrgyzstan, India, and the United States: How replicable are accuracy nudge interventions? 2022.
- 28. Jiang LC, Sun M, Chu TH, Chia SC. Inoculation works and health advocacy backfires: Building resistance to COVID-19 vaccine misinformation in a low political trust context. Front Psychol. 2022;13:976091. pmid:36389491
- 29. Ogbodo JN, Onwe EC, Chukwu J, Nwasum CJ, Nwakpu ES, Nwankwo SU, et al. Communicating health crisis: a content analysis of global media framing of COVID-19. Health Promot Perspect. 2020;10(3):257–69. pmid:32802763
- 30. Lee J, Kalny C, Demetriades S, Walter N. Angry content for angry people: How anger appeals facilitate health misinformation recall on social media. Media Psychol. 2023;27(5):639–65.
- 31. Featherstone JD, Zhang J. Feeling angry: The effects of vaccine misinformation and refutational messages on negative emotions and vaccination attitude. J Health Commun. 2020;25(9):692–702. pmid:33103600
- 32. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. pmid:33782057
- 33. Kennedy CE, Fonner VA, Armstrong KA, Denison JA, Yeh PT, O’Reilly KR, et al. The evidence project risk of bias tool: assessing study rigor for both randomized and non-randomized intervention studies. Syst Rev. 2019;8(1):3. pmid:30606262
- 34. Ugarte DA, Young S. Effects of an online community peer-support intervention on COVID-19 Vaccine misinformation among essential workers: mixed-methods analysis. West J Emerg Med. 2023;24(2):264–8. pmid:36976597
- 35. DeGarmo DS, De Anda S, Cioffi CC, Tavalire HF, Searcy JA, Budd EL, et al. Effectiveness of a COVID-19 testing outreach intervention for Latinx communities: a cluster randomized trial. JAMA Netw Open. 2022;5(6):e2216796. pmid:35708690
- 36. Iles IA, Gaysynsky A, Sylvia Chou W-Y. Effects of narrative messages on key COVID-19 protective responses: findings from a randomized online experiment. Am J Health Promot. 2022;36(6):934–47. pmid:35081771
- 37. Maertens R, Roozenbeek J, Basol M, van der Linden S. Long-term effectiveness of inoculation against misinformation: three longitudinal experiments. J Exp Psychol Appl. 2021;27(1):1–16. pmid:33017160
- 38. Veletsianos G, Houlden S, Hodson J, Thompson CP, Reid D. An evaluation of a microlearning intervention to limit COVID-19 online misinformation. J Form Des Learn. 2022;6(1):13–24. pmid:35822059
- 39. Amin Z, Ali N, Smeaton A. Visual selective attention system to intervene user attention in sharing COVID-19 misinformation. arXiv. 2021.
- 40. Aslett K, Guess AM, Bonneau R, Nagler J, Tucker JA. News credibility labels have limited average effects on news diet quality and fail to reduce misperceptions. Sci Adv. 2022;8(18):eabl3844. pmid:35522751
- 41. Dias N, Pennycook G, Rand D. Emphasizing publishers does not effectively reduce susceptibility to misinformation on social media. 2020.
- 42. Kreps SE, Kriner DL. The COVID-19 infodemic and the efficacy of interventions intended to reduce misinformation. Public Opin Q. 2022;86(1):162–75.
- 43. Maragh-Bass A, Comello ML, Tolley EE, Stevens D Jr, Wilson J, Toval C, et al. Digital storytelling methods to empower young black adults in COVID-19 vaccination decision-making: Feasibility study and demonstration. JMIR Form Res. 2022;6(9):e38070. pmid:36155984
- 44. Vijaykumar S, Jin Y, Rogerson D, Lu X, Sharma S, Maughan A, et al. How shades of truth and age affect responses to COVID-19 (Mis)information: randomized survey experiment among WhatsApp users in UK and Brazil. Humanit Soc Sci Commun. 2021;8(1).
- 45. Yousuf H, van der Linden S, Bredius L, Ted van Essen GA, Sweep G, Preminger Z, et al. A media intervention applying debunking versus non-debunking content to combat vaccine misinformation in elderly in the Netherlands: a digital randomised trial. EClinicalMedicine. 2021;35:100881. pmid:34124631
- 46. Basol M, Roozenbeek J, van der Linden S. Good news about bad news: Gamified inoculation boosts confidence and cognitive immunity against fake news. J Cogn. 2020;3(1):2. pmid:31934684
- 47. Amazeen MA, Krishna A, Eschmann R. Cutting the bunk: Comparing the solo and aggregate effects of prebunking and debunking Covid-19 vaccine misinformation. Sci Commun. 2022;44(4):387–417.
- 48. Piltch-Loeb R, Su M, Hughes B, Testa M, Goldberg B, Braddock K, et al. Testing the efficacy of attitudinal inoculation videos to enhance COVID-19 vaccine acceptance: Quasi-experimental intervention trial. JMIR Public Health Surveill. 2022;8(6):e34615. pmid:35483050
- 49. Vandormael A, Adam M, Greuel M, Gates J, Favaretti C, Hachaturyan V, et al. The effect of a wordless, animated, social media video intervention on COVID-19 prevention: online randomized controlled trial. JMIR Public Health Surveill. 2021;7(7):e29060. pmid:34174778
- 50. Agley J, Xiao Y, Thompson EE, Chen X, Golzarri-Arroyo L. Intervening on trust in science to reduce belief in COVID-19 misinformation and increase COVID-19 preventive behavioral intentions: Randomized controlled trial. J Med Internet Res. 2021;23(10):e32425. pmid:34581678
- 51. van Stekelenburg A, Schaap G, Veling H, Buijzen M. Investigating and improving the accuracy of us citizens’ beliefs about the COVID-19 pandemic: longitudinal survey study. J Med Internet Res. 2021;23(1):e24069. pmid:33351776
- 52. Johnson V, Butterfuss R, Kim J, Orcutt E, Harsch R, Kendeou P. The “Fauci Effect”: Reducing COVID-19 misconceptions and vaccine hesitancy using an authentic multimodal intervention. Contemp Educ Psychol. 2022;70:102084. pmid:35765462
- 53. Freeman D, Loe BS, Yu L-M, Freeman J, Chadwick A, Vaccari C, et al. Effects of different types of written vaccination information on COVID-19 vaccine hesitancy in the UK (OCEANS-III): A single-blind, parallel-group, randomised controlled trial. Lancet Public Health. 2021;6(6):e416–27. pmid:33991482
- 54. Bender F, Rief W, Brück J, Wilhelm M. Effects of a video-based positive side-effect information framing: An online experiment. Health Psychol. 2023;
- 55. Ma J, Chen Y, Zhu H, Gan Y. Fighting COVID-19 Misinformation through an online game based on the inoculation theory: Analyzing the mediating effects of perceived threat and persuasion knowledge. Int J Environ Res Public Health. 2023;20(2):980. pmid:36673733
- 56. Fung MY, Lee YH, Lee YTA, Wong ML, Li JTS, Nok Ng EE, et al. Feasibility of a telephone-delivered educational intervention for knowledge transfer of COVID-19-related information to older adults in Hong Kong: a pre-post-pilot study. Pilot Feasibility Stud. 2022;8(1):228. pmid:36203186
- 57. Van Bavel JJ, Harris EA, Pärnamets P, Rathje S, Doell KC, Tucker JA. Political psychology in the Digital (mis)Information age: a Model of news belief and sharing. Soc Issues Policy Rev. 2021;15(1):84–113.
- 58. Gawronski B. Partisan bias in the identification of fake news. Trends Cogn Sci. 2021;25(9):723–4. pmid:34226126