
iHeard STL: Development and first year findings from a local surveillance and rapid response system for addressing COVID-19 and other health misinformation

  • Kimberly J. Johnson,

    Roles Formal analysis, Methodology, Supervision, Writing – original draft, Writing – review & editing

    Affiliation Brown School, Washington University in St. Louis, St. Louis, MO, United States of America

  • Olivia Weng,

    Roles Formal analysis, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Brown School, Washington University in St. Louis, St. Louis, MO, United States of America

  • Hannah Kinzer,

    Roles Formal analysis, Writing – original draft, Writing – review & editing

    Affiliation Brown School, Washington University in St. Louis, St. Louis, MO, United States of America

  • Ayokunle Olagoke,

    Roles Formal analysis, Methodology, Project administration, Visualization, Writing – original draft, Writing – review & editing

    Affiliation School of Health and Kinesiology, University of Nebraska at Omaha, Omaha, NE, United States of America

  • Balaji Golla,

    Roles Data curation, Visualization

    Affiliation Brown School, Washington University in St. Louis, St. Louis, MO, United States of America

  • Caitlin O’Connell,

    Roles Formal analysis, Writing – review & editing

    Affiliation Brown School, Washington University in St. Louis, St. Louis, MO, United States of America

  • Taylor Butler,

    Roles Project administration, Writing – original draft

    Affiliation Brown School, Washington University in St. Louis, St. Louis, MO, United States of America

  • Yoseph Worku,

    Roles Formal analysis, Visualization

    Affiliation Brown School, Washington University in St. Louis, St. Louis, MO, United States of America

  • Matthew W. Kreuter

    Roles Conceptualization, Methodology, Project administration, Supervision, Writing – review & editing

    mkreuter@wustl.edu

    Affiliation Brown School, Washington University in St. Louis, St. Louis, MO, United States of America

Abstract

Background

The U.S. Surgeon General and others have emphasized a critical need to address COVID-19 misinformation to protect public health. In St. Louis, MO, we created iHeard STL, a community-level misinformation surveillance and response system. This paper reports methods and findings from its first year of operation.

Methods

We assembled a panel of over 200 community members who answered brief, weekly mobile phone surveys to share information they heard in the last seven days. Based on their responses, we prioritized misinformation threats. Weekly surveillance data, misinformation priorities, and accurate responses to each misinformation threat were shared on a public dashboard and sent to community organizations in weekly alerts. We used logistic regression to estimate odds ratios (ORs) for associations between panel member characteristics and misinformation exposure and belief.

Results

In the first year, 214 panel members were enrolled. Weekly survey response rates were high (mean = 88.3% ± 6%). Exposure to a sample of COVID-19 misinformation items did not differ significantly by panel member age category or gender; however, African American panel members had significantly higher reported odds of exposure and belief/uncertain belief in some misinformation items (ORs from 3.4 to 17.1) compared to white panel members.

Conclusions

Our first-year experience suggests that this systematic, community-based approach to assessing and addressing misinformation is feasible and sustainable, and that it is a promising strategy for responding to the threat of health misinformation. In addition, further studies are needed to understand whether structural factors such as medical mistrust underlie the observed racial differences in exposure and belief.

Introduction

Misinformation is defined as “false or inaccurate information” [1]. During the COVID-19 pandemic, misinformation impacted vaccine uptake, promoted the use of treatments with unknown efficacy, and led to violence directed at workers, including healthcare, airline, and other front-line workers [2]. There is a need to systematically identify and respond to health misinformation to decrease its impact on individual and population health, especially within communities that have had higher rates of infection and mortality, such as African American communities [3–5].

Public health surveillance is an essential tool in disease prevention efforts and for controlling morbidity and mortality. Surveillance systems track and report cases, identify and assess emerging threats, and help inform public health strategies aiming to protect and improve the health of populations [3]. Misinformation is increasingly recognized as a threat to the public’s health, and several approaches to misinformation surveillance and response are being explored [4–7]. Each approach has limitations, including infrequent data collection, reliance exclusively on selected data sources such as social media conversations, and lack of a rapid-response mechanism and distribution infrastructure to disseminate findings and accurate information to local communities where the misinformation is circulating. As one example, in a January 2022 review of COVID-19 misinformation surveillance and response by 50 U.S. state health departments, only 34% of states had content on their websites addressing misinformation, and the most recent update, if noted, was dated three months prior [8].

In this paper, we describe the development of and initial findings from the first year of implementing a community-level misinformation surveillance and rapid response system. Using data and examples, we illustrate how the system tracks, reports, and responds to misinformation, what has been learned about misinformation trends over time, and which sub-groups appear most vulnerable to misinformation.

Methods

This study was approved by the Washington University in St. Louis Institutional Review Board.

Background

In the early stages of the pandemic, frontline workers in St. Louis reported frequent exposure to misinformation from community members and frustration that they felt unprepared to respond to it effectively and in the moment. iHeard STL (https://hcrlweb.wustl.edu/iheardstl/) was developed at Washington University in St. Louis by the Health Communication Research Lab (https://hcrl.wustl.edu/) to help solve this problem by proactively identifying misinformation circulating in the community and providing rapid, accurate responses to community organizations and the public to help them anticipate and counter misinformation. Weekly data collection is ongoing with continued funding support; this report describes the first year of data collection. Because iHeard STL emerged as a local response system rather than a planned research project, sample size was based on feasibility rather than the statistical power needed to answer specific research questions. iHeard STL consists of three components: surveillance, prioritization, and response.

Surveillance

Panel members.

We recruited panel members to answer brief, weekly mobile phone surveys assessing information they may have heard about COVID-19. We intentionally recruited from areas of St. Louis City and County in Missouri with a higher proportion of Black or African American residents and sought a 50/50 mix of community members and front-line workers. Front-line workers were defined on the recruitment survey as those who “…regularly interact with members of the community for work in-person, online or over the phone. This can include healthcare workers, phone operators, or social service workers, for example.” Outreach to potential panel members occurred primarily through community partners such as the St. Louis COVID-19 Regional Response Team (a collaborative of 35+ member organizations), St. Louis City Department of Health, St. Louis County Department of Public Health, YMCA of Greater St. Louis, United Way of Greater St. Louis, Alpha Phi Alpha Fraternity Inc., Herbert Hoover Boys and Girls Club, and St. Louis Story Stitchers. We also recruited through the Washington University School of Medicine’s Volunteer for Health Research Participant Registry [9], whose administrators sent an email to members who were ≥18 years old, Black or African American, and residing in zip codes under-represented in our sample.

We shared recruitment materials with our partners and/or distributed them at community-based events hosted by our partners. Individuals could access the recruitment form by scanning a QR code on the recruitment flyer or by emailing the study email address provided on the recruitment materials. The recruitment form included eligibility questions, followed by a full description of the project and what would be asked of participants if they volunteered. Those who wished to participate after reading the informed consent information provided contact information for payment purposes and to receive the longitudinal survey. Eligibility required being age 18 years or older, a resident of St. Louis City or St. Louis County, Missouri (or a non-resident employed there), and having access to a mobile phone with internet capabilities. Eligible individuals who provided informed consent and completed a brief baseline survey became panel members.

Data collection.

English language baseline and weekly surveys were designed in Qualtrics and optimized for mobile phone use. The first baseline surveys were completed on 8/23/2021, and the first weekly surveys began less than one week later, on 8/29/2021. Virus circulation in Missouri, as measured by new hospital admissions, varied widely during the study period [10]. Masking policies during this time ranged from mandates to strong recommendations for masking [11]. Enrollment and baseline surveys continued through the study period of 8/23/2021 to 8/21/2022, with the panel growing in size each week. Every Sunday, a link to the weekly survey was texted to all panel members. The survey closed 48 hours later. Panel members received a $10 Forte cash card for completing the baseline survey; $5 was added to the card electronically each time they completed a weekly survey.

Measures.

The baseline survey helped characterize panel membership and allowed us to stratify weekly misinformation findings by sub-groups within the panel. It assessed items including panel members’ age, gender (male, female, non-binary/third gender, prefer not to say), race (white, Black or African American, American Indian or Alaska Native, Asian or Asian American, Native Hawaiian or Pacific Islander, other, prefer not to say), Hispanic ethnicity (yes/no/prefer not to say), COVID-19 vaccination status (both doses of a 2-dose vaccine/first dose of a 2-dose vaccine/1-dose vaccine/unvaccinated but plan to get a COVID-19 vaccine/unvaccinated and don’t plan to get a COVID-19 vaccine/don’t know), level of worry about COVID-19 (0–100), and whether they or a loved one had ever had COVID-19 or been hospitalized with COVID-19 (yes/no/not sure).

The weekly surveys assessed exposure to, sources of, and belief in specific instances of COVID-19 misinformation. Panel members were first asked, “In the last 7 days, have you heard, read or seen <misinformation item>?” In the first 12 months of the project, 45 different items were included on at least one survey. Sample misinformation items include “taking Ivermectin will prevent or cure COVID-19, so vaccines are not needed” and “the COVID-19 vaccine is more dangerous than the virus itself.” The exact wording for four misinformation items analyzed in this paper is shown in Table 1.

Selection of misinformation items was informed by two main sources: COVID-19 misinformation tracking websites (e.g., CDC’s State of Vaccine Confidence Insights Reports [12], Public Health Communication Collaborative’s Misinformation Alerts [13], Google FactCheck [14]) and an open-ended question on the weekly survey asking panel members what other information about COVID-19 they had heard in the last seven days.

On average, misinformation items were included in the survey for about seven consecutive weeks (range 1–23 weeks); we removed items when <15% of panel members reported hearing the item for at least three consecutive weeks, or when items were no longer applicable. For example, a misinformation item claiming that “COVID is caused by snake venom” was included for only one week because exposure was near zero among our panel members. In any given week, five to ten misinformation items were assessed. After each misinformation item, regardless of the panel member’s response, the survey screen displayed accurate information about the item. This is a recommended best practice [15] to avoid introducing, validating, or spreading misinformation simply by asking about it.
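
As a minimal illustration of the item retirement rule described above (a sketch with our own function and variable names, assuming weekly exposure is stored as a percentage per week), the following R code flags an item once exposure has stayed below 15% for at least three consecutive weeks:

# Sketch: decide whether to retire a misinformation item from the weekly survey.
# 'weekly_exposure' is a hypothetical numeric vector of percent exposed, by week.
should_retire <- function(weekly_exposure, threshold = 15, run_length = 3) {
  below <- weekly_exposure < threshold          # TRUE where exposure is under the threshold
  runs <- rle(below)                            # lengths of consecutive TRUE/FALSE runs
  any(runs$values & runs$lengths >= run_length) # any run of >= 3 consecutive low weeks?
}

should_retire(c(42, 30, 24, 18, 12, 9, 8))      # TRUE: weeks 5-7 fall below 15%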

For panel members who reported that they had heard, read, or seen a misinformation item in the last seven days, the next question assessed where they heard, read, or saw it (with responses of family member or friend/neighbor or coworker/someone else/on social media/other internet source/TV or radio/other (please specify)/not sure/refuse to answer). Panel members who reported that they heard the misinformation item were also asked whether they believed it (with responses of definitely true/seems like it could be true/not sure if it’s true or untrue/seems like it’s not true/definitely not true). A final open-ended item asked them to share any other COVID-19 information they had heard in the last seven days.

Each Tuesday, when the weekly survey closed, panel member responses were aggregated and rapidly analyzed for use in summary reports to community partners and the iHeard STL dashboard. We calculated weekly exposure to each item as the percentage of respondents to the question who reported having heard/read/seen the item (with responses of Yes) in the last seven days. Weekly belief was calculated as the percentage of exposed respondents in a given week who reported that the item was definitely true/seems like it could be true/not sure if it’s true or untrue. We also created first exposure and first belief variables for use in sub-group analyses, capturing panel members’ responses to these questions the first time they answered them, thereby reducing potential bias from the accurate information that was presented after each misinformation item on each survey. Panel members were categorized as first exposed if they reported having heard/read/seen an item (with responses of Yes) the first time they answered a survey question about that item. Panel members were categorized as first believing if they reported any of three responses (definitely true/seems like it could be true/not sure if it’s true or untrue) the first time they answered the belief question for a recurring misinformation item.
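
As a minimal sketch of how these weekly measures can be computed for a single item in a single week (illustrative response vectors; variable names are ours, not the study's), in R:

# Belief responses counted as belief/uncertain belief, per the definition above.
belief_options <- c("definitely true", "seems like it could be true",
                    "not sure if it's true or untrue")

# Hypothetical responses for one item in one week; belief is asked only of those exposed.
heard  <- c("Yes", "No", "Yes", "Yes", "No")
belief <- c("seems like it could be true", NA, "definitely not true",
            "not sure if it's true or untrue", NA)

pct_exposed <- 100 * mean(heard == "Yes")                              # weekly exposure (60%)
pct_belief  <- 100 * mean(belief[heard == "Yes"] %in% belief_options)  # weekly belief (66.7%)

# First exposure and first belief apply the same logic to each panel member's
# earliest response to a given item.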

Prioritization

We prioritized misinformation items according to threat level, which was calculated each week for each misinformation item as the product of the percent exposed and the percent first belief, expressed as a percentage of the maximum possible product (10,000), generating a score between zero and 100. Threat score categories of low, medium, and high corresponded to scores of 0 to <5, ≥5 to <20, and ≥20, respectively. Threat level was used to identify misinformation items to highlight in our response.
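
As a worked illustration (a sketch assuming the 0–100 scaling described above; function names are ours): an item that 50% of members reported hearing, and that 40% of those exposed believed or were unsure about, would score (50 × 40) / 100 = 20, placing it in the high-threat category. In R:

# Sketch of the weekly threat score and its low/medium/high categorization.
threat_score <- function(pct_exposed, pct_first_belief) {
  (pct_exposed * pct_first_belief) / 100        # 0-100 scale; maximum when both are 100%
}

threat_level <- function(score) {
  cut(score, breaks = c(0, 5, 20, 100),         # 0 to <5 low, 5 to <20 medium, >=20 high
      labels = c("low", "medium", "high"),
      right = FALSE, include.lowest = TRUE)
}

threat_level(threat_score(50, 40))              # "high"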

Response

Surveillance findings and responses to misinformation are rapidly reported back to the community in two ways. First, findings from each weekly survey, including threat-level priorities and exposure trend data over time, are posted to a public-facing dashboard [16] within 72 hours of the survey closing. For each misinformation item assessed that week, the dashboard provides accurate responses in both a short and a longer version. The short version models a quick response that could be used if someone encountered the misinformation. The longer version provides a more detailed explanation, with evidence and hyperlinks to official and original sources, for users who want or need to know more.

Second, we develop and e-mail a weekly “alert” to community partners that identifies and describes a high-priority finding from that week. Alerts include suggested action steps and a link to the dashboard. Alerts were distributed through GoDaddy [17] from 3/24/2022 to 6/30/2022; we used its "unique view" tracker to determine how many and which partners opened our alert emails (referred to as a “view”), and its "engage" tracker to determine the number of alert recipients who clicked at least one of the links in the alert or shared our email with others [18]. If a subscriber opened the email multiple times, only one view was counted.

Data analyses

To illustrate aspects of the system, we conducted four sets of analyses examining: (1) time trends for weekly exposure to and weekly belief of misinformation; (2) variability in misinformation first exposure and first belief by sub-groups of panel members; (3) emerging misinformation identified from open-ended responses; and (4) use of the misinformation dashboard and weekly alerts. All analyses were conducted using R software.

Time trends for weekly exposure to and weekly belief of misinformation.

We selected four misinformation items assessed in weekly surveys a priori to use as examples in analyses (Table 1). We plotted the proportion of panel members exposed to and believing (see S1 Table for definitions) each misinformation item week by week. We used linear regression to estimate slopes (β’s) and 95% confidence intervals (CIs) for the percent change in exposure and belief over time as a function of survey week number. We removed data points before 10/17/2021 because of the small number of enrolled panel members (≤12) in the first seven weeks of surveillance.
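
The sketch below (simulated data; the week indexing and column names are ours, for illustration only) shows the form of this estimate in R for a single item: regressing weekly exposure percentage on survey week number and reporting the slope with its 95% CI:

# Simulated weekly exposure for one item, excluding early low-enrollment weeks.
set.seed(1)
item_weekly <- data.frame(
  week        = 9:52,
  pct_exposed = pmax(0, 60 - 1.2 * (0:43) + rnorm(44, mean = 0, sd = 5))
)

fit <- lm(pct_exposed ~ week, data = item_weekly)  # linear trend over survey weeks
coef(fit)["week"]                                  # slope: average change per week
confint(fit, "week", level = 0.95)                 # 95% confidence interval for the slope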

Misinformation first exposure and first belief by sub-group.

Using logistic regression, we examined associations between panel members’ age category, race, gender, and occupation category and their exposure to and belief of misinformation. Separate models examined first exposure and first belief as outcomes (see S1 Table for definitions). Demographic predictors, each examined in a separate model, were age category (18–39/40–49/≥50 years), race (white/Black or African American), gender (male/female), and occupation category (front-line worker/not). We excluded those with race and gender other than the categories listed above due to small numbers. The likelihood ratio (LR) test was used to compare models to assess whether a variable significantly improved model fit for first exposure or first belief. Results were considered statistically significant if the two-sided p-value was < .05. Odds ratios (ORs) and 95% confidence intervals (CIs) are reported.
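
A minimal sketch of one such model in R (simulated data and variable names of our own; shown only to illustrate the form of the analysis, the odds ratio, and the likelihood ratio test):

# Simulated first-exposure data for one misinformation item, by race.
set.seed(2)
dat <- data.frame(
  race = factor(rep(c("white", "Black or African American"), times = c(110, 80)),
                levels = c("white", "Black or African American"))
)
dat$first_exposure <- rbinom(nrow(dat), size = 1,
                             prob = ifelse(dat$race == "white", 0.15, 0.35))

fit_race <- glm(first_exposure ~ race, data = dat, family = binomial)  # predictor model
fit_null <- glm(first_exposure ~ 1, data = dat, family = binomial)     # intercept-only model

exp(cbind(OR = coef(fit_race), confint.default(fit_race)))  # ORs with Wald 95% CIs
anova(fit_null, fit_race, test = "LRT")                      # likelihood ratio test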

Identifying new misinformation from open-ended responses.

To examine the potential for weekly misinformation surveillance to identify emerging misinformation, we coded open-ended responses provided by panel members when asked what else they had heard about COVID-19 in the last seven days. Because any participant’s open-ended response could include multiple distinct claims (e.g., “vaccines are not effective and masking hurts children”), the unit of analysis for coding was each unique claim reported in a single week.

Claims were assessed for misinformation only if they could be proven or disproven with peer-reviewed scientific articles, public data, public reports, or news articles from fact-checked news outlets (e.g., New York Times, Washington Post). For this reason, claims such as “nothing to report” and “My graduation ceremony required masks” were not assessed for misinformation. Two research team members independently coded each claim into one of six exclusive categories (vaccine, disease, virus, healthcare, general, not myth) and up to two of 47 non-exclusive sub-categories (see S2 Table). The codebook was developed using an inductive, iterative approach [19,20]. Claims were classified as misinformation when they were false or unproven given the best available evidence at the time [2]. Discrepancies between coders were resolved through discussion. We report the frequency of misinformation claims overall and by category, and the proportion of panel members providing open-ended responses.

Use of misinformation dashboard and alerts.

Using web analytics, we report total page views of the iHeard STL dashboard from 3/24/2022 to 9/30/2022. We also report use of weekly misinformation alerts in terms of opened e-mails and clicks to access the iHeard STL dashboard from 3/24/2022 to 6/30/2022.

Results

Participants and response rates

Between 8/23/2021 and 8/21/2022, we enrolled 214 panel members. Most were female (63.1%) and either white (52.3%) or Black/African American (37.4%) (Table 2); the mean age was 38.6 ± 14.6 years (not shown). Over half were front-line workers (54.7%) and most reported at baseline being fully vaccinated with a two-dose vaccine (85.0%) (Table 2). Response rates for each weekly survey ranged from 72.5% to 100% with a mean of 88.8% ± 4.8% (Fig 1). In all, 7,512 surveys were sent and 6,605 were completed.

Fig 1. Weekly survey response rate (%) from August 2021 to August 2022.

https://doi.org/10.1371/journal.pone.0293288.g001

Time trends in weekly exposure and weekly belief

Weekly exposure rates for the four misinformation items showed significant declines over time. Average changes per week were -8.1% (95% CI: -10.2 to -6.0) for KidMask, -3.3% (95% CI: -5.4 to -1.3) for VaxFail, -2.7% (95% CI: -3.9 to -1.5) for VaxDanger, and -2.6% (95% CI: -4.0 to -1.2) for Ivermectin. The rate of weekly belief did not change significantly over time for any of the four misinformation items (Fig 2, S3 Table).

Fig 2. Trends of weekly exposure (blue) and weekly belief (coral) of four inaccurate information items: (a) VaxFail; (b) VaxDanger; (c) KidMask; (d) Ivermectin.

https://doi.org/10.1371/journal.pone.0293288.g002

Misinformation first exposure and first belief by sub-group

First exposure did not differ significantly by panel member age category, gender, or occupation category for any of the four example misinformation items (LR test p > .05) (Table 3). However, Black or African American panel members were more likely than white panel members to be exposed to the VaxDanger item (OR = 3.4; 95% CI: 1.4–8.2; p = .006), but not to the other misinformation items (VaxFail p = .32, KidMask p = .61, Ivermectin p = .34).

Table 3. Odds ratios of first exposure and belief by panel member demographic characteristicsa.

https://doi.org/10.1371/journal.pone.0293288.t003

First belief of misinformation was significantly higher for Black or African American vs. white panel members; odds ratios for believing the VaxDanger, VaxFail, and Ivermectin items were, respectively, 17.1 (95% CI: 2.6–339.1), 6.3 (95% CI: 1.6–28.4), and 5.3 (95% CI: 1.2–25.5). Panel members ages 50 and older had 8.4 times higher odds (95% CI: 1.5–67.5) of believing the KidMask item compared with those ages 18 to 39.

Open-ended response analysis

Across 52 weekly surveys, 165 different panel members (77% of total enrolled) provided 1,449 open-ended responses about other COVID-19 information they had heard in the last week. Of these, 1,136 were assessable claims. Among all claims, the most frequent classification for main category was “vaccine” (488, 43%), and the most frequent subcategory was “statistics” (250, 13%). Among misinformation claims, the most frequent main category was also “vaccine” (257, 52%), and the most frequent subcategory was “booster” (78, 9%) (Table 4).

Table 4. Misinformation claims by main categories and top 5 sub-categories.

https://doi.org/10.1371/journal.pone.0293288.t004

On average, about 19 panel members (18.71 ± 8.37) provided at least one assessable claim per week, for an average of 0.18 ± 0.12 claims per respondent per week. Of the 1,136 assessable claims, 505 (44%) were classified as misinformation, and 112 panel members provided at least one open-ended response that was classified as misinformation. The average number of respondents per week who reported at least one misinformation claim was 9 (8.69 ± 5.57). Overall, there was an average of 0.09 ± 0.08 misinformation claims per respondent per week.

Use of misinformation dashboard and alerts

The iHeard STL dashboard was promoted by email starting 3/24/2022. Through 9/30/2022, we recorded 15,939 views of the homepage, which provides information about exposure to specific misinformation threats in the last week, beliefs in the misinformation, time trends for misinformation exposure, and a short, accurate response to address each misinformation item. Of the 15,939 views, which included Washington University internal users, 611 (4%) then clicked to receive “more information” about a misinformation threat, which took them to a longer response with a more detailed explanation, evidence, and citations to support the accurate response.

We began sending weekly alerts by email on 3/24/2022. Through 6/30/2022, the number of non-university community organization alert subscribers ranged from 36 to 46 per week (mean per week = 42.5 ± 3.3). The weekly view rate for alerts ranged from 22.7% to 51.4%, with a mean of 32.5% ± 8.9% (Fig 3). The weekly engagement rate (the proportion of those viewing the alert who then clicked on a hyperlink to the dashboard) ranged from 0% to 31.3% by week, with a mean of 13.3% ± 10.2%.

Fig 3. Percent of non-Washington University email recipients who viewed (grey) and engaged with (black) weekly misinformation alerts from 3/24/2022 to 6/30/2022.

https://doi.org/10.1371/journal.pone.0293288.g003

Discussion

Rapid, local responses to COVID-19 and other health misinformation are urgently needed [18]. We developed one of the first local misinformation surveillance and response systems in the United States. Our system routinely tracks and rapidly reports specific claims about COVID-19 that are circulating locally and captures new information that community members are hearing about other emerging health issues. Because these data are collected each week, we can identify near real-time trends, helping to prioritize community responses in the face of multiple misinformation threats, and even targeting those efforts to population sub-groups with disproportionate exposure to and/or belief in specific misinformation.

We found that community exposure to misinformation items generally decreased over time, but the rate varied by misinformation topic. For example, exposure to an item claiming that “masks are dangerous for kids” showed the sharpest estimated decrease, at ~8.1% per week, while exposure to an item claiming that “the COVID-19 vaccine is more dangerous than the virus itself” decreased more slowly, at ~2.7% per week. Studies have identified a range of factors that contribute to the spread and duration of misinformation exposure, including “echo chambers” of like-minded clusters of individuals sharing a limited set of information channels; avoidance or selective non-exposure to fact-checked corrective evidence in a crowded media environment; and increasingly rapid news cycles [15,21–23]. Although our misinformation surveillance system alone cannot yet distinguish the relative contributions of these factors, the weekly collection of exposure and belief data at a community level makes it possible to integrate data from other sources into analyses, providing insights that could inform strategies to better address misinformation.

In contrast to declining exposure over time, initial findings from our system suggest that believability of misinformation claims was relatively stable among our panel members. These data reinforce prior research findings that misinformation beliefs are difficult to change over time [24–27]. The stability of belief estimates is particularly noteworthy because the weekly survey provided respondents with a brief counter-message immediately after panel members answered each question, each week, about exposure to a misinformation item. In other words, in line with previous literature [24–27], we found that belief in some misinformation claims persisted in the face of routine counter-messaging.

These findings highlight the potential of inoculating or “pre-bunking” community members against misinformation to prevent its spread [28]. By identifying misinformation claims with stable believability, counter messaging could be supplemented with theory-based interventions to address the sources of entrenched beliefs. For example, in Zimbabwe, a behavioral theory-informed intervention to address misinformation about HIV and condom usage successfully changed beliefs associated with intention to use condoms [29]. Providing trusted local messengers with accurate information to share with constituents could further enhance and accelerate efforts to address misinformation.

Black or African American panel members had greater odds of reported exposure to and belief in misinformation. Compared with white panel members, their odds of exposure to claims that the vaccine was more dangerous than the disease were three times greater, and they also had higher odds of believing an item could be true, or being uncertain whether it was true or untrue, for three of the four misinformation items. There is a long history of African Americans being targeted with health-harming products and information, as well as mistrust of health care and health research among African Americans based on past abuses [30,31]. For our project’s purpose of helping to focus and support community responses to misinformation, the ability to identify specific sub-groups at increased risk of misinformation exposure, belief, or uncertain belief is particularly valuable. For example, it could inform message and channel strategies, such as outreach through specific community partners or trusted messengers with unique access and credibility in a given sub-group. It could also help guide efforts to better understand the factors underlying differences between sociodemographic groups [32].

The World Health Organization recommends monitoring public conversations about vaccine sentiment, both online and offline. Current approaches rely heavily on social media listening or occasional cross-sectional surveys [33]. Our weekly community surveillance efforts complement these approaches by detecting when specific misinformation threats enter a community and how they spread.

In our analyses of over 1,000 open-ended responses to a question about what COVID-19 information panel members had heard in the last 7 days, nearly half of the assessable claims (44%) were classified as misinformation. These claims were invaluable in shaping surveillance, as 21 of the 45 survey items we administered in the first year originated as open-ended responses. This illustrates a key benefit of routine misinformation surveillance: it feeds a rapid-cycle process that converts new misinformation reported by a few community members into a community-wide survey item within days. If or when community-wide survey responses show that the new misinformation poses a substantial threat, we can immediately spread accurate information to counter it. This echoes other misinformation response initiatives that use community-engaged methods such as citizen science and participatory surveillance to rapidly identify health-relevant events and complement traditional surveillance system methods [34–36].

Community-engaged approaches to surveillance may help build trust between public health institutions and communities [36]. For example, such approaches provide community members an opportunity to voice concerns about misinformation and observe how local systems respond to their concerns. At the same time, health professionals can learn about misinformation that is circulating locally and focus their efforts accordingly. Establishing infrastructure that enables community members to be active participants in identifying misinformation locally may help foster community information stewardship [37].

Strengths and limitations

iHeard STL is a novel, real-world, local surveillance and response system for health misinformation. Both the system and this initial evaluation have several strengths. First, they are multi-faceted. The system includes a weekly surveillance apparatus, a public-facing dashboard to share results with the community, and multiple mechanisms for rapidly distributing accurate information to community organizations to help them address misinformation when they encounter it. This report shares first-year findings from all components of the system. Second, community engagement with the system has been high. Response rates to weekly surveillance have averaged 88%, 77% of panel members have contributed open-ended responses to share new misinformation they have heard, and community organizations and residents are accessing the accurate information provided by the system.

There are important limitations of this evaluation, mostly attributable to iHeard STL being rapidly developed and launched as a community response system rather than as a research project. For example, it currently relies on a convenience sample of panel members, which may affect the generalizability of surveillance results. Women and vaccinated individuals are over-represented in our panel, and we intentionally over-recruited Black and African American individuals because they have been disproportionately affected by COVID-19 [38] and have higher rates of vaccination hesitancy, perhaps due to greater exposure to misinformation [39]. Our panel also includes many front-line workers, whom the system was designed to help. In addition, an estimated 9.2% of our panel members are employed in the healthcare sector (16.5% of these reported being frontline workers) vs. 14% of the U.S. population employed in this sector [40]. This may lead to underestimated exposure rates for surveyed health-related claims. Because the community panel was accrued over time, early weeks of surveillance had a smaller number of respondents, which led to some data being excluded from trend analyses for misinformation exposure and belief. Estimates of differential exposure to and belief of misinformation between sub-groups may be imprecise due to small numbers and should be interpreted cautiously. We also excluded individuals who reported a gender other than male or female from the gender analysis, and individuals who reported a race/ethnicity other than white or African American from the race analysis, which limits the interpretation of these results to the included groups. In addition, the small sample size and low statistical power precluded multivariable regression models. Finally, our surveys were conducted in English, and we may therefore have missed misinformation and exposure patterns circulating in non-English-speaking communities.

Conclusions

It is feasible to create and operate a community-based system to monitor and rapidly respond to health misinformation. After one year of operation, we find that the data captured through routine surveillance can help identify priority threats in a timely way, as well as population sub-groups that may be disproportionately affected.

Our system has evolved a great deal in its first year and will continue to evolve in at least three specific ways: assessing and addressing non-COVID health misinformation (e.g., Mpox and other emerging diseases); assessing and reinforcing accurate health information (e.g., approval of boosters); and expanding to include communities in other states. We will also examine longer-term, community-level effects of the system, such as its impact on trust in science, health knowledge, and related health improvements in communities.

Supporting information

S1 Table. Exposure and belief definitions, measures, and use.

https://doi.org/10.1371/journal.pone.0293288.s001

(DOCX)

S2 Table. Subcategories for open-ended misinformation survey responses.

https://doi.org/10.1371/journal.pone.0293288.s002

(DOCX)

S3 Table. Beta estimates for first exposure and first belief rate.

https://doi.org/10.1371/journal.pone.0293288.s003

(DOCX)

S4 Table. Crosstab of raw counts of first exposure and first belief by sub-group.

https://doi.org/10.1371/journal.pone.0293288.s004

(DOCX)

References

  1. Misinformation and disinformation. Available from: https://www.apa.org/topics/journalism-facts/misinformation-disinformation [accessed Jul 30, 2023].
  2. Health Misinformation—Current Priorities of the U.S. Surgeon General. Available from: https://www.hhs.gov/surgeongeneral/priorities/health-misinformation/index.html [accessed Jun 8, 2022].
  3. Public Health Surveillance at CDC | CDC. Available from: https://www.cdc.gov/surveillance/improving-surveillance/Public-health-surveillance.html [accessed Jul 27, 2022].
  4. Tibbels N, Dosso A, Allen-Valley A, Benie W, Fordham C, Brou JA, et al. Real-time tracking of COVID-19 rumors using community-based methods in Côte d’Ivoire. Glob Health Sci Pract 2021;9(2):355–364. pmid:34038385.
  5. Gorman JM, Scales DA. Leveraging infodemiologists to counteract online misinformation: Experience with COVID-19 vaccines. Harvard Kennedy School Misinformation Review 2022.
  6. WHO SEARO. Seventy-fourth Session: Strengthening public health emergency preparedness and response in the South-East Asia Region. 2021.
  7. Chou WS, Oh A, et al. The Persistence and Peril of Misinformation. Am Sci 2017;105(6):372.
  8. Weng O. How are state and local health departments addressing COVID-19 misinformation? CDC Vaccine Confidence Network. 2022.
  9. Research Participant Registry | Volunteer for Health | Washington University in St. Louis. Available from: https://sites.wustl.edu/wuvfh/ [accessed Jun 29, 2022].
  10. Centers for Disease Control and Prevention. CDC COVID Data Tracker: Home. 2022. Available from: https://covid.cdc.gov/covid-data-tracker/#trends_dailydeaths [accessed May 12, 2022].
  11. COVID-19 Related News and Announcements. Available from: https://www.stlouis-mo.gov/government/departments/health/communicable-disease/covid-19/news.cfm [accessed Jul 30, 2023].
  12. COVID-19 Vaccine Confidence | CDC. Available from: https://www.cdc.gov/vaccines/covid-19/vaccinate-with-confidence.html [accessed Sep 12, 2022].
  13. Misinformation Alerts—Public Health Communication Collaborative. Available from: https://publichealthcollaborative.org/misinformation-alerts/ [accessed Sep 12, 2022].
  14. Fact Check Tools. Available from: https://toolbox.google.com/factcheck/explorer [accessed Jun 29, 2022].
  15. van der Linden S. Misinformation: susceptibility, spread, and interventions to immunize the public. Nat Med 2022;28(3):460–467. pmid:35273402.
  16. iHeard St. Louis. Available from: https://hcrlweb.wustl.edu/iheardstl/ [accessed Nov 7, 2022].
  17. GoDaddy. Available from: https://www.godaddy.com/offers/brand/new?isc=gofx2001a&cdtl=c_17798693411.g_139290519259.k_kwd-93455629.a_605892592481.d_c.ctv_g&bnb=b&gclid=Cj0KCQjwof6WBhD4ARIsAOi65ah12BWR664V0J5uRu89jVwJwDqXqWf3v6NAJ7BlLzKIlH5r6IbveqgaAoztEALw_wcB [accessed Jul 25, 2022].
  18. Analytics Explained: Click-Throughs | GoDaddy Email Marketing—GoDaddy Help ZA. Available from: https://za.godaddy.com/help/analytics-explained-click-throughs-15990 [accessed Aug 7, 2022].
  19. Transforming qualitative information: Thematic analysis and code development. PsycNET. Available from: https://psycnet.apa.org/record/1998-08155-000 [accessed Aug 7, 2022].
  20. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol 2006;3(2):77–101.
  21. Randolph W, Viswanath K. Lessons learned from public health media campaigns: Marketing health in a crowded media world. Annu Rev Public Health 2004;25:419–456. pmid:15015928.
  22. Guess AM, Nyhan B, Reifler J. Exposure to untrustworthy websites in the 2016 US election. Nat Hum Behav 2020;4(5):472–480. pmid:32123342.
  23. Valecha R, Volety T, Rao HR, Kwon KH. Misinformation Sharing on Twitter during Zika: An Investigation of the Effect of Threat and Distance. IEEE Internet Comput 2021;25(1):31–39.
  24. Johnson HM, Seifert CM. Sources of the Continued Influence Effect: When Misinformation in Memory Affects Later Inferences. J Exp Psychol Learn Mem Cogn 1994;20(6):1420–1436.
  25. Ecker UKH, Lewandowsky S, Cook J, Schmid P, Fazio LK, Brashier N, et al. The psychological drivers of misinformation belief and its resistance to correction. Nat Rev Psychol 2022;1:13–29.
  26. Walter N, Tukachinsky R. A Meta-Analytic Examination of the Continued Influence of Misinformation in the Face of Correction: How Powerful Is It, Why Does It Happen, and How to Stop It? Communication Research 2020;47(2):155–177. doi: 10.1177/0093650219854600.
  27. Nyhan B, Reifler J. When corrections fail: The persistence of political misperceptions. Polit Behav 2010;32(2):303–330.
  28. Roozenbeek J, van der Linden S, Goldberg B, Rathje S, Lewandowsky S. Psychological inoculation improves resilience against misinformation on social media. Sci Adv 2022;8(34):eabo6254. pmid:36001675.
  29. Kasprzyk D, Montano D. Application of an integrated behavioral model to understand HIV prevention behavior of high-risk men in rural Zimbabwe. In: Ajzen I, Albarracin D, Hornik R, editors. Prediction and Change of Health Behavior: Applying the Reasoned Action Approach. Hillsdale: Erlbaum; 2007. p. 145–168. Available from: https://www.researchgate.net/publication/285875746_Application_of_an_integrated_behavioral_model_to_understand_HIV_prevention_behavior_of_high-risk_men_in_rural_Zimbabwe [accessed Sep 12, 2022].
  30. Grier SA, Kumanyika SK. The context for choice: health implications of targeted food and beverage marketing to African Americans. Am J Public Health 2008;98(9):1616–1629. pmid:18633097.
  31. Balbach ED, Gasior RJ, Barbeau EM. R.J. Reynolds’ targeting of African Americans: 1988–2000. Am J Public Health 2003;93(5):822–827. pmid:12721151.
  32. Pan W, Liu D, Fang J. An Examination of Factors Contributing to the Acceptance of Online Health Misinformation. Front Psychol 2021;12. pmid:33732192.
  33. Chaney SC, Benjamin P, Mechael P. Finding the Signal through the Noise: A landscape review and framework to enhance the effective use of digital social listening for immunisation demand generation. 2021. Available from: https://www.gavi.org/sites/default/files/2021-06/Finding-the-Signal-Through-the-Noise.pdf [accessed Aug 23, 2023].
  34. Myers N. Information Sharing and Community Resilience: Toward a Whole Community Approach to Surveillance and Combatting the “Infodemic.” World Med Health Policy 2021;13(3):581–592. pmid:34230869.
  35. ‘Public Editor’ Gives People the Solution Government and Big Tech Haven’t—Public Editor. Available from: https://www.publiceditor.io/blog/launch-pressrelease [accessed Jul 31, 2022].
  36. Smolinski MS, Crawley AW, Olsen JM, Jayaraman T, Libel M. Participatory Disease Surveillance: Engaging Communities Directly in Reporting, Monitoring, and Responding to Health Threats. JMIR Public Health Surveill 2017;3(4). pmid:29021131.
  37. MacPhail VJ, Colla SR. Power of the people: A review of citizen science programs for conservation. Biol Conserv 2020;249:108739.
  38. Reyes MV. The Disproportional Impact of COVID-19 on African Americans. Health Hum Rights 2020;22(2):299. pmid:33390715.
  39. Khubchandani J, Macias Y. COVID-19 vaccination hesitancy in Hispanics and African-Americans: A review and recommendations for practice. Brain Behav Immun Health 2021;15:100277. pmid:34036287.
  40. United States Census Bureau. 22 Million Employed in Health Care Fight Against COVID-19. Available from: https://www.census.gov/library/stories/2021/04/who-are-our-health-care-workers.html [accessed Aug 23, 2023].