Abstract
Objective
To describe the implementation of a test-negative design case-control study in California during the Coronavirus Disease 2019 (COVID-19) pandemic.
Methods
Between February 24, 2021, and February 24, 2022, a team of 34 interviewers called 38,470 Californians, enrolling 1,885 who tested positive for SARS-CoV-2 (cases) and 1,871 who tested negative (controls) in a 20-minute telephone survey. We estimated adjusted odds ratios for answering the phone and consenting to participate using mixed effects logistic regression. We used a web-based anonymous survey to compile interviewer experiences.
Results
Cases had 1.29-fold (95% CI: 1.24–1.35) higher adjusted odds of answering the phone and 1.69-fold (1.56–1.83) higher adjusted odds of consenting to participate compared to controls. Calls placed from 4pm to 6pm had the highest adjusted odds of being answered. Some interviewers experienced mental wellness challenges when interacting with participants who had physical (e.g., food, shelter) and emotional (e.g., grief counseling) needs, and when enduring verbal harassment from individuals called.
Conclusions
Calls placed during afternoon hours may optimize response rates when enrolling controls in a case-control study during a public health emergency response. Proactive check-ins and continual collection of interviewer experiences may help maintain the mental wellbeing of the investigation workforce. Remaining adaptive to the dynamic needs of the investigation team is critical to a successful study, especially in emergent public health crises like the COVID-19 pandemic.
Citation: Fukui N, Li SS, DeGuzman J, Myers JF, Openshaw J, Sharma A, et al. (2024) Mixed methods approach to examining the implementation experience of a phone-based survey for a SARS-CoV-2 test-negative case-control study in California. PLoS ONE 19(5): e0301070. https://doi.org/10.1371/journal.pone.0301070
Editor: Moustaq Karim Khan Rony, Bangladesh Open University, BANGLADESH
Received: December 26, 2023; Accepted: March 9, 2024; Published: May 21, 2024
Copyright: © 2024 Fukui et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: De-identified data and supporting data documentation will be made available through GitHub upon publication: https://github.com/noz-o-mi/CA-COVID-Case-Control-implementation.
Funding: This study was supported by the Centers for Disease Control and Prevention, Enhanced Epidemiology and Laboratory Capacity (ELC) grant number: 5-NU50CK000539. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Joseph Leonard has received grants and honoraria from Pfizer, Inc, outside the submitted work. There are no patents, products in development or marketed products associated with this research to declare. This does not alter our adherence to PLOS ONE policies on sharing data and materials.
Introduction
The Coronavirus Disease 2019 (COVID-19) pandemic induced rapid mobilization of public health research to inform policy [1]. Observational studies have played critical roles in defining COVID-19 epidemiology by identifying risk factors for infection and estimating the effectiveness of mitigation strategies [2–7]. Many observational studies conducted during the pandemic utilized remote technologies, such as phones, to safely enroll participants, however these platforms may pose unique challenges [8–12]. Understanding phone-based participation patterns throughout the pandemic may help optimize the implementation of future epidemiologic studies.
Prior to the pandemic, participation in phone surveys varied by disease, age, and time of day [11,13–15]. Individuals with a history of disease, or those close to persons with such a history, are more likely to participate than unaffected individuals [11]. Younger people may be more willing to answer an unknown caller, but less willing to participate in a public health survey that involves disclosing sensitive information such as their recent contacts [14]. Additionally, the time of day that individuals are called may also influence participation [15]. Polarization of public health throughout the pandemic, including increasingly negative attitudes towards contact tracing, may limit willingness to participate in phone-based COVID-19 research [16–19]. In the novel, dynamic context of the pandemic, identification of predictors of participation in observational studies using remote technologies is limited. Public health professionals report substantial mental health burdens during the pandemic, yet details regarding the toll of sensitive research on researchers are scant [20–23].
We describe the implementation of a phone-based, test-negative SARS-CoV-2 case-control study in California during the COVID-19 pandemic. We estimate predictors of answering the phone, enrolling in the study, and identify reasons for refusing participation. Furthermore, we provide qualitative descriptions of interviewer experiences to identify successes and gaps in staff support systems. These components are critical to successful implementation and can inform future epidemiologic studies conducted throughout similar pandemic settings.
Materials and methods
Study design and enrollment
We reviewed data collected from February 24, 2021, to February 24, 2022, by the California Department of Public Health (CDPH) test-negative case-control study that evaluated risk factors for SARS-CoV-2 infection (S1 File) [6,7]. Potential case and control participants were defined as individuals with a positive and negative laboratory-confirmed SARS-CoV-2 test result, respectively. Cases and controls were individually matched by age group, sex, multi-county region, and test result window (≤7-day difference). Throughout the study period, trained interviewers used soft-phone technology with a California area code to call and facilitate a 20-minute survey in English or Spanish (S1 File). A script accompanied the electronic survey to standardize the participant experience (S1 File). Potential participants were informed of the 20-minute survey length before consenting.
Individuals were eligible to participate if they reported no clinical diagnosis of COVID-19 or positive test result for SARS-CoV-2 infection prior to their most recent test result. From January 6, 2022, as at-home test use increased, those with a recent (<2 days prior) positive at-home test result became eligible. If a potential participant was not capable of answering questions, recruitment proceeded when a proxy respondent was available and the potential participant gave informed consent both to participation and to having the proxy answer on their behalf.
Interviewers enrolled a case, followed by calls to 30+ matched controls, in a repeated case-control pair format. If unsuccessful in enrolling a matched control within their shift, interviewers requested that other interviewers attempt enrollment in subsequent shifts via an instant messaging platform. To limit recall bias, cases were excluded from the primary analysis if not matched within 7 days. Interviewers documented the outcome (no answer, no consent, partial survey, completed survey) of each call and noted reasons for refusing participation or early call termination.
Ethics and informed consent.
Verbal informed consent was obtained from all adult (aged ≥18 years) participants and from parents/guardians of participants aged <18 years. The consenting parent/guardian was asked to answer on behalf of children aged <16 years; however, they could invite children aged >7 years to participate in the interview if the child was willing, able, and interested. The informed consent script is available in S1 Item in S1 File. The State of California Health and Human Services Agency, Committee for the Protection of Human Subjects (Project 2021–034) approved the study protocol.
Implementation infrastructure
Interviewers collected data daily (excluding holidays) for 10+ hours per week. Research associates, promoted from interviewers, helped maintain databases, manage interviewer training, assign call lists, facilitate weekly meetings, monitor enrollment, and cultivate community.
A communication platform provided live support to interviewers who encountered questions during surveys and served as an option for private and group communication. Supervisors monitored the platform daily to ensure timely response to questions. The platform streamlined communication to easily deliver critical updates, solicit feedback on survey implementation, and detect issues quickly.
The team met weekly to discuss enrollment progress, check-in on wellbeing, highlight interviewer accomplishments, and announce protocol or survey updates. Supervisors offered professional development opportunities during these meetings such as presentations from various public health professionals and workshops covering relevant skills and topics.
Interviewers intermittently encountered difficult conversations with participants. Team-wide, small-group, and 1-on-1 discussions about wellbeing recurred throughout the year to debrief difficult experiences, and mental wellness resources, including counseling and general support conferences across CDPH COVID-19 response sections, were advertised and encouraged.
Interviewer team
Interviewers were recruited, with pay, from undergraduate and graduate institutions (S1 File). Successful candidates demonstrated strong empathy, patience, good communication, and interest in public health or a related field, and had prior customer service, data collection, or healthcare experience. Interviewers completed a rigorous training program to ensure that they were well prepared to handle challenging interviews and collect high-quality data (Fig 1). Due to high interviewer turnover in the first three months of the study, multiple hiring sessions occurred. With successive rounds of interviewer on-boarding, we implemented a train-the-trainer approach, empowering experienced interviewers to mentor others and respond to questions.
Quantitative methods
We defined three cohorts representing different call outcomes: 1) individuals who answered the phone, 2) eligible individuals who consented, and 3) eligible individuals who refused participation in the study. To identify determinants of participation, we estimated the adjusted odds ratios of answering the phone, consenting to participate, and citing time as a reason for not participating using mixed effects logistic regression. Models included age group, sex, region, SARS-CoV-2 infection status, month, and time of day and time of week contacted as fixed effects, and allowed random effects at the interviewer level. Additionally, we assessed interaction effects between predictors and SARS-CoV-2 infection status and between time of day and time of week. The Bayesian Information Criterion was used to compare models with and without interaction terms (S1 File).
All analyses were conducted with R software (version 4.1.3; R Foundation for Statistical Computing) and the lme4 package.
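The study fit its models in R with lme4; as a language-agnostic illustration of the model-comparison step described above, the Bayesian Information Criterion can be computed directly from a fitted model's log-likelihood. The log-likelihoods, parameter counts, and observation count below are hypothetical placeholders, not values from this study.

```python
import math

def bic(log_likelihood, n_params, n_obs):
    # BIC = k * ln(n) - 2 * ln(L); the model with the lower BIC is preferred
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

n_obs = 12_672  # hypothetical number of observations (answered calls)
# Hypothetical fits: a main-effects model vs. one adding interaction terms
bic_main = bic(log_likelihood=-4200.0, n_params=12, n_obs=n_obs)
bic_interaction = bic(log_likelihood=-4195.0, n_params=18, n_obs=n_obs)
# Here the small log-likelihood gain does not offset the 6 extra
# parameters' ln(n) penalty, so the main-effects model is preferred
preferred = "main effects" if bic_main < bic_interaction else "interaction"
```

The penalty term grows with both the parameter count and the sample size, which is why interaction terms must buy a meaningful likelihood improvement to be retained.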
Qualitative methods
From June 29 through July 12, 2022, we used an anonymous, self-administered, web-based survey to contextualize quantitative results with interviewer experiences (S1 File). All interviewers involved with the study were invited to participate. We compensated active interviewers for the time spent on their responses. We also reviewed weekly meeting notes for identification of themes.
Results
During the study period, we placed 38,470 calls, including 15,154 (39.4%) to cases and 23,316 (60.6%) to controls (Table 1 and S1 File). Among the cases and controls called, 35.5% (5,383/15,154) and 31.3% (7,289/23,316) answered the phone, respectively. Of those who answered the phone, 37.2% (2,004/5,383) of cases and 27.3% (1,991/7,289) of controls consented to participate. Ultimately, 1,885 cases and 1,871 controls completed the survey and were enrolled in the study. Over time, survey completion declined for both cases and controls despite changes in the calling rate (Fig 2).
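As a consistency check, the crude (unadjusted) odds ratios for answering the phone and for consenting can be recovered from the counts above. This is an illustrative stdlib-only sketch using the reported 2×2 tables; it does not reproduce the adjusted mixed-effects estimates reported in the paper.

```python
import math

def crude_or(a, b, c, d):
    """Crude odds ratio with a Wald 95% CI from a 2x2 table:
    a = cases with outcome, b = cases without,
    c = controls with outcome, d = controls without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * 1.96 * se) for s in (-1, 1))
    return or_, lo, hi

# Answering the phone: 5,383 of 15,154 cases vs 7,289 of 23,316 controls
or_answer, lo_a, hi_a = crude_or(5383, 15154 - 5383, 7289, 23316 - 7289)
# Consenting among answerers: 2,004 of 5,383 cases vs 1,991 of 7,289 controls
or_consent, lo_c, hi_c = crude_or(2004, 5383 - 2004, 1991, 7289 - 1991)
```

The crude ratios (roughly 1.21 for answering and 1.58 for consenting) sit slightly below the adjusted estimates of 1.29 and 1.69 reported here, which additionally account for covariates and interviewer-level clustering.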
On average, interviewers placed 8 calls per case and 13 calls per control to complete enrollment. Parents or guardians of children aged 0 to 4 years required fewer calls to complete enrollment compared to participants in other age categories (7 calls per case and 10 calls per control) (S1 File). The greatest number of calls to complete enrollment occurred for potential participants called between 8am and 11am (10 calls per case and 15 calls per control). On average, the number of weekly calls needed to complete enrollment for a control increased over time while remaining relatively steady for cases (S1 File).
During the study, three hiring rounds recruited a total of 34 interviewers. Interviewers were active for an average of 23 weeks, and 17.6% (6/34) remained active for 34–52 weeks. Of the 34 interviewers, 32.4% (11) responded to the anonymous experience survey.
Predictors of answering the phone and consenting to participate
We found that SARS-CoV-2 infection status, age, region, time of day called, and time of week called were significantly associated with answering the phone. Cases were more likely (aOR: 1.29 [95% CI: 1.24–1.35]) to answer the phone than controls (Fig 3). The likelihood of answering the phone was lowest among older individuals. Calls placed after 6pm (aOR: 0.79 [0.68–0.90]) and between 8 and 11am (0.84 [0.79–0.90]) had the lowest adjusted odds of being answered compared to calls placed between 4 and 6pm (S1 File).
We did not observe significant interactions between SARS-CoV-2 infection status and region or sex in the adjusted odds of consenting to participate; estimates for cases and controls are therefore not pictured for these two predictors.
We also evaluated predictors of consenting to participate and found significant associations with SARS-CoV-2 infection status, age group, sex, and region. Cases had 1.69-fold higher adjusted odds of consenting compared to controls ([95% CI: 1.56–1.83], Fig 3). Women had 1.13-fold (1.04–1.22) higher adjusted odds of consenting than men. Parents or guardians of minors aged 0 to 4 had 1.53-fold (1.18–1.98) higher adjusted odds of consenting than participants aged 23 to 29.
Per interviewer reflections, motivations for consenting included a desire to contribute to public health research, to relieve boredom, and to express perspectives about the pandemic (Table 2, Quotes 1–2).
We identified differences in the likelihood of consenting to participate between SARS-CoV-2 infection status strata within age groups (among those aged 60 and older, aOR for cases 0.77 [95% CI: 0.62–0.95] versus aOR for controls 1.14 [95% CI: 0.94–1.39]) (S1 File).
Reasons for refusing participation
We identified differences in the reasons for refusing participation among 8,015 eligible individuals who answered the phone. The majority (90.9%; 7,285/8,015) cited insufficient time as the reason for refusing participation, with the proportion citing this reason increasing over time (S1 File). Others cited language barriers (2.6%; 206/8,015), lack of interest (2.3%; 184/8,015), call fatigue (0.45%; 36/8,015), and/or being unwell or grieving (0.45%; 36/8,015) as reasons (S1 File).
We assessed determinants of citing insufficient time as a reason for refusing participation. Cases had 0.44-fold (95% CI: 0.37–0.53) the adjusted odds of citing insufficient time compared to controls (S1 File). Individuals aged 23 to 29 were the most likely to cite insufficient time compared to all other age categories. We did not find evidence of significant associations between the time of day or week the individual was called and citing time as a reason for refusing participation.
Although interviewers observed that individuals often refused based on timing, they identified additional reasons, including personal beliefs, distrust, illness, and stress (Table 2, Quotes 3–4).
Sample diversity
Participants completing the survey were comparable to the SARS-CoV-2 test seeking population in California across sex and in age groups 0–4, 18–22, 40–49, and 60+ (Table 1 and S1 File). By design, participants were enrolled equally across each study region. The composition of study participants was roughly proportional to the state by household income and race/ethnicity (S7 Fig).
Pandemic sentiments and behaviors, self-reported by participants, were diverse and changed over time. Agreement with social distancing and face mask recommendations generally remained constant throughout the study period (S1 File); however, anxiety about the pandemic fluctuated between 67.7% (44/65 participants) in February 2021 and 29.6% (84/284 participants) in July 2021. The proportion of participants reporting visits to two or more public indoor settings within the two weeks prior to getting tested increased from 58.5% (38/65) in February 2021 to 85.1% (126/148) by February 2022. Attendance at each type of indoor setting remained constant, except for a decrease in grocery store visits and an increase in school visits (S1 File). The proportion of individuals ineligible for enrollment due to previous SARS-CoV-2 infection increased throughout the study period (S1 File).
Emotional states among participants, as encountered by interviewers, were also variable. The range of pandemic-related emotions that participants expressed included resilience, weariness, loneliness, and anger (Table 2, Quotes 5–6).
Some interviewers reported occurrences of previously vaccine-opposed participants expressing willingness to seek COVID-19 vaccination after testing positive (Table 2, Quote 7).
Interviewer wellbeing
Of the 11 interviewers who responded to the interviewer experience survey, 63.6% (7) stated they occasionally encountered scenarios where they were compelled to search for or connect participants to social services, and 18.2% (2) stated they encountered this need often (S1 File). Resources pertaining to healthcare access (COVID-19 or general), housing security, food security, and financial relief were the most frequently requested. Most interviewers reported encountering participant grief or anger occasionally (81.8%, 9/11), while 18.2% (2) reported encountering anger often.
Interviewers reported poor mental health and lingering feelings after difficult calls in which participants discussed socioeconomic burdens, pandemic hardship, grief, suffering, or inequitable conditions, or acted with hostility and bullying. Interviewers felt stressed, especially when participants compelled them to fill social service or counselor roles (Table 2, Quote 8).
Interviewers also described many encounters which instilled a sense of purpose, pride, spurred personal growth, cultivated a sense of community, expanded empathy, and uplifted moods. Notably, encounters when participants expressed appreciation, gratitude, humor, or warmth despite hardships had resounding effects on interviewers (Table 2, Quote 9). Interviewers also mentioned how the study provided remote career growth and employment during a time of scarce opportunities.
Structural successes and adaptations
Feedback was frequently solicited to identify improvement opportunities. When mental health concerns surfaced, quick action was taken to strengthen structural support, community engagement, and resources. Research associates, with experience as interviewers, developed and led robust training that emphasized mental wellbeing and methods to navigate difficult conversations. They also compiled information on frequently requested social services and expanded on the standard operating procedure with scenario-specific protocols and responses to demands beyond interviewer duties. Active efforts to sustain a work environment that felt safe, supportive, and caring were made to better protect the mental health of interviewers (Table 2, Quote 10).
Interviewers reported certain structural components as being particularly beneficial: self-assigned scheduling of shift times and weekly meetings. Self-assigned shifts allowed interviewers affected by difficult conversations to take breaks. Meeting weekly helped boost team morale, relieve isolation, and created bonding between team members (Table 2, Quotes 11–13).
Discussion
Over a one-year period during the COVID-19 pandemic, 9.8% of 38,470 individuals invited to our phone-based questionnaire consented to participate. Because the study was conducted across an evolving landscape of COVID-19 epidemiology and public health recommendations, flexibility was necessary to adapt protocols, exclusion criteria, and survey questions so that they remained meaningful. Results were consistent with prior research demonstrating that individuals with a history of disease are more willing to participate in a health study than those naïve to the disease [11]. The likelihood of an individual answering the phone decreased with age. Older individuals may experience more severe health burdens or reside in institutions unreachable by direct calls [18,24]. The time of day that a potential participant was called influenced the likelihood of answering the phone, but not of consenting to participate. Results confirmed literature reporting that morning calls yield lower enrollment, indicating that strategically timing calls is crucial to maximizing enrollment efficiency [15]. We recommend placing calls during the afternoon and evening, allocating more effort towards enrolling controls, and restricting survey length where possible.
This study was successful in representing the population seeking SARS-CoV-2 testing in California, with a recruitment effort of almost 40,000 calls and a well-powered size of nearly 4,000 participants within the first year. The data quality allowed for identification of reasons for unsuccessful enrollment and determinants of participation. The infrastructure of the study, particularly the weekly meetings, detailed standard operating procedure documentation, and messaging platform, enabled quick identification of obstacles and implementation adaptations.
Enrollment—especially of controls—became more difficult throughout the study. This may be explained by the increase in previously positive individuals and by diminished interest or perceived risk regarding the pandemic. We recommend shortening survey length or offering call-backs to minimize loss of participants due to insufficient time.
Interviewers highlighted themes unique to remote phone-based research during the COVID-19 pandemic. Notably, strong participant emotions and harassment were especially trying for some interviewers. We believe these findings are novel in remote, phone-based quantitative health research and unique to the national context of polarized attitudes towards the COVID-19 pandemic [25]. Proactively adapting to emerging obstacles was critical to the success of the study. Designing training that simulated realistic scenarios and detailed protocols for difficult encounters resulted in considerable improvement for subsequent interviewer cohorts. We suggest implementation of frequent proactive mental health check-ins, continual collection of anonymous feedback, and an exit survey for interviewers.
There are several limitations to this analysis. Due to data constraints, we were unable to examine how socioeconomic status, race, education, occupation, and setting, such as housing, may influence the likelihood of answering the phone and consenting to participate. Results may not be generalizable to the broader California population, as individuals who did not seek laboratory-confirmed SARS-CoV-2 testing are excluded by design. Severely ill SARS-CoV-2 positive individuals, unwell individuals with comorbidities, those without stable phone service, and those cautious about phone solicitations might not be well represented in our study.
Conclusions
Our findings demonstrate how researchers can strategize recruitment for future phone-based observational studies conducted amidst an evolving public health emergency. Actively monitoring study implementation enables timely adaptation of practices for data collection and can be an important approach to preserving interviewer and other study staff well-being. We provide evidence of poor mental health and burnout among remote study staff that is consistent with previous literature on public health workers. Our findings will assist future researchers in conducting efficient, sustainable, and timely research in response to emergent public health crises.
Acknowledgments
We would like to thank all study participants who gave their time to complete our survey, making this work possible.
Members of the California COVID-19 Case-Control Study Team include: Adrian Cornejo, Amanda Lam, Amanda Moe, Amandeep Kaur, Anna Fang, Ashly Dyke, Camilla Barbaduomo, Christine Wan, Diana Nicole Morales Felipe, Diana Poindexter, Erin Xavier, Hyemin Park, Helia Samani, Jessica Ni, Julia Cheunkarndee, Mahsa Javadi, Maya Spencer, Michelle Spinosa, Miriam Bermejo, Monique Miller, Najla Dabbagh, Natalie Dassian, Nikolina Walas, Paulina Frost, Savannah Corredor, Shrey Saretha, Timothy Ho, Vivian Tran, Yang Zhou, Yasmine Abdulrahim, Zheng Dong.
References
- 1. Capano G, Howlett M, Jarvis DSL, Ramesh M, Goyal N. Mobilizing Policy (In)Capacity to Fight COVID-19: Understanding Variations in State Responses. Policy and Society. 2020;39(3):285–308. pmid:35039722
- 2. Lipsitch M, Swerdlow DL, Finelli L. Defining the Epidemiology of Covid-19—Studies Needed. New England Journal of Medicine. 2020;382:1194–1196. pmid:32074416
- 3. Dean NE, Hogan JW, Schnitzer ME. Covid-19 Vaccine Effectiveness and the Test-Negative Design. New England Journal of Medicine. Published online September 8, 2021. pmid:34496195
- 4. Andrejko KL, Pry J, Myers JF, et al. Predictors of Severe Acute Respiratory Syndrome Coronavirus 2 Infection Following High-Risk Exposure. Clinical Infectious Diseases. Published online December 21, 2021. pmid:34932817
- 5. Tenforde MW, Fisher KA, Patel MM. Identifying COVID-19 Risk Through Observational Studies to Inform Control Measures. JAMA. Published online February 22, 2021. pmid:33616617
- 6. Andrejko KL, Pry J, Myers JF, et al. Prevention of Coronavirus Disease 2019 (COVID-19) by mRNA-Based Vaccines Within the General Population of California. Clinical Infectious Diseases. Published online July 20, 2021. pmid:34282839
- 7. Andrejko KL. Effectiveness of Face Mask or Respirator Use in Indoor Public Settings for Prevention of SARS-CoV-2 Infection—California, February–December 2021. MMWR Morbidity and Mortality Weekly Report. 2022;71(6). pmid:35143470
- 8. Zezza A, Martuscelli A, Wollburg P, Gourlay S, Kilic T. Viewpoint: High-frequency phone surveys on COVID-19: Good practices, open questions. Food Policy. 2021;105:102153. pmid:34483442
- 9. Ridolfo H, Boone J, Dickey N. Will They Answer the Phone If They Know It’s Us? Using Caller ID to Improve Response Rates. United States Department of Agriculture; 2013. Accessed October 2022. https://www.nass.usda.gov/Education_and_Outreach/Reports,_Presentations_and_Conferences/reports/Area%20Code%20Report1-27-14.pdf.
- 10. Brick JM, Williams D. Explaining Rising Nonresponse Rates in Cross-Sectional Surveys. Massey DS, Tourangeau R, eds. The ANNALS of the American Academy of Political and Social Science. 2012;645(1):36–59.
- 11. Glass D, Kelsall H, Slegers C, et al. A telephone survey of factors affecting willingness to participate in health research surveys. BMC Public Health. 2015;15(1). pmid:26438148
- 12. Ravanam MS, Skalland B, Zhao Z, Yankey D, Smith C. An Evaluation of the Impact of Using an Alternate Caller ID Display in the National Immunization Survey. Proc Am Stat Assoc. 2018;73:AAPOR2018. pmid:32336963
- 13. Vicente P, Lopes I. When Should I Call You? An Analysis of Differences in Demographics and Responses According to Respondents’ Location in a Mobile CATI Survey. Social Science Computer Review. 2014;33(6):766–778.
- 14. McClain C, Rainie L. The Challenges of Contact Tracing as U.S. Battles COVID-19. Pew Research Center: Internet, Science & Tech. Published October 30, 2020. https://www.pewresearch.org/internet/2020/10/30/the-challenges-of-contact-tracing-as-u-s-battles-covid-19/.
- 15. Shino E, McCarty C. Telephone Survey Calling Patterns, Productivity, Survey Responses, and Their Effect on Measuring Public Opinion. Field Methods. Published online March 13, 2020.
- 16. S.V. P, Ittamalla R. General public’s attitude toward governments implementing digital contact tracing to curb COVID-19 – a study based on natural language processing. International Journal of Pervasive Computing and Communications. 2022;18(5):485–490. doi: 10.1108/ijpcc-09-2020-0121.
- 17. Ali GGMdN, Rahman MdM, Hossain MdA, et al. Public Perceptions of COVID-19 Vaccines: Policy Implications from US Spatiotemporal Sentiment Analytics. Healthcare. 2021;9(9):1110. pmid:34574884
- 18. Druckman JN, Klar S, Krupnikov Y, Levendusky M, Ryan JB. Affective polarization, local contexts and public opinion in America. Nature Human Behaviour. 2020;5:1–11. pmid:33230283
- 19. Goldstein DAN, Wiedemann J. Who Do You Trust? The Consequences of Partisanship and Trust for Public Responsiveness to COVID-19 Orders. Perspectives on Politics. Published online April 16, 2021:1–27.
- 20. Koné A. Symptoms of Mental Health Conditions and Suicidal Ideation Among State, Tribal, Local, and Territorial Public Health Workers—United States, March 14–25, 2022. MMWR Morbidity and Mortality Weekly Report. 2022;71. pmid:35862276
- 21. Bluvstein I, Ifrah K, Lifshitz R, Markovitz N, Shmotkin D. Vulnerability and Resilience in Sensitive Research: The Case of the Quantitative Researcher. Journal of Empirical Research on Human Research Ethics. Published online June 28, 2021:155626462110274. pmid:34180723
- 22. Kumar S, Cavallaro L. Researcher Self-Care in Emotionally Demanding Research: A Proposed Conceptual Framework. Qualitative Health Research. 2017;28(4):648–658. pmid:29224510
- 23. Fried AL, Fisher CB. Moral stress and job burnout among frontline staff conducting clinical research on affective and anxiety disorders. Professional Psychology: Research and Practice. 2016;47(3):171–180. pmid:28484305
- 24. Miller EA. Protecting and Improving the Lives of Older Adults in the COVID-19 Era. Journal of Aging & Social Policy. 2020;32(4–5):297–309. pmid:32583751
- 25. Hart PS, Chinn S, Soroka S. Politicization and polarization in COVID-19 news coverage. Science Communication. 2020;42(5):107554702095073. pmid:38602988