Abstract
Background
Real-world performance of COVID-19 diagnostic tests under Emergency Use Authorization (EUA) must be assessed. We describe overall trends in the performance of serology tests in the context of real-world implementation.
Methods
Six health systems estimated the odds of seropositivity and the positive percent agreement (PPA) of serology tests among people with SARS-CoV-2 infection confirmed by molecular test. For each dataset, we present the odds ratio and PPA, overall and by key clinical, demographic, and practice parameters.
Results
A total of 15,615 people were observed to have at least one serology test 14–90 days after a positive molecular test for SARS-CoV-2. We observed higher PPA in Hispanic (PPA range: 79–96%) compared to non-Hispanic (60–89%) patients; in those presenting with at least one COVID-19 related symptom (69–93%) as compared to no such symptoms (63–91%); and in inpatient (70–97%) and emergency department (93–99%) compared to outpatient (63–92%) settings across datasets. PPA was highest in those with diabetes (75–94%) and kidney disease (83–95%); and lowest in those with auto-immune conditions or who are immunocompromised (56–93%). The odds ratios (OR) for seropositivity were higher in Hispanics compared to non-Hispanics (OR range: 2.59–3.86), patients with diabetes (1.49–1.56), and obesity (1.63–2.23); and lower in those with immunocompromised or autoimmune conditions (0.25–0.70), as compared to those without those comorbidities. In a subset of three datasets with robust information on serology test name, seven tests were used, two of which were used in multiple settings and met the EUA requirement of PPA ≥87%. Tests performed similarly across datasets.
Conclusion
Although the EUA requirement was not consistently met, more investigation is needed to understand how serology and molecular tests are used, including indication and protocol fidelity. Improved interoperability of test and clinical/demographic data is needed to enable rapid assessment of the real-world performance of in vitro diagnostic tests.
Citation: Rodriguez-Watson CV, Louder AM, Kabelac C, Frederick CM, Sheils NE, Eldridge EH, et al. (2023) Real-world performance of SARS-CoV-2 serology tests in the United States, 2020. PLoS ONE 18(2): e0279956. https://doi.org/10.1371/journal.pone.0279956
Editor: Padmapriya P. Banada, Rutgers Biomedical and Health Sciences, UNITED STATES
Received: April 21, 2022; Accepted: December 19, 2022; Published: February 3, 2023
Copyright: © 2023 Watson et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are contained within the paper and its Supporting information files. Person-level data are unavailable.
Funding: Financial support for this work was provided in part by a grant from The Rockefeller Foundation (HTH 030 GA-S). BDP, CK, GJ used funding provided by Yale University-Mayo Clinic Center of Excellence in Regulatory Science and Innovation (CERSI), a joint effort between Yale University, Mayo Clinic, and the U.S. Food and Drug Administration (FDA) (3U01FD005938) (https://www.fda.gov/). AJB was funded by award number A128219 and Grant Number U01FD005978 from the FDA, which supports the UCSF-Stanford Center of Excellence in Regulatory Sciences and Innovation (CERSI). Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the HHS or FDA. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: AJB is a co-founder and consultant to Personalis and NuMedii; consultant to Samsung, Mango Tree Corporation, and in the recent past, 10x Genomics, Helix, Pathway Genomics, and Verinata (Illumina); has served on paid advisory panels or boards for Geisinger Health, Regenstrief Institute, Gerson Lehman Group, AlphaSights, Covance, Novartis, Genentech, Merck, and Roche; is a shareholder in Personalis and NuMedii; is a minor shareholder in Apple, Facebook, Alphabet (Google), Microsoft, Amazon, Snap, Snowflake, 10x Genomics, Illumina, Nuna Health, Assay Depot (Scientist.com), Vet24seven, Regeneron, Sanofi, Royalty Pharma, Pfizer, BioNTech, AstraZeneca, Moderna, Biogen, Twist Bioscience, Pacific Biosciences, Editas Medicine, Invitae, Doximity, and Sutro, and several other non-health related companies and mutual funds; and has received honoraria and travel reimbursement for invited talks from Johnson and Johnson, Roche, Genentech, Pfizer, Merck, Lilly, Takeda, Varian, Mars, Siemens, Optum, Abbott, Celgene, AstraZeneca, AbbVie, Westat, several investment and venture capital firms, and many academic institutions, medical or disease specific foundations and associations, and health systems. AJB receives royalty payments through Stanford University, for several patents and other disclosures licensed to NuMedii and Personalis. AJB’s research has been funded by NIH, Northrup Grumman (as the prime on an NIH contract), Genentech, Johnson and Johnson, FDA, Robert Wood Johnson Foundation, Leon Lowenstein Foundation, Intervalien Foundation, Priscilla Chan and Mark Zuckerberg, the Barbara and Gerson Bakar Foundation, and in the recent past, the March of Dimes, Juvenile Diabetes Research Foundation, California Governor’s Office of Planning and Research, California Institute for Regenerative Medicine, L’Oreal, and Progenity. CLB has intellectual property in and receives royalties from BioFire, Inc. She serves as a scientific advisor to IDbyDNA (San Francisco, CA and Salt Lake City, UT); and is on the Board of the Commonwealth Fund. CK is a paid employee of Aetion and holds Aetion stock options. NES is an employee of Optum Labs and owns stock in the parent company UnitedHealth group. NDL was an employee of Health Catalyst at the time the work was performed. JLG is a full-time employee of Regenstrief Institute, which provides independent research services to entities including those within the pharmaceutical and medical device industries. SJG serves as Chief Medical Information Officer for the Indiana Health Information Exchange, and is a founding partner of Uppstroms, LLC. This does not alter our adherence to PLOS ONE policies on sharing data and materials.
Introduction
Despite the availability of highly effective COVID-19 vaccines to prevent hospitalization and reduce mortality [1, 2], variants continue to fuel the surge of COVID-19 across the U.S. [3, 4]. High-quality diagnostic and serology tests are essential tools to better understand the epidemiology of COVID-19 and immunity after infection [5, 6]. Viruses and antibodies are primarily detectable within certain temporal windows [7–9]. However, many individuals infected with SARS-CoV-2 are asymptomatic or may not seek medical care because of mild symptoms [10]. In contrast to molecular diagnostic tests, serologic tests are informative even once the SARS-CoV-2 infection is no longer present [11, 12].
Currently, 90 SARS-CoV-2 serology/antibody tests have received Emergency Use Authorization (EUA) [13]. However, because of the COVID-19 national emergency, they have not undergone the same evidentiary review standards required for Food and Drug Administration (FDA) clearance [14, 15]. There is a need to assess the real-world performance of these tests. Further, while large studies have shown that greater than 91% of people with active SARS-CoV-2 infection seroconvert [16, 17], the factors associated with seroconversion (e.g., pre-existing conditions, the severity of COVID-19 presentation) remain elusive.
From a public health perspective, confidence in the ability of serological tests to identify those with recent infections is critical for effective pandemic planning. Estimates of disease prevalence directly inform dynamic population estimates of susceptible, infected, and recovered, which are needed to understand the infectiousness of SARS-CoV-2 [18]. From a clinical perspective, an accurate understanding of SARS-CoV-2 exposure is necessary to understand disease presentation and inform the clinical course of action, especially when patients do not present with symptoms or present late in their disease course (e.g., post-acute sequelae of SARS-CoV-2). Additionally, identifying factors associated with seropositivity may elucidate potential mechanisms of action that may be foundational in the development of therapy and treatment plans.
To address these gaps, we characterize the performance of serology tests by estimating the positive percent agreement (PPA) of serological samples obtained from people known to be positive for SARS-CoV-2 infection by molecular assay (e.g., PCR). We also sought to identify factors associated with seropositivity. Findings from this study may facilitate understanding of the real-world performance of serology tests, many of which were issued under EUA, and may help inform our understanding of the immune response to SARS-CoV-2.
Materials and methods
Study population and setting
Six health systems (i.e., datasets) collaborated on the Diagnostics Evidence Accelerator (EA): Health Catalyst, Mayo Clinic, Optum Labs, Regenstrief Institute, the University of California Health System, and Aetion and HealthVerity. The EA is a consortium of leading experts in health systems research, regulatory science, data science, and epidemiology, specifically assembled to analyze health system data to address key questions related to COVID-19. The EA provides a platform for rapid learning and research using a common analytic plan. Health Catalyst, Mayo Clinic, and the University of California Health System all utilized electronic health record (EHR) data from their respective healthcare delivery systems. The Regenstrief Institute accessed EHR and public health data from the Indiana Health Information Exchange [19, 20], while Aetion sourced healthcare data from the HealthVerity Marketplace, encompassing medical claims, pharmacy claims, hospital chargemaster data, and data collected directly from laboratories. Optum Labs data included de-identified medical and pharmacy claims from a single, large U.S. insurer as well as laboratory results data obtained directly from laboratories. We refer to these health systems as datasets A-F for the purposes of anonymity. Data sources included in the analysis are generally categorized as either payer (claims) or healthcare delivery systems. As illustrated in Fig 1, data were drawn from across the U.S. with heavy representation in California, Illinois, Ohio, and Michigan. Characteristics of participating data sources and representative populations are described in the S1 Table.
Reprinted from brightcarbon.com under a CC BY license, with permission from Bright Carbon, original copyright (2021). Each color represents the number of data partners with a presence in each state but does not necessarily correspond to the number of people. The darkest color represents those where all six partners had a presence.
Study design
In this retrospective cohort study, we identified patients across different settings (e.g., inpatient, outpatient, emergency department (ED), or long-term care facility) who tested positive for SARS-CoV-2 ribonucleic acid (RNA) by molecular test between March–September 2020 and who received at least one subsequent serological test for SARS-CoV-2 immunoglobulin G (IgG) or total antibody (Ab) 14–90 days after the positive RNA test (Fig 2). We analyzed the first serology test in the 14–90-day follow-up period, which ended on December 31, 2020. The "date of RNA positive" served as the index (cohort entry) date and was defined hierarchically as the date of 1) sample collection; 2) accession; or 3) result. Because the optimal time to observe a positive serology result is at least two weeks after the index date, we only included patients who had at least one serology test 14–90 days after the index date [1–3, 7–9].
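This selection logic can be summarized in a brief sketch. The example below is illustrative only: it assumes hypothetical pandas DataFrames `rna` (positive molecular results) and `sero` (serology results) with datetime columns named as shown; each contributing dataset implemented the common protocol against its own schema.

```python
import pandas as pd

def build_cohort(rna: pd.DataFrame, sero: pd.DataFrame) -> pd.DataFrame:
    """Return one row per person: the first serology test 14-90 days after index."""
    rna = rna.copy()
    # Index date: first available of sample collection, accession, or result date
    # (columns assumed to be datetime64).
    rna["index_date"] = (
        rna["collection_date"]
        .fillna(rna["accession_date"])
        .fillna(rna["result_date"])
    )
    # Restrict to positive molecular tests indexed March-September 2020,
    # keeping the earliest qualifying test per person.
    rna = rna[rna["result"].eq("positive")]
    rna = rna[rna["index_date"].between("2020-03-01", "2020-09-30")]
    rna = rna.sort_values("index_date").groupby("person_id", as_index=False).first()

    # Attach serology results and keep the first test 14-90 days after index,
    # with follow-up ending December 31, 2020.
    merged = sero.merge(rna[["person_id", "index_date"]], on="person_id")
    days = (merged["serology_date"] - merged["index_date"]).dt.days
    merged = merged[days.between(14, 90) & (merged["serology_date"] <= "2020-12-31")]
    return merged.sort_values("serology_date").groupby("person_id", as_index=False).first()
```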
To minimize the effect of differential missingness between datasets, we applied the following rules: 1) included all persons with an office or telephone visit in the +/- 14 days around the index date to enable as complete an assessment of presenting symptoms as possible; 2) in claims systems, included only persons with at least six months of enrollment in the year before the index date; 3) estimated the proportion of patients at each site who had zero encounters in the prior year to contextualize our capture of pre-existing conditions; and 4) excluded variables from analysis if ≥30% of values were missing.
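As an illustration of rule 4, a simple missingness screen might look like the sketch below; the DataFrame, column names, and "unknown" convention are assumptions for illustration, not the sites' actual code.

```python
import pandas as pd

def variables_passing_missingness_rule(df: pd.DataFrame, candidates: list,
                                       threshold: float = 0.30) -> list:
    """Keep only candidate variables with <30% missing or 'unknown' values (rule 4)."""
    kept = []
    for col in candidates:
        missing = df[col].isna() | df[col].astype(str).str.strip().str.lower().eq("unknown")
        if missing.mean() < threshold:
            kept.append(col)
    return kept
```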
The Western—Copernicus Group (WCG) Institutional Review Board (IRB), the IRB of record for the Reagan-Udall Foundation for the FDA, reviewed the study and determined it to be non-human subjects research. Additionally, all legal and ethical approvals for use of the data included in this study were submitted, reviewed, and/or obtained locally at each contributing dataset by an IRB and/or governing board.
Measures
Outcomes.
The primary outcome of interest for the validation analysis was the PPA of positive antibody (IgG or total) results from serology tests with positive RNA results from molecular tests (e.g., PCR), which served as the reference standard. Serology tests reported in this analysis included: Abbott Architect IgG [21], Euroimmun IgG [22], Diazyme DZ-Lite SARS-CoV-2 IgG CLIA kit [23], Beckman SARS-CoV-2 IgG [24], Ortho Vitros IgG [25], Diasorin Liaison SARS-CoV-2 S1/S2 IgG [26], and Roche Elecsys Total Ab [27]. The Ortho Vitros was the only test used across multiple (3) datasets. For anonymity, we refer to these manufacturers' serological tests as Δ, Θ, Π, Λ, Ξ, Γ, and Ψ. The molecular tests most commonly reported in this analysis included: Hologic Panther Fusion [28], Hologic Aptima [29], Roche Cobas [30], Quest rRT-PCR [31], and Thermo Fisher Scientific Combo Kit [32]. For anonymity, we refer to these manufacturers' molecular tests as Σ, Φ, Ω, X, Y, and j.
Covariates.
We collected demographic, behavioral, and environmental characteristics, baseline clinical presentation, key comorbidities, and test characteristics, including manufacturer, according to a diagram illustrating potential factors associated with serology testing (Fig 3). We identified comorbidities and clinical presentation using phenotypes defined by International Classification of Diseases, Tenth Revision (ICD-10) codes and/or National Drug Codes. We identified comorbidities (pre-existing conditions) in the window from 365 days through 15 days before the index date. We provided coding algorithms for groups to use, although some groups used existing algorithms generated by their site. The ICD-10 codes used to identify comorbidities are listed in the S2 Table. We also stratified analyses by RNA tests conducted before June 15, 2020, which marked the beginning of the summer wave of infections in the first year of the pandemic, compared to on or after that date.
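A minimal sketch of this comorbidity look-back logic is shown below. Table and column names are hypothetical, and the ICD-10 prefixes are an illustrative example only; the study's actual phenotype code lists are given in S2 Table.

```python
import pandas as pd

# Illustrative ICD-10 category prefixes for diabetes mellitus (example only;
# see S2 Table for the study's code lists).
DIABETES_ICD10_PREFIXES = ("E08", "E09", "E10", "E11", "E13")

def flag_comorbidity(cohort: pd.DataFrame, dx: pd.DataFrame,
                     prefixes: tuple, flag_name: str) -> pd.DataFrame:
    """Flag persons with a qualifying diagnosis 365 to 15 days before the index date."""
    merged = dx.merge(cohort[["person_id", "index_date"]], on="person_id")
    days_before = (merged["index_date"] - merged["dx_date"]).dt.days
    in_window = merged[(days_before >= 15) & (days_before <= 365)]
    hits = in_window[in_window["icd10_code"].str.slice(0, 3).isin(prefixes)]
    out = cohort.copy()
    out[flag_name] = out["person_id"].isin(hits["person_id"]).astype(int)
    return out

# Example: cohort = flag_comorbidity(cohort, dx, DIABETES_ICD10_PREFIXES, "diabetes")
```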
Statistical analysis
Each contributing dataset ran its analysis according to a common protocol. Results were reviewed as a group to ensure alignment with the protocol and to review any protocol deviations. We calculated PPA as: (number of positive antibody results ÷ number of positive RNA results) × 100. We calculated PPA based on the first eligible serology test in the follow-up period, overall and by age, sex, race, ethnicity, U.S. region, pregnancy status, pre-existing conditions (including but not limited to cardiovascular disease, obesity, hypertension, kidney disease, asthma, dementia, and chronic liver disease), and smoking status. We also report PPA by presenting symptoms and by serology test at the time of the first serology test. We examined variation in PPA by serology test and time, and by serology test and symptom presentation. We also examined variation in PPA by geography and care setting over time. We calculated exact (Clopper-Pearson) 95% confidence intervals (CI). We report significant differences where 95% CIs have complete separation, although we did not conduct formal statistical comparisons of PPA between groups.
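For reference, the PPA calculation and its exact (Clopper-Pearson) 95% CI can be computed as in the sketch below; the function name and example counts are illustrative and are not taken from the study code.

```python
from scipy.stats import beta

def ppa_with_exact_ci(n_antibody_positive: int, n_rna_positive: int, alpha: float = 0.05):
    """Positive percent agreement with an exact (Clopper-Pearson) confidence interval."""
    k, n = n_antibody_positive, n_rna_positive
    ppa = 100.0 * k / n
    # Clopper-Pearson bounds from beta-distribution quantiles, handling the boundaries.
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return ppa, 100.0 * lower, 100.0 * upper

# Example: 870 antibody-positive results among 1,000 RNA-positive patients.
print(ppa_with_exact_ci(870, 1000))
```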
To study the odds of seropositivity, we estimated a regression model, assuming a binomial distribution for seropositivity status, to identify independent risk factors for seropositivity. Results are presented as odds ratios (OR) and 95% CIs calculated using score or exact methods [33]. All variables were treated as categorical. Symptoms were reported as a binary variable: "1" if any of the following were present: fever >100.4°F, abnormal chest imaging finding, high respiratory rate, low blood pressure, diarrhea, hypoglycemia, chest pain, delirium/confusion, headache, sore throat, cough, shortness of breath, pneumonia, acute respiratory infection, acute respiratory distress, cardiovascular presentation, or renal presentation; and "0" otherwise. For datasets covering more than one geographic catchment area, geography was included as either one of four U.S. Census regions or nine U.S. Census divisions, based on patient home zip code. Variables with >30% missing/unknown values were excluded from models (except for pregnancy, pre-existing conditions, and presenting symptoms, all of which were included). Each dataset used automated backward selection to remove non-significant pre-existing conditions while forcing all other covariates into the model. All analyses were performed using SAS software, version 9.2 or higher (SAS Institute, Cary, North Carolina, U.S.), or the Aetion Evidence Platform v4.13 (including R v3.4.2), which includes audit trails of all transformations of raw data and a quality check of the data ingestion process.
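The general form of such a binomial (logistic) model is sketched below with illustrative variable names. This is not the study code: the analyses used SAS or the Aetion Evidence Platform, the backward-selection step is omitted here, and the intervals produced by this sketch are Wald intervals rather than the score or exact intervals reported in the paper.

```python
import numpy as np
import statsmodels.formula.api as smf

def fit_seropositivity_model(df):
    """Fit a binomial (logistic) model and return exponentiated coefficients as ORs."""
    model = smf.logit(
        "seropositive ~ C(age_group) + C(sex) + C(ethnicity) + C(care_setting)"
        " + any_symptom + diabetes + obesity + immunocompromised",
        data=df,
    ).fit(disp=False)
    # Note: conf_int() gives Wald intervals; the study reported score or exact CIs.
    or_table = np.exp(model.params.to_frame("odds_ratio").join(model.conf_int()))
    or_table.columns = ["odds_ratio", "ci_lower", "ci_upper"]
    return or_table
```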
Results
Sample sizes across datasets ranged from 660–7,115; a total of 15,615 people with at least one serology test 14–90 days after the index date were included in the analyses. Between 35% and 65% of patients identified from healthcare delivery systems had no documented encounter in the system between 365 and 15 days before the index date. In contrast, only 11% of patients from national insurers had zero claims in the baseline period. As shown in Table 1, the serotested population was primarily 45–64 years of age (>40%), with a history of cardiovascular disease, including hypertension (8–70%). Race and ethnicity data were robust (<30% missing) in four datasets; the serotested population in those datasets was primarily White (>53%) and non-Hispanic (>65%). In datasets with national representation, persons from the Northeast (New England and Mid-Atlantic) were most represented in the serotested population. In datasets representing regionally based healthcare delivery systems, the population reflected their locations: Pacific and Midwest. Information on manufacturer test names was provided in four datasets. Generally, 2–3 primary tests were utilized in each dataset; 4 of 7 tests reported were used in >1 dataset. We did not observe any difference by age or sex between those for whom the test name was known versus unknown. In a single dataset with <30% missing data on race/ethnicity, we observed over-representation of White and Hispanic people among those for whom the test name was known.
Positive percent agreement (PPA) of serology among molecularly confirmed SARS-CoV-2
The overall PPA ranged from 65–90% across analytic datasets (Table 2). The real-world PPA met the EUA requirement of ≥87% in three datasets (A, B, D) [34]. Two of these datasets represented national administrative claims and associated results with the date the sample was collected or received by the laboratory; the third represented data from EHRs and associated results with the date the test was conducted, which is lagged further from the clinical interaction than the former. Overall PPA was likely influenced by the mix of serology tests represented in each dataset. Seven serological tests were reported in this analysis, of which two (Δ and Γ) met the EUA PPA requirement. Two tests were used across multiple datasets and performed similarly, above the EUA requirement. PPA by serology test type varied across datasets, with three of five reporting significantly lower PPA for total antibody (PPA range: 69–90%) compared to IgG (PPA range: 87–92%), and two showing no difference. We observed no difference in PPA between antibody tests that target spike compared to nucleocapsid proteins.
PPA was significantly higher in Black (PPA range: 86–92%), as compared to White (PPA range: 78–86%), persons in at least two of the four datasets reporting robust race/ethnicity data. PPA was significantly higher in Hispanic (PPA range: 79–96%), compared to non-Hispanic (PPA range: 60–86%), patients. PPA appeared highest in those with diabetes (PPA range: 75–94%) and kidney disease (PPA range: 75–95%), and lowest in those with conditions that leave them immunocompromised (PPA range: 56–93%). We observed higher PPA in the inpatient (PPA range: 70–97%) and ED (PPA range: 93–99%) settings compared to the outpatient setting (PPA range: 63–92%). In two datasets (B and D), there was some evidence of higher PPA among patients with at least one COVID-19-related symptom as compared to those with none (PPA range: 63–91%); PPA was particularly high for select conditions such as pneumonia (PPA range: 82–97%).
However, differences in the PPA by the presence of symptoms do not appear to be explained by the test. A stratified analysis by test comparing those with and without symptoms (Table 3) showed no significant difference in PPA. PPA trends by calendar time were not consistent across datasets.
Factors associated with seropositivity
In adjusted models (Figs 4–9), the OR for seropositivity was significantly elevated in Hispanic compared to non-Hispanic ethnicity (OR range: 2.59–3.86); among those with pre-existing diabetes (OR range: 1.49–1.56) and obesity (1.63–2.23) as compared to those without those pre-existing conditions; and among those observed in the ED compared to the outpatient setting (OR range: 2.49–10.97). The OR for seropositivity was significantly lower in those with pre-existing immunocompromised or autoimmune conditions compared to those without such conditions (OR range: 0.25–0.70). In two of the three datasets that included pre-existing cardiovascular disease in the model, the OR for seropositivity was significantly lower in persons with, compared to those without, such conditions (OR range: 0.49–0.57). In half of the datasets, the OR for seropositivity tended to be lower on or after June 15 compared to before; in the other half, differences were not significant.
Discussion
Serology tests are an important instrument in the toolkit to understand the epidemiology of COVID-19 because of their ability to identify persons with prior infection who may present too late in the infectious period due to mild symptoms, or no symptoms at all. Serology results may inform diagnoses of post-acute sequelae of SARS-CoV-2 (PASC) and the appropriate treatment course, which may depend on whether patients are at increased risk for severe illness due to an insufficient antibody response [35]. The reported sensitivities of the serology tests included in this analysis, as submitted for EUA approval, were all >95% [36]. Our analysis of multiple large datasets of patients with confirmed SARS-CoV-2 infection suggests that serology tests performed lower than expected, with PPA (a measure analogous to sensitivity) ranging from 65–90%. Our results align with results from smaller, detailed laboratory evaluations suggesting that a lack of harmonization, including optimization of cut-off values, may contribute to decreased overall performance. Additionally, our results align with studies that include more representative samples of milder or asymptomatic persons [37–39]. Two of seven tests reported across datasets achieved the EUA requirement of PPA ≥ 87%. As we did not have data on specific serology-molecular pairs or meta-information on the tests (including fidelity to protocols for serology and molecular test analysis), these results reflect more on the real-world implementation of the tests than on the true quality of the tests. Specifically, where the same test was used across multiple datasets, it performed similarly. For example, the serology test Γ performed similarly high (PPA >90%) across three datasets. However, the overall PPA for tests performed in datasets A and B was higher than in dataset E. A major factor that may have contributed to this difference is that the other serological tests reported in datasets A and B performed above the EUA requirement, whereas the other tests reported in dataset E performed below it. Additionally, datasets A and B leveraged administrative claims data and associated RNA and serology results with the sample collection or sample receipt date, while dataset E associated results with the date the test was run.
Dataset E also represents a healthcare delivery system where serology tests were initially used only for symptomatic patients with at least 12 days of symptoms. This practice shifted after approximately two months (June 1, 2020) to a protocol that required both molecular and serological testing for SARS-CoV-2 as part of pre-procedure screening. This protocol was in effect for another three months (through August 31, 2020), after which the healthcare system shifted to unrestricted testing for both molecular and serology tests and saw a substantial drop in the use of serological testing. We expected that procedural "lags" in serotesting, combined with additional lags due to associating results with a date downstream from the clinical interaction, would have further extended the time between infection/symptom onset and the actual time of serology sampling. The impact of this misclassification may be most important for serology samples at the upper bound of 90 days, where samples were likely >90 days from the point of infection and humoral antibodies were more likely to have declined. Despite changes in the protocol over time, we observed no overall or test-specific difference in PPA before versus on or after June 15, 2020 in dataset E. Nevertheless, administrative protocols create lags in serotesting that challenge the assumption that the observed molecular "test date" is a good proxy for symptom onset. It is difficult to make broad assumptions regarding patterns in molecular or serology testing unless established clinical protocols are known.
We observed that patients of Hispanic ethnicity (compared with non-Hispanic patients), patients with pre-existing obesity, and those who presented in the ED had higher odds of seropositivity, and similarly higher PPA. These results further support others' observations that persons with unmanaged diabetes, who are disproportionately people of color, are vulnerable to hyper-inflammation related to COVID-19 [40]. Furthermore, hyper-inflammation, including pro-inflammatory cytokine storm, has been associated with severe disease, reduced viral clearance [41], and sustained antibody production [42]. A recent small study showed that, while a low viral load is associated with a lower antibody response, clinical illness does not guarantee seroconversion [43]. Other studies have demonstrated that people with cancer have a lower probability of mounting an immune response to vaccination, as demonstrated by seroconversion, viral neutralization, and T-cell response [44, 45]. Our results demonstrating lower odds of seropositivity among those with cancer and other immunodeficiencies suggest that the same may be true of their antibody response to infection.
Strengths
Our study has many strengths. This was a large assessment of serotesting across the U.S. in diverse datasets leveraging either EHR or claims data. We developed a protocol that incorporated the unique characteristics of each data source and provided a forum to transparently communicate and collaborate on study design and interpretation. We also established a platform to rapidly collect and analyze data from various systems and identify important trends over time; such a platform may also be used to evaluate process improvement and make comparisons within data systems. We extensively characterized missing data to guide model development and help with interpretation. Additionally, this study was conducted before public availability of COVID-19 vaccines across the U.S., which minimizes the potential for confounding related to vaccine-induced antibodies.
Limitations
A major limitation of this real-world analysis is the large number of missing test names and relevant meta-data, including quality control measures adopted, for both molecular and serological tests. As such, we were unable to account for molecular-serology pairs when assessing PPA, or for the fidelity with which these tests were performed. The large amount of missing test name information also limited our ability to describe trends by manufacturer, although a thorough examination of missing data did not suggest differential missingness by age or sex. Importantly, the intent of this analysis was not to evaluate individual tests, but to evaluate the performance of serology in the context of real-world implementation of test protocols and varying reference standards. As discussed in our prior manuscript, the sample in this study comprised those who were more likely to be serotested for SARS-CoV-2: White, 45–64 years of age, with a prior history of cardiovascular disease. Nevertheless, there was still a sufficiently large number of people to assess PPA trends among younger ages and in those with and without other pre-existing conditions. Finally, this study was conducted before the surge of the Omicron variant, which has been shown to have a number of mutations on the N-gene and S-gene that reduce the sensitivity of some diagnostic tests [46]. As such, our inference is limited to the SARS-CoV-2 variants prior to Omicron, primarily Alpha.
Conclusion
Across large samples of patients with molecularly confirmed SARS-CoV-2, serology tests did not consistently meet the EUA requirement of PPA ≥ 87% in the post-market setting. However, given the limited availability of test names, this analysis serves as a signal that further investigation into how serology and molecular tests are used, including protocol fidelity, is needed to understand ways to improve the real-world performance of serology tests.
Despite differences in testing protocols and data availability, the similarity in performance of serology tests across datasets suggests that serology tests were robust to differences in care settings. However, the real-world PPA for several serology tests did not meet EUA requirements, and the exclusive representation and low use of such tests in certain datasets appear to have lowered the overall performance of serology tests in those datasets. Where data were sufficiently robust, we observed that people of Hispanic ethnicity had higher odds of seropositivity than non-Hispanics. Higher odds of seropositivity in those with pre-existing diabetes or obesity further support the hypothesis that these conditions are associated with more severe disease, reduced viral clearance, and the sustained presence of antibodies. Conversely, lower odds of seropositivity among those with cancer and other immunodeficiencies suggest that the diminished immune response to vaccination observed in these groups may extend to infection.
Interpreting results from real-world data collected from clinical and administrative databases is challenging. A clear understanding of testing protocols at the point of care is needed to validate assumptions regarding proxy variables and to interpret results. Incomplete information on race/ethnicity and test name limited our ability to address racial disparities in testing and the real-world performance of serological tests. Nevertheless, implementing best practices for analyzing and reporting results from observational data across multiple datasets yields confidence in trends that are repeated; where results diverged, we were able to explore how differences in data sources may explain findings and target areas for future investigation. Improved data interoperability to link test names and clinical/demographic data is critical to enable rapid assessment of the real-world performance of in vitro diagnostic tests, particularly in the face of fast-mutating pathogens.
Supporting information
S1 Table. Characteristics of participating data sources and representative populations.
https://doi.org/10.1371/journal.pone.0279956.s007
(DOCX)
S2 Table. Phenotype (code-lists) for specified presenting symptoms & pre-existing conditions.
https://doi.org/10.1371/journal.pone.0279956.s008
(DOCX)
Acknowledgments
Special thanks to our advisors on this project from the U.S. Food and Drug Administration: Aloka Chakravarty, Tamar Lasky, Gina Valo, Mary Jung, Stephen Lovell, Jacqueline M Major, Daniel Caños, Sara Brenner, and Wendy Rubinstein; and Duke-Margolis: Christina Silcox. We thank all members of the Evidence Accelerator Workgroup for their support and feedback: Roland Romero, James Okusa, Elijah Mari Quinicot, Amar Bhat, Susan Winckler, Alecia Clary, Sadiqa Mahmood, Philip Ballentine, Perry L. Mar, Cynthia Lim Louis, Connor McAndrews, Elitza S. Theel, Cora Han, Pagan Morris, and Charles Wilson. A special thanks and recognition for the contributions and sacrifice of Dr. Michael Waters, our dear colleague, and friend who will be forever in our thoughts. We thank Amir Alishahi Tabriz MD, PhD for his assistance with manuscript preparation.
References
- 1. Moline HL, Whitaker M, Deng L, Rhodes JC, Milucky J, Pham H, et al. Effectiveness of COVID-19 Vaccines in Preventing Hospitalization Among Adults Aged ≥65 Years—COVID-NET, 13 States, February–April 2021. Morbidity and Mortality Weekly Report. 2021;70: 1088.
- 2. Lopez Bernal J, Andrews N, Gower C, Gallagher E, Simmons R, Thelwall S, et al. Effectiveness of Covid-19 vaccines against the B.1.617.2 (Delta) variant. New England Journal of Medicine. 2021.
- 3. Public Health England. SARS-CoV-2 variants of concern and variants under investigation in England. 2021;11.
- 4. Tao K, Tzou PL, Nouhin J, Gupta RK, de Oliveira T, Kosakovsky Pond SL, et al. The biological and clinical significance of emerging SARS-CoV-2 variants. Nature Reviews Genetics. 2021;22: 757–773. pmid:34535792
- 5. Hanson KE, Caliendo AM, Arias CA, Englund JA, Lee MJ, Loeb M, et al. Infectious Diseases Society of America guidelines on the diagnosis of COVID-19. Clinical infectious diseases. 2020.
- 6. Cheng MP, Yansouni CP, Basta NE, Desjardins M, Kanjilal S, Paquette K, et al. Serodiagnostics for Severe Acute Respiratory Syndrome–Related Coronavirus 2: A Narrative Review. Annals of internal medicine. 2020;173: 450–460.
- 7. Long Q-X, Liu B-Z, Deng H-J, Wu G-C, Deng K, Chen Y-K, et al. Antibody responses to SARS-CoV-2 in patients with COVID-19. Nature medicine. 2020;26: 845–848. pmid:32350462
- 8. Long Q-X, Tang X-J, Shi Q-L, Li Q, Deng H-J, Yuan J, et al. Clinical and immunological assessment of asymptomatic SARS-CoV-2 infections. Nature medicine. 2020;26: 1200–1204. pmid:32555424
- 9. Sethuraman N, Jeremiah SS, Ryo A. Interpreting diagnostic tests for SARS-CoV-2. Jama. 2020;323: 2249–2251. pmid:32374370
- 10. Gao Z, Xu Y, Sun C, Wang X, Guo Y, Qiu S, et al. A systematic review of asymptomatic infections with COVID-19. Journal of Microbiology, Immunology and Infection. 2021;54: 12–16. pmid:32425996
- 11. Caini S, Bellerba F, Corso F, Díaz-Basabe A, Natoli G, Paget J, et al. Meta-analysis of diagnostic performance of serological tests for SARS-CoV-2 antibodies up to 25 April 2020 and public health implications. Eurosurveillance. 2020;25: 2000980. pmid:32553061
- 12. Ainsworth M, Andersson M, Auckland K, Baillie JK, Barnes E, Beer S, et al. Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison. The Lancet Infectious Diseases. 2020;20: 1390–1400. pmid:32979318
- 13. US Food and Drug Administration. In vitro diagnostics EUAs-serology and other adaptive immune response tests for SARS-CoV-2. 2021.
- 14. Lassaunière R, Frische A, Harboe ZB, Nielsen AC, Fomsgaard A, Krogfelt KA, et al. Evaluation of nine commercial SARS-CoV-2 immunoassays. MedRxiv. 2020.
- 15. Whitman JD, Hiatt J, Mowery CT, Shy BR, Yu R, Yamamoto TN, et al. Test performance evaluation of SARS-CoV-2 serological assays. MedRxiv. 2020. pmid:32511497
- 16. Wajnberg A, Amanat F, Firpo A, Altman DR, Bailey MJ, Mansour M, et al. Robust neutralizing antibodies to SARS-CoV-2 infection persist for months. Science. 2020;370: 1227–1230. pmid:33115920
- 17. Gudbjartsson DF, Norddahl GL, Melsted P, Gunnarsdottir K, Holm H, Eythorsson E, et al. Humoral immune response to SARS-CoV-2 in Iceland. New England Journal of Medicine. 2020;383: 1724–1734. pmid:32871063
- 18. Overton CE, Stage HB, Ahmad S, Curran-Sebastian J, Dark P, Das R, et al. Using statistics and mathematical modelling to understand infectious disease outbreaks: COVID-19 as an example. Infectious Disease Modelling. 2020;5: 409–441. pmid:32691015
- 19. McDonald CJ, Overhage JM, Barnes M, Schadow G, Blevins L, Dexter PR, et al. The Indiana network for patient care: a working local health information infrastructure. Health affairs. 2005;24: 1214–1220.
- 20. Dixon BE, Whipple EC, Lajiness JM, Murray MD. Utilizing an integrated infrastructure for outcomes research: a systematic review. Health Information & Libraries Journal. 2016;33: 7–32. pmid:26639793
- 21. Abbott® SARS-CoV-2 S1/S2 IgG (REF 6R86-20). 2021 Oct. [Internet]. https://www.fda.gov/media/137383/download.
- 22. Euroimmun® Anti-SARS-CoV-2 ELISA (IgG) (REF EI 2606–9601 G). 2021 Oct. [Internet]. https://www.fda.gov/media/137609/download.
- 23. Diazyme Laboratories, Inc. DIAZYME DZ-LITE SARS-CoV-2 IgG CLIA KIT (REF 60900 Rev C). 2021 Oct. [Internet]. https://www.fda.gov/media/139865/download.
- 24. Beckman Coulter® SARS-CoV-2 S1/S2 IgG (REF C58961). 2021 Oct. [Internet]. https://www.fda.gov/media/139627/download.
- 25. VITROS Immunodiagnostic Products Anti-SARS-CoV-2 IgG Reagent Pack (REF 619 9919). 2021 Oct. [Internet]. https://www.fda.gov/media/137363/download.
- 26. DiaSorin Inc. LIAISON® SARS-CoV-2 S1/S2 IgG (REF 311460). 2021 Oct. [Internet]. https://www.fda.gov/media/137359/download.
- 27. Cobas Elecsys Anti-SARS-CoV-2 (REF 09203095190). 2021 Oct. [Internet]. https://www.fda.gov/media/137605/download.
- 28. SARS-CoV-2 Assay (Panther Fusion® System). 2021 Oct. [Internet]. https://www.fda.gov/media/136156/download.
- 29. Aptima® SARS-CoV-2 Assay (Panther® System). 2021 Oct. [Internet]. https://www.fda.gov/media/138096/download.
- 30. cobas® SARS-CoV-2. Qualitative assay for use on the cobas® 6800/8800 Systems. 2021 Oct. [Internet]. https://www.fda.gov/media/136049/download.
- 31. Quest Diagnostics. SARS-CoV-2 RNA, Qualitative Real-Time RT-PCR (Test Code 39433). 2021 Oct. [Internet]. https://www.fda.gov/media/136231/download.
- 32. TaqPath™ COVID-19 Combo Kit and SARS-CoV-2 RNA. Multiplex real-time RT-PCR test intended for the qualitative detection of nucleic acid from SARS‑CoV‑2. 2021 Oct. [Internet]. https://www.fda.gov/media/13612/download.
- 33. US Food and Drug Administration. Statistical guidance on reporting results from studies evaluating diagnostic tests. Rockville, MD: US FDA; 2007.
- 34. US Food and Drug Administration. Policy for coronavirus disease-2019 tests during the public health emergency (revised): immediately in effect guidance for clinical laboratories, commercial manufacturers, and Food and Drug Administration staff. United States Food and Drug Administration; 2020.
- 35. Fact Sheet for Health Care Providers: Emergency Use Authorization (EUA) of Bamlanivimab and Etesevimab. December 22, 2021: 45.
- 36. US Food and Drug Administration. EUA authorized serology test performance. 2020.
- 37. Escribano P, Álvarez-Uría A, Alonso R, Catalán P, Alcalá L, Muñoz P, et al. Detection of SARS-CoV-2 antibodies is insufficient for the diagnosis of active or cured COVID-19. Scientific reports. 2020;10: 1–7.
- 38. Harritshøj LH, Gybel-Brask M, Afzal S, Kamstrup PR, Jørgensen CS, Thomsen MK, et al. Comparison of 16 serological SARS-CoV-2 immunoassays in 16 clinical laboratories. Journal of Clinical Microbiology. 2021;59: e02596–20. pmid:33574119
- 39. Ast V, Costina V, Eichner R, Bode A, Aida S, Gerhards C, et al. Assessing the quality of serological testing in the COVID-19 pandemic: results of a European external quality assessment (EQA) scheme for anti-SARS-CoV-2 antibody detection. Journal of clinical microbiology. 2021;59: e00559–21. pmid:34190575
- 40. Landstra CP, De Koning EJ. COVID-19 and diabetes: understanding the interrelationship and risks for a severe course. Frontiers in Endocrinology. 2021;12: 599. pmid:34220706
- 41. Tay MZ, Poh CM, Rénia L, MacAry PA, Ng LF. The trinity of COVID-19: immunity, inflammation and intervention. Nature Reviews Immunology. 2020;20: 363–374. pmid:32346093
- 42. Fajnzylber J, Regan J, Coxen K, Corry H, Wong C, Rosenthal A, et al. SARS-CoV-2 viral load is associated with increased disease severity and mortality. Nature communications. 2020;11: 1–9.
- 43. Liu W, Russell RM, Bibollet-Ruche F, Skelly AN, Sherrill-Mix S, Freeman DA, et al. Predictors of Nonseroconversion after SARS-CoV-2 Infection. Emerging Infectious Diseases. 2021;27: 2454. pmid:34193339
- 44. Yazaki S, Yoshida T, Kojima Y, Yagishita S, Nakahama H, Okinaka K, et al. Difference in SARS-CoV-2 Antibody Status Between Patients With Cancer and Health Care Workers During the COVID-19 Pandemic in Japan. JAMA oncology. 2021. pmid:34047762
- 45. Massarweh A, Eliakim-Raz N, Stemmer A, Levy-Barda A, Yust-Katz S, Zer A, et al. Evaluation of Seropositivity Following BNT162b2 Messenger RNA Vaccination for SARS-CoV-2 in Patients Undergoing Treatment for Cancer. JAMA oncology. 2021. pmid:34047765
- 46. US Food and Drug Administration. SARS-CoV-2 viral mutations: impact on COVID-19 tests. 2021.