
Accuracy of rapid point-of-care antigen-based diagnostics for SARS-CoV-2: An updated systematic review and meta-analysis with meta-regression analyzing influencing factors

Abstract

Background

Comprehensive information about the accuracy of antigen rapid diagnostic tests (Ag-RDTs) for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) is essential to guide public health decision makers in choosing the best tests and testing policies. In August 2021, we published a systematic review and meta-analysis about the accuracy of Ag-RDTs. We now update this work and analyze the factors influencing test sensitivity in further detail.

Methods and findings

We registered the review on PROSPERO (registration number: CRD42020225140). We systematically searched preprint and peer-reviewed databases for publications evaluating the accuracy of Ag-RDTs for SARS-CoV-2 until August 31, 2021. Descriptive analyses of all studies were performed, and when at least 4 studies were available, a random-effects meta-analysis was used to estimate pooled sensitivity and specificity with reverse transcription polymerase chain reaction (RT-PCR) testing as a reference. To evaluate factors influencing test sensitivity, we performed 3 different analyses using multivariable mixed-effects meta-regression models. We included 194 studies with 221,878 Ag-RDTs performed. Overall, the pooled estimates of Ag-RDT sensitivity and specificity were 72.0% (95% confidence interval [CI] 69.8 to 74.2) and 98.9% (95% CI 98.6 to 99.1). When manufacturer instructions were followed, sensitivity increased to 76.3% (95% CI 73.7 to 78.7). Sensitivity was markedly better on samples with lower RT-PCR cycle threshold (Ct) values (97.9% [95% CI 96.9 to 98.9] and 90.6% [95% CI 88.3 to 93.0] for Ct-values <20 and <25, compared to 54.4% [95% CI 47.3 to 61.5] and 18.7% [95% CI 13.9 to 23.4] for Ct-values ≥25 and ≥30) and was estimated to increase by 2.9 percentage points (95% CI 1.7 to 4.0) for every unit decrease in mean Ct-value when adjusting for testing procedure and patients’ symptom status. Concordantly, we found the mean Ct-value to be lower for true positive (22.2 [95% CI 21.5 to 22.8]) compared to false negative (30.2 [95% CI 29.6 to 30.9]) results. Testing in the first week from symptom onset resulted in substantially higher sensitivity (81.9% [95% CI 77.7 to 85.5]) compared to testing after 1 week (51.8% [95% CI 41.5 to 61.9]). Similarly, sensitivity was higher in symptomatic (76.2% [95% CI 73.3 to 78.9]) compared to asymptomatic (56.8% [95% CI 50.9 to 62.4]) persons; however, both effects were mainly driven by the Ct-value of the sample. With regard to sample type, pooled sensitivity was 70.8% (95% CI 68.3 to 73.2) for nasopharyngeal (NP) and combined NP/oropharyngeal samples and 77.3% (95% CI 73.0 to 81.0) for anterior nasal/mid-turbinate samples. Our analysis was limited by the included studies’ heterogeneity in viral load assessment and sample origination.

Conclusions

Ag-RDTs detect most of the individuals infected with SARS-CoV-2, and almost all (>90%) when high viral loads are present. With viral load, as estimated by Ct-value, being the most influential factor on their sensitivity, they are especially useful to detect persons with high viral load who are most likely to transmit the virus. To further quantify the effects of other factors influencing test sensitivity, standardization of clinical accuracy studies and access to patient level Ct-values and duration of symptoms are needed.

Author summary

Why was this study done?

  • Antigen rapid diagnostic tests (Ag-RDTs) for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) have proven to be a cornerstone in the fight against the Coronavirus Disease 2019 (COVID-19) pandemic.
  • In an earlier analysis, we found Ag-RDTs to be 76.3% sensitive and 99.1% specific, but with sensitivity varying between test manufacturers, the way tests were performed, and the patients in which they were used.
  • We now present an updated analysis and explore the factors influencing Ag-RDTs’ sensitivity and driving heterogeneity in the results in further detail.

What did the researchers do and find?

  • We searched multiple preprint and peer-reviewed databases for clinical accuracy studies evaluating Ag-RDTs for SARS-CoV-2 at the point of care.
  • Ag-RDTs proved to be 76.3% (95% confidence interval (CI) 73.7 to 78.7) sensitive and 99.1% (95% CI 98.8 to 99.3) specific, when performed as per the manufacturer’s instructions.
  • Sensitivity increased by 2.9 percentage points (95% CI 1.7 to 4.0) for each unit decrease in the mean cycle threshold (Ct) value, a semiquantitative measure from the reverse transcription polymerase chain reaction (RT-PCR) test, and was highest in samples with a Ct-value <20 (i.e., a high viral load), with a sensitivity of 97.9% (95% CI 96.9 to 98.9).
  • Higher sensitivity was also found in samples originating from symptomatic compared to asymptomatic persons, especially when study participants were still within the first week of symptom onset, but these effects were mainly driven by the sample’s viral load.

What do these findings mean?

  • Consistent with our previous analysis, Ag-RDTs continue to show high sensitivity and excellent specificity in detecting SARS-CoV-2.
  • With viral load being the main driver behind test sensitivity, Ag-RDTs detect almost all of the persons with high viral load, who are at the greatest risk of transmitting the virus.
  • While it is unlikely that the overall performance of Ag-RDTs will substantially change, further research is needed to analyze the accuracy of Ag-RDTs for different virus variants and sample types, as well as methods of test performance (e.g., self-performed, instrument based) in more detail.

Introduction

Antigen rapid diagnostic tests (Ag-RDTs) for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) have proven to be a cornerstone in fighting the Coronavirus Disease 2019 (COVID-19) pandemic, as they provide results quickly and are easy to use [1]. Nevertheless, the Ag-RDTs’ performance differs widely between manufacturers, the way they are performed, and the patients in which they are used [2,3]. Thus, a comprehensive synthesis of evidence on commercially available Ag-RDTs and the factors influencing their accuracy is vital to guide public health decision makers in choosing the right test for their needs [4].

Starting in October 2020, we conducted a living systematic review (available online at www.diagnosticsglobalhealth.org, updated weekly until August 31, 2021), summarizing the accuracy of commercially available Ag-RDTs reported in the scientific literature. To equip public health decision makers with the latest findings, we published the results of our first review as soon as possible in February 2021 (including literature until December 15, 2020) [5]. After peer review and inclusion of studies from 4 further months (until April 30, 2021), we published an updated review. In that update, when performed as per the manufacturer’s instructions, pooled estimates of Ag-RDT sensitivity and specificity were 76.3% (95% confidence interval (CI) 73.1% to 79.2%) and 99.1% (95% CI 98.8% to 99.4%). The most sensitive test was the SARS-CoV-2 Antigen Test (LumiraDx, United Kingdom; henceforth called LumiraDx) [4].

Since our last update, many additional studies have been published, with a substantial increase in studies assessing asymptomatic participants, allowing for further sub-analysis of findings [3,6]. In addition, we and others found Ag-RDT sensitivity to decrease significantly in persons with lower viral load. Viral load is usually estimated through the cycle threshold (Ct) value, i.e., the number of amplification cycles a reverse transcription polymerase chain reaction (RT-PCR) must run before viral RNA is detected, with a low Ct-value indicating a high viral load. Furthermore, sensitivity decreased in asymptomatic persons and in persons with more than 7 days since symptom onset (DOS > 7) [4]. However, studies including symptomatic patients typically enroll persons within the first days after onset of symptoms [7], when viral load is highest [8,9]. In contrast, studies including only asymptomatic persons have a higher chance of including persons at a later stage of the disease and thus with lower viral load. Therefore, the decrease in Ag-RDT sensitivity might be driven by viral load alone, irrespective of persons’ symptom status.

With the present work, we aim not only to give an updated overview on the accuracy of commercially available Ag-RDTs, but also to further explore the impact of viral load, the presence of symptoms, and testing procedure on the accuracy of Ag-RDTs.

Methods

We developed a study protocol following standard guidelines for systematic reviews [10,11], which is available in the Supporting information (S1 Text). We also completed the PRISMA checklist (S1 PRISMA Checklist) and registered the review on PROSPERO (registration number: CRD42020225140).

Search strategy

We performed a search of the databases PubMed, Web of Science, medRxiv, and bioRxiv. The search terms were developed with an experienced medical librarian (MG), using combinations of subject headings (when applicable) and text-words for the concepts of the search question, and checked against an expert-assembled list of relevant papers. The main search terms were “Severe Acute Respiratory Syndrome Coronavirus 2,” “COVID-19,” “Betacoronavirus,” “Coronavirus,” and “Point of Care Testing,” with no language restrictions. The full list of search terms is available in S2 Text. Also, 1 author (LEB) manually searched the website of FIND, the global alliance for diagnostics (https://www.finddx.org/sarscov2-eval-antigen/), for additional relevant studies, and search results were checked by a second author (SK). We performed the search biweekly through August 31, 2021. The last manual search of the FIND website was performed on September 10, 2021. In addition to conducting the present review, we updated our website www.diagnosticsglobalhealth.org weekly with the latest search results based on the methods outlined below.

Inclusion criteria

We included studies evaluating the accuracy of commercially available Ag-RDTs to establish a diagnosis of SARS-CoV-2 infection at the point-of-care (POC), against RT-PCR or cell culture as reference standard. We included all study populations irrespective of age, presence of symptoms, or study location. No language restrictions were applied. We considered cohort studies, nested cohort studies, case–control or cross-sectional studies, and randomized studies. We included both peer-reviewed publications and preprints.

We excluded studies in which patients were tested for the purpose of monitoring or ending quarantine. Publications with a population size smaller than 10 were also excluded (although the size threshold of 10 is arbitrary, such small studies are more likely to give unreliable estimates of sensitivity and specificity). We also excluded analytical accuracy studies, in which tests are performed on spiked samples with a known quantity of virus.

Index tests

Ag-RDTs for SARS-CoV-2 aim to detect infection by recognizing viral proteins (typically the SARS-CoV-2 nucleoprotein). Most Ag-RDTs dedicated for POC deployment use specific labeled antibodies attached to a nitrocellulose matrix strip (lateral flow assay) to capture and detect the viral antigen. Successful binding of the antigen to the antibodies is either detected visually by the appearance of a line on the matrix strip or through a specific reader instrument for fluorescence detection. Other POC instrument-based tests use chips or cartridges that enable an automated immunoassay testing procedure. Ag-RDTs typically provide results within 10 to 30 minutes [3].

Reference standard

Viral culture detects viable virus that is relevant for transmission but is only available in research settings. Since RT-PCR tests are more widely available and SARS-CoV-2 RNA (as reflected by RT-PCR Ct-value) highly correlates with SARS-CoV-2 antigen quantities [12], we considered RT-PCR an acceptable reference standard for the purposes of this systematic review. Where an international standard for the correlation of the viral load to the Ct-values was used, we also report the viral load [13].

Study selection and data extraction

Two reviewers (LEB and SS, LEB and CE, or LEB and MB) independently reviewed the titles and abstracts of all publications identified by the search algorithm, followed by a full-text review of those eligible, to select the articles for inclusion in the systematic review. Any disputes were resolved by discussion or by a third reviewer (CMD).

Studies that assessed multiple Ag-RDTs or presented results based on differing parameters (e.g., various sample types) were considered as individual data sets. At first, 4 authors (SK, CE, SS, and MB) extracted 5 randomly selected papers in parallel to align data extraction methods. Afterwards, data extraction and the assessment of methodological quality and independence from test manufacturers (see below) were performed by 1 author per paper (LEB, SK, CE, SS, or MB) and reviewed by a second (LEB, SK, SS, or MB). Any differences were resolved by discussion or by consulting a third author (CMD). The data items extracted can be found in the Supporting information (S1 Table).

Assessment of methodological quality

The quality of the clinical accuracy studies was assessed by applying the QUADAS-2 tool [14]. The tool evaluates 4 domains: study participant selection, index test, reference standard, and flow and timing. For each domain, the risk of bias is analyzed using different signaling questions. Beyond the risk of bias, the tool also evaluates the applicability of each included study to the research question for every domain. We prepared a QUADAS-2 assessment guide specific to the needs of this review, which can be found in the Supporting information (S3 Text).

Assessment of independence from manufacturers

We examined whether a study received financial support from a test manufacturer (including the free provision of Ag-RDTs), whether any study author was affiliated with a test manufacturer, and whether a respective conflict of interest was declared. Studies were judged not to be independent from the test manufacturer if at least 1 of these aspects was present; otherwise, they were considered to be independent.

Statistical analysis and data synthesis

We extracted raw data from the studies and recalculated performance estimates where possible based on the extracted data. Some primary studies reported the median Ct-value along with the first and third quartiles (i.e., the interquartile range [IQR]) and/or minimum and maximum values rather than the sample mean and standard deviation. To incorporate these studies in our analyses, we applied the quantile estimation approach [15] to estimate the mean and standard deviation of the Ct-values. In an effort to use as much of the heterogeneous data as possible, the cutoffs for the Ct-value groups were relaxed by 2 to 3 points within each range. The <20 group included values reported up to ≤20; the <25 group included values reported as ≤24, <25, or 20 to 25; and the <30 group included values from ≤29 to ≤33 and 25 to 30. The ≥25 group included values reported as ≥25 or 25 to 30, and the ≥30 group included values from ≥30 to ≥35. For the same reason, when categorizing by age, the age group <18 years (children) included samples from persons whose age was reported as <16 or <18 years, whereas the age group ≥18 years (adults) included samples from persons whose age was reported as ≥16 or ≥18 years. For the symptom duration groups, the ≤7 days group included ≤4, ≤5, ≤6, 6 to 7, ≤7, and ≤9 days, and the >7 days group included >5, 6 to 10, 6 to 21, >7, and 8 to 14 days. Relaxing the boundaries for the Ct-value, age, and duration of symptoms subgroups resulted in some overlap within the respective groups. The predominant variants of concern (VoCs) for each study were identified using the online tool CoVariants [16] with respect to the stated study period. The respective VoCs were classified according to the current WHO listing [17]. The raw data can be found in the Supporting information (S2 Table) and with more details online (https://doi.org/10.11588/data/T3MIB0).
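To illustrate the idea behind this conversion, the sketch below approximates the mean and standard deviation from a reported median and quartiles in R. It uses the widely cited Wan-style formulas rather than the exact quantile estimation method of reference [15], so it should be read as a minimal illustration; the function name and example values are hypothetical.

```r
# Minimal illustration (not the exact quantile estimation method of [15]):
# approximate the sample mean and SD of Ct-values from a reported median
# and first/third quartiles, using Wan et al.-style formulas.
mean_sd_from_quartiles <- function(q1, med, q3, n) {
  mean_est <- (q1 + med + q3) / 3
  # IQR-to-SD conversion based on normal quantiles, with a small-sample correction
  sd_est <- (q3 - q1) / (2 * qnorm((0.75 * n - 0.125) / (n + 0.25)))
  c(mean = mean_est, sd = sd_est)
}

# Hypothetical example: a study reporting a median Ct of 24 (IQR 20 to 29), n = 120
mean_sd_from_quartiles(q1 = 20, med = 24, q3 = 29, n = 120)
```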

If 4 or more data sets were available with at least 20 RT-PCR-positive samples per data set for a predefined analysis, a meta-analysis was performed. We report pooled estimates of sensitivity and specificity for SARS-CoV-2 detection along with 95% CIs using a bivariate model (implemented with the “reitsma” command from the R package “mada,” version 0.5.10). Summary receiver operating characteristic (sROC) curves were created for the 2 Ag-RDTs with the highest sensitivity. In subgroup analyses (below), where papers presented data only on sensitivity, a univariate random-effects inverse-variance meta-analysis was performed (using the “metagen” command from the R package “meta,” version 5.1-1, and the “rma” command from the R package “metafor,” version 3.0-2). When there were fewer than 4 studies for an index test, only a descriptive analysis was performed, and accuracy ranges are reported.
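As an illustration of the bivariate pooling step, the sketch below fits the Reitsma model with the “reitsma” command from the “mada” package on a small set of made-up 2 × 2 counts; only the required column names (TP, FN, FP, TN) follow the package’s expected input, and all numbers are hypothetical.

```r
library(mada)  # version 0.5.10 was used in this review

# Hypothetical 2x2 counts from five illustrative data sets
dat <- data.frame(
  TP = c(45, 60, 30, 80, 25),
  FN = c(15, 20, 18, 25, 20),
  FP = c(4, 6, 2, 10, 3),
  TN = c(300, 410, 250, 600, 180)
)

fit <- reitsma(dat)   # bivariate (Reitsma) model of logit-sensitivity and logit-FPR
summary(fit)          # prints pooled sensitivity and false positive rate with 95% CIs
                      # (pooled specificity = 1 - pooled false positive rate)
```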

We prepared forest plots for the sensitivity and specificity of each test and visually evaluated the heterogeneity between studies. In addition, heterogeneity was assessed by calculating Cochran’s Q and I² indices. Because there is no standard method that takes into account the correlation between sensitivity and specificity in bivariate models, we calculated these indices from a pooled diagnostic odds ratio using the “madauni” function from the “mada” package. However, while this was the only approach possible, we do not view it as fully statistically stringent and present the resulting Cochran’s Q and I² only in the Supporting information (S3 Table). For the univariate models, the heterogeneity measures were obtained directly from the “metagen” model output and are reported in the results section.
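A minimal sketch of this heterogeneity workaround, reusing the hypothetical data frame `dat` from the previous sketch: the diagnostic odds ratio is pooled with the “madauni” function, whose summary is assumed to report Cochran’s Q and I² alongside the pooled estimate.

```r
# Continuing with the hypothetical `dat` from the previous sketch
dor_fit <- madauni(dat, type = "DOR")  # univariate pooling of the diagnostic odds ratio
summary(dor_fit)                       # heterogeneity statistics (Cochran's Q, I^2) and pooled DOR
```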

We predefined subgroups for meta-analysis based on the following characteristics: Ct-value range, testing procedure in accordance with manufacturer’s instructions as detailed in the instructions for use (IFU) (henceforth called IFU-conforming) versus not IFU-conforming, age (<18 versus ≥18 years), sample type, presence or absence of symptoms, symptom duration (≤7 days versus >7 days), viral load, and predominant SARS-CoV-2 variant. We also provide mean Ct-value across true positive (TP) and false negative (FN) test results. For categorization by sample type, we assessed (1) nasopharyngeal (NP) alone or combined with other (e.g., oropharyngeal [OP]); (2) OP alone; (3) anterior nasal (AN) or mid-turbinate (MT); (4) a combination of bronchoalveolar lavage and throat wash (BAL/TW); or (5) saliva.

We applied multivariable linear mixed-effects meta-regression models to explore factors that affect diagnostic test sensitivity. Based on our previous analysis [4], we a priori defined an individual’s time since infection and the sample type and condition as underlying factors influencing test sensitivity through an individual’s symptom status (symptomatic versus asymptomatic), the sample’s viral load (estimated by the mean Ct-value as presented in the study for the subcohort of interest), and the testing procedure (IFU-conforming versus not IFU-conforming). We performed 3 different analyses, each of which obtained unadjusted and adjusted estimates (i.e., an estimate of the association between a factor and test sensitivity, holding the other covariates in the model constant) of the effect of factors on test sensitivity.

In the first analysis, we estimated the direct effect of symptom status, viral load, and testing procedure on test sensitivity. For the second and third analyses, we restricted the meta-regression models to data sets of symptomatic persons due to a lack of data. Specifically, the second analysis assessed the effect of time since infection (estimated as the sample mean of symptom duration), viral load, and testing procedure on test sensitivity. The third analysis also assessed the effect of time since infection, viral load, and testing procedure on test sensitivity, but represented time since infection as a binary covariate of the symptom duration subgroup (≤7 versus >7 days). Further details on the implementation of the meta-regression models and the underlying causal diagrams are available in the Supporting information (Figs A and B in S4 Text). Data sets with fewer than 5 RT-PCR positives were excluded. We considered an effect to be statistically significant when the regression coefficient’s 95% CI did not include 0. The analyses were performed using the “metafor” R package, version 3.0-2 [18].
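A simplified sketch of how such a meta-regression can be set up with “metafor” is shown below. It models each data set’s sensitivity as a raw proportion (so moderator coefficients read as percentage-point differences) and regresses it on mean Ct-value, symptom status, and IFU conformity. This is a single-level analogue of the multivariable mixed-effects models used here, and the data frame and variable names are hypothetical.

```r
library(metafor)  # version 3.0-2 was used in this review

# Hypothetical data-set level inputs: TP and FN counts, mean Ct-value,
# cohort symptom status, and whether testing was IFU-conforming
dat <- data.frame(
  tp          = c(48, 35, 60, 22, 75, 30),
  fn          = c(12, 25, 15, 20, 10, 28),
  mean_ct     = c(21.5, 27.0, 23.2, 29.1, 20.8, 28.4),
  symptomatic = c(1, 0, 1, 0, 1, 0),
  ifu         = c(1, 1, 0, 1, 1, 0)
)

# Raw proportions (measure = "PR") so moderator effects read as percentage points
dat <- escalc(measure = "PR", xi = tp, ni = tp + fn, data = dat)

# Random-effects meta-regression with three moderators (a simplified,
# single-level analogue of the mixed-effects models used in this review)
res <- rma(yi, vi, mods = ~ mean_ct + symptomatic + ifu, data = dat, method = "REML")
summary(res)  # the mean_ct coefficient estimates the change in sensitivity per unit Ct
```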

As recommended to investigate publication bias for diagnostic test accuracy meta-analyses, we performed the Deeks’ test for funnel-plot asymmetry [19] (using the “midas” command in Stata, version 15); a p-value < 0.10 for the slope coefficient indicates significant asymmetry.
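For readers working in R rather than Stata, the sketch below reproduces the idea behind Deeks’ test: the log diagnostic odds ratio is regressed on the inverse square root of the effective sample size, weighted by the effective sample size, and a slope p-value < 0.10 is taken to suggest funnel-plot asymmetry. The counts are hypothetical, and this is an illustration of the method rather than the exact “midas” implementation.

```r
# Deeks' funnel-plot asymmetry test, sketched in base R
# (the review itself used the "midas" command in Stata, version 15).
deeks_test <- function(TP, FN, FP, TN) {
  TP <- TP + 0.5; FN <- FN + 0.5; FP <- FP + 0.5; TN <- TN + 0.5   # continuity correction
  lnDOR <- log((TP * TN) / (FP * FN))                              # log diagnostic odds ratio
  n_pos <- TP + FN; n_neg <- FP + TN
  ess   <- 4 * n_pos * n_neg / (n_pos + n_neg)                     # effective sample size
  fit   <- lm(lnDOR ~ I(1 / sqrt(ess)), weights = ess)
  summary(fit)$coefficients                                        # slope p < 0.10 suggests asymmetry
}

# Hypothetical counts for illustration
deeks_test(TP = c(45, 60, 30, 80, 25), FN = c(15, 20, 18, 25, 20),
           FP = c(4, 6, 2, 10, 3),     TN = c(300, 410, 250, 600, 180))
```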

Sensitivity analysis

Three sensitivity analyses were performed: estimation of sensitivity and specificity excluding case–control studies, excluding non-peer-reviewed studies, and excluding studies that were potentially influenced by test manufacturers. We compared the results of each sensitivity analysis against the overall results to assess the potential bias introduced by case–control, non-peer-reviewed, and manufacturer-influenced studies.

Results

Summary of studies

The systematic search resulted in 31,254 articles. After removing duplicates, 11,462 articles were screened, and 433 papers were considered eligible for full-text review. Of these, 259 were excluded because they did not present primary data or the Ag-RDT was not commercially available. For similar reasons, we also excluded 4 studies from the FIND website. A list of the excluded studies and the reasons for exclusion can be found in the Supporting information (S5 Text). This left 174 studies from the systematic search [20–193] as well as a further 20 studies from the FIND website [194–213] to be included in the review (Fig 1).

Fig 1. PRISMA flow diagram.

Based on Page and colleagues [214]. Ag-RDT, antigen rapid diagnostic tests; IFU, instructions for use.

https://doi.org/10.1371/journal.pmed.1004011.g001

At the end of the data extraction process, 21 studies were still in preprint form [20,21,25,51,54,59,62,69,73,78,88,104,120,125,133,164,171,172,177,178,190]. All studies included were written in English, except for 3 in Spanish [57,66,138], 1 in Turkish [99], and 1 in French [157]. Out of the 194 studies, 26 conducted a case–control study [25,36,38,70,71,76,85,88,92–94,97,98,100,107,112,139,144,148,149,155,160,170,172,186,188], while the remaining 168 were cross-sectional or cohort studies. The reference method was RT-PCR in all except 1 study, which used viral culture [139].

The 194 studies were divided into 333 data sets. Across these, 76 different Ag-RDTs were evaluated (75 lateral flow assays, of which 63 are interpreted visually and 12 require an automated, proprietary reader; the remaining test is an automated immunoassay). The most common reasons for testing were the occurrence of symptoms (98 data sets, 29.4% of data sets) and screening of asymptomatic persons with (3; 0.9%) or without (22; 6.6%) close contact to a SARS-CoV-2 confirmed case. In 142 (42.6%) of the data sets, individuals were tested for more than 1 of the reasons mentioned, and for 68 (20.4%), the reason for testing was unclear.

In total, 221,878 Ag-RDTs were performed, with a mean number of samples per data set of 666 (range 15 to 22,994). The age of the individuals tested was specified for only 90,981 samples, of which 84,119 (92.5%) were from adults (age group ≥18) and 6,862 (7.5%) from children (age group <18). Symptomatic persons comprised 74,118 (33.4%) samples, while 97,982 (44.2%) samples originated from asymptomatic persons, and for 49,778 (22.4%) samples, the participant’s symptom status was not stated by the authors. The most common sample type evaluated was NP and mixed NP/OP (117,187 samples, 52.8%), followed by AN/MT (86,354 samples, 38.9%). There was substantially less testing done on the other sample types, with 3,586 (1.6%) tests done from OP samples, 1,256 (0.6%) from saliva, 219 (0.1%) from BAL/TW, and for 13,276 (6.0%) tests the type of sample was not specified in the respective studies.

A summary of the tests evaluated in clinical accuracy studies, including study author and sample size, as well as study design aspects that could potentially influence test performance, such as sample type, sample condition, IFU conformity, and symptom status, can be found in the Supporting information (S2 Table). The Standard Q test (SD Biosensor, South Korea; distributed in Europe by Roche, Germany; henceforth called Standard Q) was the most frequently used with 57 (17.1%) data sets and 36,246 (16.3%) tests, while the Panbio test (Abbott Rapid Diagnostics, Germany; henceforth called Panbio) was assessed in 55 (16.5%) data sets with 38,620 (17.4%) tests performed. Detailed results for each clinical accuracy study are available in the Supporting information (S1 Fig).

Methodological quality of studies

The findings on study quality using the QUADAS-2 tool are presented in Fig 2A and 2B. In 294 (88.3%) data sets, a relevant study population was assessed. However, for only 68 (20.4%) of the data sets, the selection of study participants was considered representative of the setting and population chosen (i.e., they avoided inappropriate exclusions or a case–control design, and enrollment occurred consecutively or randomly).

Fig 2.

(a) Methodological quality of the clinical accuracy studies (risk of bias). (b) Methodological quality of the clinical accuracy studies (applicability).

https://doi.org/10.1371/journal.pmed.1004011.g002

The conduct and interpretation of the index tests were considered to have low risk of bias in 176 (52.9%) data sets (e.g., through appropriate blinding of persons interpreting the visual read-out). However, for 155 (46.5%) data sets, sufficient information to clearly judge the risk of bias was not provided. In only 151 (45.3%) data sets, the Ag-RDTs were performed according to IFU, while 138 (41.4%) were not IFU-conforming, potentially impacting the diagnostic accuracy; for 44 (13.2%) data sets, the IFU status was unclear. The most common deviations from the IFU were (1) use of samples that were prediluted in transport media not recommended by the manufacturer (113 data sets, 12 unclear); (2) use of banked samples (103 data sets, 12 unclear); and (3) a sample type that was not recommended for Ag-RDTs (8 data sets, 11 unclear).

In 126 (37.8%) data sets, the reference standard was performed before the Ag-RDT, or the operator conducting the reference standard was blinded to the Ag-RDT results, resulting in a low risk of bias. In almost all other data sets (206; 61.9%), this risk could not be assessed due to missing information, and for 1 data set (0.3%), intermediate concern was raised. The applicability of the reference test was judged to be of low concern for all data sets, as viral culture and RT-PCR are considered to adequately define the target condition for the purpose of this study.

In 327 (98.2%) data sets, the samples for the index test and reference test were obtained at the same time, while this was unclear in 6 (1.8%). In 227 (68.2%) data sets, the same RT-PCR assay was used as the reference for all included samples, while in 85 (25.5%) data sets, multiple RT-PCR assays were used as the reference. The RT-PCR systems used most frequently were the Cobas SARS-CoV-2 Test (Roche, Germany; used in 79 data sets [23.7%]), the Allplex 2019-nCoV Assay (Seegene, South Korea; used in 61 data sets [18.3%]), and the GeneXpert (Cepheid, CA, United States; used in 34 data sets [10.2%]). For 21 (6.3%) data sets, the RT-PCR used as reference standard was unclear. The RT-PCR system, its limit of detection (if publicly available from the manufacturer), and the sample type used in each data set can be found in the Supporting information (S2 Table). Furthermore, for 19 (5.7%) data sets, there was a concern that not all selected study participants were included in the analysis.

Finally, 45 (23.2%) of the studies received financial support from the Ag-RDT manufacturer. In 13 of these, as well as in 2 others (15 studies in total; 7.7% of all studies), authors were employed by the manufacturer of the Ag-RDT studied. The respective studies are listed in the Supporting information (S6 Text). Overall, a competing interest was found in 47 (24.2%) of the studies. Detailed assessment of each QUADAS domain can be found in the Supporting information (S2 Fig).

Detection of SARS-CoV-2 infection

Overall, 38 data sets were excluded from the meta-analysis, as they included fewer than 20 RT-PCR-positive samples. An additional 28 data sets were missing either sensitivity or specificity and were only considered for univariate analyses. The remaining 267 data sets, evaluating 198,584 tests, provided sufficient data for bivariate analysis. The results are presented in Fig 3A–3E. Detailed results for the subgroup analyses are available in the Supporting information (S3–S7 Figs).

Fig 3. (a–f) Pooled sensitivity and specificity by IFU conformity, Ct-value*, sample type, symptom status, duration of symptoms, and age.

*Low Ct-values are the RT-PCR semiquantitative correlate for a high virus concentration, only sensitivity calculated. AN, anterior nasal; CI, confidence interval; IFU, instructions for use; MT, mid-turbinate; N, number of; NP, nasopharyngeal; RT-PCR, reverse transcription polymerase chain reaction.

https://doi.org/10.1371/journal.pmed.1004011.g003

Including any test and type of sample, the pooled estimates of sensitivity and specificity were 72.0% (95% CI 69.8 to 74.2) and 98.9% (95% CI 98.6 to 99.1), respectively. When comparing IFU-conforming and non-IFU-conforming testing, sensitivity differed markedly at 76.3% (95% CI 73.7 to 78.7) compared to 66.7% (95% CI 62.6 to 70.6), respectively. Pooled specificity was similar in both groups: 99.1% (95% CI 98.8 to 99.3) and 98.4% (95% CI 97.8 to 98.8), respectively (Fig 3A).

Subgroup analysis by Ct-value

We use Ct-value as a semiquantitative correlate for the sample’s viral load [12]. As a point of reference, we assume as a median conversion that a Ct-value of 25 corresponds to a viral load of 1.5 × 10^6 RNA copies per milliliter of transport media, but this varies between the types of RT-PCRs used for measuring viral load [144,215].
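As a rough illustration of this correspondence, the sketch below converts Ct-values to approximate RNA copies per milliliter, assuming roughly 100% amplification efficiency (i.e., a doubling of detectable RNA per cycle) and anchoring on the conversion stated above; actual calibrations differ between RT-PCR assays and laboratories, so the function and its outputs are illustrative only.

```r
# Illustrative only: assumes ~100% amplification efficiency (RNA doubles each
# cycle) and the anchor point given in the text (Ct 25 ~ 1.5e6 copies/mL).
# Real calibrations differ between RT-PCR assays and laboratories.
ct_to_copies_per_ml <- function(ct, anchor_ct = 25, anchor_copies = 1.5e6) {
  anchor_copies * 2^(anchor_ct - ct)
}

ct_to_copies_per_ml(c(20, 25, 30))  # ~4.8e7, 1.5e6, ~4.7e4 copies/mL
```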

In samples with Ct-values <20, a very high estimate of sensitivity was found (97.9% [95% CI 96.9 to 98.9]). The pooled sensitivity for Ct-values <25 was markedly better at 90.6% (95% CI 88.3 to 93.0) compared to the group with Ct ≥ 25 at 54.4% (95% CI 47.3 to 61.5). A similar pattern was observed when the Ct-values were analyzed using cutoffs <30 or ≥30, resulting in an estimated sensitivity of 76.8% (95% CI 73.1 to 80.4) and 18.7% (95% CI 13.9 to 23.4), respectively (Fig 3B).

When pooling Ct-values for true positive (TP; 5,083 samples, 69 data sets) and false negative (FN; 2,390 samples, 76 data sets) Ag-RDT results, the mean Ct-values were 22.2 (95% CI 21.5 to 22.8) and 30.2 (95% CI 29.6 to 30.9), respectively (S8 Fig). Across both TP and FN samples, the mean Ct-value was 26.3 (95% CI 25.5 to 27.1). This demonstrates that RT-PCR-positive samples missed by Ag-RDTs have a substantially lower viral load (higher Ct-value) compared to those that were detected. Individual forest plots for each data set with mean Ct-values are presented in the Supporting information (S9 Fig).
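A minimal sketch of how such data-set level mean Ct-values could be pooled is shown below, assuming the “metamean” function from the “meta” package; the exact model behind S8 Fig is not specified here, and the study labels, sample sizes, means, and standard deviations are hypothetical.

```r
library(meta)  # version 5.1-1 was used in this review

# Hypothetical data-set level mean Ct-values of true positive Ag-RDT results
dat_tp <- data.frame(
  study = c("Study A", "Study B", "Study C", "Study D"),
  n     = c(60, 85, 40, 120),
  mean  = c(21.8, 22.5, 20.9, 23.0),
  sd    = c(4.1, 3.8, 4.6, 3.5)
)

# Random-effects pooling of raw means (sm = "MRAW")
m <- metamean(n = n, mean = mean, sd = sd, studlab = study,
              data = dat_tp, sm = "MRAW")
summary(m)  # pooled mean Ct with 95% CI; repeat analogously for false negative results
```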

Subgroup analysis by sample type

Most data sets evaluated NP or combined NP/OP swabs (197 data sets and 104,341 samples) as the sample type for the Ag-RDT. NP or combined NP/OP swabs achieved a pooled sensitivity of 70.8% (95% CI 68.3 to 73.2) and specificity of 98.8% (95% CI 98.6 to 99.1). Data sets that used AN/MT swabs for Ag-RDTs (52 data sets and 84,020 samples) showed a summary estimate for sensitivity of 77.3% (95% CI 73.0 to 81.0) and specificity of 99.1% (95% CI 98.6 to 99.4). However, 2 studies that performed a direct head-to-head comparison of NP and AN/MT samples from the same participants using the same Ag-RDT (Standard Q) found equivalent performance [116,117]. In contrast, saliva samples (4 data sets, 1,216 samples) showed the lowest pooled sensitivity, at only 50.1% (95% CI 7.7 to 92.3) (Fig 3C). In 3 of the data sets utilizing a saliva sample, saliva was collected as whole mouth fluid (sensitivity from 8.1% [95% CI 2.7 to 17.8] to 55.6% [95% CI 35.3 to 74.5]) [24,92,154]. The fourth used a cheek swab for sample collection (sensitivity 100% [95% CI 90.3 to 100]) [55].

With only 3 data sets comprising 3,586 samples, we were not able to estimate pooled sensitivity and specificity for OP samples; median sensitivity and specificity were 59.4% (range 50.0% to 81.0%) and 99.1% (range 99.0% to 100.0%), respectively. We were also unable to perform a subgroup meta-analysis for BAL/TW due to insufficient data: only 1 study with 73 samples, evaluating the Biocredit Covid-19 Antigen rapid test kit (RapiGEN, South Korea; henceforth called Rapigen), Panbio, and Standard Q, was available, with sensitivity ranging between 33.3% and 88.1% [155]. However, the use of BAL/TW sampling would be considered not IFU-conforming.

Subgroup analysis in symptomatic and asymptomatic participants

Among the data sets that could be meta-analyzed, 55,186 (43.2%) samples were from symptomatic and 72,457 (56.8%) from asymptomatic persons. The pooled sensitivity for symptomatic persons was markedly higher than for asymptomatic persons, at 76.2% (95% CI 73.3 to 78.9) versus 56.8% (95% CI 50.9 to 62.4). Specificity was above 98.6% for both groups (Fig 3D).

Subgroup analysis comparing symptom duration

Data were available for 9,470 persons from 26 data sets with symptoms for ≤7 days, while fewer data were available for persons with symptoms for >7 days (620 persons, 13 data sets). The pooled sensitivity estimate for individuals with symptoms for ≤7 days was 81.9% (95% CI 77.7 to 85.5), which is markedly higher than the 51.8% (95% CI 41.5 to 61.9) sensitivity for individuals tested >7 days from onset of symptoms (Fig 3D).

Subgroup analysis by virus variant

A total of 188 data sets with 153,522 samples were conducted in settings where the SARS-CoV-2 wild type was dominant. Here, sensitivity was 72.3% (95% CI 69.7 to 74.7) and specificity was 99.0% (95% CI 98.7 to 99.2). When the Alpha variant (26 data sets, 19,512 samples) was the main variant, sensitivity was slightly lower at 67.0% (95% CI 58.5 to 74.5), but with overlapping CIs, and specificity remained similar (99.3% [95% CI 98.7 to 99.6]). In settings where the wild type and the Alpha variant were codominant (6 data sets, 8,753 samples), sensitivity and specificity were 72.0% (95% CI 57.9 to 82.8) and 99.6% (95% CI 98.4 to 99.9), respectively.

Data were also available for the Beta, Gamma, Delta, Epsilon, Eta, and Kappa variants but were too limited to meta-analyze. Of these, most data were available for the Gamma variant, with sensitivity ranging from 84.6% to 89.9% (3 data sets, 886 samples) [202,209,213]. The main virus variant for each data set is listed in the Supporting information (S2 Table). All studies included in this review were conducted before the emergence of the Omicron variant.

Subgroup analysis by age

For adults (age group ≥18), it was possible to pool estimates across 62,433 samples, whereas the pediatric group (age group <18) included 5,137 samples. Sensitivity differed only slightly, with overlapping CIs: 74.8% (95% CI 71.5 to 77.8) for the adult and 69.8% (95% CI 61.0 to 77.3) for the pediatric group. For those data sets that reported a median Ct-value per age group, the Ct-value was slightly lower in the adult (median 22.6, Q1 = 20.5, Q3 = 24.6, 48 data sets) compared to the pediatric group (median 23.2, Q1 = 20.3, Q3 = 25.2, 3 data sets). Specificity was similar in both groups, at over 99% (Fig 3E).

Meta-regression

The first analysis, assessing all variables that could influence sensitivity (symptom status, testing procedure [IFU-conforming versus not IFU-conforming], and mean Ct-value), included 65 data sets of symptomatic and 18 of asymptomatic persons. The second and third analyses assessed only symptomatic persons, with 28 and 50 data sets, respectively. The full list of data sets for each analysis and detailed results are available in the Supporting information (Tables A–D in S4 Text).

In the first analysis, we found viral load (as estimated by Ct-value) to be the driving factor of sensitivity. Sensitivity was estimated to increase by 2.9 percentage points (95% CI 1.7 to 4.0) for every unit the mean Ct-value decreased (Table B in S4 Text), after adjusting for symptom status and testing procedure (Fig 4). In addition, sensitivity was estimated to be 20.0 percentage points (95% CI 13.7 to 26.3) higher for samples from symptomatic compared to asymptomatic participants. However, when controlling for testing procedure and mean Ct-value, this difference declined to only 11.1 percentage points (95% CI 4.8 to 17.4). The difference between IFU-conforming versus not IFU-conforming testing procedure was not significant (5.2 percentage points [95% CI ‒2.6 to 13.0] higher for IFU-conforming) after controlling for symptom status and mean Ct-value.

Fig 4. Pooled estimate of sensitivity across mean Ct-values holding symptom status and IFU-status constant at their respective means.

Dotted lines are the corresponding 95% CIs. The size of each point is a function of the weight of the data set in the model, where larger data sets have larger points. CI, confidence interval; Ct, cycle threshold; IFU, instructions for use.

https://doi.org/10.1371/journal.pmed.1004011.g004

When assessing only symptomatic participants, test sensitivity was estimated to decrease by 3.2 percentage points (95% CI ‒1.5 to 7.9) for every 1-day increase in the average duration of symptoms (mean duration of symptoms ranged from 2.75 to 6.47 days). However, with the CI including the value 0, this effect was not statistically significant. When controlling for mean Ct-value and testing procedure, the estimated effect of the average duration of symptoms was close to 0 (0.7 percentage points [95% CI ‒5.0 to 6.4], Table C in S4 Text).

Concordantly, sensitivity was estimated to be 22.9 percentage points (95% CI 10.3 to 35.4) lower for samples collected more than 7 days after symptom onset compared to those collected within 7 days. When controlling for mean Ct-value and testing procedure, the model still showed a decrease in sensitivity for samples collected after 7 days of symptom onset, but the estimate was again closer to 0 and no longer statistically significant (‒13.8 percentage points [95% CI ‒27.7 to 0.1], Table D in S4 Text).

Analysis of individual tests

Based on 179 data sets with 143,803 tests performed, we were able to perform bivariate meta-analysis of the sensitivity and specificity for 12 different Ag-RDTs (Fig 5). Across these, pooled estimates of sensitivity and specificity on all samples were 71.6% (95% CI 69.0 to 74.1) and 99.0% (95% CI 98.8 to 99.2), which were very similar to the overall pooled estimate across all meta-analyzed data sets (72.0% and 98.9%, above).

Fig 5. Bivariate analysis of 12 Ag-RDTs.

Pooled sensitivity and specificity were calculated based on reported sample size, true positives, true negatives, false positives, and false negatives. Ag-RDT, antigen rapid diagnostic test; CI, confidence interval; N, number of.

https://doi.org/10.1371/journal.pmed.1004011.g005

The highest pooled sensitivity was found for the SARS-CoV-2 Antigen Test (LumiraDx, UK; henceforth called LumiraDx) and the Standard Q nasal test (SD Biosensor, South Korea; distributed in Europe by Roche, Germany; henceforth called Standard Q nasal), with 82.7% (95% CI 73.2 to 89.4) and 81.4% (95% CI 73.8 to 87.2), respectively. However, all tests except the COVID-19 Ag Respi-Strip (Coris BioConcept, Belgium; henceforth called Coris; sensitivity 48.4% [95% CI 36.1 to 61.0]) had overlapping CIs. The pooled specificity was above 98% for all of the tests, except for the Standard F test (SD Biosensor, South Korea; henceforth called Standard F) and LumiraDx, with specificities of 97.9% (95% CI 96.9 to 98.5) and 96.9% (95% CI 94.4 to 98.3), respectively. Hierarchical summary receiver operating characteristic curves for LumiraDx and Standard Q nasal are available in the Supporting information (S10 Fig).

For 2 Ag-RDTs, we were only able to perform a univariate analysis due to insufficient data. Sensitivities for the COVID-19 Rapid Antigen Test Cassette (SureScreen, UK; henceforth called SureScreen V) and the Nadal COVID-19 Ag Test (Nal von Minden, Germany; henceforth called Nadal) were similar, at 57.7% (95% CI 40.9 to 74.4) and 56.6% (95% CI 26.9 to 86.3), respectively (S11 Fig). Specificity could only be calculated for the Nadal and, at 91.1% (95% CI 80.2 to 100), was the lowest in the per-test analysis. For the remaining 62 Ag-RDTs, there were insufficient numbers of data sets for a uni- or bivariate meta-analysis. However, performance estimates and factors potentially influencing them are descriptively analyzed in the Supporting information (S4 Table) for each of the 62 tests.

For Panbio and Standard Q, it was also possible to pool sensitivity per Ct-value subgroup for each individual test. Panbio and Standard Q reached sensitivities of 97.2% (95% CI 95.3 to 99.2) and 98.1% (95% CI 96.3 to 99.9) for Ct-values <20, 89.8% (95% CI 85.4 to 94.3) and 92.6% (95% CI 88.5 to 96.7) for Ct-values <25, and 73.7% (95% CI 66.0 to 81.3) and 75.7% (95% CI 67.9 to 83.4) for Ct-values <30, respectively. For Ct-values ≥20, sensitivities for Panbio and Standard Q were 89.2% (95% CI 82.1 to 96.3) and 89.0% (95% CI 81.0 to 96.9), 51.2% (95% CI 39.4 to 63.0) and 56.4% (95% CI 45.1 to 67.8) for Ct-values ≥25, and 22.8% (95% CI 12.2 to 33.4) and 20.4% (95% CI 10.5 to 30.3) for Ct-values ≥30, respectively (S4A–S4F Fig). For BinaxNow (Abbott Rapid Diagnostics, Germany), LumiraDx, SD Biosensor, Standard F, Coris, and the INNOVA SARS-CoV-2 Antigen Rapid Qualitative Test (Innova Medical Group, United States of America; henceforth called Innova), sufficient data to pool sensitivity were only available for certain Ct-value subgroups; these results are also available in the Supporting information (S4A–S4F Fig). In addition, for 8 tests, it was possible to calculate pooled sensitivity and specificity estimates including only data sets that conformed to the IFU. These are listed in the Supporting information (S5 Table).

In total, 31 studies accounting for 106 data sets conducted head-to-head clinical accuracy evaluations of different tests using the same sample(s) from the same participant. These data sets are outlined in the Supporting information (S2 Table). Nine studies performed their head-to-head evaluation as per IFU and on symptomatic individuals. Across 4 studies, the Standard Q nasal (sensitivity 80.5% to 91.2%) and the Standard Q (sensitivity 73.2% to 91.2%) showed a similar range of sensitivity [116,130,216]. One study reported a sensitivity of 60.4% (95% CI 54.9 to 65.6) for the Standard Q and 56.8% (95% CI 51.3 to 62.2) for the Panbio in a mixed study population of symptomatic, asymptomatic, and high-risk contact persons [190]. Another study described a sensitivity of 56.4% (95% CI 44.7 to 67.6) for the Rapigen and 52.6% (95% CI 40.9 to 64.0) for the SGTi-flex COVID-19 Ag (Sugentech, South Korea) [164]. One study included only very few samples and used a sample type that was not IFU-conforming (BAL), limiting the ability to draw conclusions from its results [155].

Publication bias

The results of the Deeks’ test for all data sets with complete results (p = 0.24), Standard Q publications (p = 0.39), Panbio publications (p = 0.81), and LumiraDx publications (p = 0.61) demonstrate no significant asymmetry in the funnel plots, which suggests no publication bias. All funnel plots are shown in the Supporting information (S12 Fig).

Sensitivity analysis

We performed 3 sensitivity analyses: 213 data sets from non-case–control studies, 216 data sets from peer-reviewed studies only, and 190 data sets without any manufacturer influence. When excluding case–control studies, sensitivity and specificity remained at 71.9% (95% CI 69.4 to 74.2) and 99.0% (95% CI 98.8 to 99.2), respectively. Similarly, when assessing only peer-reviewed studies, sensitivity and specificity did not change significantly, at 71.1% (95% CI 68.5 to 73.6) and 98.9% (95% CI 98.6 to 99.1), respectively. If studies that could potentially have been influenced by test manufacturers were excluded, sensitivity decreased marginally, but with overlapping CIs (sensitivity of 70.3% [95% CI 67.6 to 72.9] and specificity of 99.0% [95% CI 98.7 to 99.2]).

Discussion

After reviewing 194 clinical accuracy studies, we found Ag-RDTs to be 76.3% (95% CI 73.7 to 78.7) sensitive and 99.1% (95% CI 98.8 to 99.3) specific in detecting SARS-CoV-2 compared to RT-PCR when performed according to manufacturers’ instructions. While sensitivity was higher in symptomatic compared to asymptomatic persons, especially when persons were still within the first week of symptom onset, the main driver behind test sensitivity was the sample’s viral load. LumiraDx and Standard Q nasal were the most accurate tests, but heterogeneity in the design of the studies evaluating these tests may have favored their test-specific estimates.

Using the Ct-value as a semiquantitative correlate for viral load, there was a significant correlation between test sensitivity and viral load, with sensitivity increasing by 2.9 percentage points for every unit decrease in mean Ct-value when controlling for symptom status and testing procedure. The pooled Ct-value for TP results was on average more than 8 points lower than for FN results (Ct-value of 22.2 for TP compared to 30.2 for FN results). That viral load is the deciding factor for test sensitivity confirms prior work [12].

Furthermore, sensitivity was found to be higher when samples were from symptomatic (76.2% sensitivity) compared to asymptomatic participants (56.8% sensitivity). This was confirmed in the regression model, which estimated sensitivity to be 20.0 percentage points higher in samples that originated from symptomatic participants. In our previous analysis, we assumed that this increase in sensitivity is not due to the symptom status as such, but results from the fact that symptomatic study populations are more likely to include participants at the beginning of the disease, when viral load is high [4]. In the present analysis, this assumption proved to be largely true. When controlling for Ct-value, the RT-PCR correlate for viral load, the effect of symptomatic versus asymptomatic status on test sensitivity strongly decreased, to 11.1 percentage points. As others found symptomatic and asymptomatic individuals to have the same viral load when at the same stage of the disease [8], we would have expected the regression coefficient to decrease even further, towards 0. This nonzero difference in sensitivity between symptomatic and asymptomatic participants may be due to the lack of access to individual participant Ct-values, which required our analyses to control for the mean Ct-value over all participants in a data set rather than the individual Ct-values. Furthermore, some variability is likely introduced when the Ag-RDT and the RT-PCR are not performed on the same sample. Therefore, some degree of residual confounding is likely present.

We also found sensitivity to be higher when participants were tested within 7 days of symptom onset (81.9% sensitivity) compared to >7 days (51.8% sensitivity). Concordantly, our regression model estimated that sensitivity decreases by 3.2 percentage points for every 1-day increase in mean symptom duration. Again, this decrease in sensitivity was driven by viral load, as seen when controlling for Ct-value. Importantly, it is not yet clear how the emergence of new SARS-CoV-2 VoCs and the growing vaccination coverage will affect Ag-RDTs’ sensitivity in the early days after symptom onset. Most of the studies included in this analysis were performed at the time the wild type and the Alpha variant were circulating. Test sensitivity was slightly lower for the Alpha variant compared to the wild type (67.0% [95% CI 58.5 to 74.5] versus 72.3% [95% CI 69.7 to 74.7]). However, conclusions on differences in performance between variants are difficult to draw, as between-study heterogeneity was substantial and, while this does not preclude a difference between groups, CIs were widely overlapping. Furthermore, pooled sensitivity for studies where the Alpha variant and wild type were codominant (72.0% [95% CI 57.9 to 82.8]) was similar to that of the wild type alone. Similar Ag-RDT sensitivity was also found for the Delta variant compared to the wild type, and for the Omicron variant initial data suggest similar clinical performance as well, although analytical studies pointed toward potentially lower performance [217–220]. Vaccination did not affect viral kinetics in the first week [221] and is unlikely to do so for the Omicron variant [222]. To further inform public health decision makers on the best strategy for applying Ag-RDTs, clinical accuracy studies in settings with a high prevalence of the Omicron variant are urgently needed.

Looking at specific tests, LumiraDx and Standard Q nasal showed the highest sensitivity, performing above the 80% sensitivity target defined by WHO. However, while the Standard Q nasal was 99.1% (95% CI 98.4 to 99.5) specific, the LumiraDx only had a specificity of 96.9% (95% CI 94.4 to 98.3), which is just below the WHO target of 97%. The reason for the lower specificity is unclear, particularly as independent analytical studies confirmed the test had no cross-reactivity [106]. Sample-to-sample variability must also be considered, particularly as the sensitivity of the index tests approaches that of the reference test. The 2 most often evaluated tests, namely Panbio (32,370 samples, sensitivity of 71.9%) and Standard Q (35,664 samples, sensitivity of 70.9%), performed slightly below the overall average. Panbio and Standard Q were also the most extensively evaluated Ag-RDTs in the prior analysis, where their sensitivity was slightly above average [4]. Nonetheless, this updated analysis indicates that limited added value is to be expected from any further analysis of Ag-RDTs’ overall sensitivity or the sensitivity of the most widely evaluated tests. However, it will be important to continue to reassess tests’ analytical sensitivity for the detection of new variants (e.g., Omicron). In addition, with a recent WHO guideline on self-performed Ag-RDTs having laid the scientific foundation [223], it would be of interest to further evaluate the accuracy and ease of use of self-performed Ag-RDTs, as well as specific characteristics of instrument-based Ag-RDTs.

Furthermore, sensitivity differed strongly between studies that conducted the Ag-RDTs as per manufacturer’s instructions and those that did not (sensitivity of 66.7% for not IFU-conforming versus 76.3% for IFU-conforming testing). This was also reflected in our regression model, where test performance decreased when manufacturer’s instructions were not followed; however, this effect was not significant (‒5.2 percentage points [95% CI ‒13.0 to 2.6]). With regard to sample types, saliva showed a markedly lower sensitivity of 50.1% compared to NP or AN/MT samples, confirming what we found in our previous analysis [4]. Especially in light of the current debate on whether saliva or throat swabs might be a more sensitive sample than NP or AN/MT samples for detecting the SARS-CoV-2 Omicron variant [224–226], further research is urgently needed to quantify the difference in viral load resulting from different sample types and thus the effect of sample type on test sensitivity.

In concordance with the above, many studies reporting an unusually low sensitivity did not perform the Ag-RDT as per IFU [30,32,44,70,101,112,137,188] or used saliva samples [24,154,159,227]. However, 2 studies with an IFU-conforming testing procedure on NP or AN/MT samples still showed a low sensitivity. This quite likely results from the low average viral load in 1 study [53] and the asymptomatic study population in the other [179]. In contrast, unusually high sensitivity compared to the other studies was found in studies where the average viral load was high [49,88,148,149] or participants were mainly within the first week of symptom onset [46,58,139].

The main strength of our study lies in its living approach. The ability to update our methodology as the body of evidence grows has enabled an improved analysis. For example, while data were too heterogeneous for a meta-regression in the prior analysis, with additional data sets we are now able to analyze the relationship between an Ag-RDT’s sensitivity, the samples’ Ct-value, and the participants’ symptom status in depth. Similarly, we decided to focus on clinical accuracy studies of POC Ag-RDTs in this current review, as analytical accuracy studies require a dedicated approach to be comparable. Furthermore, the main results of our latest extractions are publicly available on our website. This has not only equipped public health professionals with an up-to-date overview of the current research landscape [228,229], but also led other researchers and test manufacturers to check our data, improving the quality of our report through continuous peer review.

Nonetheless, our study is limited in that we use RT-PCR as the reference standard to assess the accuracy of Ag-RDTs; RT-PCR is generally much more sensitive than Ag-RDTs [230] and might be a less appropriate reference standard than viral culture [139,231,232]. However, viral culture is available in research settings only, and its validity as a true proxy of actual transmissibility is not proven; therefore, we find RT-PCR a suitable reference standard for the clinical accuracy studies included in this review. Furthermore, we fully acknowledge that the Ct-value is only an estimate of viral load and that the correlation between Ct-value and viral load varies between RT-PCR assays, potentially affecting the sensitivity and specificity of the evaluated Ag-RDTs [215]; nonetheless, we believe that the analysis of pooled Ct-value data across a very large data set is a useful strategy to understand the overall contribution of viral load to Ag-RDT performance. Moreover, we are aware that test-specific sensitivities and specificities can be influenced by differences in study design. However, we aimed to counterbalance this effect by assessing relevant aspects of study design for each study and analyzing outliers. To enhance comparability between clinical accuracy studies, future studies should include individuals at a similar stage of the disease, use the same sample types, and adhere to the WHO standard for measuring SARS-CoV-2 viral load [13]. Finally, our study only includes literature up until August 31, 2021. Thus, we were not able to analyze information on the Delta or Omicron variants and look to future research to close this gap in the literature.

Conclusions

In summary, Ag-RDTs detect most of the persons infected with SARS-CoV-2 when performed according to the manufacturers’ instructions. While this confirms the results of our previous analysis, the present analysis highlights that the sample’s viral load is the most influential factor underlying test sensitivity. Thus, Ag-RDTs can play a vital role in detecting persons with high viral load, who are therefore likely to be at the highest risk of transmitting the virus. This holds true even in the absence of patient symptoms or differences in the duration of symptoms. To foster further research analyzing specific Ag-RDTs and the factors influencing their sensitivity in more detail, standardization of clinical accuracy studies and access to patient-level Ct-values and duration of symptoms are essential.

Supporting information

S1 Fig. Forest plots of all Ag-RDTs.

Ag-RDT, antigen rapid diagnostic test; CI, confidence interval; FN, false negative; FP, false positive; TN, true negative; TP, true positive.

https://doi.org/10.1371/journal.pmed.1004011.s002

(PDF)

S3 Fig. Forest plots for subgroup analysis by Ct-values.

CI, confidence interval; Ct, cycle threshold.

https://doi.org/10.1371/journal.pmed.1004011.s004

(PDF)

S4 Fig. Forest plots for subgroup analysis by Ct-values per test.

CI, confidence interval; Ct, cycle threshold.

https://doi.org/10.1371/journal.pmed.1004011.s005

(PDF)

S5 Fig. Forest plots for subgroup analysis by IFU versus non-IFU.

CI, confidence interval; FN, false negative; FP, false positive; IFU, instructions for use; TN, true negative; TP, true positive.

https://doi.org/10.1371/journal.pmed.1004011.s006

(PDF)

S6 Fig. Forest plots for subgroup analysis by sample type.

CI, confidence interval; FN, false negative; FP, false positive; TN, true negative; TP, true positive.

https://doi.org/10.1371/journal.pmed.1004011.s007

(PDF)

S7 Fig. Forest plots for subgroup analysis by symptomatic versus asymptomatic.

CI, confidence interval.

https://doi.org/10.1371/journal.pmed.1004011.s008

(PDF)

S8 Fig. Forest plot for subgroup analysis by mean Ct-values for TP and FN samples.

CI, confidence interval; Ct, cycle threshold; FN, false negative; TP, true positive.

https://doi.org/10.1371/journal.pmed.1004011.s009

(PDF)

S9 Fig. Forest plots for subgroup analysis by mean Ct-values for TP and FN samples.

CI, confidence interval; Ct, cycle threshold; FN, false negative; TP, true positive.

https://doi.org/10.1371/journal.pmed.1004011.s010

(PDF)

S10 Fig. HSROC curve Standard Q nasal and LumiraDx Ag-RDT.

Ag-RDT, antigen rapid diagnostic test; HSROC, Hierarchical summary receiver-operating characteristic.

https://doi.org/10.1371/journal.pmed.1004011.s011

(PDF)

S11 Fig. Forest plot for univariate analysis for Nadal and SureScreen V.

CI, confidence interval.

https://doi.org/10.1371/journal.pmed.1004011.s012

(PDF)

S12 Fig. Funnel plots for all, LumiraDx, Panbio, and Standard Q studies.

https://doi.org/10.1371/journal.pmed.1004011.s013

(PDF)

S1 Table. List of parameters extracted from studies.

https://doi.org/10.1371/journal.pmed.1004011.s014

(XLSX)

S3 Table. Overall and sensitivity analysis.

https://doi.org/10.1371/journal.pmed.1004011.s016

(XLSX)

S4 Table. Tests analyzed descriptively not included in meta-analysis.

https://doi.org/10.1371/journal.pmed.1004011.s017

(XLSX)

S1 Text. Study protocol submitted to PROSPERO.

https://doi.org/10.1371/journal.pmed.1004011.s019

(DOCX)

S3 Text. QUADAS-2 assessment interpretation guide.

https://doi.org/10.1371/journal.pmed.1004011.s021

(DOCX)

S6 Text. Studies potentially influenced by the test manufacturer.

https://doi.org/10.1371/journal.pmed.1004011.s024

(DOCX)

References

  1. 1. World Health Organization. Recommendations for national SARS-CoV-2 testing strategies and diagnostic capacities. 2021.
  2. 2. Dinnes J, Deeks JJ, Berhane S, Taylor M, Adriano A, Davenport C, et al. Rapid, point-of-care antigen and molecular-based tests for diagnosis of SARS-CoV-2 infection. Cochrane Database Syst Rev. 2021:3:CD013705. pmid:33760236
  3. 3. World Health Organization. Antigen-detection in the diagnosis of SARS-CoV-2 infection: interim guidance. 2021.
  4. 4. Brümmer LE, Katzenschlager S, Gaeddert M, Erdmann C, Schmitz S, Bota M, et al. Accuracy of novel antigen rapid diagnostics for SARS-CoV-2: A living systematic review and meta-analysis. PLoS Med. 2021;18(8):e1003735. pmid:34383750
  5. 5. Brümmer LE, Katzenschlager S, Gaeddert M, Erdmann C, Schmitz S, Bota M, et al. The accuracy of novel antigen rapid diagnostics for SARS-CoV-2: a living systematic review and meta-analysis. medRxiv [Preprint, updated version published in PLoS Medicine]; published March 01, 2021. pmid:34383750
  6. 6. World Health Organization. Antigen-detection in the diagnosis of SARS-CoV-2 infection. WHO Reference Number: WHO/2019-nCoV/Antigen_Detection/2021.1, 2021.
  7. 7. Nehme M, Braillard O, Alcoba G, Aebischer Perone S, Courvoisier D, Chappuis F, et al. COVID-19 Symptoms: Longitudinal Evolution and Persistence in Outpatient Settings. Ann Intern Med. 2021;174(5):723–5. pmid:33284676
  8. 8. Kissler SM, Fauver JR, Mack C, Olesen SW, Tai C, Shiue KY, et al. Viral dynamics of acute SARS-CoV-2 infection and applications to diagnostic and public health strategies. PLoS Biol. 2021;19(7):e3001333. pmid:34252080
  9. 9. Wang Y, Zhang L, Sang L, Ye F, Ruan S, Zhong B, et al. Kinetics of viral load and antibody response in relation to COVID-19 severity. J Clin Investig. 2020;130(10):5235–44. pmid:32634129
  10. 10. Leeflang MM, Deeks JJ, Gatsonis C, Bossuyt PM. Cochrane Diagnostic Test Accuracy Working Group. Systematic reviews of diagnostic test accuracy. Ann Intern Med. 2008;149(12):889–97.
  11. 11. Moher D, Liberati A, Tetzlaff J, Altman DG, Prisma Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–9, W64. pmid:19622511
  12. 12. Pollock N, Savage T, Wardell H, Lee R, Mathew A, Stengelin M, et al. Correlation of SARS-CoV-2 Nucleocapsid Antigen and RNA Concentrations in Nasopharyngeal Samples from Children and Adults Using an Ultrasensitive and Quantitative Antigen Assay. J Clin Microbiol. 2020;59:e03077–20. pmid:33441395
  13. 13. World Health Organization. First WHO International Standard for SARS-CoV-2 RNA. 2021.
  14. 14. Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(8):529–36. pmid:22007046
  15. 15. McGrath S, Zhao X, Steele R, Thombs BD, Benedetti A, DEPRESsion Screening Data Collaboration. Estimating the sample mean and standard deviation from commonly reported quantiles in meta-analysis. Stat Methods Med Res. 2020:962280219889080. pmid:32292115
  16. 16. Hodcroft EB. 2021. CoVariants: SARS-CoV-2 Mutations and Variants of Interest. [cited 2022 March 20] Available from: https://covariants.org/.
  17. 17. World Health Organization. Tracking SARS-CoV-2 variants. 2022.
  18. 18. Viechtbauer W. Conducting Meta-Analyses in R with the metafor Package. J Stat Softw. 2010;36(3):1–48.
  19. 19. van Enst WA, Ochodo E, Scholten RJ, Hooft L, Leeflang MM. Investigation of publication bias in meta-analyses of diagnostic test accuracy: a meta-epidemiological study. BMC Med Res Methodol. 2014;14:70. pmid:24884381
  20. 20. Abdul-Mumin A, Abubakari A, Agbozo F, Abdul-Karim A, Nuertey BD, Mumuni K, et al. Field evaluation of specificity and sensitivity of a standard SARS-CoV-2 antigen rapid diagnostic test: A prospective study at a teaching hospital in Northern Ghana. medRxiv [Preprint]; published September 17, 2021.
  21. 21. Abdulrahman A, Mustafa F, Al Awadhi AI, Al Ansari Q, Al Alawi B, Al Qahtani M. Comparison of SARS-COV-2 nasal antigen test to nasopharyngeal RT-PCR in mildly symptomatic patients. medRxiv [Preprint]; published December 08, 2020.
  22. 22. Abusrewil Z, Alhudiri I, Kaal H, Edin El Meshri S, Ebrahim F, Dalyoum T, et al. Time scale performance of rapid antigen testing for SARS-COV-2: evaluation of ten rapid antigen assays. J Med Virol. 2021:1–7.
  23. 23. Agarwal J, Das A, Pandey P, Sen M, Garg J. “David vs. Goliath”: A simple antigen detection test with potential to change diagnostic strategy for SARS-CoV-2. J Infect Dev Ctries. 2021;15(7):904–909. pmid:34343113
  24. 24. Agulló V, Fernández-González M, Ortiz de la Tabla V, Gonzalo-Jiménez N, García JA, Masiá M, et al. Evaluation of the rapid antigen test Panbio COVID-19 in saliva and nasal swabs: A population-based point-of-care study. J Infect. 2020;82(5):186–230. pmid:33309541
  25. 25. Akashi Y, Kiyasu Y, Takeuchi Y, Kato D, Kuwahara M, Muramatsu S, et al. Evaluation and clinical implications of the time to a positive results of antigen testing for SARS-CoV-2. medRxiv [Preprint]; published June 13, 2021. pmid:34799237
  26. 26. Akingba OL, Sprong K, Hardie DR. Field performance evaluation of the PanBio rapid SARS-CoV-2 antigen assay in an epidemic driven by 501Y.v2 (lineage B.1.351) in the Eastern Cape, South Africa. J Clin Virol Plus. 2021;1(1–2):100013. pmid:35262001
  27. 27. Albert E, Torres I, Bueno F, Huntley D, Molla E, Fernandez-Fuentes MA, et al. Field evaluation of a rapid antigen test (Panbio COVID-19 Ag Rapid Test Device) for COVID-19 diagnosis in primary healthcare centres. Clin Microbiol Infect. 2020;27(3):472.e7 –472.e10. pmid:33189872
  28. 28. Alemany A, Baro B, Ouchi D, Ubals M, Corbacho-Monné M, Vergara-Alert J, et al. Analytical and Clinical Performance of the Panbio COVID-19 Antigen-Detecting Rapid Diagnostic Test. J Infect. 2020;82(5):186–230. pmid:33421447
  29. 29. Allan-Blitz LT, Klausner JD. A Real-World Comparison of SARS-CoV-2 Rapid Antigen vs. Polymerase Chain Reaction Testing in Florida. J Clin Microbiol. 2021;59(10):e0110721. pmid:34346715
  30. 30. Baccani I, Morecchiato F, Chilleri C, Cervini C, Gori E, Matarrese D, et al. Evaluation of Three Immunoassays for the Rapid Detection of SARS-CoV-2 Antigens. Diagn Microbiol Infect Dis. 2021;101(2):115434. pmid:34174523
  31. 31. Bachman CM, Grant BD, Anderson CE, Alonzo LF, Garing S, Byrnes SA, et al. Clinical validation of an open-access SARS-COV-2 antigen detection lateral flow assay, compared to commercially available assays. PLoS ONE. 2021;16(8):e0256352. pmid:34403456
  32. 32. Baro B, Rodo P, Ouchi D, Bordoy A, Saya Amaro E, Salsench S, et al. Performance characteristics of five antigen-detecting rapid diagnostic test (Ag-RDT) for SARS-CoV-2 asymptomatic infection: a head-to-head benchmark comparison. J Infect. 2021;82(6):269–75. pmid:33882299
  33. 33. Beck ET, Paar W, Fojut L, Serwe J, Jahnke RR. Comparison of Quidel Sofia SARS FIA Test to Hologic Aptima SARS-CoV-2 TMA Test for Diagnosis of COVID-19 in Symptomatic Outpatients. J Clin Microbiol. 2020;59(2):e02727–20. pmid:33239376
  34. 34. Berger A, Ngo Nsoga M-T, Perez Rodriguez FJ, Abi Aad Y, Sattonnet P, Gayet-Ageron A, et al. Diagnostic accuracy of two commercial SARS-CoV-2 Antigen-detecting rapid tests at the point of care in community-based testing centers. PLoS ONE. 2020;16(3):e0248921. pmid:33788882
  35. 35. Bianco G, Boattini M, Barbui AM, Scozzari G, Riccardini F, Coggiola M, et al. Evaluation of an antigen-based test for hospital point-of-care diagnosis of SARS-CoV-2 infection. J Clin Virol. 2021;139:104838. pmid:33946040
  36. 36. Blairon L, Cupaiolo R, Thomas I, Piteüs S, Wilmet A, Beukinga I, et al. Efficacy comparison of three rapid antigen tests for SARS-CoV-2 and how viral load impact their performance. J Med Virol. 2021;93:5783–8. pmid:34050945
  37. 37. Bornemann L, Kaup O, Kleideiter J, Panning M, Ruprecht B, Wehmeier M. Real-life evaluation of the Sofia SARS-CoV-2 antigen assay in a large tertiary care hospital. J Clin Virol. 2021;140:104854. pmid:34044330
  38. 38. Bouassa MRS, Veyer D, Péré H, Bélec L. Analytical performances of the point-of-care SIENNA COVID-19 Antigen Rapid Test for the detection of SARS-CoV-2 nucleocapsid protein in nasopharyngeal swabs: A prospective evaluation during the COVID-19 second wave in France. Int J Infect Dis. 2021;106:8–12. pmid:33746093
  39. 39. Brihn A, Chang J, OY K, Balter S, Terashita D, Rubin Z, et al. Diagnostic Performance of an Antigen Test with RT-PCR for the Detection of SARS-CoV-2 in a Hospital Setting—Los Angeles County, California, June-August 2020. Morb Mortal Wkly Rep. 2021;70(19):702–6. pmid:33983916
  40. 40. Bruzzone B, De Pace V, Caligiuri P, Ricucci V, Guarona G, Pennati BM, et al. Comparative diagnostic performance of different rapid antigen detection tests for COVID-19 in the real-world hospital setting. Int J Infect Dis. 2021;107:215–8. pmid:33930540
  41. 41. Bulilete O, Lorente P, Leiva A, Carandell E, Oliver A, Rojo E, et al. Panbio rapid antigen test for SARS-CoV-2 has acceptable accuracy in symptomatic patients in primary health care. J Infect. 2021;82:391–8. pmid:33592253
  42. 42. Caramello V, Boccuzzi A, Basile V, Ferraro A, Macciotta A, Catalano A, et al. Are antigenic tests useful for detecting SARS-CoV-2 infections in patients accessing to emergency departments? Results from a North-West Italy Hospital. J Infect. 2021;83:237–79. pmid:34023367
  43. 43. Carbonell-Sahuquillo S, Lázaro-Carreño MI, Camacho J, Barrés-Fernández A, Albert E, Torres I, et al. Evaluation of a rapid antigen detection test (Panbio COVID-19 Ag Rapid Test Device) as a point-of-care diagnostic tool for COVID-19 in a Pediatric Emergency Department. J Med Virol. 2021:1–5.
  44. 44. Caruana G, Croxatto A, Kampouri E, Kritikos A, Opota O, Foerster M, et al. ImplemeNting SARS-CoV-2 Rapid antigen testing in the Emergency wArd of a Swiss univErsity hospital: the INCREASE study. Microorganisms. 2021;9(4):798. pmid:33920307
  45. 45. Caruana G, Lebrun LL, Aebischer O, Opota O, Urbano L, de Rham M, et al. The dark side of SARS-CoV-2 rapid antigen testing: screening asymptomatic patients. New Microbes New Infect. 2021;42:100899. pmid:34007453
  46. 46. Cassuto NG, Gravier A, Colin M, Theillay A, Pires-Roteira D, Pallay S, et al. Evaluation of a SARS-CoV-2 antigen-detecting rapid diagnostic test as a self-test: diagnostic performance and usability. J Med Virol. 2021:1–7.
  47. 47. Cento V, Renica S, Matarazzo E, Antonello M, Colagrossi L, Di Ruscio F, et al. Frontline Screening for SARS-CoV-2 Infection at Emergency Department Admission by Third Generation Rapid Antigen Test: Can We Spare RT-qPCR? Viruses-Basel. 2021;13(5):818. pmid:34062916
  48. 48. Cerutti F, Burdino E, Milia MG, Allice T, Gregori G, Bruzzone B, et al. Urgent need of rapid tests for SARS CoV-2 antigen detection: Evaluation of the SD-Biosensor antigen test for SARS-CoV-2. J Clin Virol. 2020;132:104654. pmid:33053494
  49. 49. Chaimayo C, Kaewnaphan B, Tanlieng N, Athipanyasilp N, Sirijatuphat R, Chayakulkeeree M, et al. Rapid SARS-CoV-2 antigen detection assay in comparison with real-time RT-PCR assay for laboratory diagnosis of COVID-19 in Thailand. Virol J. 2020;17(1):177. pmid:33187528
  50. 50. Chiu R, Kojima N, Mosley G, Cheng KK, Pereira D, Brobeck M, et al. Evaluation of the INDICAID COVID-19 Rapid Antigen Test in symptomatic populations and asymptomatic community testing. Microbiol Spectr. 2021;9(1):e0034221. pmid:34346748
  51. 51. Christensen K, Ren H, Chen S, Cooper C, Young S. Clinical evaluation of BD Veritor SARS-CoV-2 and Flu A+B Assay for point-of-care (POC) System. medRxiv [Preprint]; published May 05, 2021.
  52. 52. Ciotti M, Maurici M, Pieri M, Andreoni M, Bernardini S. Performance of a rapid antigen test in the diagnosis of SARS-CoV-2 infection. J Med Virol. 2021;93:2988–91. pmid:33527409
  53. 53. Dankova Z, Novakova E, Skerenova M, Holubekova V, Lucansky V, Dvorska D, et al. Comparison of SARS-CoV-2 Detection by Rapid Antigen and by Three Commercial RT-qPCR Tests: A Study from Martin University Hospital in Slovakia. Int J Environ Res Public Health. 2021;18(13). pmid:34280974
  54. 54. Del Vecchio C, Brancaccio G, Brazzale AR, Lavezzo E, Onelia F, Franchin E, et al. Emergence of N antigen SARS-CoV-2 genetic variants escaping detection of antigenic tests. medRxiv [Preprint]; published March 26, 2021.
  55. 55. Di Domenico M, De Rosa A, Di Gaudio F, Internicola P, Bettini C, Salzano N, et al. Diagnostic Accuracy of a New Antigen Test for SARS-CoV-2 Detection. Int J Environ Res Public Health. 2021;18(12):6310. pmid:34200827
  56. 56. Dierks S, Bader O, Schwanbeck J, Gross U, Weig MS, Mese K, et al. Diagnosing SARS-CoV-2 with Antigen Testing, Transcription-Mediated Amplification and Real-Time PCR. J Clin Med. 2021;10(11):2404. pmid:34072381
  57. 57. Domínguez Fernández M, Peña Rodríguez MF, Lamelo Alfonsín F, Bou AG. Experience with Panbio rapid antigens test device for the detection of SARS-CoV-2 in nursing homes. Enferm Infecc Microbiol Clin. 2021:S0213–05X. pmid:33551148
  58. 58. Drain PK, Ampajwala M, Chappel C, Gvozden AB, Hoppers M, Wang M, et al. A Rapid, High-Sensitivity SARS-CoV-2 Nucleocapsid Immunoassay to Aid Diagnosis of Acute COVID-19 at the Point of Care: A Clinical Performance Study. Infect Dis Ther. 2021;10(2):753–61. pmid:33629225
  59. 59. Drevinek P, Hurych J, Kepka Z, Briksi A, Kulich M, Zajac M, et al. The sensitivity of SARS-CoV-2 antigen tests in the view of large-scale testing. medRxiv [Preprint]; published November 24, 2020.
  60. 60. Eleftheriou I, Dasoula F, Dimopoulou D, Lebessi E, Serafi E, Spyridis N, et al. Real-life evaluation of a COVID-19 rapid antigen detection test in hospitalized children. J Med Virol. 2021. pmid:34156112
  61. 61. Escrivá BF, Mochón MDO, González RM, García CS, Pla AT, Ricart AS, et al. The effectiveness of rapid antigen test-based for SARS-CoV-2 detection in nursing homes in Valencia, Spain. J Clin Virol. 2021;143:104941. pmid:34399104
  62. 62. Faíco-Filho KS, Finamor Júnior FE, Moreira LVL, Lins PRG, Justo AFO, Bellei N. Evaluation of the Panbio COVID-19 Ag Rapid Test at an Emergency Room in a Hospital in São Paulo, Brazil. medRxiv [Preprint]; published March 24, 2021.
  63. 63. Favresse J, Gillot C, Oliveira M, Cadrobbi J, Elsen M, Eucher C, et al. Head-to-Head Comparison of Rapid and Automated Antigen Detection Tests for the Diagnosis of SARS-CoV-2 Infection. J Clin Med. 2021;10(2):265. pmid:33450853
  64. 64. Fenollar F, Bouam A, Ballouche M, Fuster L, Prudent E, Colson P, et al. Evaluation of the Panbio Covid-19 rapid antigen detection test device for the screening of patients with Covid-19. J Clin Microbiol. 2020;59:e02589–20. pmid:33139420
  65. 65. Ferguson J, Dunn S, Best A, Mirza J, Percival B, Mayhew M, et al. Validation testing to determine the sensitivity of lateral flow testing for asymptomatic SARS-CoV-2 detection in low prevalence settings: Testing frequency and public health messaging is key. PLoS Biol. 2021;19(4):e3001216. pmid:33914730
  66. 66. Fernández MD, Estévez AS, Alfonsín FL, Arevalo GB. Usefulness of the LumiraDx SARS-COV-2 antigen test in nursing home. Enferm Infecc Microbiol Clin. 2021. pmid:34177031
  67. 67. Fernandez-Montero A, Argemi J, Rodríguez JA, Ariño AH, Moreno-Galarraga L. Validation of a rapid antigen test as a screening tool for SARS-CoV-2 infection in asymptomatic populations. Sensitivity, specificity and predictive values. EClinicalMedicine. 2021:37:100954. pmid:34127960
  68. 68. Ferté T, Ramel V, Cazanave C, Lafon ME, Bébéar C, Malvy D, et al. Accuracy of COVID-19 rapid antigenic tests compared to RT-PCR in a student population: The StudyCov study. J Clin Virol. 2021;141:104878. pmid:34134035
  69. 69. Filgueiras P, Corsini C, Almeida NBF, Assis J, Pedrosa ML, de Oliveira A, et al. COVID-19 Rapid Antigen Test at hospital admission associated to the knowledge of individual risk factors allow overcoming the difficulty of managing suspected patients in hospitals COVID-19 Rapid Antigen Test facilitates the management of suspected patients on hospital admission. medRxiv [Preprint]; published January 08, 2021.
  70. 70. Fourati S, Langendorf C, Audureau E, Challine D, Michel J, Soulier A, et al. Performance of six rapid diagnostic tests for SARS-CoV-2 antigen detection and implications for practical use. J Clin Virol. 2021;142:104930. pmid:34390929
  71. 71. Frediani JK, Levy JM, Rao A, Bassit L, Figueroa J, Vos MB, et al. Multidisciplinary assessment of the Abbott BinaxNOW SARS-CoV-2 point-of-care antigen test in the context of emerging viral variants and self-administration. Sci Rep. 2021;11(1):14604. pmid:34272449
  72. 72. García-Fiñana M, Hughes DM, Cheyne CP, Burnside G, Stockbridge M, Fowler TA, et al. Performance of the Innova SARS-CoV-2 antigen rapid lateral flow test in the Liverpool asymptomatic testing pilot: population based cohort study. Br Med J. 2021;374:n1637. pmid:34230058
  73. 73. Gomez Marti JL, Gribschaw J, McCullough M, Mallon A, Acero J, Kinzler A, et al. Differences in detected viral loads guide use of SARS-CoV-2 antigen-detection assays towards symptomatic college students and children. medRxiv [Preprint]; published February 01, 2021.
  74. 74. Gremmels H, Winkela BMF, Schuurmana R, Rosinghb A, Rigterc NAM, Rodriguezd O, et al. Real-life validation of the Panbio COVID-19 Antigen Rapid Test (Abbott) in community-dwelling subjects with symptoms of potential SARS-CoV-2 infection. EClinicalMedicine. 2020;31:100677. pmid:33521610
  75. 75. Gupta A, Khurana S, Das R, Srigyan D, Singh A, Mittal A, et al. Rapid chromatographic immunoassay-based evaluation of COVID-19: A cross-sectional, diagnostic test accuracy study & its implications for COVID-19 management in India. Indian J Med Res. 2020;153(1):126. pmid:33818469
  76. 76. Halfon P, Penaranda G, Khiri H, Garcia V, Drouet H, Philibert P, et al. An optimized stepwise algorithm combining rapid antigen and RT-qPCR for screening of COVID-19 patients. PLoS ONE. 2021;16(9):e0257817. pmid:34555117
  77. 77. Harris DT, Badowski M, Jernigan B, Sprissler R, Edwards T, Cohen R, et al. SARS-CoV-2 Rapid Antigen Testing of Symptomatic and Asymptomatic Individuals on the University of Arizona Campus. Biomedicines. 2021;9(5):539. pmid:34066047
  78. 78. Herrera V, Hsu V, Adewale A, Hendrix T, Johnson L, Kuhlman J, et al. Testing of Healthcare Workers Exposed to COVID19 with Rapid Antigen Detection. medRxiv [Preprint]; published August 18, 2020.
  79. 79. Holzner C, Pabst D, Anastasiou OE, Dittmer U, Manegold RK, Risse J, et al. SARS-CoV-2 rapid antigen test: Fast-safe or dangerous? An analysis in the emergency department of an university hospital. J Med Virol. 2021;93:5323–7. pmid:33969499
  80. 80. Homza M, Zelena H, Janosek J, Tomaskova H, Jezo E, Kloudova A, et al. Five Antigen Tests for SARS-CoV-2: Virus Viability Matters. Viruses. 2021;13(4). pmid:33921164
  81. 81. Homza M, Zelena H, Janosek J, Tomaskova H, Jezo E, Kloudova A, et al. Covid-19 antigen testing: better than we know? A test accuracy study. Infectious Diseases. 2021;53(9):661–8. pmid:33985403
  82. 82. Houston H, Gupta-Wright A, Toke-Bjolgerud E, Biggin-Lamming J, John L. Diagnostic accuracy and utility of SARS-CoV-2 antigen lateral flow assays in medical admissions with possible COVID-19. J Hosp Infect. 2021;110:203–5. pmid:33539935
  83. 83. Ifko M, Skvarc M. Use of Immunochromatographic SARS-CoV-2 Antigen Testing in Eight Long-Term Care Facilities for the Elderly. Healthcare. 2021;9(7):868. pmid:34356246
  84. 84. Iglói Z, Velzing J, van Beek J, van de Vijver D, Aron G, Ensing R, et al. Clinical evaluation of the Roche/SD Biosensor rapid antigen test with symptomatic, non-hospitalized patients in a municipal health service drive-through testing site. Emerg Infect Dis. 2020;27(5):1323–9. pmid:33724916
  85. 85. Jääskeläinen AE, Ahava MJ, Jokela P, Szirovicza L, Pohjala S, Vapalahti O, et al. Evaluation of three rapid lateral flow antigen detection tests for the diagnosis of SARS-CoV-2 infection. J Clin Virol. 2021;137:104785. pmid:33711694
  86. 86. James AE, Gulley T, Kothari A, Holder K, Garner K, Patil N. Performance of the BinaxNOW COVID-19 Antigen Card test relative to the SARS-CoV-2 real-time reverse transcriptase polymerase chain reaction assay among symptomatic and asymptomatic healthcare employees. Infect Control Hosp Epidemiol. 2021:1–3. pmid:33487197
  87. 87. Jegerlehner S, Suter-Riniker F, Jent P, Bittel P, Nagler M. Diagnostic accuracy of a SARS-CoV-2 rapid antigen test in real-life clinical settings. Int J Infect Dis. 2021;109:118–22. pmid:34242764
  88. 88. Johnson C, Ferguson K, Smith T, Wallace D, McConnell K, Conserve D, et al. Evaluation of the Panbio SARS-CoV-2 rapid antigen detection test in the Bahamas. medRxiv [Preprint]; published July 15, 2021.
  89. 89. Jung C, Levy C, Varon E, Biscardi S, Batard C, Wollner A, et al. Diagnostic Accuracy of SARS-CoV-2 Antigen Detection Test in Children: A Real-Life Study. Front Pediatr. 2021;9:647274. pmid:34336732
  90. 90. Kahn M, Schuierer L, Bartenschlager C, Zellmer S, Frey R, Freitag M, et al. Performance of antigen testing for diagnosis of COVID-19: a direct comparison of a lateral flow device to nucleic acid amplification based tests. BMC Infect Dis. 2021;21(1):798. pmid:34376187
  91. 91. Kanaujia R, Ghosh A, Mohindra R, Singla V, Goyal K, Gudisa R, et al. Rapid antigen detection kit for the diagnosis of SARS-CoV-2—are we missing asymptomatic patients? Indian J Med Microbiol. 2021. pmid:34294504
  92. 92. Kannian P, Lavanya C, Ravichandran K, Gita JB, Mahanathi P, Ashwini V, et al. SARS-CoV2 antigen in whole mouth fluid may be a reliable rapid detection tool. Oral Dis. 2021;00:1–2. pmid:33534926
  93. 93. Karon BS, Donato L, Bridgeman AR, Blommel JH, Kipp B, Maus A, et al. Analytical sensitivity and specificity of four point of care rapid antigen diagnostic tests for SARS-CoV-2 using real-time quantitative PCR, quantitative droplet digital PCR, and a mass spectrometric antigen assay as comparator methods. Clin Chem. 2021:hvab138. pmid:34240163
  94. 94. Kenyeres B, Ánosi N, Bányai K, Mátyus M, Orosz L, Kiss A, et al. Comparison of four PCR and two point of care assays used in the laboratory detection of SARS-CoV-2. J Virol Methods. 2021;293:114165. pmid:33872650
  95. 95. Kernéis S, Elie C, Fourgeaud J, Choupeaux L, Delarue SM, Alby ML, et al. Accuracy of saliva and nasopharyngeal sampling for detection of SARS-CoV-2 in community screening: a multicentric cohort study. Eur J Clin Microbiol Infect Dis. 2021:1–10. pmid:34342768
  96. 96. Kilic A, Hiestand B, Palavecino E. Evaluation of Performance of the BD Veritor SARS-CoV-2 Chromatographic Immunoassay Test in Patients with Symptoms of COVID-19. J Clin Microbiol. 2021;59(5). pmid:33637583
  97. 97. Kim D, Lee J, Bal J, Seo SK, Chong CK, Lee JH, et al. Development and Clinical Evaluation of an Immunochromatography-Based Rapid Antigen Test (GenBody COVAG025) for COVID-19 Diagnosis. Viruses-Basel. 2021;13(5). pmid:33946860
  98. 98. Kim HW, Park M, Lee JH. Clinical Evaluation of the Rapid STANDARD Q COVID-19 Ag Test for the Screening of Severe Acute Respiratory Syndrome Coronavirus 2. Ann Lab Med. 2022;42(1):100–4. pmid:34374355
  99. 99. Kipritci Z, Keskin AU, Ciragil P, Topkaya AE. Evaluation of a Visually-Read Rapid Antigen Test Kit (SGA V-Chek) for Detection of SARS-CoV-2 Virus. Mikrobiyol Bul. 2021;55(3):461–4. pmid:34416811
  100. 100. Koeleman JGM, Brand H, de Man SJ, Ong DSY. Clinical evaluation of rapid point-of-care antigen tests for diagnosis of SARS-CoV-2 infection. Eur J Clin Microbiol Infect Dis. 2021:1–7. pmid:34021840
  101. 101. Kohmer N, Toptan T, Pallas C, Karaca O, Pfeiffer A, Westhaus S, et al. The Comparative Clinical Performance of Four SARS-CoV-2 Rapid Antigen Tests and Their Correlation to Infectivity In Vitro. J Clin Med. 2021;10(2). pmid:33477365
  102. 102. Kolwijck E, Brouwers-Boers M, Broertjes J, van Heeswijk K, Runderkamp N, Meijer A, et al. Validation and implementation of the Panbio COVID-19 Ag rapid test for the diagnosis of SARS-CoV-2 infection in symptomatic hospital healthcare workers. Infect Prev Pract. 2021;3(2):100142. pmid:34316580
  103. 103. Korenkov M, Poopalasingam N, Madler M, Vanshylla K, Eggeling R, Wirtz M, et al. Evaluation of a rapid antigen test to detect SARS-CoV-2 infection and identify potentially infectious individuals. J Clin Microbiol. 2021;59(9):e0089621. pmid:34213977
  104. 104. Krüger LJ, Gaeddert M, Köppel L, Brümmer LE, Gottschalk C, Miranda IB, et al. Evaluation of the accuracy, ease of use and limit of detection of novel, rapid, antigen-detecting point-of-care diagnostics for SARS-CoV-2. medRxiv [Preprint]; published October 04, 2020.
  105. 105. Krüger LJ, Gaeddert M, Tobian F, Lainati F, Gottschalk C, Klein JAF, et al. The Abbott PanBio WHO emergency use listed, rapid, antigen-detecting point-of-care diagnostic test for SARS-CoV-2-Evaluation of the accuracy and ease-of-use. PLoS ONE. 2021;16(5):e0247918. pmid:34043631
  106. 106. Krüger LJ, Klein JAF, Tobian F, Gaeddert M, Lainati F, Klemm S, et al. Evaluation of accuracy, exclusivity, limit-of-detection and ease-of-use of LumiraDx: An antigen-detecting point-of-care device for SARS-CoV-2. Infection. 2021:1–12. pmid:34383260
  107. 107. Krüttgen A, Cornelissen CG, Dreher M, Hornef MW, Imöhl M, Kleinesa M. Comparison of the SARS-CoV-2 Rapid antigen test to the real star Sars-CoV-2 RT PCR kit. J Virol Methods. 2020;288:114024. pmid:33227341
  108. 108. Kumar KK, Sampritha UC, Maganty V, Prakash AA, Basumatary J, Adappa K, et al. Pre-Operative SARS CoV-2 Rapid Antigen Test and Reverse Transcription Polymerase Chain Reaction: A conundrum in surgical decision making. Indian J Ophthalmol. 2021;69(6):1560–2. pmid:34011741
  109. 109. Kurihara Y, Kiyasu Y, Akashi Y, Takeuchi Y, Narahara K, Mori S, et al. The evaluation of a novel digital immunochromatographic assay with silver amplification to detect SARS-CoV-2. J Infect Chemother. 2021;27(10):1493–7. pmid:34294528
  110. 110. L’Huillier A, Lacour M, Sadiku D, Gadiri M, De Siebenthal L, Schibler M, et al. Diagnostic accuracy of SARS-CoV-2 rapid antigen detection testing in symptomatic and asymptomatic children in the clinical setting. J Clin Microbiol. 2021;59(9):e00991–21. pmid:34190574
  111. 111. Lambert-Niclot S, Cuffel A, Le Pape S, Vauloup-Fellous C, Morand-Joubert L, Roque-Afonso AM, et al. Evaluation of a Rapid Diagnostic Assay for Detection of SARS-CoV-2 Antigen in Nasopharyngeal Swabs. J Clin Microbiol. 2020;58(8):e00977–20. pmid:32404480
  112. 112. Lee J, Kim SY, Huh HJ, Kim N, Sung H, Lee H, et al. Clinical Performance of the Standard Q COVID-19 Rapid Antigen Test and Simulation of its Real-World Application in Korea. Ann Lab Med. 2021;41(6):588–92. pmid:34108286
  113. 113. Leixner G, Voill-Glaninger A, Bonner E, Kreil A, Zadnikar R, Viveiros A. Evaluation of the AMP SARS-CoV-2 rapid antigen test in a hospital setting. Int J Infect Dis. 2021;108:353–6. pmid:34087486
  114. 114. Leli C, Matteo LD, Gotta F, Cornaglia E, Vay D, Megna I, et al. Performance of a SARS CoV-2 antigen rapid immunoassay in patients admitted to the Emergency Department. Int J Infect Dis. 2021;110:135–40. pmid:34302961
  115. 115. Linares M, Pérez-Tanoira R, Carrero A, Romanyk J, Pérez-García F, Gómez-Herruz P, et al. Panbio antigen rapid test is reliable to diagnose SARS-CoV-2 infection in the first 7 days after the onset of symptoms. J Clin Virol. 2020;133:104659. pmid:33160179
  116. 116. Lindner A, Nikolai O, Rohardt C, Burock S, Hülso C, Bölke A, et al. Head-to-head comparison of SARS-CoV-2 antigen-detecting rapid test with professional-collected nasal versus nasopharyngeal swab. Eur Respir J. 2020;57(5):2004430.
  117. 117. Lindner AK, Nikolai O, Kausch F, Wintel M, Hommes F, Gertler M, et al. Head-to-head comparison of SARS-CoV-2 antigen-detecting rapid test with self-collected nasal swab versus professional-collected nasopharyngeal swab. Eur Respir J. 2021;57(4).
  118. 118. Lindner AK, Nikolai O, Rohardt C, Kausch F, Wintel M, Gertler M, et al. Diagnostic accuracy and feasibility of patient self-testing with a SARS-CoV-2 antigen-detecting rapid test. J Clin Virol. 2021;141:104874. pmid:34144452
  119. 119. Liotti FM, Menchinelli G, Lalle E, Palucci I, Marchetti S, Colavita F, et al. Performance of a novel diagnostic assay for rapid SARS-CoV-2 antigen detection in nasopharynx samples. Clin Microbiol Infect. 2020;27:487–8. pmid:32979567
  120. 120. Lunca C, Cojocaru C, Gurzu IL, Petrariu FD, Cojocaru E. Performance of antigenic detection of SARS-CoV-2 in nasopharyngeal samples. medRxiv [Preprint]; published July 16, 2021.
  121. 121. Menchinelli G, De Angelis G, Cacaci M, Liotti FM, Candelli M, Palucci I, et al. SARS-CoV-2 Antigen Detection to Expand Testing Capacity for COVID-19: Results from a Hospital Emergency Department Testing Site. Diagnostics. 2021;11(7). pmid:34359294
  122. 122. Merino-Amador P, González-Donapetry P, Domínguez-Fernández M, González-Romo F, Sánchez-Castellano M, Seoane-Estevez A, et al. Clinitest rapid COVID-19 antigen test for the diagnosis of SARS-CoV-2 infection: A multicenter evaluation study. J Clin Virol. 2021;143:104961. pmid:34461560
  123. 123. Merino-Amador P, Guinea J, Muñoz-Gallego I, González-Donapetry P, Galán J-C, Antona N, et al. Multicenter evaluation of the Panbio COVID-19 Rapid Antigen-Detection Test for the diagnosis of SARS-CoV-2 infection. Clin Microbiol Infect. 2020;27(5):758–61. pmid:33601009
  124. 124. Mertens P, De Vos N, Martiny D, Jassoy C, Mirazimi A, Cuypers L, et al. Development and Potential Usefulness of the COVID-19 Ag Respi-Strip Diagnostic Assay in a Pandemic Context. Front Med. 2020;7:225. pmid:32574326
  125. 125. Micocci M, Buckle P, Hayward G, Allen J, Davies K, Kierkegaard P, et al. Point of Care Testing using rapid automated Antigen Testing for SARS-COV-2 in Care Homes–an exploratory safety, usability and diagnostic agreement evaluation. medRxiv [Preprint]; published April 26, 2021.
  126. 126. Möckel M, Corman VM, Stegemann MS, Hofmann J, Stein A, Jones TC, et al. SARS-CoV-2 Antigen Rapid Immunoassay for Diagnosis of COVID-19 in the Emergency Department. Biomarkers. 2021;26(3):213–20. pmid:33455451
  127. 127. Muhi S, Tayler N, Hoang T, Ballard SA, Graham M, Rojek A, et al. Multi-site assessment of rapid, point-of-care antigen testing for the diagnosis of SARS-CoV-2 infection in a low-prevalence setting: A validation and implementation study. Lancet Reg Health West Pac. 2021;9:100115. pmid:33937887
  128. 128. Nalumansi A, Lutalo T, Kayiwa J, Watera C, Balinandi S, Kiconco J, et al. Field Evaluation of the Performance of a SARS-CoV-2 Antigen Rapid Diagnostic Test in Uganda using Nasopharyngeal Samples. Int J Infect Dis. 2020;104:282–6. pmid:33130198
  129. 129. Ngo Nsoga MT, Kronig I, Perez Rodriguez FJ, Sattonnet-Roche P, Da Silva D, Helbling J, et al. Diagnostic accuracy of Panbio rapid antigen tests on oropharyngeal swabs for detection of SARS-CoV-2. PLoS ONE. 2021;16(6):e0253321. pmid:34166410
  130. 130. Nikolai O, Rohardt C, Tobian F, Junge A, Corman V, Jones T, et al. Anterior nasal versus nasal mid-turbinate sampling for a SARS-CoV-2 antigen-detecting rapid test: does localisation or professional collection matter? Infect Dis Ther. 2021:1–6. pmid:34445926
  131. 131. Nordgren J, Sharma S, Olsson H, Jämtberg M, Falkeborn T, Svensson L, et al. SARS-CoV-2 rapid antigen test: High sensitivity to detect infectious virus. J Clin Virol. 2021;140:104846. pmid:33971580
  132. 132. Okoye NC, Barker AP, Curtis K, Orlandi RR, Snavely EA, Wright C, et al. Performance Characteristics of BinaxNOW COVID-19 Antigen Card for Screening Asymptomatic Individuals in a University Setting. J Clin Microbiol. 2021;59(4):e03282–20. pmid:33509809
  133. 133. Onsongo SN, Otieno K, van Duijn S, Adams E, Omollo M, Odero IA, et al. Field performance of NowCheck rapid antigen test for SARS-CoV-2 in Kisumu County, western Kenya. medRxiv [Preprint]; published August 13, 2021.
  134. 134. Orsi A, Pennati BM, Bruzzone B, Ricucci V, Ferone D, Barbera P, et al. On-field evaluation of a ultra-rapid fluorescence immunoassay as a frontline test for SARS-COV-2 diagnostic. J Virol Methods. 2021;295:114201. pmid:34058287
  135. 135. Osmanodja B, Budde K, Zickler D, Naik MG, Hofmann J, Gertler M, et al. Accuracy of a Novel SARS-CoV-2 Antigen-Detecting Rapid Diagnostic Test from Standardized Self-Collected Anterior Nasal Swabs. J Clin Med. 2021;10(10). pmid:34068236
  136. 136. Osterman A, Baldauf HM, Eletreby M, Wettengel JM, Afridi SQ, Fuchs T, et al. Evaluation of two rapid antigen tests to detect SARS-CoV-2 in a hospital setting. Med Microbiol Immunol. 2021;210(1):65–72. pmid:33452927
  137. 137. Osterman A, Iglhaut M, Lehner A, Späth P, Stern M, Autenrieth H, et al. Comparison of four commercial, automated antigen tests to detect SARS-CoV-2 variants of concern. Med Microbiol Immunol. 2021:1–13.
  138. 138. Parada-Ricart E, Gomez-Bertomeu F, Picó-Plana E, Olona-Cabases M. Usefulness of the antigen test for diagnosing SARS-CoV-2 infection in patients with and without symptoms. Enferm Infecc Microbiol Clin. 2020;39:357–8. pmid:34629600
  139. 139. Pekosz A, Cooper C, Parvu V, Li M, Andrews J, Manabe YCC, et al. Antigen-based testing but not real-time PCR correlates with SARS-CoV-2 virus culture. Clin Infect Dis. 2020:ciaa1706. pmid:33479756
  140. 140. Pena M, Ampuero M, Garces C, Gaggero A, Garcia P, Velasquez MS, et al. Performance of SARS-CoV-2 rapid antigen test compared with real-time RT-PCR in asymptomatic individuals. Int J Infect Dis. 2021;107:201–4. pmid:33945868
  141. 141. Pérez-García F, Romanyk J, Gómez-Herruz P, Arroyo T, Pérez-Tanoira R, Linares M, et al. Diagnostic performance of CerTest and Panbio antigen rapid diagnostic tests to diagnose SARS-CoV-2 infection. J Clin Virol. 2021;137:104781. pmid:33639492
  142. 142. Perez-Garcia F, Romanyk J, Gutierrez HM, Ballestero AL, Ranz IP, Arroyo JG, et al. Comparative evaluation of Panbio and SD Biosensor antigen rapid diagnostic tests for COVID-19 diagnosis. J Med Virol. 2021;93(9):5650–4. pmid:34002864
  143. 143. Peto T. COVID-19: Rapid antigen detection for SARS-CoV-2 by lateral flow assay: A national systematic evaluation of sensitivity and specificity for mass-testing. EClinicalMedicine. 2021;36:100924. pmid:34101770
  144. 144. Pickering S, Batra R, Merrick B, Snell LB, Nebbia G, Douthwaite S, et al. Comparative performance of SARS-CoV-2 lateral flow antigen tests and association with detection of infectious virus in clinical specimens: a single-centre laboratory evaluation study. Lancet Microbe. 2021;2(9):E461–71. pmid:34226893
  145. 145. Pilarowski G, Lebel P, Sunshine S, Liu J, Crawford E, Marquez C, et al. Performance characteristics of a rapid SARS-CoV-2 antigen detection assay at a public plaza testing site in San Francisco. J Infect Dis. 2020;223(7):1139–44. pmid:33173911
  146. 146. Pollock N, Jacobs J, Tran K, Cranston A, Smith S, O’Kane C, et al. Performance and Implementation Evaluation of the Abbott BinaxNOW Rapid Antigen Test in a High-throughput Drive-through Community Testing Site in Massachusetts. J Clin Microbiol. 2021;59(5):e00083–21. pmid:33622768
  147. 147. Pollock NR, Tran K, Jacobs JR, Cranston AE, Smith S, O’Kane CY, et al. Performance and Operational Evaluation of the Access Bio CareStart Rapid Antigen Test in a High-Throughput Drive-Through Community Testing Site in Massachusetts. Open Forum Infect Dis. 2021;8(7):ofab243. pmid:34250188
  148. 148. Porte L, Legarraga P, Iruretagoyena M, Vollrath V, Pizarro G, Munita J, et al. Evaluation of two fluorescence immunoassays for the rapid detection of SARS-CoV-2 antigen—new tool to detect infective COVID-19 patients. PeerJ. 2020;9:e10801.
  149. 149. Porte L, Legarraga P, Vollrath V, Aguilera X, Munita JM, Araos R, et al. Evaluation of a novel antigen-based rapid detection test for the diagnosis of SARS-CoV-2 in respiratory samples. Int J Infect Dis. 2020;99:328–33. pmid:32497809
  150. 150. Qahtani MA, Yang CWT, Lazosky L, Li X, D’Cruz J, Romney MG, et al. SARS-CoV-2 rapid antigen testing for departing passengers at Vancouver international airport. J Travel Med. 2021:taab085. pmid:34046663
  151. 151. Ristić M, Nikolić N, Čabarkapa V, Turkulov V, Petrović V. Validation of the STANDARD Q COVID-19 antigen test in Vojvodina, Serbia. PLoS ONE. 2021;16(2):e0247606. pmid:33617597
  152. 152. Rottenstreich A, Zarbiv G, Kabiri D, Porat S, Sompolinsky Y, Reubinoff B, et al. Rapid antigen detection testing for universal screening for severe acute respiratory syndrome coronavirus 2 in women admitted for delivery. Am J Obstet Gynecol. 2021;224(5):539–40. pmid:33453181
  153. 153. Salvagno GL, Gianfilippi G, Bragantini D, Henry BM, Lippi G. Clinical assessment of the Roche SARS-CoV-2 rapid antigen test. Diagnosis. 2021;8(3):322–6. pmid:33554511
  154. 154. Sberna G, Lalle E, Capobianchi MR, Bordi L, Amendola A. Letter of concern re: “Immunochromatographic test for the detection of SARS-CoV-2 in saliva” [J Infect Chemother. 2021;27(2):384–386]. J Infect Chemother. 2021;27:1129–30. https://doi.org/10.1016/j.jiac.2021.04.003 pmid:33888419
  155. 155. Schildgen V, Demuth S, Lüsebrink J, Schildgen O. Limits and opportunities of SARS-CoV-2 antigen rapid tests–an experience based perspective. Pathogens. 2020;10(1):38. pmid:33466537
  156. 156. Schuit E, Veldhuijzen IK, Venekamp RP, van den Bijllaardt W, Pas SD, Lodder EB, et al. Diagnostic accuracy of rapid antigen tests in asymptomatic and presymptomatic close contacts of individuals with confirmed SARS-CoV-2 infection: cross sectional study. Br Med J. 2021;374:n1676. pmid:34315770
  157. 157. Schwob J-M, Miauton A, Petrovic D, Perdrix J, Senn N, Jaton K, et al. [Diagnosis of Covid-19 in the outpatient setting]. Rev Med Suisse. 2020;17(737):862–5. French.
  158. 158. Scohy A, Anantharajah A, Bodeus M, Kabamba-Mukadi B, Verroken A, Rodriguez-Villalobos H. Low performance of rapid antigen detection test as frontline testing for COVID-19 diagnosis. J Clin Virol. 2020;129:104455. pmid:32485618
  159. 159. Seitz T, Schindler S, Winkelmeyer P, Zach B, Wenisch C, Zoufaly A, et al. Evaluation of rapid antigen tests based on saliva for the detection of SARS-CoV-2. J Med Virol. 2021;93:4161–2. pmid:33788280
  160. 160. Seynaeve Y, Heylen J, Fontaine C, Maclot F, Meex C, Diep AN, et al. Evaluation of Two Rapid Antigenic Tests for the Detection of SARS-CoV-2 in Nasopharyngeal Swabs. J Clin Med. 2021;10(13):2774. pmid:34202731
  161. 161. Shah M, Salvatore P, Ford L, Kamitani E, Whaley M, Mitchell K, et al. Performance of Repeat BinaxNOW SARS-CoV-2 Antigen Testing in a Community Setting, Wisconsin, November-December 2020. Clin Infect Dis. 2021;73:S54–7. pmid:33909068
  162. 162. Shaikh N, Friedlander EJ, Tate PJ, Liu H, Chang CH, Wells A, et al. Performance of a Rapid SARS-CoV-2 Antigen Detection Assay in Symptomatic Children. Pediatrics. 2021;148(3):e2021050832. pmid:34039718
  163. 163. Shaw JLV, Deslandes V, Smith J, Desjardins M. Evaluation of the Abbott Panbio COVID-19 Ag rapid antigen test for the detection of SARS-CoV-2 in asymptomatic Canadians. Diagn Microbiol Infect Dis. 2021;101(4):115514. pmid:34418823
  164. 164. Shidlovskaya E, Kuznetsova N, Divisenko E, Nikiforova M, Siniavin A, Ogarkova D, et al. The Value of Rapid Antigen Tests to Identify Carriers of Viable SARS-CoV-2. medRxiv [Preprint]; published March 12, 2021.
  165. 165. Shrestha B, Neupane A, Pant S, Shrestha A, Bastola A, Rajbhandari B, et al. Sensitivity and specificity of lateral flow antigen test kits for covid-19 in asymptomatic population of quarantine centre of province 3. Kathmandu Univ Med J. 2020;18(2):36–9. pmid:33605236
  166. 166. Smith RD, Johnson JK, Clay C, Girio-Herrera L, Stevens D, Abraham M, et al. Clinical Evaluation of Sofia Rapid Antigen Assay for Detection of Severe Acute Respiratory Syndrome Coronavirus 2 among Emergency Department to Hospital Admissions. Infect Control Hosp Epidemiol. 2021:1–6. pmid:34162449
  167. 167. Šterbenc A, Tomič V, Bidovec Stojković U, Vrankar K, Rozman A, Zidarn M. Usefulness of rapid antigen testing for SARS-CoV-2 screening of healthcare workers: a pilot study. Clin Exp Med. 2021:1–4. pmid:34021827
  168. 168. Stohr J, Zwart VF, Goderski G, Meijer A, Nagel-Imming CRS, Kluytmans-van den Bergh MFQ, et al. Self-testing for the detection of SARS-CoV-2 infection with rapid antigen tests for people with suspected COVID-19 in the community. Clin Microbiol Infect. 2021. pmid:34363945
  169. 169. Stokes W, Berenger BM, Portnoy D, Scott B, Szelewicki J, Singh T, et al. Clinical performance of the Abbott Panbio with nasopharyngeal, throat, and saliva swabs among symptomatic individuals with COVID-19. Eur J Clin Microbiol Infect Dis. 2021;40(8):1721–6. pmid:33742322
  170. 170. Strömer A, Rose R, Schäfer M, Schön F, Vollersen A, Lorentz T, et al. Performance of a Point-of-Care Test for the Rapid Detection of SARS-CoV-2 Antigen. Microorganisms. 2020;9(1). pmid:33379279
  171. 171. Suzuki H, Akashi Y, Ueda A, Kiyasu Y, Takeuchi Y, Maehara Y, et al. Diagnostic performance of a novel digital immunoassay (RapidTesta SARS-CoV-2): a prospective observational study with 1,127 nasopharyngeal samples. medRxiv [Preprint]; published August 04, 2021.
  172. 172. Takeda Y, Mori M, Omi K. SARS-CoV-2 qRT-PCR Ct value distribution in Japan and possible utility of rapid antigen testing kit. medRxiv [Preprint]; published June 19, 2020.
  173. 173. Takeuchi Y, Akashi Y, Kato D, Kuwahara M, Muramatsu S, Ueda A, et al. The evaluation of a newly developed antigen test (QuickNavi-COVID19 Ag) for SARS-CoV-2: A prospective observational study in Japan. J Infect Chemother. 2021;27(6):890–4. pmid:33727026
  174. 174. Takeuchi Y, Akashi Y, Kato D, Kuwahara M, Muramatsu S, Ueda A, et al. Diagnostic performance and characteristics of anterior nasal collection for the SARS-CoV-2 antigen test: a prospective study. Sci Rep. 2021;11(1):10519. pmid:34006975
  175. 175. Terpos E, Ntanasis-Stathopoulos I, Skvarc M. Clinical Application of a New SARS-CoV-2 Antigen Detection Kit (Colloidal Gold) in the Detection of COVID-19. Diagnostics. 2021;11(6):995. pmid:34070844
  176. 176. Thakur P, Saxena S, Manchanda V, Rana N, Goel R, Arora R. Utility of Antigen-Based Rapid Diagnostic Test for Detection of SARS-CoV-2 Virus in Routine Hospital Settings. Lab Med. 2021; Online ahead of print. pmid:33928384
  177. 177. Thell R, Kallab V, Weinhappel W, Mueckstein W, Heschl L, Heschl M, et al. Evaluation of a novel, rapid antigen detection test for the diagnosis of SARS-CoV-2. medRxiv [Preprint]; published April 22, 2021. pmid:34843505
  178. 178. Thirion-Romero I, Guerrero-Zuniga S, Arias-Mendoza A, Cornejo-Juarez DP, Meza-Meneses P, Torres-Erazo DS, et al. Evaluation of a rapid antigen test for SARS-CoV-2 in symptomatic patients and their contacts: a multicenter study. medRxiv [Preprint]; published May 24, 2021. pmid:34678504
  179. 179. Tinker SC, Szablewski CM, Litvintseva AP, Drenzek C, Voccio GE, Hunter MA, et al. Point-of-Care Antigen Test for SARS-CoV-2 in Asymptomatic College Students. Emerg Infect Dis. 2021;27(10). pmid:34399086
  180. 180. Toptan T, Eckermann L, Pfeiffer A, Hoehl S, Ciesek S, Drosten C, et al. Evaluation of a SARS-CoV-2 rapid antigen test: potential to help reduce community spread? J Clin Virol. 2020;135:104713. pmid:33352470
  181. 181. Torres I, Poujois S, Albert E, Álvarez G, Colomina J, Navarro D. Point-of-care evaluation of a rapid antigen test (CLINITEST Rapid COVID-19 Antigen Test) for diagnosis of SARS-CoV-2 infection in symptomatic and asymptomatic individuals. J Infect. 2021;82(5):e11–2. pmid:33587922
  182. 182. Torres I, Poujois S, Albert E, Colomina J, Navarro D. Real-life evaluation of a rapid antigen test (Panbio COVID-19 Ag Rapid Test Device) for SARS-CoV-2 detection in asymptomatic close contacts of COVID-19 patients. Clin Microbiol Infect. 2020;27(4):636.E1-636.E4.
  183. 183. Tsai SC, Lee WS, Chen PY, Hung SH. Real world clinical performance of SARS-CoV-2 rapid antigen tests in suspected COVID-19 cases in Taiwan. J Formos Med Assoc. 2021;120(10):2042–3. pmid:34301424
  184. 184. Turcato G, Zaboli A, Pfeifer N, Ciccariello L, Sibilio S, Tezza G, et al. Clinical application of a rapid antigen test for the detection of SARS-CoV-2 infection in symptomatic and asymptomatic patients evaluated in the emergency department: a preliminary report. J Infect. 2020;82:E14–6. pmid:33347944
  185. 185. Van der Moeren N, Zwart VF, Lodder EB, Van den Bijllaardt W, Van Esch H, Stohr J, et al. Evaluation of the test accuracy of a SARS-CoV-2 rapid antigen test in symptomatic community dwelling individuals in the Netherlands. PLoS ONE. 2021;16(5):e0250886. pmid:33983971
  186. 186. Van Honacker E, Van Vaerenbergh K, Boel A, De Beenhouwer H, Leroux-Roels I, Cattoir L. Comparison of five SARS-CoV-2 rapid antigen detection tests in a hospital setting and performance of one antigen assay in routine practice: a useful tool to guide isolation precautions? J Hosp Infect. 2021;114:144–52. pmid:33785377
  187. 187. Villaverde S, Domínguez-Rodríguez S, Sabrido G, Pérez-Jorge C, Plata M, Romero MP, et al. Diagnostic Accuracy of the Panbio SARS-CoV-2 Antigen Rapid Test Compared with RT-PCR Testing of Nasopharyngeal Samples in the Pediatric Population. J Pediatr. 2021;232:287–289.e4. pmid:33484697
  188. 188. Weitzel T, Legarraga P, Iruretagoyena M, Pizarro G, Vollrath V, Araos R, et al. Comparative evaluation of four rapid SARS-CoV-2 antigen detection tests using universal transport medium. Travel Med Infect Dis. 2020;39:101942. pmid:33278609
  189. 189. Weitzel T, Pérez C, Tapia D, Legarraga P, Porte L. SARS-CoV-2 rapid antigen detection tests. Lancet Infect Dis. 2021;21(8):1067–8. pmid:33961799
  190. 190. Wertenauer C, Michael GB, Dressel A, Pfeifer C, Hauser U, Wieland E, et al. Diagnostic Efficacy of Rapid Antigen Testing for SARS-CoV-2: The COVid-19 AntiGen (COVAG) study. medRxiv [Preprint]; published August 07, 2021.
  191. 191. Yin N, Debuysschere C, Decroly M, Bouazza FZ, Collot V, Martin C, et al. SARS-CoV-2 Diagnostic Tests: Algorithm and Field Evaluation From the Near Patient Testing to the Automated Diagnostic Platform. Front Med. 2021;8:380. pmid:33889587
  192. 192. Young BC, Eyre DW, Jeffery K. Use of lateral flow devices allows rapid triage of patients with SARS-CoV-2 on admission to hospital. J Infect. 2021;82:276–316. pmid:33662408
  193. 193. Young S, Taylor SN, Cammarata CL, Varnado KG, Roger-Dalbert C, Montano A, et al. Clinical evaluation of BD Veritor SARS-CoV-2 point-of-care test performance compared to PCR-based testing and versus the Sofia 2 SARS Antigen point-of-care test. J Clin Microbiol. 2020;59(1). pmid:33023911
  194. 194. Foundation for Innovative New Diagnostics. FIND Evaluation of Bionote, Inc. NowCheck COVID-19 Ag Test. External Report Version 15, 20 April 2021, 2020.
  195. 195. Foundation for Innovative New Diagnostics. FIND Evaluation of RapiGEN Inc. BIOCREDIT COVID-19 Ag. External Report Version 21, 10 December, 2020.
  196. 196. Foundation for Innovative New Diagnostics. FIND Evaluation of SD Biosensor, Inc. STANDARD F COVID-19 Ag FIA. External Report Version 21, 10 December, 2020.
  197. 197. Foundation for Innovative New Diagnostics. FIND Evaluation of SD Biosensor, Inc. STANDARD Q COVID-19 Ag Test. External Report Version 21, 10 December, 2020.
  198. 198. Foundation for Innovative New Diagnostics. FIND Evaluation of Boditech Medical, Inc. iChroma COVID-19 Ag Test. External Report Version 10, 23 February 2021, 2021.
  199. 199. Foundation for Innovative New Diagnostics. FIND Evaluation of Joysbio (Tianjin) Biotechnology Co., Ltd. SARS-CoV-2 Antigen Rapid Test Kit (Colloidal Gold). External Report Version 10, 11 February 2021, 2021.
  200. 200. Foundation for Innovative New Diagnostics. FIND Evaluation of Guangzhou Wondfo Biotech Co., Ltd Wondfo 2019-nCoV Antigen Test (Lateral Flow Method). Public Report Version 10, 25 February 2021, 2021.
  201. 201. Foundation for Innovative New Diagnostics. FIND Evaluation of Abbott Panbio COVID-19 Ag Rapid Test Device (NASAL). External Report Version 10, 11 February 2021, 2021.
  202. 202. Foundation for Innovative New Diagnostics. FIND Evaluation of Bionote, Inc. NowCheck COVID-19 Ag Test, nasal swab. External Report Version 10, 30 March 2021, 2021.
  203. 203. Foundation for Innovative New Diagnostics. FIND Evaluation of Fujirebio Inc. Espline SARS-CoV-2. External Report Version 10, 29 March 2021, 2021.
  204. 204. Foundation for Innovative New Diagnostics. FIND Evaluation of Mologic Ltd, COVID 19 RAPID ANTIGEN TEST. External Report Version 10, 23 April 2021, 2021.
  205. 205. Foundation for Innovative New Diagnostics. FIND Evaluation of NADAL COVID-19 Ag Rapid Test. External Report Version 10, 26 April 2021, 2021.
  206. 206. Foundation for Innovative New Diagnostics. FIND Evaluation of Acon Biotech (Hangzhou) Co. Ltd; Flowflex SARS-CoV-2 Antigen Rapid Test. External Report Version 10, 9 June 2021, 2021.
  207. 207. Foundation for Innovative New Diagnostics. FIND Evaluation of Edinburgh Genetics; ActivXpress+ COVID-19 Antigen Complete Testing Kit. External Report Version 10, 26 April 2021, 2021.
  208. 208. Foundation for Innovative New Diagnostics. FIND Evaluation of Green Cross Medical Sciences Corp.; Genedia W COVID-19 Ag. External Report Version 10, 25 April 2021, 2021.
  209. 209. Foundation for Innovative New Diagnostics. FIND Evaluation of Hotgen; Novel Coronavirus 2019-nCoV Antigen Test (Colloidal Gold). External Report Version 20, [15 September 2021], 2021.
  210. 210. Foundation for Innovative New Diagnostics. FIND Evaluation of Abbott; Panbio COVID-19 Ag Rapid Test Device. Country Specific External Report Version 10, 28 April 2021, 2021.
  211. 211. Foundation for Innovative New Diagnostics. FIND Evaluation of Premier Medical Corporation Pvt. Ltd; Sure Status COVID-19 Antigen Card Test. External Report Version 11, 18 August 2021, 2021.
  212. 212. Foundation for Innovative New Diagnostics. FIND Evaluation of SD Bionsensor, Inc.; STANDARD Q COVID-19 Ag Test. External Report (Continue from V21) Version 1, 22 April 2021, 2021.
  213. 213. Foundation for Innovative New Diagnostics. FIND Evaluation of SD Biosensor, Inc.; STANDARD Q COVID-19 Ag Test, nasal swab. External Report Version 20, 12 April 2021, 2021.
  214. 214. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Br Med J. 2021;372:n71. pmid:33782057
  215. 215. Callahan C, Lee RA, Lee GR, Zulauf K, Kirby JE, Arnaout R. Nasal Swab Performance by Collection Timing, Procedure, and Method of Transport for Patients with SARS-CoV-2. J Clin Microbiol. 2021;59(9):e0056921. pmid:34076471
  216. 216. Lindner A, Nikolai O, Rohardt C, Kausch F, Wintel M, Gertler M, et al. SARS-CoV-2 patient self-testing with an antigen-detecting rapid test: a head-to-head comparison with professional testing. medRxiv [Preprint]; published January 08, 2021.
  217. 217. Deerain J, Druce J, Tran T, Batty M, Yoga Y, Fennell M, et al. Assessment of the analytical sensitivity of ten lateral flow devices against the SARS-CoV-2 omicron variant. J Clin Microbiol. 2021:jcm0247921. pmid:34936477
  218. 218. Bekliz M, Adea K, Alvarez C, Essaidi-Laziosi M, Escadafal C, Kaiser L, et al. Analytical sensitivity of seven SARS-CoV-2 antigen-detecting rapid tests for Omicron variant. medRxiv [Preprint]; published December 22, 2021.
  219. 219. Bayart J-L, Degosserie J, Favresse J, Gillot C, Didembourg M, Djokoto HP, et al. Analytical Sensitivity of Six SARS-CoV-2 Rapid Antigen Tests for Omicron versus Delta Variant. Viruses. 2022;14(4):654. pmid:35458384
  220. 220. de Michelena P, Torres I, Ramos-García Á, Gozalbes V, Ruiz N, Sanmartín A, et al. Real-life performance of a COVID-19 rapid antigen detection test targeting the SARS-CoV-2 nucleoprotein for diagnosis of COVID-19 due to the Omicron variant. J Infect. 2022; Online ahead of print. pmid:35217106
  221. 221. Kissler SM, Fauver JR, Mack C, Tai CG, Breban MI, Watkins AE, et al. Viral Dynamics of SARS-CoV-2 Variants in Vaccinated and Unvaccinated Persons. N Engl J Med. 2021;385(26):2489–91. pmid:34941024
  222. 222. Hay JA, Kissler SM, Fauver JR, Mack C, Tai CG, Samant RM, et al. Viral dynamics and duration of PCR positivity of the SARS-CoV-2 Omicron variant. medRxiv [Preprint]; published January 14, 2022.
  223. 223. World Health Organization. Use of SARS-CoV-2 antigen-detection rapid diagnostic tests for COVID-19 self-testing. 2022.
  224. 224. Adamson B, Sikka R, Wyllie AL, Premsrirut P. Discordant SARS-CoV-2 PCR and Rapid Antigen Test Results When Infectious: A December 2021 Occupational Case Series. medRxiv [Preprint]; published January 05, 2022.
  225. 225. Marais G, Hsiao N-y, Iranzadeh A, Doolabh D, Enoch A, Chu C-y, et al. Saliva swabs are the preferred sample for Omicron detection. medRxiv [Preprint]; published December 24, 2021.
  226. 226. Schrom J, Marquez C, Pilarowski G, Wang G, Mitchell A, Puccinelli R, et al. Direct Comparison of SARS-CoV-2 Nasal RT-PCR and Rapid Antigen Test (BinaxNOW™) at a Community Testing Site During an Omicron Surge. medRxiv [Preprint]; published January 12, 2022.
  227. 227. Audigé A, Böni J, Schreiber PW, Scheier T, Buonomano R, Rudiger A, et al. Reduced Relative Sensitivity of the Elecsys SARS-CoV-2 Antigen Assay in Saliva Compared to Nasopharyngeal Swabs. Microorganisms. 2021;9(8):1700. pmid:34442779
  228. 228. Dye C, Bartolomeos K, Moorthy V, Kieny MP. Data sharing in public health emergencies: a call to researchers. Bull World Health Organ. 2016;94(3):158. pmid:26966322
  229. 229. Modjarrad K, Moorthy VS, Millett P, Gsell PS, Roth C, Kieny MP. Developing Global Norms for Sharing Data and Results during Public Health Emergencies. PLoS Med. 2016;13(1):e1001935. pmid:26731342
  230. 230. Arnaout R, Lee RA, Lee GR, Callahan C, Cheng A, Yen CF, et al. The Limit of Detection Matters: The Case for Benchmarking Severe Acute Respiratory Syndrome Coronavirus 2 Testing. Clin Infect Dis. 2021;73(9):e3042–6. pmid:33532847
  231. 231. Mina MJ, Parker R, Larremore DB. Rethinking Covid-19 Test Sensitivity—A Strategy for Containment. N Engl J Med. 2020;383(22). pmid:32997903
  232. 232. Kirby JE, Riedel S, Dutta S, Arnaout R, Cheng A, Ditelberg S, et al. SARS-CoV-2 Antigen Tests Predict Infectivity Based on Viral Culture: Comparison of Antigen, PCR Viral Load, and Viral Culture Testing on a Large Sample Cohort. medRxiv [Preprint]; published December 23, 2021.