Reader Comments

Commentary on: “A Meta-Analysis of Typhoid Diagnostic Accuracy Studies: A Recommendation to Adopt a Standardized Composite Reference”

Posted by MvanSmeden on 13 Jan 2016 at 12:39 GMT

Authors: Maarten van Smeden, Johannes B Reitsma, Ian Schiller, Nandini Dendukuri

Storey et al. (1) recently presented a thorough systematic review of diagnostic accuracy studies for detecting Typhoid, followed by a meta-analysis. They found that, due to the absence of an agreed-upon ‘gold’ reference standard for Typhoid, a variety of different reference standards have been used. They concluded that the use of different reference standards makes it difficult to compare results across studies and may explain the high level of heterogeneity in accuracy estimates across the reviewed studies (1). To remove this heterogeneity, the authors recommend adoption of a standardized composite reference standard (CRS) for future studies. They argue that a standardized CRS will lead to identification of better reference tests and improved confidence in Typhoid prevalence estimates. A standardized CRS requires development of an agreed-upon classification rule relying on the results of two or more ‘component’ diagnostic tests for making a final Typhoid diagnosis. The authors recommend that multiple studies in diverse regions should then be carried out with this CRS.

We would like to point out that the recommendation of Storey et al. (1) is in conflict with their own simulation examples. The authors presented numerical examples showing that substantial bias in estimates of sensitivity and specificity can arise when applying a CRS, and that the amount of bias depends on the true Typhoid prevalence and on possible conditional dependence between the tests. This means that when the same index test is evaluated against a standardized CRS in settings with different prevalence, different estimates of sensitivity and specificity will be obtained. Thus, the heterogeneity that the authors seek to remove will in fact remain a problem, particularly since they report that the prevalence of Typhoid varies considerably across settings, from 2% to 70% (see Table S4 of their paper).
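
To make this concrete, below is a minimal sketch of our own (not taken from either paper): a hypothetical index test is evaluated against an "any-positive" (OR-rule) CRS built from two imperfect component tests, with all tests assumed conditionally independent given true disease status and all accuracy and prevalence values chosen purely for illustration.

# Minimal sketch (hypothetical values, conditional independence assumed):
# apparent accuracy of an index test judged against an "any-positive"
# (OR-rule) composite reference standard built from two imperfect components.

def apparent_accuracy(prev, se_t=0.80, sp_t=0.95,
                      se_c=(0.60, 0.70), sp_c=(0.98, 0.95)):
    # OR-rule CRS: misses disease only if both components miss it;
    # falsely positive if either component is falsely positive.
    se_crs = 1.0 - (1.0 - se_c[0]) * (1.0 - se_c[1])
    sp_crs = sp_c[0] * sp_c[1]
    # Joint probabilities of index test and CRS results, mixing over
    # true disease status.
    p_pos_pos = prev * se_t * se_crs + (1 - prev) * (1 - sp_t) * (1 - sp_crs)
    p_crs_pos = prev * se_crs + (1 - prev) * (1 - sp_crs)
    p_neg_neg = prev * (1 - se_t) * (1 - se_crs) + (1 - prev) * sp_t * sp_crs
    return p_pos_pos / p_crs_pos, p_neg_neg / (1.0 - p_crs_pos)

for prev in (0.02, 0.20, 0.70):  # spans the prevalence range in Table S4
    se_hat, sp_hat = apparent_accuracy(prev)
    print(f"prevalence {prev:.2f}: apparent Se {se_hat:.2f}, apparent Sp {sp_hat:.2f}")

With the index test's true sensitivity fixed at 0.80 and specificity at 0.95, the values recovered against the CRS differ from the truth and shift as prevalence moves across the range, which is exactly the residual heterogeneity described above.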

The results of the numerical examples presented by Storey et al. (1) are in agreement with our recent study in Statistics in Medicine (2). In addition to their observations, we found that the bias in estimates of index test sensitivity, index test specificity and disease prevalence when using a CRS also depends on the number of component tests included in the CRS and on the classification rule that defines it. Interestingly, the bias due to the CRS can worsen with every added component test.
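
Continuing the hypothetical sketch above (again our own illustration, with made-up accuracy values), adding further imperfect components to an OR-rule CRS lowers the specificity of the composite and, in this example, pushes the apparent sensitivity of the index test progressively further from its true value:

# Self-contained continuation of the sketch above (hypothetical values).
from math import prod

def crs_or_rule(se_comps, sp_comps):
    # "Any-positive" composite: misses disease only if every component
    # misses it; falsely positive if any component is falsely positive.
    return 1.0 - prod(1.0 - se for se in se_comps), prod(sp_comps)

def apparent_se(prev, se_t, sp_t, se_crs, sp_crs):
    # P(index test positive | CRS positive), conditional independence assumed
    num = prev * se_t * se_crs + (1 - prev) * (1 - sp_t) * (1 - sp_crs)
    return num / (prev * se_crs + (1 - prev) * (1 - sp_crs))

prev, se_t, sp_t = 0.10, 0.80, 0.95          # hypothetical index test and setting
se_pool = [0.60, 0.70, 0.65, 0.55]           # hypothetical component tests
sp_pool = [0.98, 0.95, 0.96, 0.97]
for k in range(1, 5):
    se_crs, sp_crs = crs_or_rule(se_pool[:k], sp_pool[:k])
    print(f"{k} component(s): apparent Se "
          f"{apparent_se(prev, se_t, sp_t, se_crs, sp_crs):.2f} vs true Se {se_t:.2f}")

With these made-up values the apparent sensitivity drifts further from 0.80 with every component added; under a different rule, for example one requiring all components to be positive, the composite's weakness shifts from specificity to sensitivity, which is why the classification rule itself also matters.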

Storey et al. (1) draw attention to the important problem of the lack of a gold standard diagnostic test for the detection of Typhoid fever. A move towards using the same panel of reference tests in different settings would indeed increase comparability between studies. However, collapsing the results of multiple reference tests into a standardized, dichotomous CRS would, in our view, waste much of the information gathered. Statistical models that recognize the imperfect nature of the different tests and account for the conditional dependencies between them would be more appropriate. Such latent class models have been suggested both for individual accuracy studies (3,4) and for meta-analyses of accuracy studies (5,6). Future work should examine what combination of standard Typhoid tests and what sample size are required to provide a statistically robust model that can improve our understanding of the accuracy of Typhoid diagnostic tests and of Typhoid prevalence.
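
To indicate what such an approach involves, the sketch below (our own simplified illustration, not the models of references 3-6) fits a two-class latent class model to simulated results of several imperfect tests by expectation-maximization, here under the simplifying assumption of conditional independence given true disease status; the models cited above are precisely the ones that relax this assumption or extend it to the meta-analytic setting.

# Minimal sketch (simulated data, conditional independence assumed):
# two-class latent class model for several binary tests, fitted by EM.
# At least three tests are needed for this simple model to be identifiable;
# label switching (swapping the two latent classes) is ignored for brevity.
import numpy as np

def fit_latent_class(y, n_iter=500, tol=1e-8, seed=0):
    """y: (n_subjects, n_tests) array of 0/1 test results."""
    rng = np.random.default_rng(seed)
    n, k = y.shape
    prev = 0.5
    se = rng.uniform(0.6, 0.9, size=k)        # starting values
    sp = rng.uniform(0.6, 0.9, size=k)
    loglik_old = -np.inf
    for _ in range(n_iter):
        # E-step: posterior probability that each subject is diseased
        f_dis = prev * np.prod(se**y * (1 - se)**(1 - y), axis=1)
        f_non = (1 - prev) * np.prod((1 - sp)**y * sp**(1 - y), axis=1)
        w = f_dis / (f_dis + f_non)
        # M-step: update prevalence, sensitivities and specificities
        prev = w.mean()
        se = (w[:, None] * y).sum(axis=0) / w.sum()
        sp = ((1 - w)[:, None] * (1 - y)).sum(axis=0) / (1 - w).sum()
        loglik = np.log(f_dis + f_non).sum()
        if loglik - loglik_old < tol:
            break
        loglik_old = loglik
    return prev, se, sp

# Hypothetical example: 2000 subjects, 4 imperfect tests, 30% prevalence
rng = np.random.default_rng(1)
d = rng.random(2000) < 0.30
true_se = np.array([0.80, 0.70, 0.60, 0.75])
true_sp = np.array([0.95, 0.90, 0.98, 0.92])
y = np.where(d[:, None], rng.random((2000, 4)) < true_se,
             rng.random((2000, 4)) > true_sp).astype(int)
print(fit_latent_class(y))

The point of the sketch is only that a latent class approach estimates prevalence and every test's sensitivity and specificity jointly, instead of forcing the reference information into a dichotomous composite; the cited models handle conditional dependence and between-study heterogeneity more carefully than this illustration does.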

References
1. Storey HL, Huang Y, Crudder C, Golden A, de los Santos T, Hawkins K. A Meta-Analysis of Typhoid Diagnostic Accuracy Studies: A Recommendation to Adopt a Standardized Composite Reference. PLoS One. 2015;10(11):e0142364.
2. Schiller I, van Smeden M, Hadgu A, Libman M, Reitsma JB, Dendukuri N. Bias due to composite reference standards in diagnostic accuracy studies. Stat Med. 2015.
3. Collins J, Huynh M. Estimation of diagnostic test accuracy without full verification: a review of latent class methods. Stat Med. 2014;33(24):4141–4169.
4. van Smeden M, Naaktgeboren CA, Reitsma JB, Moons KGM, de Groot JAH. Latent class models in diagnostic studies when there is no reference standard - a systematic review. Am J Epidemiol. 2014;179(4):423–431.
5. Dendukuri N, Schiller I, Joseph L, Pai M. Bayesian Meta-Analysis of the Accuracy of a Test for Tuberculous Pleuritis in the Absence of a Gold Standard Reference. Biometrics. 2012;68:1285–1293.
6. Eusebi P, Reitsma JB, Vermunt JK. Latent Class Bivariate Model for the Meta-Analysis of Diagnostic Test Accuracy Studies. BMC Med Res Methodol. 2014;14:88.

No competing interests declared.