Reader Comments


Reporting confounding

Posted by plosmedicine on 31 Mar 2009 at 00:16 GMT

Author: Robert Platt
Position: Associate Professor
Institution: McGill University
Additional Authors: Ian Shrier
Submitted Date: November 20, 2007
Published Date: November 21, 2007
This comment was originally posted as a “Reader Response” on the publication date indicated above. All Reader Responses are now available as comments.

We applaud Vandenbroucke et al. (PLoS Med 4(10): e297. doi:10.1371/journal.pmed.0040297) and the STROBE statement for proposing standardized reporting methods for observational studies. The wide range of designs and methods in observational studies makes standardization even more important than it is for randomized trials, where similar guidelines (CONSORT) already exist.

The STROBE group requested feedback, and we therefore call attention to Table 1 (point 16), where the guidelines recommend that authors “Make clear which confounders were adjusted for and why they were included.” We believe this section should be expanded. For example, commonly used criteria for deciding whether a variable is a potential confounder are that the variable is 1) associated with the exposure, 2) associated with the outcome, and 3) changes the effect estimate when included in the model. However, authors and readers need to understand two important, related exceptions. First, including a putative confounder that satisfies these criteria may introduce bias rather than eliminate it if the variable is affected by the exposure (whether or not it lies on the causal pathway). Second, conditioning on a variable that is caused by two other variables creates a conditional association between those two causes. This can make a variable appear to be a confounder when it is not; again, including the variable in the model can introduce bias (commonly called selection bias) rather than eliminate it (1-3).
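
The two exceptions can be illustrated with a small simulation (a minimal sketch; the variable names, effect sizes, and linear models are our own illustrative choices, not drawn from any of the studies cited):

```python
import numpy as np

# Hypothetical linear data-generating processes, chosen only to make
# the biases visible; effect sizes are arbitrary.
rng = np.random.default_rng(0)
n = 100_000

def coef_on_first(y, *covs):
    """OLS coefficient on the first covariate (with an intercept)."""
    X = np.column_stack([np.ones(len(y)), *covs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Exception 1: M is affected by the exposure E (here, a mediator).
E = rng.normal(size=n)
M = E + rng.normal(size=n)
Y = 0.3 * E + 0.4 * M + rng.normal(size=n)  # total effect of E on Y = 0.7
total = coef_on_first(Y, E)                 # ~0.7: the total causal effect
direct_only = coef_on_first(Y, E, M)        # ~0.3: adjusting for M removes
                                            #       the indirect effect

# Exception 2: C is a collider, caused by both the exposure and the outcome.
# C satisfies all three textbook criteria, yet adjusting for it biases the
# estimate (here it even flips the sign).
E2 = rng.normal(size=n)
Y2 = 0.5 * E2 + rng.normal(size=n)          # true effect of E2 on Y2 = 0.5
C = E2 + Y2 + rng.normal(size=n)
unadjusted = coef_on_first(Y2, E2)          # ~0.5: unbiased
adjusted = coef_on_first(Y2, E2, C)         # ~-0.25: selection bias introduced
```

In the second scenario the collider is associated with exposure and outcome and changes the estimate when included, so the usual criteria would admit it; only the assumed causal structure reveals that it should be excluded.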

Determining the variables necessary to obtain an unbiased estimate requires assumptions about the causal structure of the problem; statistical methods alone cannot distinguish between a confounding variable and one that introduces selection bias. Therefore, we believe the STROBE recommendations should explicitly ask authors to make all of their assumptions about causal relationships transparent when they report their methods. Causal diagrams (encoded through directed acyclic graphs, or DAGs) are a convenient way to describe these assumptions. Two excellent examples of the potential benefits of causal diagrams are the biased estimates that occur when 1) CD4 counts are included as a covariate for the effects of HIV treatment (4), and 2) birth weight is included as a covariate in studies examining perinatal mortality (a common but incorrect practice) (5).
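
As a sketch of how such assumptions might be reported explicitly, a DAG can be written down as a plain edge list. The structure below is a simplified, hypothetical version of the birth-weight example (the variable names and edges are our illustrative assumptions, not a reproduction of the published diagram in reference 5):

```python
# Assumed causal structure: smoking and an unmeasured birth defect both
# affect birth weight, and all three affect mortality. Edges point from
# cause to effect.
dag = {
    "smoking":      ["birth_weight", "mortality"],
    "defect":       ["birth_weight", "mortality"],
    "birth_weight": ["mortality"],
    "mortality":    [],
}

def parents(node, dag):
    """Variables with a directed edge into `node`."""
    return sorted(p for p, children in dag.items() if node in children)

# Birth weight has two parents (smoking and the defect), so it is a
# collider on the path smoking -> birth_weight <- defect; adjusting for
# it opens a non-causal path between smoking and mortality.
bw_parents = parents("birth_weight", dag)
```

Publishing even this much structure lets a reader check, variable by variable, whether each adjustment is justified under the stated assumptions.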

In summary, reporting causal assumptions would allow the reader to better evaluate whether the underlying assumptions of the statistical model were appropriate, and would minimize the inadvertent introduction of selection bias by researchers who include all covariates associated with outcome and exposure regardless of the causal structure.


(1) Pearl J. Causality. Cambridge: Cambridge University Press, 2007.

(2) Weinberg CR. Toward a clearer definition of confounding. Am J Epidemiol 1993; 137(1):1-8.

(3) Hernan MA, Hernandez-Diaz S, Robins JM. A structural approach to selection bias. Epidemiology 2004; 15(5):615-625.

(4) Hernan MA, Brumback B, Robins JM. Marginal structural models to estimate the causal effect of zidovudine on the survival of HIV-positive men. Epidemiology 2000; 11(5):561-570.

(5) Hernandez-Diaz S, Schisterman EF, Hernan MA. The birth weight "paradox" uncovered? Am J Epidemiol 2006; 164(11):1115-1120.

No competing interests declared.