What type of review is needed?
Posted by Vandenbroucke on 18 Oct 2010 at 16:23 GMT
The paper by Bastian et al. [1] contains the seeds of another debate: what type of systematic review is needed to decide which study should be done next? Should the review aim at helping medical doctors decide which treatment is best for their patients, or should it be a summary of studies with one type of design (RCTs)? In a decision-oriented review, RCTs that use an obsolete comparison treatment or placebo might be discarded when a reasonable standard treatment is available. If no RCTs with a relevant comparator exist, and if it is unlikely that the relevant comparison will ever be done, the review might attempt network meta-analysis, or might search for observational comparative studies of effectiveness in large databases, provided the circumstances make this possible [2]. Given newer insights, the right circumstances in which databases might answer some therapeutic questions may be less utopian than Bastian et al. seem to think in their response to Richard Smith [3].
In the decision to prefer one treatment over another, adverse effects may play a large role. Contrary to what is stated by Bastian et al., and repeated in their response to Richard Smith [3], RCTs fail to be informative about adverse effects not only when these effects are rare or occur late: they also commonly fail because they study selected populations and because interventions are carried out in extraordinary circumstances. Observational studies of adverse effects examine the real-life application of the intervention and may therefore lead to substantially different estimates of common adverse effects, for instance when the patient population enrolled in the trials was screened for risks of adverse effects, when concomitant medications that may trigger an adverse effect were ruled out, or when special skills must be learned to apply the intervention. Thus, a systematic review aimed at making decisions needs the best evidence about adverse effects, which will often be observational [4].
A decision-oriented approach is the reason why an organization like NICE prefers to look for the best possible evidence, including observational data and insight into the mechanism of action of the interventions [5]. Likewise, AHRQ states that inclusion of observational data will almost always be necessary to study adverse effects, and adds that the potential for confounding by indication is much smaller for adverse effects, which obviates the need for randomization [6]. These views are echoed in a recent Institute of Medicine letter on ethical issues in studying the safety of approved drugs [7]. Such insights may have evolved from a paper, written with two of the present authors, that was critical of the usual 'hierarchy' of study designs [8].
The current procedures of the Cochrane Collaboration are best suited to the second type of review: a compilation of RCTs, as also advocated by Bastian et al., who see the inclusion of observational studies as a distraction. In a commentary on the status of Cochrane in the US, the current Cochrane Editor-in-Chief cites the opinions of US medical researchers: that current practices may place too much emphasis on methodology and that newer, more sophisticated skills are needed to produce reviews that are relevant for Comparative Effectiveness Research [9].
Jan P Vandenbroucke, MD, PhD
Department of Clinical Epidemiology
Leiden University Medical Center, The Netherlands
1. Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010;7(9):e1000326.
2. Schneeweiss S, Patrick AR, Stürmer T, Brookhart MA, Avorn J, Maclure M, Rothman KJ, Glynn RJ. Increasing levels of restriction in pharmacoepidemiologic database studies of elderly and comparison with randomized trial results. Med Care. 2007;45(10 Suppl 2):S131-42.
3. Bastian H. Authors' (Canute?) response to: Smith R. Too Canute like? Comment on: Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010;7(9):e1000326.
4. Vandenbroucke JP, Psaty BM. Benefits and risks of drug treatments: how to combine the best evidence on benefits with the best data about adverse effects. JAMA. 2008;300:2417-9.
5. Rawlins M. De Testimonio: on the evidence for decisions about the use of therapeutic interventions. Lancet. 2008;372:2152-61.
6. Agency for Healthcare Research and Quality. Methods Reference Guide for Effectiveness and Comparative Effectiveness Reviews, Version 1.0 [draft posted October 2007]. Rockville, MD: Agency for Healthcare Research and Quality; 2007. http://effectivehealthcar....
7. Institute of Medicine. Ethical and Scientific Issues in Studying the Safety of Approved Drugs. 2010. http://www.nap.edu/catalo...
8. Glasziou P, Vandenbroucke JP, Chalmers I. Assessing the quality of research. BMJ. 2004;328:39-41. Erratum in: BMJ. 2004;329:621.
9. Tovey D, Dellavalle R. Cochrane in the United States of America. The Cochrane Collaboration. http://www.thecochranelib...