More evidence on why we need radical reform of science publishing
Posted by plosmedicine on 31 Mar 2009 at 00:30 GMT
Author: Richard Smith
Position: Chief Executive of UnitedHealth Europe
Location: London, United Kingdom
Submitted Date: October 7, 2008
Published Date: October 7, 2008
This comment was originally posted as a “Reader Response” on the publication date indicated above. All Reader Responses are now available as comments.
<i>PLoS Medicine</i> invited Richard Smith, former editor of the BMJ and on the
Board of Directors at PLoS [ http://www.plos.org/about... ], to discuss an essay published this week by Neal Young, John Ioannidis and Omar Al-Ubaydli [ http://medicine.plosjourn... ] that argues that the current system of publication in biomedical research provides a distorted view of the reality of scientific data.
<b>More evidence on why we need radical reform of science publishing</b>, <i>Richard Smith</i>
Ask scientists whether they'd prefer an all-expenses-paid fortnight in the best hotel in Saint-Tropez, a Ferrari, a Cézanne painting, or the publication of one of their original papers in <i>Nature</i> - and most, I'd bet, would go for <i>Nature</i>. Getting published in one of the few elite journals is a very big deal for researchers, but, argues a stimulating paper published in <i>PLoS Medicine</i> [ http://medicine.plosjourn... ], (1) the fact that it is so important is distorting science. And I think that the authors are right.
Unusually for a scientific publication, Neal Young, John Ioannidis, and Omar Al-Ubaydli use economic concepts to make their case, and by doing so they illustrate the value of crossing disciplinary boundaries. Their argument is built around "the winner's curse." Imagine many firms competing for a television franchise. Each will try to work out the value of the franchise, and inevitably there will be a range of bids. If the franchise is simply awarded to the highest bidder then there's a high chance that that bid is too high, meaning that the winner will lose money — hence "the winner's curse." Those who run such auctions often recognise the problem of the curse and discount the highest bid or go for a lower bid.
This phenomenon operates in science publishing because the elite journals that accept only a fraction of papers submitted to them go for the "best" and are thus likely to be publishing papers that are suffering from the winner's curse — for example, in that they give dramatic results that are a considerable distance from the “true” results. They are exciting outliers — and so very attractive to the elite journals. The articles that the high impact journals publish are bound to be atypical and will present a distorted view of science, leading to false conclusions and "misallocation of resources."
The authors have some empirical evidence to support their argument. A study [ http://jama.ama-assn.org/... ] of the 49 most highly cited papers on medical interventions published in high profile journals between 1990 and 2004 showed that a quarter of the randomised trials and five of six non-randomised studies had been contradicted or found to be exaggerated by 2005. (2) We know too that "positive" drug trials are much more likely to be published than "negative" trials, although we don't know how much this is the result of conscious manipulation by authors and sponsors and how much the result of "the winner's curse." (3-5)
Most scientists read a few high profile journals — and so are fed a systematically distorted view of the evidence. It's also these journals that are most widely reported in the media and fed to policy makers, so increasing the impact of the distortion.
The hope of many is, of course, that the elite journals are selecting "the best" research — hence providing a way of coping with information overload. But we know from good evidence that peer review is a deeply flawed system and that it's very hard to know what will be important in the long term. So readers of <i>Nature</i>, <i>Science</i>, and the <i>New England Journal of Medicine</i> are not reading "the best" but the “systematically distorted.”
What might we do about this problem? Young and others suggest a range of options, including preferring publication of negative over positive results — a version of those choosing among bids discounting the highest. It's hard to see, however, how building such an explicit bias into the system would be helpful. Better might be for editors to pay no attention to whether the results are positive or negative but rather to concentrate simply on the importance of the question being asked and the rigour of the methods. We tried to do this when I was editor of the BMJ, but I'm not sure how successful we were. Inevitably you are excited by an unusual result, and the winner's curse can surely operate not only in relation to whether the results are positive or negative but also in relation to the “importance” of the question.
For me this paper simply adds to the growing evidence and argument that we need radical reform of how we publish science. I foresee rapid publication of studies that include full datasets and the software used to manipulate them without prepublication peer review onto a large open access database that can be searched and mined. Instead of a few studies receiving disproportionate attention we will depend more on the systematic reviews that will be updated rapidly (and perhaps automatically) as new results appear.
1. Young NS, Ioannidis JPA, Al-Ubaydli O (2008) Why Current Publication Practices May Distort Science. PLoS Med 5(10): e201 doi:10.1371/journal.pmed.0050201. Find this paper online [ http://medicine.plosjourn... ].
2. Ioannidis JPA (2005) Contradicted and initially stronger effects in highly cited clinical research. JAMA 294(2): 218-228. Find this paper online [ http://jama.ama-assn.org/... ].
3. Lee K, Bacchetti P, Sim I (2008) Publication of Clinical Trials Supporting Successful New Drug Applications: A Literature Analysis. PLoS Med 5(9): e191 doi:10.1371/journal.pmed.0050191. Find this paper online [ http://medicine.plosjourn... ].
4. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R (2008) Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy. N Engl J Med 358: 252-260. Find this paper online [ http://content.nejm.org/c... ].
5. Melander H, Ahlqvist-Rastad J, Meijer G, Beermann B (2003) Evidence b(i)ased medicine—selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications. BMJ 326: 1171-1173 doi:10.1136/bmj.326.7400.1171. Find this paper online [ http://www.bmj.com/cgi/co... ].