Reader Comments

Author's reply to Goodman and Greenland

Posted by plosmedicine on 31 Mar 2009 at 00:09 GMT

Author: John Ioannidis
Position: Professor and Chairman
Institution: Dept. of Hygiene and Epidemiology, University of Ioannina School of Medicine
E-mail: jioannid@cc.uoi.gr
Submitted Date: May 10, 2007
Published Date: May 10, 2007
This comment was originally posted as a “Reader Response” on the publication date indicated above. All Reader Responses are now available as comments.

I thank Goodman and Greenland for their interesting comments. Our methods and results are practically identical. However, some of my arguments are misrepresented.

1. I did not “claim that no study or combination of studies can ever provide convincing evidence”. In the illustrative examples (Table 4) there is a wide credibility gradient (0.1% to 85%) for different research designs and settings.

2. I did not assume that all significant p-values are around 0.05. Tables 1-3 and the corresponding PPV equations apply to any significance threshold (alpha). Nevertheless, the p=0.05 threshold is unfortunately entrenched in many scientific fields. Almost half of the “positive” findings in recent observational studies have p-values of 0.01-0.05 [1,2]; most “positive” trials and meta-analyses also have modest p-values.

3. I provided equations for calculating the credibility (positive predictive value, PPV) of research findings with or without bias; a brief numerical sketch of these equations follows the numbered points below. Even without any bias, PPV probably remains below 0.50 for most non-randomized, non-large-scale circumstances. Large trials and meta-analyses represent a minority of the literature.

4. Figure 1 shows that bias can indeed make a difference. The proposed modelling has an additional useful feature: as type I and II errors decrease, PPV approaches a maximum of PPV_max = 1 - u/(R + u) = R/(R + u), meaning that for a research finding to become more than 50% credible, bias must first be reduced below the pre-study odds of truth (u < R); the second sketch below illustrates this limit. Numerous studies demonstrate the strong presence of bias across research designs; indicative reference lists appear in [3-5]. We should understand bias and minimize it, not ignore it.

5. “Hot fields”: Table 3 and Figure 2 present “the probability that at least one study, among several done on the same question, claims a statistically significant finding”. They are not erroneous. Fields with many furtive competing teams may adopt significance-chasing behaviours, selectively highlighting “positive” results; the third sketch below illustrates how credibility erodes in this scenario. Conversely, when many teams make all their results transparently available and integrate data across teams, genuine progress follows. We need replication, not just discovery [3].

6. The claim by two leading Bayesian methodologists that a Bayesian approach is somewhat circular and questionable contradicts Greenland’s own writings: “One misconception (of many) about Bayesian analyses is that prior distributions introduce assumptions that are more questionable than assumptions made by frequentist methods.” [6]

7. Empirical data on the refutation rates for various research designs agree with the estimates obtained in the proposed modelling [7], not with estimates ignoring bias. Additional empirical research on these fronts would be very useful.
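
A minimal numerical sketch of the no-bias equation referred to in point 3: it evaluates PPV = (1 - beta)R / (R - beta*R + alpha) from the original article for two scenarios. The function name and the specific values of R (pre-study odds), alpha, and power are illustrative assumptions, not estimates taken from the article.

```python
# Sketch for point 3: PPV of a claimed "positive" finding in the absence of bias,
# using the equation from the original article: PPV = (1 - beta) * R / (R - beta * R + alpha).
def ppv_no_bias(R, alpha=0.05, beta=0.20):
    """R: pre-study odds that the probed relationship is true;
    alpha: type I error rate; beta: type II error rate (power = 1 - beta)."""
    return (1 - beta) * R / (R - beta * R + alpha)

# Illustrative (assumed) scenarios:
print(ppv_no_bias(R=1.0, beta=0.20))  # well-powered confirmatory setting -> ~0.94
print(ppv_no_bias(R=0.1, beta=0.80))  # underpowered exploratory setting  -> ~0.29
```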
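A second sketch, for point 4: the with-bias equation from the original article, PPV = ([1 - beta]R + u*beta*R) / (R + alpha - beta*R + u - u*alpha + u*beta*R), and its limit PPV_max = R/(R + u) as alpha and beta shrink. The values of R and u below are again illustrative assumptions.

```python
# Sketch for point 4: PPV in the presence of bias u, per the equation in the
# original article, and its limit PPV_max = R / (R + u) as alpha, beta -> 0.
def ppv_with_bias(R, u, alpha=0.05, beta=0.20):
    num = (1 - beta) * R + u * beta * R
    den = R + alpha - beta * R + u - u * alpha + u * beta * R
    return num / den

R, u = 0.2, 0.1                                    # assumed pre-study odds and bias
print(ppv_with_bias(R, u))                         # ~0.53 with alpha=0.05, beta=0.20
print(ppv_with_bias(R, u, alpha=1e-9, beta=1e-9))  # approaches R / (R + u)
print(R / (R + u))                                 # PPV_max ~0.67; exceeds 0.5 only when u < R
```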
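A third sketch, for point 5: the multiple-teams equation from the original article, PPV = R(1 - beta^n) / (R + 1 - [1 - alpha]^n - R*beta^n), giving the credibility of a finding claimed because at least one of n independent teams probing the same question reached significance. The chosen R is an illustrative assumption.

```python
# Sketch for point 5: PPV when a "positive" finding is claimed because at least
# one of n independent teams testing the same question reaches significance, per
# the original article: PPV = R * (1 - beta**n) / (R + 1 - (1 - alpha)**n - R * beta**n).
def ppv_n_teams(R, n, alpha=0.05, beta=0.20):
    return R * (1 - beta**n) / (R + 1 - (1 - alpha)**n - R * beta**n)

R = 0.2                                    # assumed pre-study odds
for n in (1, 5, 10, 50):
    print(n, round(ppv_n_teams(R, n), 2))  # credibility erodes as n grows: ~0.76, 0.47, 0.33, 0.18
```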

Scientific investigation is the noblest pursuit. I think we can improve the public's respect for researchers by showing how difficult success is. Confidence in the research enterprise is probably undermined primarily when we claim that discoveries are more certain than they really are, and the public, scientists, and patients then suffer the painful refutations.

References

1. Pocock SJ, Collier TJ, Dandreo KJ, de Stavola BL, Goldman MB, et al. (2004) Issues in the reporting of epidemiological studies: a survey of recent practice. BMJ 329:883.

2. Kavvoura FK, Liberopoulos G, Ioannidis JP (2007) Selection in reported epidemiological risks: an empirical assessment. PLoS Med 4:e79.

3. Ioannidis JP (2006) Evolution and translation of research findings: from bench to where? PLoS Clin Trials 1:e36.

4. Gluud LL (2006) Bias in clinical intervention research. Am J Epidemiol 163:493-501.

5. Cochrane Methodology Register. http://www3.cochrane.org/...

6. Greenland S (2006) Bayesian perspectives for epidemiological research: I. Foundations and basic methods. Int J Epidemiol 35:765-775.

7. Ioannidis JP (2005) Contradicted and initially stronger effects in highly cited clinical research. JAMA 294:218-228.

No competing interests declared.