Citation: Ioannidis JPA (2007) Why Most Published Research Findings Are False: Author's Reply to Goodman and Greenland. PLoS Med 4(6): e215. https://doi.org/10.1371/journal.pmed.0040215
Published: June 26, 2007
Copyright: © 2007 John P. A. Ioannidis. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: The author received no specific funding for this article.
Competing interests: The author has declared that no competing interests exist.
- I did not “claim that no study or combination of studies can ever provide convincing evidence.” In the illustrative examples (Table 4), there is a wide credibility gradient (0.1% to 85%) for different research designs and settings.
- I did not assume that all significant p-values are around 0.05. Tables 1–3 and the respective positive predictive value (PPV) equations can use any p-value (alpha). Nevertheless, the p = 0.05 threshold is unfortunately entrenched in many scientific fields. Almost half of the “positive” findings in recent observational studies have p-values of 0.01–0.05 [3,4]; most “positive” trials and meta-analyses also have modest p-values.
- I provided equations for calculating the credibility of research findings with or without bias. Even without any bias, PPV probably remains below 0.50 for most non-randomized, non-large-scale circumstances. Large trials and meta-analyses represent a minority of the literature.
- Figure 1 shows that bias can indeed make a difference. The proposed modeling has an additional useful feature: as type I and II errors decrease, PPV(max) = 1 − [u/(R + u)], meaning that for a research finding to become more than 50% credible, we must first reduce bias at least below the pre-study odds of truth (u < R). Numerous studies demonstrate the strong presence of bias across research designs: indicative reference lists appear in [5–7]. We should understand bias and minimize it, not ignore it.
- “Hot fields”: Table 3 and Figure 2 present “the probability that at least one study, among several done on the same question, claims a statistically significant research finding.” They are not erroneous. Fields with many furtive competing teams may espouse significance-chasing behaviors, selectively highlighting “positive” results. Conversely, having many teams with transparent availability of all results and integration of data across teams leads to genuine progress. We need replication, not just discovery.
- The claim by two leading Bayesian methodologists that a Bayesian approach is somewhat circular and questionable contradicts Greenland's own writings: “One misconception (of many) about Bayesian analyses is that prior distributions introduce assumptions that are more questionable than assumptions made by frequentist methods” [8].
- Empirical data on the refutation rates for various research designs agree with the estimates obtained in the proposed modeling [9], not with estimates ignoring bias. Additional empirical research on these fronts would be very useful.
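For concreteness, the PPV expressions discussed in the points above can be sketched in code. The formulas follow the model in the original article [2], where R is the pre-study odds that a probed relationship is true, alpha and beta are the type I and II error rates, and u is the bias term; the particular parameter values below are illustrative assumptions only.

```python
# Sketch of the PPV model from the original article [2].
# R: pre-study odds of a true relationship; alpha: type I error;
# beta: type II error; u: proportion of analyses biased toward "positive".

def ppv_no_bias(R: float, alpha: float, beta: float) -> float:
    """PPV of a claimed finding in the absence of bias."""
    return (1 - beta) * R / (R + alpha - beta * R)

def ppv_with_bias(R: float, alpha: float, beta: float, u: float) -> float:
    """PPV when a fraction u of would-be non-findings are reported as findings."""
    numerator = (1 - beta) * R + u * beta * R
    denominator = R + alpha - beta * R + u - u * alpha + u * beta * R
    return numerator / denominator

# Low-powered study of an unlikely hypothesis: PPV falls below 0.50
# even without any bias.
print(ppv_no_bias(R=0.1, alpha=0.05, beta=0.8))          # ~0.29

# As alpha and beta shrink toward zero, PPV is capped at R / (R + u),
# i.e. PPV(max) = 1 - u/(R + u); credibility above 50% requires u < R.
print(ppv_with_bias(R=0.1, alpha=0.0, beta=0.0, u=0.2))  # 0.1/0.3 ~ 0.33
```

Note how the ceiling R/(R + u) depends only on bias and pre-study odds: no amount of added power or stricter significance threshold can push PPV past it, which is the sense in which bias must be reduced below R before a finding can become more than 50% credible.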
Scientific investigation is the noblest pursuit. I think we can improve the public's respect for researchers by showing how difficult success is. Confidence in the research enterprise is probably undermined primarily when we claim that discoveries are more certain than they really are, and then the public, scientists, and patients suffer the painful refutations.
- 1. Goodman S, Greenland S (2007) Why most published research findings are false: Problems in the analysis. PLoS Med 4: e168.
- 2. Ioannidis JPA (2005) Why most published research findings are false. PLoS Med 2: e124.
- 3. Pocock SJ, Collier TJ, Dandreo KJ, de Stavola BL, Goldman MB, et al. (2004) Issues in the reporting of epidemiological studies: A survey of recent practice. BMJ 329: 883.
- 4. Kavvoura FK, Liberopoulos G, Ioannidis JP (2007) Selection in reported epidemiological risks: An empirical assessment. PLoS Med 4: e79.
- 5. Ioannidis JP (2006) Evolution and translation of research findings: From bench to where? PLoS Clin Trials 1: e36.
- 6. Gluud LL (2006) Bias in clinical intervention research. Am J Epidemiol 163: 493–501.
- 7. The Cochrane Collaboration (2007) Cochrane methodology register. Available: http://www3.cochrane.org/access_data/cmr/accessDB_cmr.asp. Accessed 23 May 2007.
- 8. Greenland S (2006) Bayesian perspectives for epidemiological research: I. Foundations and basic methods. Int J Epidemiol 35: 765–775.
- 9. Ioannidis JP (2005) Contradicted and initially stronger effects in highly cited clinical research. JAMA 294: 218–228.