PLoS Medicine | Correspondence
doi:10.1371/journal.pmed.0020395
The Clinical Interpretation of Research
Stephen G. Pauker, Tufts-New England Medical Center, Boston, Massachusetts, United States of America. E-mail: spauker@tufts-nemc.org

The author has declared that no competing interests exist.

Published 29 November 2005 (PLoS Med 2(11): e395). Copyright © 2005 Stephen G. Pauker. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
<p>John P. A. Ioannidis emphasizes the central role of prior probabilities [<xref ref-type="bibr" rid="pmed-0020395-b1">1</xref>]. His conclusion rests on the presumed low probability that a hypothesis was true before the study.</p>

<p>Unfortunately, his formulation relates the post-study probability that the study's conclusion is true to the pre-study odds. The results might have been clearer had he also plotted the relation of odds to probability, a curvilinear relationship, assuming the study carried no information. Further, the various graphs are right-truncated at pre-study odds, <italic>R</italic>, of 1.0 (a probability of 0.5), although his examples go as high as <italic>R</italic> = 2.0. A positive study must, by definition, increase the likelihood that the hypothesis is true. It might have been clearer had Ioannidis chosen to relate odds to odds or probability to probability; in either case, a neutral study would produce a straight line along a 45-degree diagonal.</p>

<p>The pre-study to post-study relation can be expressed more simply using the odds-likelihood form of Bayes rule: the post-study odds equal the pre-study odds multiplied by the likelihood ratio (LR) of the study. The post-study odds underlying the positive predictive value (PPV) then become the simple product <italic>R</italic> × LR. For a single unbiased study, LR = (1 − β)/α. When incorporating study bias, <italic>u</italic>, as defined by Ioannidis, LR = (1 − β[1 − <italic>u</italic>])/(α[1 − <italic>u</italic>] + <italic>u</italic>). For a typical study with α = 0.05 and β = 0.2 (i.e., with a power of 0.8), LR = 16. When <italic>R</italic> is less than 1:16 (a probability of 0.0588), the post-study odds will be less than one, i.e., the study's hypothesis will be more likely false than true.</p>

<p>For non-Bayesians, statistical significance testing presumes an uninformative prior probability, i.e., <italic>R</italic> = 1. Then LR would merely need to exceed one for the study's conclusions to be more likely true than false. At the common significance levels (α) of 0.05 and 0.01, the study's power would merely need to exceed 0.05 and 0.01, respectively, corresponding to maximum type II error rates (β) of 0.95 and 0.99. Such lax requirements would almost always be met by a published study. Hence, the common belief that the vast majority of published studies have valid conclusions would be correct if the pre-study odds could be assumed to be truly uninformative. However, as Ioannidis suggests, this is unlikely to be the case.</p>

<p>Two more corollaries might be added. First, the higher the pre-study odds that the study's hypothesis is true, the lower the power (study size and effect size) required to make the study's findings more likely true than false. Second, when a study is published, the investigators should estimate the pre-study odds and report the LR implied by the observed effect.</p>

<p>From the perspective of an epidemiologist or a statistician, the relevant question is whether the study's hypothesis is true, i.e., whether its probability exceeds 0.5. For clinicians and their patients, the relevant question is whether a particular strategy should be followed in an individual patient or a subset of similar patients. When the diagnosis in that patient is uncertain, that decision (or recommendation to the patient) will depend on the pre-study likelihood of benefit in that patient and on the relative magnitude of the benefits and risks of the strategy. For many such decisions, the “more likely true than false” criterion may not be the best decision rule. For serious diseases and treatments of only modest risk, post-study probabilities considerably less than 0.5 may be sufficient to justify treatment [<xref ref-type="bibr" rid="pmed-0020395-b2">2</xref>].</p>

<p>Ioannidis's provocative Essay is a timely call for careful consideration of published studies.
The odds-likelihood formulation suggested herein may be helpful in providing a more intuitive model. Clinicians now need to take it to the next step.</p> </sec> </body>
<back>
<ref-list>
<title>References</title>
<ref id="pmed-0020395-b1"><citation>Ioannidis JPA (2005) Why most published research findings are false. PLoS Med 2: e124. doi:10.1371/journal.pmed.0020124</citation></ref>
<ref id="pmed-0020395-b2"><citation>Pauker SG, Kassirer JP (1975) Therapeutic decision making: A cost-benefit analysis. N Engl J Med 293: 229–234.</citation></ref>
</ref-list>
</back>
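As a minimal sketch of the odds-likelihood arithmetic in the letter (the function names are illustrative, not from the original; the formulas are those given above, with <italic>u</italic> as Ioannidis's bias term):

```python
def likelihood_ratio(alpha, beta, u=0.0):
    """LR of a positive study: (1 - beta(1 - u)) / (alpha(1 - u) + u).
    With bias u = 0 this reduces to (1 - beta) / alpha."""
    return (1 - beta * (1 - u)) / (alpha * (1 - u) + u)

def post_study_probability(pre_study_odds, alpha, beta, u=0.0):
    """Odds-likelihood form of Bayes rule: post-study odds equal
    pre-study odds times LR; then convert odds back to probability."""
    odds = pre_study_odds * likelihood_ratio(alpha, beta, u)
    return odds / (1 + odds)

# A typical unbiased study (alpha = 0.05, power = 0.8) has LR = 16,
# so pre-study odds of 1:16 (probability 1/17, about 0.0588) give
# post-study odds of 1, i.e., a post-study probability of 0.5.
```

Any bias (<italic>u</italic> > 0) shrinks the LR toward 1, raising the pre-study odds needed for the study's hypothesis to end up more likely true than false.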