Reader Comments


A Critique of Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8) II: proposals

Posted by vetter on 19 Jan 2013 at 10:01 GMT


by Hermann Vetter

Point null hypotheses are virtually always false. The question is how far the true value of the parameter lies from zero. If the power function of the test is sound, then the greater the distance, the greater the probability of rejecting H0; and the larger the sample, the faster that probability rises. Some consequences:

1. The larger the sample, the closer the probability of rejecting H0 will be to unity for any given difference of the parameter's value from zero. If the sample is too large, differences will turn out statistically significant that are so small they would better be considered insignificant in a substantive sense; see the sketch after this list. The golden road is interval estimation (which, by the way, can also be used for hypothesis testing).

2. Suppose that in 10% of N tests the difference from zero is large enough to be detected by virtue of the power of the test and to be considered substantively significant, while it is tiny in the other 90%. Then there will be 0.1N "genuinely" significant results and about pN, i.e. 0.05N, "spuriously" significant ones at significance level p = 0.05; that is, one third of the significant results are spurious. This rate of infection can be made arbitrarily small by choosing a sufficiently low p. This can be done within a single study, or within any number of studies seen together.
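
To make (1) concrete, here is a minimal sketch in Python; the one-sample two-sided z-test, the trivially small true effect of 0.02 standard deviations, and the sample sizes are my own illustrative assumptions, not taken from the article. With the true effect held fixed, the probability of rejecting H0 climbs towards unity as n grows, while the interval estimate keeps the substantive triviality of the effect visible.

# A minimal sketch of point (1): with a fixed, substantively trivial true effect,
# the probability of rejecting H0 climbs towards 1 as n grows, while the interval
# estimate keeps the triviality of the effect visible.
# Assumptions (mine, for illustration): one-sample two-sided z-test, known SD = 1,
# true mean delta = 0.02 SD, significance level alpha = 0.05.

from statistics import NormalDist

z = NormalDist()
delta, alpha = 0.02, 0.05
z_crit = z.inv_cdf(1 - alpha / 2)      # about 1.96 for alpha = 0.05

for n in (100, 10_000, 1_000_000, 100_000_000):
    # power of the two-sided z-test against the alternative mu = delta
    power = z.cdf(-z_crit + delta * n ** 0.5) + z.cdf(-z_crit - delta * n ** 0.5)
    # half-width of the 95% confidence interval around the sample mean
    half_width = z_crit / n ** 0.5
    print(f"n = {n:>11,}: power ~ {power:5.3f}, 95% CI ~ mean +/- {half_width:.5f}")

# At n = 100,000,000 the test rejects H0 almost surely, yet the interval estimate
# shows the effect is about 0.02 SD -- statistically significant, but arguably
# insignificant in a substantive sense.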

I do not think that, as far as hypothesis testing is concerned, the larger the sample, the better; cf. (1) above. But I do think that, insofar as hypothesis testing is not avoided, so to speak, a very elastic choice of the significance level will prevent most research findings from being false.
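
The arithmetic behind (2), and behind this elastic choice of the significance level, can be spelled out in a few lines. The 10% share of genuine effects and the assumption that all of them are detected come from point (2) above; the lower p levels are my own illustrative choices.

# Sketch of the arithmetic in point (2). Out of N tests, a fraction 'genuine' have
# effects large enough to be detected (their power is taken as ~1, as in the comment
# above); the remaining tests reject H0 spuriously with probability p.
# The 10% share of genuine effects is the comment's figure; the p levels are mine.

def spurious_fraction(p, genuine=0.10):
    significant_genuine = genuine      # ~0.1 N genuinely significant results
    significant_spurious = p           # ~p N spuriously significant results
    return significant_spurious / (significant_genuine + significant_spurious)

for p in (0.05, 0.01, 0.005, 0.001):
    print(f"p = {p:5.3f}: spurious share of significant results ~ {spurious_fraction(p):.1%}")

# p = 0.05 gives roughly one third, as stated in (2); lowering p to 0.001 brings the
# spurious share of significant results down to about 1%.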



No competing interests declared.