Why Most Published Research Findings Are False: How about Functional Brain Mapping?
Posted by plosmedicine on 31 Mar 2009 at 00:10 GMT
Author: Donald F. Smith
Position: Center for Psychiatric Research
Institution: Psychiatric Hospital of Aarhus University
Submitted Date: June 29, 2007
Published Date: June 29, 2007
This comment was originally posted as a “Reader Response” on the publication date indicated above. All Reader Responses are now available as comments.
The article published in PLoS Medicine by Ioannidis [1] is a masterpiece in drawing attention to errors that can arise from massive multiple testing during phases of rapid growth in research publications. The focus of that article was on genomics, but I believe the lessons apply equally well to functional brain mapping, because that field has several features that may be highly conducive to publishing false findings.
As noted by Ioannidis [1], the probability of obtaining a false finding is determined by the p-value, the statistical power (SP), and the pre-study probability (R) that a finding is true. Thus, a research finding is more likely false than true if (SP)(R) < p-value. For a functional brain mapping study with a statistical power of 90% and a p-value of 5%, a finding is more likely false than true if the pre-study probability (R) that the finding is true is less than 5.6% (i.e., R < 0.05/0.90 ≈ 0.056). What, then, is the pre-study probability that a change in signal strength will occur at a particular site in the brain in response to a particular mental task? Although that question may often be difficult to answer, I believe that the pre-study probability in most studies of functional brain mapping is well below 1%. If that guesstimate is correct, then most published research findings in functional brain mapping may be false.
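The arithmetic above can be sketched in a few lines of Python. This follows Ioannidis's positive predictive value (PPV) expression [1] with bias and multiple research teams set aside, and with R treated as the pre-study odds that the probed relationship is true:

```python
# Sketch of Ioannidis's PPV formula (PLoS Med 2:e124), ignoring bias
# and multiple research teams. R is the pre-study odds that the
# relationship being probed is true.

def ppv(R, power=0.90, alpha=0.05):
    """Probability that a statistically significant finding is true."""
    return (power * R) / (power * R + alpha)

# A finding is more likely false than true when PPV < 0.5,
# i.e. when power * R < alpha.
threshold_R = 0.05 / 0.90   # ~0.056, the 5.6% figure in the text
low_prior = ppv(0.01)       # at R = 1%, PPV is roughly 0.15
```

At R = 1%, the guesstimated ceiling above, fewer than one in six "significant" findings would be true under these settings.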
Unsubstantiated assumptions weaken research findings. One unsubstantiated assumption behind much research in functional brain mapping is that the brain consists of many separate parts, each with a specific mental function. That notion has been criticized strongly, but it persists despite recent interest in functional connectivity. It is surprising that so many studies in functional brain mapping have assumed that complex mental functions are localized at specific sites in the brain, despite strong evidence to the contrary [4;5].
Another unsubstantiated assumption behind much research in functional brain mapping is that cognitive processing in the brain is linear, such that new cognitive components can be inserted sequentially without being affected by previous ones. Friston and coworkers have given a detailed account of the errors that can result from assuming that brain function is linear during cognitive tasks.
A third unsubstantiated assumption of much research in functional brain mapping is that the blood oxygen level-dependent (BOLD) signal measured by magnetic resonance imaging provides a valid measure of neuronal activity at sites in the brain [7;8]. BOLD arises from susceptibility effects of deoxyhemoglobin in venous blood and thereby reflects the blood oxygen level. Research has shown, however, that BOLD may reflect neuronal activity only under certain, as yet undefined, conditions [9;10].
The probability of detecting a true effect, known as statistical power, is low in experiments with small sample sizes [11;12]. Sixteen to seventeen subjects are often required to obtain reliable results in studies of functional brain mapping [12-14], whereas the number of subjects used in 75 studies of functional brain mapping 10 years ago ranged from 4 to 34, with a median of 9. In preparation for this article, I found that the number of subjects ranged from 3 to 38, with a median of 10, in 90 recent functional brain mapping reports, and that such studies continue to rely predominantly on right-handed, college-age males.
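For intuition on why such samples are underpowered, a textbook normal-approximation sample-size formula can be sketched. The effect sizes plugged in below are illustrative assumptions, not values taken from the studies cited:

```python
# Hedged sketch: approximate subjects per group for a two-sided,
# two-sample comparison, n ~ 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2,
# where d is a standardized (Cohen's d) effect size. Real fMRI power
# analyses are voxelwise and more involved; this only shows the scale.
from math import ceil
from statistics import NormalDist

def n_per_group(d, power=0.90, alpha=0.05):
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

# Even a "large" assumed effect (d = 0.8) calls for about 33 subjects
# per group at 90% power, far above the median of ~10 reported above.
```

Smaller, more realistic effects drive the required sample size up still further, since n grows with 1/d².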
Data processing and statistical analysis in functional brain mapping are complicated and involve massive multiple testing [15-19]. Decisions made by a researcher during data processing and analysis may reflect confirmatory bias [20;21]. Confirmatory bias usually involves unwitting selectivity in experimental design, data analysis, and reporting of evidence, and is not necessarily synonymous with fraud or conscious data manipulation [20;22;23]. For example, confirmatory bias may come into play when negative (i.e. non-statistically significant) findings are initially obtained. Under such circumstances, one may be tempted to devise secondary research questions and to do further data analyses in search of hypothesis-supporting evidence, a behavior commonly known as data-dredging [22;24;25].
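To make the multiple-testing point concrete, here is a sketch of the Benjamini-Hochberg step-up procedure, one standard way to control the false discovery rate across many simultaneous tests. Neuroimaging packages typically use random-field or permutation corrections instead, so this is illustrative only:

```python
# Hedged sketch of the Benjamini-Hochberg step-up procedure for
# controlling the false discovery rate (FDR) across m simultaneous
# tests. Not the correction used in brain-mapping software; it only
# illustrates why an uncorrected 5% threshold is too permissive.

def benjamini_hochberg(pvalues, q=0.05):
    """Return sorted indices of hypotheses rejected at FDR level q."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0  # largest rank whose p-value falls under the BH line q*rank/m
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= q * rank / m:
            k = rank
    return sorted(order[:k])
```

For example, `benjamini_hochberg([0.01, 0.04, 0.045, 0.5])` rejects only the first hypothesis, whereas an uncorrected 5% threshold would declare three of the four "significant" — the gap only widens at the scale of thousands of voxels.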
Reducing the likelihood of publishing false findings in functional brain mapping may require a major effort. Particular attention should perhaps be given to publishing negative findings and results from experiments carried out specifically to replicate someone else's findings. Failure to reduce false findings in functional brain mapping may eventually undermine the field.
1. Ioannidis JP (2005): Why most published research findings are false. PLoS.Med 2:e124.
2. Friston KJ, Price CJ, Fletcher P, Moore C, Frackowiak RS, Dolan RJ (1996): The trouble with cognitive subtraction. NeuroImage 4:97-104.
3. Ramnani N, Behrens TE, Penny W, Matthews PM (2004): New approaches for exploring anatomical and functional connectivity in the human brain. Biol.Psychiatry 56:613-619.
4. Cabeza R, Nyberg L (2000): Imaging cognition II: An empirical review of 275 PET and fMRI studies. J.Cogn.Neurosci. 12:1-47.
5. Vigneau M, Beaucousin V, Herve PY, Duffau H, Crivello F, Houde O, Mazoyer B, Tzourio-Mazoyer N (2006): Meta-analyzing left hemisphere language areas: phonology, semantics, and sentence processing. NeuroImage 30:1414-1432.
6. Lee A, Kannan V, Hillis AE (2006): The contribution of neuroimaging to the study of language and aphasia. Neuropsychol.Rev. 16:171-183.
7. Matthews PM, Jezzard P (2004): Functional magnetic resonance imaging. J Neurol.Neurosurg.Psychiatry 75:6-12.
8. Logothetis NK (2003): The underpinnings of the BOLD functional magnetic resonance imaging signal. J Neurosci. 23:3963-3971.
9. Lauritzen M (2001): Relationship of spikes, synaptic activity, and local changes of cerebral blood flow. J Cereb.Blood Flow Metab 21:1367-1383.
10. Thomsen K, Offenhauser N, Lauritzen M (2004): Principal neuron spiking: neither necessary nor sufficient for cerebral blood flow in rat cerebellum. J Physiol 560:181-189.
11. Guilford JP (1965): Fundamental statistics in psychology and education. McGraw-Hill Book Company, New York.
12. Andreasen NC, Arndt S, Cizadlo T, O'Leary DS, Watkins GL, Ponto LL, Hichwa RD (1996): Sample size and statistical power in [15O]H2O studies of human cognition. J Cereb Blood Flow Metab 16:804-816.
13. Jernigan TL, Gamst AC, Fennema-Notestine C, Ostergaard AL (2003): More "mapping" in brain mapping: statistical comparison of effects. Hum.Brain Mapp. 19:90-95.
14. Gold S, Arndt S, Johnson D, O'Leary DS, Andreasen NC (1997): Factors that influence effect size of 15O PET studies: a meta-analytic review. NeuroImage 5:280-291.
15. Worsley K, Marrett S, Neelin P, Vandal AC, Friston KJ, Evans AC (1996): A unified statistical approach for determining significant signals in images of cerebral activation. Hum.Brain Mapp. 4:58-73.
16. Worsley KJ (2005): An improved theoretical P value for SPMs based on discrete local maxima. NeuroImage 28:1056-1062.
17. Friston K (2002): Beyond phrenology: what can neuroimaging tell us about distributed circuitry? Annu.Rev.Neurosci. 25:221-250.
18. Friston KJ, Holmes AP, Worsley KJ, Poline JB, Frith CD, Frackowiak RSJ (1995): Statistical parametric maps in functional imaging: a general linear approach. Human Brain Mapping 2:189-210.
19. Friston KJ, Rotshtein P, Geng JJ, Sterzer P, Henson RN (2006): A critique of functional localisers. NeuroImage 30:1077-1087.
20. Nickerson RS (1998): Confirmation Bias: a ubiquitous phenomenon in many guises. Review of General Psychology 2:175-220.
21. Wacholder S, Chanock S, Garcia-Closas M, El Ghormli L, Rothman N (2004): Assessing the probability that a positive report is false: an approach for molecular epidemiology studies. J Natl.Cancer Inst. 96:434-442.
22. Lindner MD, Frydel BR, Francis JM, Cain CK (2003): Analgesic effects of adrenal chromaffin allografts: contingent on special procedures or due to experimenter bias? J Pain 4:64-73.
23. Mele AR (1997): Real self-deception. Behav.Brain Sci. 20:91-102.
24. Lord SJ, Gebski VJ, Keech AC (2004): Multiple analyses in clinical trials: sound science or data dredging? Med J Aust. 181:452-454.
25. Lee WC, Huang HY (2005): Data-dredging gene-dose analyses in association studies: biases and their corrections. Cancer Epidemiol.Biomarkers Prev. 14:3004-3006.