Abstract
Electrical brain stimulation (EBS) is an increasingly popular technique used to change brain function and treat neurological, psychiatric and psychological disorders. We were curious whether the published literature, which is dominated by positive results, reflects the experience of researchers using EBS. Specifically, we wanted to know whether researchers are able to reproduce published EBS effects and whether they engage in, but fail to report, questionable research practices. We invited 976 researchers to complete an online survey. We also audited 100 randomly selected published EBS papers. A total of 154 researchers completed the survey. Survey respondents had a median of 3 [1 to 6, interquartile range] published EBS papers (1180 total) and 2 [1 to 3] unpublished ones (380 total). With anodal and cathodal EBS, the two most widely used techniques, 45–50% of researchers reported being able to routinely reproduce published results. When asked how study sample size was determined, 69% of respondents reported using the sample size of published studies, 61% had used power calculations, and 32% had based their decision on pilot data. In contrast, our audit found only 6 papers in which power calculations were used and a single paper in which pilot data were used. When asked about questionable research practices, survey respondents were aware of other researchers who selectively reported study outcomes (41%) and experimental conditions (36%), adjusted statistical analyses to optimise results (43%), and engaged in other questionable practices (20%). Fewer respondents admitted to engaging in these practices themselves, although 25% admitted to adjusting statistical analyses to optimise results. There was strong agreement that such practices should be reported in research papers; however, our audit found only two such admissions. The present survey confirms that questionable research practices and poor reproducibility are present in EBS research. Uncritical belief in the effectiveness of EBS needs to give way to a more rigorous approach so that reproducible brain stimulation methods can be devised and applied.
Citation: Héroux ME, Loo CK, Taylor JL, Gandevia SC (2017) Questionable science and reproducibility in electrical brain stimulation research. PLoS ONE 12(4): e0175635. https://doi.org/10.1371/journal.pone.0175635
Editor: Jelte M. Wicherts, Tilburg University, NETHERLANDS
Received: November 28, 2016; Accepted: March 28, 2017; Published: April 26, 2017
Copyright: © 2017 Héroux et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting Information files.
Funding: This work was supported by National Health and Medical Research Council (https://www.nhmrc.gov.au/), APP1055084. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Scientists agree that we are facing a crisis of confidence [1]. Research results are irreproducible, from dozens of psychology findings [2] to hundreds and even thousands of genetic [3] and fMRI [4] discoveries. Some have even argued that the majority of the published literature must be false [5]. Neuroscience, a field filled with statistically underpowered studies [6], unfortunately is at the forefront of this reproducibility crisis.
Transcranial magnetic stimulation is a popular, non-invasive and non-painful technique used by researchers and clinicians to assess and modulate brain function. Recently, we surveyed researchers on their ability to reproduce findings from studies that used transcranial magnetic stimulation to modulate the excitability of the human motor cortex [7]. Only 40–55% of survey respondents were able to routinely reproduce previously published results. Also worrisome was the finding that researchers engaged in, but failed to report, questionable research practices.
Electrical brain stimulation (EBS) is an increasingly popular method to modify brain function that has received considerable media attention [8]. EBS exploded onto the scene less than a decade ago, and the number of EBS papers has doubled to more than 3000 in less than three years. Much cheaper to perform than magnetic stimulation, EBS is claimed to improve everything from stroke motor recovery and depression to food cravings and language acquisition. However, EBS is not without controversy. Several high-profile laboratories have been unable to reproduce previously published findings [9–12].
We were curious about whether the published literature reflects the experience of researchers using EBS. Specifically, we wanted to know whether researchers are able to reproduce published EBS effects and whether they engage in but fail to report questionable research practices.
Materials and methods
Online survey
To assess the use of EBS to alter human brain excitability and function, we invited corresponding authors of identified publications to complete an anonymous internet-based survey (S1 and S2 Files). The study was approved by the University of New South Wales Human Research Ethics Committee (HC13326), and was conducted in accordance with the principles expressed in the Declaration of Helsinki. As the survey was anonymous and online, written or oral consent was not obtained.
Briefly, the survey asked respondents about their area of study, the number of years they had worked with EBS, the number of published and unpublished EBS papers, and how sample sizes were determined for these studies. For unpublished papers, respondents specified the reason for the failure to publish their results. Next, we asked respondents about the types of EBS protocols they had used and, for each protocol, their ability to reproduce previously published effects. If respondents indicated they only investigated unpublished, novel effects, their responses were not considered when determining the ability of researchers to reproduce previously published results. Finally, we asked respondents how they thought other researchers performed and reported EBS studies and, using the same questions, we asked how they themselves performed and reported EBS studies. On completion of the survey, respondents were invited to provide additional comments. Then the respondents were entered into a draw, independently conducted by the local IT department, to win an iPad.
PubMed search and e-mail address extraction
A PubMed search was conducted on 31 December 2015 for all studies using tDCS or one of its common variants: direct current stimulation[Title/Abstract] OR tDCS[Title/Abstract] OR transcranial alternating current stimulation[Title/Abstract] OR transcranial random noise stimulation[Title/Abstract] OR HD-tDCS[Title/Abstract] OR tACS[Title/Abstract] OR transcranial electrical stimulation[Title/Abstract]. Titles and abstracts of identified references (n = 3,106) were reviewed and all human neuromodulation, brain function and clinical studies were retained. We excluded reviews, meta-analyses, errata, comments, letters, and single subject case studies, as well as studies on animals, clinical trial planning, modelling electrical currents in the brain, intra-operative monitoring, and electrical stimulus perception. This resulted in a total of 1,258 references. E-mail addresses of corresponding authors and those available in the Author Information field of PubMed records were retrieved; this resulted in 976 unique e-mail addresses, and these researchers were invited to complete the survey.
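For transparency, the search can be re-run programmatically. The following sketch is illustrative only and is not part of the original methods; it assumes the Biopython package and a placeholder contact e-mail address, and submits the same Title/Abstract query to PubMed through the NCBI E-utilities.

```python
# Illustrative sketch: re-run the PubMed search used to identify EBS studies.
# Assumes Biopython is installed; the e-mail address below is a placeholder.
from Bio import Entrez

Entrez.email = "researcher@example.org"  # NCBI requires a contact address

query = (
    "direct current stimulation[Title/Abstract] OR tDCS[Title/Abstract] OR "
    "transcranial alternating current stimulation[Title/Abstract] OR "
    "transcranial random noise stimulation[Title/Abstract] OR "
    "HD-tDCS[Title/Abstract] OR tACS[Title/Abstract] OR "
    "transcranial electrical stimulation[Title/Abstract]"
)

# Limit records by publication date, as the original search ended 31 Dec 2015.
handle = Entrez.esearch(db="pubmed", term=query, retmax=5000,
                        datetype="pdat", mindate="1900/01/01", maxdate="2015/12/31")
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records found")
pmids = record["IdList"]  # PMIDs to screen by title and abstract
```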
Audit of published research
A sub-sample of 100 published papers (S5 File) was selected randomly from the 1,258 identified references to determine whether the questionable research practices listed in our survey are routinely reported in publications. For each paper, we also noted: whether primary study findings were positive or negative; whether the Methods included a statistical analysis section; the sample size and the strategy used to determine it; whether error bars in figures were undefined or were standard errors of the mean; whether figures included individual subject data; and whether p-values of 0.1 > p > 0.05 were interpreted as statistical trends or as statistically significant.
Results
In all, 154 researchers from a variety of research disciplines completed the survey (S1 Table). Respondents had a median of 5 years [3.25 to 7.75; interquartile range] of experience using EBS and had published a median of 3 [1 to 6] EBS papers (1180 total). Respondents also had a median of 2 [1 to 3] unpublished EBS studies (380 total); reasons for not publishing results are presented in S2 Table.
Almost all respondents reported using anodal or cathodal transcranial direct current stimulation, whereas roughly a quarter of respondents had used transcranial alternating current stimulation, transcranial random noise stimulation or multi-channel transcranial direct current stimulation, and 5% had used pulsed transcranial direct current stimulation (Table 1). For anodal and cathodal EBS, 45–50% of respondents reported being able to routinely reproduce previously published effects (Table 1), although the size of the effect was smaller 26–27% of the time (S3 Table).
When asked how they determined the sample sizes of their EBS studies, 69% of respondents had used the sample size of published papers (Table 2), while 61% of respondents had previously used power calculations and 32% had based their decision on pilot data. As for the estimated number of studies for which these strategies were used, the percentages were much lower: 25% used the sample size of published papers, 26% used power calculations and only 8% used pilot data. In stark contrast to these responses, an audit of 100 randomly selected EBS papers found only 6 studies that reported power calculations and only 1 study that used pilot data to determine its sample size. All other papers failed to report how their sample size was determined.
When asked about questionable research practices, survey respondents were aware of other researchers who adjusted statistical analyses to optimise results (43%) and selectively reported study outcomes (41%) and experimental conditions (36%) (Table 3). About 20% of respondents knew researchers who engaged in other questionable practices (Table 3). Fewer respondents admitted to engaging in these practices themselves (Table 3), although 25% admitted to adjusting statistical analyses to optimise results.
Almost all respondents (92%) indicated that these questionable practices should be disclosed in research papers. In contrast, the audit of 100 published papers revealed only two admissions of questionable practices, both related to the exclusion of data or subjects without the support of statistical analyses. Furthermore, 90% of audited papers reported positive primary findings (i.e. publication bias) and 30% interpreted p-values between 0.05 and 0.1 as statistical trends or as statistically significant (i.e. spin [13]). In addition, few papers (9%) plotted individual subject data points in their figures, which would allow within- and between-subject behaviour to be observed directly. The majority of papers (68%) erroneously used the standard error of the mean to plot data variability [14], while others (17%) failed to define the type of variability measure used in their plots.
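The concern about error bars stems from the fact that the standard error of the mean describes the precision of the mean, not the spread of individual responses, and it shrinks as the sample grows. A minimal sketch with assumed, illustrative values:

```python
# Illustrative sketch: SD describes between-subject variability, SEM does not.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(loc=50, scale=10, size=25)  # 25 subjects, true SD = 10

sd = scores.std(ddof=1)               # spread of individual subjects (~10)
sem = sd / np.sqrt(scores.size)       # precision of the mean (~2)
print(f"SD = {sd:.1f}, SEM = {sem:.1f}")
# Error bars of +/- SEM appear ~5 times narrower than the between-subject
# spread here; plotting individual data points avoids this ambiguity.
```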
Several researchers voiced their concerns about EBS research (S4 File):
“This field is in urgent need of both guidelines for research and clinical use, and regulations by law.”
ID217
“I think there is a huge publication bias in this field and, in my opinion, the positive results of tDCS are highly overestimated. It would have been nice to have some questions on that topic.”
ID474
“There does seem to be a suspiciously large number of positive tDCS trials published, and in almost any discipline it has been used in.”
ID31
“Although the consensus within publications in that electrical stimulation works well and is reliable, my experience of talking to other researchers at conferences and within my department suggests that there is a huge amount of unpublished, unsuccessful attempts at using the stimulation. Many of which have no clear methodological issues.”
ID583
“It would not be fair to have publication mentioning that “tDCS researchers have mentioned that are aware of other researchers that may adjust the statistics to optimize their results” or something like this. In a publish or perish academia, these practices like that are used by researchers of many fields, unfortunately. These are not specific problems for the tDCS community. I urge to be thoughtful when reporting this data.”
ID71
“I feel that a small “special group” that can publish all their research even though they have a small sample size, lack of fidelity with protocol previously registered, sub-group statistical analysis, etc. On the other hand others researchers have many difficulties to publish their works even though they followed all the requirements needed to conduct a trustful research.”
ID180
Discussion
On the surface, EBS seems like a panacea. What other technique can claim to improve so many disparate brain functions? Warning bells have been sounded, highlighting the difficulty some research groups have in reproducing published EBS effects [9–12]. Unfortunately, these concerns are largely drowned out by the never-ending torrent of new papers. The present anonymous web-based survey of EBS scientists indicates that, as with transcranial magnetic stimulation, this field is not immune to issues of reproducibility, questionable research practices and publication bias.
While early EBS studies reported large, significant effects, what evidence is there that this technique is truly effective? Several meta-analyses have recently addressed this issue. For example, there is good evidence that EBS is effective in major depression [15], but not in fibromyalgia pain [16], food craving and consumption [17], Parkinson’s disease [18] or stroke aphasia [19]. A common finding from these meta-analyses is that EBS studies are often of low research quality [20, 21] and that, when present, EBS effects are often small [20–24]. For example, EBS reduces chronic pain by only 12% (95% CI 8% to 15%), below the threshold for a minimal clinically important difference [22], and anodal EBS is associated with a significant reduction in reaction time, but the magnitude of this effect is small (Hedges’ g: −0.10, 95% CI −0.16 to −0.04) [24]. Importantly, these estimates exaggerate the true effect sizes because they do not take into account results from unpublished studies [25, 26].
Neuroscience research is often grossly underpowered [6], so how can so many papers report significant (i.e. p < 0.05) results when true EBS effects tend to be small? Low statistical power and publication bias may be to blame. Statistically significant effects from underpowered studies are necessarily inflated [25, 26], and often reflect false-positive results [5]. This explains why the first study to report an effect is often the most likely to overestimate its size (i.e. the winner’s curse) [6]. However, as more studies are published, effect sizes tend to decrease, sometimes to the point of being inconsequential. A classic example comes from transcranial magnetic stimulation research, where the first paper to use a novel stimulation protocol (theta-burst stimulation) reported consistent and powerful effects in a sample of 8 subjects [27]. Years later, when the technique had been adopted by dozens if not hundreds of laboratories, the same group of researchers conducted a larger-scale study involving 52 subjects; this time results were highly variable, with “no overall effect” [28]. These issues are particularly troublesome because researchers continuously want to publish new discoveries. Stimulation techniques and paradigms are varied or applied to new patient groups, rendering the findings novel. Thus, many papers may suffer from the winner’s curse, and only when meta-analyses pool the results of these related, but at the time novel, studies is it possible to estimate the true size of an effect. Researchers using EBS must therefore take care when designing studies. With small effects, sample sizes need to be increased to obtain adequate statistical power [6] and precise estimates of the studied effects [29]. When sample size calculations are performed, they should not be based on the inflated effects reported by small underpowered studies, as this will result in too few subjects being tested [6].
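The winner’s curse is easy to demonstrate numerically. The short simulation below is a sketch, not an analysis from the present study; it assumes a true effect of 0.2 SD and 15 subjects per group, analysed with an unpaired t-test, and shows that the subset of simulated studies reaching p < 0.05 reports effects several times larger than the true effect.

```python
# Illustrative simulation of the winner's curse: significance filtering
# inflates effect sizes when studies are underpowered.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n_per_group, n_studies = 0.2, 15, 10_000  # assumed values

all_effects, significant_effects = [], []
for _ in range(n_studies):
    sham = rng.normal(0.0, 1.0, n_per_group)
    active = rng.normal(true_effect, 1.0, n_per_group)
    effect = active.mean() - sham.mean()          # observed effect in SD units
    p = stats.ttest_ind(active, sham).pvalue
    all_effects.append(effect)
    if p < 0.05:
        significant_effects.append(effect)

print(f"statistical power      ~ {len(significant_effects) / n_studies:.2f}")  # ~0.07
print(f"mean effect, all       ~ {np.mean(all_effects):.2f}")                  # ~0.20
print(f"mean effect, p < 0.05  ~ {np.mean(significant_effects):.2f}")          # ~0.75
```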
Publication bias, where significant results are more likely to be published, was highlighted as a problem by several respondents. While our audit found that 90% of papers reported significant effects for the primary research outcome, only 45–50% of respondents reported being able to routinely reproduce published effects for anodal and cathodal EBS. Even if we consider the additional 30–35% of respondents who were sometimes able to reproduce published effects, the discrepancy between the published literature and the experience of respondents likely reflects publication bias in EBS research. At the heart of publication bias is the thirst to publish novel findings and the reliance on p-values and α = 0.05 [30, 31]. Because statistically significant results (not to be confused with scientifically or functionally significant ones) are more likely to be published, practices such as p-hacking (trying several analyses and data inclusion/exclusion criteria and selectively reporting those that produce significant results) and HARKing (hypothesising after results are known) are part of the research landscape [32–34]. In our survey, for example, 25% of respondents admitted to, at one time or another, modifying their statistical analysis to obtain a favourable p-value. Other questionable practices that favour significant results in EBS research were also identified. Sadly, institutional incentives that reward the number of papers published lead to the natural selection of practices that produce significant results; unfortunately, bad science gets results [35, 36]. In response to such issues, there have been calls to increase statistical power to 90% and to decrease significance thresholds to α = 0.005 or 0.001 to avoid false-positive results [37, 38]. With the traditional threshold of α = 0.05, a perfectly performed replication study has only a 50% chance of reproducing a significant effect [6, 37]: a coin flip! Focus should be less on p-values and more on the scientific importance of effects and their confidence intervals. One of the benefits of larger sample sizes is that effect size estimates are more precise [6, 29, 37]; by increasing the certainty surrounding the size of investigated effects, readers and editors can judge results on their merits whether they are positive or negative, doing away with the fickle p-value [39].
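The practical cost of these recommendations can be gauged for a small effect such as the reaction-time effect discussed above (d ≈ 0.2). The sketch below is purely illustrative and uses the statsmodels power routines to compare the per-group sample size needed under the conventional threshold with that needed at α = 0.005 and 90% power.

```python
# Illustrative sketch: per-group sample sizes for a small effect (d = 0.2)
# in a two-group comparison, under conventional and stricter thresholds.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for alpha, power in [(0.05, 0.80), (0.005, 0.90)]:
    n_per_group = power_analysis.solve_power(effect_size=0.2, alpha=alpha,
                                             power=power, alternative="two-sided")
    print(f"alpha = {alpha}, power = {power}: ~{n_per_group:.0f} subjects per group")
# Roughly 390 per group at alpha = 0.05 / 80% power,
# and roughly 840 per group at alpha = 0.005 / 90% power.
```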
Surveys can be influenced by various forms of bias. For example, those that focus on sensitive issues, questionable research practices in our case, may be biased by socially desirable responding: the tendency for respondents to give overly positive self-descriptions [40]. Unfortunately, only 0.2% of health-related surveys consider the effects of socially desirable responding on their results [41], and the present survey was not specifically designed to identify or correct for this. If present, socially desirable responding may have led us to underestimate negative practices and overestimate positive ones. However, socially desirable responding is less prevalent in anonymous self-report surveys [42], especially online ones such as ours [43]. It was recently noted that survey wording and interpretation may cause the prevalence of questionable research practices to be overestimated [44], and it is possible that this phenomenon influenced our results. Surveys are also at risk of self-selection and non-response biases [45, 46]. These biases may in part explain the glaring discrepancy between our audit and survey results. Nevertheless, the audit was based on a large, random sample of EBS papers and is thus representative of the published EBS literature. In sum, obtaining accurate estimates of questionable research practices is not simple.
The lack of transparency and scientific rigour we have uncovered likely reflects the pressure on researchers to publish significant results in high-impact journals [14, 26, 35, 47–50]. This pressure drives a vicious cycle in which journals, institutions and funding agencies expect more and, to survive and meet these expectations, scientists consciously or unconsciously adopt questionable or fraudulent research practices [7, 35, 36, 47–52]. These pressures and problems are not unique to research in EBS, nor are they new. But currently they are casting a shadow on the genuine efforts of researchers to improve brain function, a goal that is as important as ever. Fortunately, awareness of these issues is on the rise [1–7, 14, 26, 35, 36, 47–52], and recommendations and guidelines are emerging. These include justifying sample sizes with a priori power calculations, pre-registering methods and analysis plans, reporting research transparently, making data and computer code openly available, and rewarding reproduction and replication studies [29, 53–59]. In EBS studies, researchers should include control brain sites in their stimulation protocols to overcome the shortcomings of sham stimulation, and include control tasks to ensure the specificity of reported effects [60]. As highlighted by Poldrack et al. [55], these solutions are uncontroversial, yet their implementation is often challenging for researchers and best practices are not necessarily followed.
The clinical promise of EBS will remain illusory until the practice of neuroscience becomes more open and robust.
Supporting information
S2 Table. Reasons why EBS study results were not published.
https://doi.org/10.1371/journal.pone.0175635.s007
(PDF)
S3 Table. Size of effect when able to reproduce published findings and steps taken when not able to reproduce findings.
https://doi.org/10.1371/journal.pone.0175635.s008
(PDF)
Acknowledgments
We thank Dr. Andrew Cartwright for help implementing the online survey. This work was supported by an NHMRC grant (JLT, SCG).
Author Contributions
- Conceptualization: MEH CKL JLT SCG.
- Data curation: MEH SCG.
- Formal analysis: MEH CKL JLT SCG.
- Funding acquisition: JLT SCG.
- Investigation: MEH JLT SCG.
- Methodology: MEH CKL JLT SCG.
- Project administration: MEH.
- Resources: JLT SCG.
- Software: MEH.
- Supervision: SCG.
- Validation: MEH CKL JLT SCG.
- Writing – original draft: MEH SCG.
- Writing – review & editing: MEH CKL JLT SCG.
References
- 1. Baker M. 1,500 scientists lift the lid on reproducibility. Nature 2016; 533:452–454. pmid:27225100
- 2. Open Science Collaboration. Estimating the reproducibility of psychological science. Science 2015; 349:aac4716. pmid:26315443
- 3. Sullivan PF. Spurious genetic associations. Biol Psychiatry 2007; 61:1121–6. pmid:17346679
- 4. Eklund A, Nichols TE, Knutsson H. Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proc Natl Acad Sci USA. 2016; 113:7900–5. pmid:27357684
- 5. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005; 2:e124. pmid:16060722
- 6. Button KS, Ioannidis JP, Mokrysz C, Nosek BA, Flint J, Robinson ES, et al. Power failure: why small sample size undermines the reliability of neuroscience. Nat Rev Neurosci. 2013; 14:365–76. pmid:23571845
- 7. Héroux ME, Taylor JL, Gandevia SC. The use and abuse of transcranial magnetic stimulation to modulate corticospinal excitability in humans. PLoS One. 2015; 10:e0144151. pmid:26629998
- 8. Dubljević V, Saigle V, Racine E. The rising tide of tDCS in the media and academic literature. Neuron. 2014; 82:731–6. pmid:24853934
- 9. Wiethoff S, Hamada M, Rothwell JC. Variability in response to transcranial direct current stimulation of the motor cortex. Brain Stimul. 2014; 7:468–75. pmid:24630848
- 10. Koenigs M, Ukueberuwa D, Campion P, Grafman J, Wassermann E. Bilateral frontal transcranial direct current stimulation: Failure to replicate classic findings in healthy subjects. Clin Neurophysiol. 2009; 120:80–4. pmid:19027357
- 11. Horvath JC, Vogrin S, Carter O, Cook MJ, Forte JD. Effects of a common transcranial direct current stimulation (tDCS) protocol on motor evoked potentials found to be highly variable within individuals over 9 testing sessions. Exp Brain Res. 2016; 234:2629–42. pmid:27150317
- 12. Horvath JC, Forte JD, Carter O. Evidence that transcranial direct current stimulation (tDCS) generates little-to-no reliable neurophysiologic effect beyond MEP amplitude modulation in healthy human subjects: A systematic review. Neuropsychologia. 2015; 66:213–36.
- 13. Boutron I, Dutton S, Ravaud P, Altman DG. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA. 2010; 303:2058–64. pmid:20501928
- 14. Curran-Everett D, Benos DJ. Guidelines for reporting statistics in journals published by the American Physiological Society. Am J Physiol Regul Integr Comp Physiol. 2004; 287:R247–9. pmid:15789454
- 15. Brunoni AR, Moffa AH, Fregni F, Palm U, Padberg F, Blumberger DM, et al. Transcranial direct current stimulation for acute major depressive episodes: meta-analysis of individual patient data. Br J Psychiatry. 2016; 208:522–31. pmid:27056623
- 16. Zhu CE, Yu B, Zhang W, Chen WH, Qi Q, Miao Y. Effectiveness and safety of transcranial direct current stimulation in fibromyalgia: A systematic review and meta-analysis. J Rehabil Med. 2017; 49:2–9. pmid:27983739
- 17. Lowe CJ, Vincent C, Hall PA. Effects of noninvasive brain stimulation on food cravings and consumption: a meta-analytic review. Psychosom Med. 2017; 79:2–13. pmid:27428861
- 18. Elsner B, Kugler J, Pohl M, Mehrholz J. Transcranial direct current stimulation (tDCS) for idiopathic Parkinson’s disease. Cochrane Database Syst Rev. 2016; 7:CD010916. pmid:27425786
- 19. Elsner B, Kugler J, Pohl M, Mehrholz J. Transcranial direct current stimulation (tDCS) for improving aphasia in patients with aphasia after stroke. Cochrane Database Syst Rev. 2016; 5:CD009760.
- 20. Shirahige L, Melo L, Nogueira F, Rocha S, Monte-Silva K. Efficacy of noninvasive brain stimulation on pain control in migraine patients: a systematic review and meta-analysis. Headache. 2016; 56:1565–96. pmid:27869996
- 21. Elsner B, Kugler J, Pohl M, Mehrholz J. Transcranial direct current stimulation (tDCS) for improving activities of daily living, and physical and cognitive functioning, in people after stroke. Cochrane Database Syst Rev. 2016; 3:CD009645. pmid:26996760
- 22. O’Connell NE, Wand BM, Marston L, Spencer S, Desouza LH. Non-invasive brain stimulation techniques for chronic pain. Cochrane Database Syst Rev. 2014; 4:CD008208.
- 23. Mancuso LE, Ilieva IP, Hamilton RH, Farah MJ. Does transcranial direct current stimulation improve healthy working memory?: a meta-analytic review. J Cogn Neurosci. 2016; 28:1063–89.
- 24. Dedoncker J, Brunoni AR, Baeken C, Vanderhasselt MA. A systematic review and meta-analysis of the effects of transcranial direct current stimulation (tDCS) over the dorsolateral prefrontal cortex in healthy and neuropsychiatric samples: influence of stimulation parameters. Brain Stimul. 2016; 9:501–17. pmid:27160468
- 25. Ioannidis JP. Why most discovered true associations are inflated. Epidemiology. 2008; 19:640–8. pmid:18633328
- 26. Schooler J. Unpublished results hide the decline effect. Nature. 2011; 470:437. pmid:21350443
- 27. Huang YZ, Edwards MJ, Rounis E, Bhatia KP, Rothwell JC. Theta burst stimulation of the human motor cortex. Neuron. 2005; 45:201–6. pmid:15664172
- 28. Hamada M, Murase N, Hasan A, Balaratnam M, Rothwell JC. The role of interneuron networks in driving human motor cortical plasticity. Cereb Cortex. 2013; 23:1593–605. pmid:22661405
- 29. Cumming G, Calin-Jageman R. Introduction to the new statistics. New York: Routledge; 2016.
- 30. Sterling TD. Publication decisions and their possible effects on inferences drawn from tests of significance—or vice versa. J Am Stat Assoc. 1959; 54:30–34.
- 31. Franco A, Malhotra N, Simonovits G. Publication bias in the social sciences: unlocking the file drawer. Science. 2014; 345:1502–5. pmid:25170047
- 32. Head ML, Holman L, Lanfear R, Kahn AT, Jennions MD. The extent and consequences of p-hacking in science. PLoS Biol. 2015; 13:e1002106. pmid:25768323
- 33. Kerr NL. HARKing: hypothesizing after the results are known. Pers Soc Psychol Rev. 1998; 2:196–217. pmid:15647155
- 34. Forstmeier W, Wagenmakers EJ, Parker TH. Detecting and avoiding likely false-positive findings—a practical guide. Biol Rev Camb Philos Soc. [Epub ahead of print] pmid:27879038
- 35. Smaldino PE, McElreath R. The natural selection of bad science. R Soc Open Sci. 2016; 3:160348.
- 36. Higginson AD, Munafò MR. Current incentives for scientists lead to underpowered studies with erroneous conclusions. PLoS Biol. 2016; 14:e2000995. pmid:27832072
- 37. Curran-Everett D. Minimizing the chances of false positives and false negatives. J Appl Physiol. 2017; 122:91–5.
- 38. Johnson VE. Revised standards for statistical evidence. Proc Natl Acad Sci USA. 2013; 110:19313–7. pmid:24218581
- 39. Halsey LG, Curran-Everett D, Vowler SL, Drummond GB. The fickle P value generates irreproducible results. Nat Methods. 2015; 12:179–85. pmid:25719825
- 40. Paulhus DL. Socially desirable responding: the evolution of a construct. In: Braun HI, Jackson DN, Wiley DE, editors. The role of constructs in psychological and educational measurements. NJ: Erlbaum; 2002. p. 44–69.
- 41. van de Mortel TF. Faking it: social desirability response bias in self-report research. Aust J Adv Nurs. 2008; 25:40–8.
- 42. Krumpal I. Determinants of social desirability bias in sensitive surveys: a literature review. Qual Quant. 2013; 47:2025–47.
- 43. Kreuter F, Presser S, Tourangeau R. Social desirability bias in CATI, IVR, and web surveys: the effects of mode and question sensitivity. Public Opin Q. 2008; 72:847–65.
- 44. Fiedler K, Schwarz N. Questionable research practices revisited. Soc Psychol Person Sci. 2016; 7:45–52.
- 45. Sills SJ, Song C. Innovations in survey research. Soc Sci Comput Rev. 2002; 20:22–30.
- 46. Eysenbach G, Wyatt J. Using the internet for surveys and health research. J Med Internet Res. 2002; 4:e13. pmid:12554560
- 47. Dwan K, Gamble C, Williamson PR, Kirkham JJ, Reporting Bias Group. Systematic review of the empirical evidence of study publication bias and outcome reporting bias—an updated review. PLoS One. 2013; 8:e66844. pmid:23861749
- 48. Martinson BC, Anderson MS, de Vries R. Scientists behaving badly. Nature. 2005; 435:737–8. pmid:15944677
- 49. Anderson MS, Ronning EA, De Vries R, Martinson BC. The perverse effects of competition on scientists’ work and relationships. Sci Eng Ethics. 2007; 13:437–61. pmid:18030595
- 50. van Dijk D, Manor O, Carey LB. Publication metrics and success on the academic job market. Curr Biol. 2014; 24:R516–7. pmid:24892909
- 51. Héroux M. Inadequate reporting of statistical results. J Neurophysiol. 2016; 116:1536–7. pmid:27678073
- 52. John LK, Loewenstein G, Prelec D. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol Sci. 2012; 23:24–32.
- 53. Nosek BA, Alter G, Banks GC, Borsboom D, Bowman SD, et al. Promoting an open research culture. Science. 2015; 348:1422–5. pmid:26113702
- 54. The Academy of Medical Sciences. Reproducibility and reliability of biomedical research: improving research practice. [cited 2016 Aug 31]. Available from: http://www.acmedsci.ac.uk/policy/policy-projects/reproducibility-and-reliability-of-biomedical-research/
- 55. Poldrack RA, Baker CI, Durnez J, Gorgolewski KJ, Matthews PM, Munafò MR, et al. Scanning the horizon: towards transparent and reproducible neuroimaging research. Nat Rev Neurosci. 2017; 18:115–126. pmid:28053326
- 56. Chambers CD. Registered reports: a new publishing initiative at Cortex. Cortex. 2013; 49:609–10 pmid:23347556
- 57. Chambers CD, Forstmann B, Pruszynski JA. Registered reports at the European Journal of Neuroscience: consolidating and extending peer-reviewed study pre-registration. Eur J Neurosci. 2016 Dec 27. [Epub ahead of print]
- 58. Asendorpf JB, Conner M, Fruyt FD, De Houwer J, Denissen JJA, Fiedler K, et al. Recommendations for increasing replicability in psychology. Eur J Pers. 2013; 27:108–119.
- 59. Wicherts JM, Borsboom D, Kats J, Molenaar D. The poor availability of psychological research data for reanalysis. Am Psychol. 2006;61:726–8. pmid:17032082
- 60. Parkin BL, Ekhtiari H, Walsh VF. Non-invasive human brain stimulation in cognitive neuroscience: a primer. Neuron. 2015; 87:932–45. pmid:26335641