
The bench is closer to the bedside than we think: Uncovering the ethical ties between preclinical researchers in translational neuroscience and patients in clinical trials

  • Mark Yarborough,

    mayarborough@ucdavis.edu

    Affiliation Bioethics Program, University of California Davis, Sacramento, California, United States of America

  • Annelien Bredenoord,

    Affiliation Julius Centrum, Universitair Medisch Centrum Utrecht, Utrecht, The Netherlands

  • Flavio D’Abramo,

    Affiliations Dahlem Research School, Freie Universität Berlin, Berlin, Germany, Max Planck Institute for the History of Science, Berlin, Germany

  • Nanette C. Joyce,

    Affiliations Bioethics Program, University of California Davis, Sacramento, California, United States of America, Department of Physical Medicine and Rehabilitation, University of California Davis, Sacramento, California, United States of America

  • Jonathan Kimmelman,

    Affiliation Studies of Translation, Ethics, and Medicine (STREAM), Biomedical Ethics Unit, McGill University, Montreal, Canada

  • Ubaka Ogbogu,

    Affiliation Faculty of Law, University of Alberta, Edmonton, Canada

  • Emily Sena,

    Affiliation Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom

  • Daniel Strech,

    Affiliations Institute for Ethics, History, and Philosophy of Medicine, Medizinische Hochschule Hannover, Hannover, Germany, Charité Universitätsmedizin Berlin, Berlin, Germany, QUEST – Center for Transforming Biomedical Research, Berlin Institute of Health, Berlin, Germany

  • Ulrich Dirnagl

    Affiliations Charité Universitätsmedizin Berlin, Berlin, Germany, QUEST – Center for Transforming Biomedical Research, Berlin Institute of Health, Berlin, Germany


Abstract

Millions of people worldwide currently suffer from serious neurological diseases and injuries for which there are few, and often no, effective treatments. The paucity of effective interventions is, no doubt, due in large part to the complexity of the disorders, as well as our currently limited understanding of their pathophysiology. The bleak picture for patients, however, is also attributable to avoidable impediments stemming from quality concerns in preclinical research that often escape detection by research regulation efforts. In our essay, we connect the dots between these concerns about the quality of preclinical research and their potential ethical impact on the patients who volunteer for early trials of interventions informed by it. We do so in hopes that a greater appreciation among preclinical researchers of these serious ethical consequences can lead to a greater commitment within the research community to adopt widely available tools and measures that can help to improve the quality of research.

For those who have the misfortune of suffering a stroke or being diagnosed with a progressive neurodegenerative disease, there are few, if any, treatments that will retard or reverse symptoms, prevent major disability, or extend life. However, some will qualify for early trials testing novel drugs or biologics, representing what many see as a welcome option. Whether they realize it or not, those who enroll in these early trials will be trusting a long line of research and countless investigators whose preclinical work laid the foundation for the trial.

Unfortunately, the prospects for success for such trials are exceedingly low. For example, although more than 60 molecules have been investigated in the 22 years since Riluzole received marketing authorization from the United States Food and Drug Administration (FDA) for treatment of amyotrophic lateral sclerosis (ALS), there has been only one new FDA-approved drug, edaravone, as a result of all these trials [1,2]. In the case of Alzheimer disease, although clinical trials have been conducted for decades, there remains no approved drug that effectively combats the disease, as the most recent report of a failed phase III trial sadly reminds us [3]. As for stroke, despite the numerous neuroprotective drugs that ameliorate the consequences of a stroke in preclinical models, none of these drugs has been effective in patients [4].

This high rate of failure undoubtedly reflects the complexity of neurological diseases and injuries and the current limits of our understanding of their pathophysiology [5]. Further adding to the scientific challenges is the fact that few animal models mimic complex human brain phenomena, including human-type cognition, emotion, and behavior [6]. And, given their high moral status, the nonhuman primates who do share these traits are generally not available for study, either at all or in sufficient numbers.

Ethical challenges with the design of clinical trials themselves create additional hurdles that can impede progress. There are often safety concerns associated with novel interventions, such as the use of genetically modified stem cells, so phase I trials are often initially conducted on the sickest people with disorders, like ALS, that cause short life expectancies. This means the opportunity is lost to look for and learn about delayed safety and efficacy issues that may arise long after transplantation, information that can prove critical in subsequent initial trials in other diseases in which patients have longer life expectancies. In addition, since many neurological disorders are disorders of suffering—e.g., severe depression, neuropathic pain—their very nature creates ethical challenges for both research ethics committees (RECs) and participant recruitment. Other degenerative disorders similarly prove ethically complex to investigate because they necessitate intervention in prodromal stages that expose “healthy at-risk” individuals to unproven and possibly unsafe treatments. Further, such studies must be of long duration, proving costly to industry sponsors.

These challenges notwithstanding, and despite the dedication of researchers, multiple, ubiquitous, and, most importantly, avoidable impediments further hinder the progress sought by all concerned (see Fig 1). Impediments stem from a broad range of features of preclinical research that can cause problems for virtually all early clinical trials. These include, but are not limited to, matters such as low internal, construct, and external validity; exceedingly low sample sizes; nonvalidated antibodies and biologicals; and substantial publication bias. Space does not permit us to review all of these threats to the validity of the results of preclinical translational research, but meta-research of the last decade has exposed them in great detail [7–17]. To illustrate their magnitude and subsequent potential impact on the patients who enroll in early clinical trials, we will look first at matters related to publication bias.
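The "exceedingly low sample sizes" impediment can be made concrete with a toy simulation (a sketch only; the group size, effect size, and critical value below are illustrative assumptions, not figures from this essay): with eight animals per arm, an experiment detects a genuinely effective intervention of moderate size only a small fraction of the time.

```python
import random
import statistics

random.seed(1)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

def estimated_power(n, effect, trials=2000, crit=2.14):
    # Share of simulated experiments whose |t| clears an approximate
    # two-sided 5% critical value (t of about 2.14 for df near 14 when n = 8).
    hits = sum(
        abs(welch_t([random.gauss(effect, 1) for _ in range(n)],
                    [random.gauss(0, 1) for _ in range(n)])) > crit
        for _ in range(trials)
    )
    return hits / trials

power = estimated_power(n=8, effect=0.5)  # 8 animals per arm, moderate true effect
print(f"Estimated power: {power:.2f}")    # well below the conventional 0.8
```

Under these assumptions most such experiments miss a real effect, and those that do reach significance tend to do so by overestimating it, which is the "winner's curse" discussed below.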

Fig 1. Avoidable deficiencies in preclinical research cause detrimental ripple effects all along the translation pathway that erode both the safety and ethics of early clinical trials.

https://doi.org/10.1371/journal.pbio.2006343.g001

If we are going to use data from preclinical studies to inform clinical trials, then the available data that describe how effective an intervention is for a given disease need to reflect adequately the entirety of data that exist testing such an assertion. This requires the publication of all experiments and outcomes assessed, irrespective of their findings. Unfortunately, experiments that find a positive effect are substantially more likely to be published than similar experiments testing the same intervention that find it not to be effective. In addition, studies that assess multiple outcomes often only report the outcomes that show a positive effect.

The bias that can result from such selective reporting is apparent in an assessment of animal studies describing neurological diseases, which observed an excess of significant findings compared to what was expected, suggesting reporting biases in the literature [18]. In the preclinical stroke literature, conservative estimates suggest that one in six experiments remains unpublished, leading to an overestimation of treatment effects of about 30% [19]. Such studies show the extent to which the current overrepresentation of positive studies—as well as the low statistical power, or “winner’s curse,” of neuroscience studies that reduces the chance that a statistically significant result is indicative of a true effect [20]—can erroneously lead us to deem an intervention substantially more effective than it is.
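The inflation mechanism can be sketched with a toy Monte Carlo simulation (the true effect, sample size, and significance cutoff below are illustrative assumptions, not the estimates from the cited meta-analyses): if only statistically significant experiments reach the literature, the average published effect can be several times the true one.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.3   # modest genuine benefit, in standardised units
N = 10              # animals per arm, typical of small preclinical studies

def run_experiment():
    """Return (observed effect, reached significance?) for one small study."""
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    control = [random.gauss(0, 1) for _ in range(N)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(treated) / N + statistics.variance(control) / N) ** 0.5
    return diff, abs(diff / se) > 2.1  # rough two-sided 5% cutoff for df near 18

results = [run_experiment() for _ in range(5000)]
all_effects = [d for d, _ in results]
published = [d for d, significant in results if significant]

print(f"True effect:                 {TRUE_EFFECT:.2f}")
print(f"Mean effect, all studies:    {statistics.mean(all_effects):.2f}")
print(f"Mean effect, published only: {statistics.mean(published):.2f}")
```

Averaging every experiment recovers the true effect; averaging only the "published" subset overstates it badly, because small studies can only reach significance when sampling noise happens to exaggerate the benefit.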

Such publication bias and the problems it poses for patients in early trials would be less prevalent if more preclinical researchers followed the many recommendations that are available to improve the design and conduct of in vivo animal experiments [21]. Evidence from just one example of thoughtful recommendations, the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines, is illustrative. Developed in 2010 to improve reporting about animal research, they are now endorsed by more than 1,000 journals. Yet the most current reports about their use show that the preclinical research community remains largely unaware of the guidelines and slow to adopt them [22].

Experience with expert guidance from the Stroke Treatment Academic Industry Roundtable (STAIR) is equally troubling because it shows that it is not just individual research teams that are ignoring useful recommendations that could strengthen early trials. Federal drug approval agencies do as well. Among other things, STAIR publishes and updates recommendations for preclinical standards in the development of drugs for acute ischemic stroke [23,24]. These include expert guidance for clinical trialists on preclinical evidence requirements for launching trials. However, the corresponding European Medicines Agency (EMA) guideline for planning stroke trials does not refer to any STAIR preclinical recommendations [25], and the FDA does not provide a stroke-specific guideline.

The cumulative weight of the foregoing considerations shows that patients can enter early trials based on preclinical studies that may not have been sufficiently powered, whose investigators may not have been blinded, and the results of which may never have been replicated. One might hope that regulatory review processes would winnow out such problematic research, but the evidence on this front as well is far from encouraging. To begin with, in the US, applications to the FDA to launch initial human studies can be approved exclusively on the basis of preclinical safety data, not evidence of efficacy, revealing a narrow focus [26,27].

RECs have a broader focus, since they must make a positive determination that the potential benefits of a study outweigh its risks. They rely heavily on investigator brochures (IBs) to help them weigh risks against benefits. A recently completed study of how preclinical efficacy studies (PCESs) are presented in IBs produced discouraging results [28]. It reviewed the nonclinical sections of 109 IBs for phase I/II trials submitted to German RECs over a period of six years (2010–2015). It found that reporting on PCESs infrequently describes study elements essential for evaluating those studies, including sample size (26%), baseline characterization of animals (18%), randomization (4%), sample size calculation (0%), and blinded outcome assessment (0%). In 81% of all IBs, none of the included PCESs referenced a published report. In 82% of all IBs, the reported preclinical efficacy studies were exclusively positive. The study authors concluded that most IBs for phase I/II studies do not allow RECs—nor others such as federal regulators, investigators, or data and safety monitoring boards, for that matter—to systematically appraise the strength of the supporting preclinical findings.

Collectively, the foregoing considerations about preclinical research raise substantive concerns about whether early trials actually meet the ethical threshold found in all international codes of research ethics. Those codes stipulate that risks must be minimized and that risks must be outweighed by anticipated benefits. Equally critical is a minimum threshold for anticipated social value of a given trial [29,30]. RECs by necessity must draw upon preclinical safety and efficacy evidence in their assessment of the risks, benefits, and anticipated social value of early trials. Given the embedded problems in preclinical evidence of the sorts we have highlighted, two conclusions are unavoidable. First, the reliability of RECs’ assessments is questionable, given the documented weaknesses of the evidence they draw upon. Second, it is clear that trial participants are exposed to much more uncertainty about risks, benefits, and social value than they should be.

There is, of course, one other important ethics safeguard besides REC review, one meant to stand as a buffer between early studies that receive REC approval and the people with serious neurological diseases and injuries who are candidates for those studies: the informed consent process. But available evidence about informed consent also raises major questions (see Fig 2). While consent documents are required to quantify risks, information about benefits is typically tied to the portion of informed consent forms explaining the purpose(s) of the study. Consequently, while forms state study objectives—i.e., what investigators hope to learn during the course of the trial—and disclose the fact that these objectives may not occur, there is no mention of how much uncertainty there is regarding whether a trial might result in the expected benefits and risks. For example, it is almost a certainty that no information is ever disclosed to potential volunteers about whether the strength of the scientific evidence relied upon to launch a trial meets basic standards of reliability, such as whether critical studies were adequately powered, whether investigators were blinded in preclinical studies, or whether regulatory approval agencies examined any efficacy data.

Fig 2. Ethically sound informed consent requires disclosure of complete and accurate information about the potential risks and benefits of early trials.

Methodologically deficient preclinical studies preclude such adequate disclosures. This only compounds other well-documented problems in the informed consent process, resulting in potentially misinformed research participants.

https://doi.org/10.1371/journal.pbio.2006343.g002

Thus, the informed consent process will do little, if anything, to counter patient expectations that a trial is built on solid science. Nor will it offset the well-documented tendency of research participants to misunderstand critical aspects of what it means to be in a clinical trial. Research shows that participants are likely both to misunderstand how the clinical trial will differ from their regular clinical care, what is known as therapeutic misconception (TM), and to overestimate the potential benefits of participating in the trial, what is known as therapeutic misestimation (TME). Both undermine the effectiveness of informed consent for clinical trials in general and early trials of novel modalities in particular [31–33].

Informed consent processes are further weakened by well-documented problems with exaggerated portrayals of, or hype regarding, biomedical research [34,35]. This hype not only reaches participants through popular media discourse around innovative research; it also influences the discourse about research within the scientific community itself [36,37]. Hype can positively dispose clinical investigators toward trial launch and can cause trial participants to have unrealistic expectations about their trial as well, as the evidence about both TM and TME attests. Thus, it is quite likely that most volunteers enter early trials without appreciating the extent to which they are running the risk that they might make themselves even worse off than they already are.

The landscape of preclinical research and the clinical trials it supports that we have just described is the reality faced by those with serious neurological diseases and injuries who may wish to enter early trials. Their suffering is compounded by the bleak prospects that we described for breakthrough treatments that might lessen their burdens. We know that many features of their reality will not be changing anytime soon. First, there is little that can be done about the ethical complexities intrinsic to the design and conduct of early trials involving people from the affected populations. Second, regulatory bodies will be slow to change, as current efforts that began in 2011 to make changes to federal regulations governing human subject research in the US attest. This means that RECs will continue to exercise broad and, at times, flawed discretion over the trials they review [38]. And it further means that the informed consent process will continue to mask the uncertainty pertaining to the potential for both risks and benefits in early trials, since what information gets disclosed during the process is largely determined by the requirements set forth by research regulations.

Some features of the landscape of translational neurosciences that we have described are subject to change, but only if the research community musters the requisite willingness [39]. Multiple groups have long focused on matters that erode the quality and reliability of research, and they have promulgated several remedies to help address them [13,40,41]. These include measures to reduce bias and increase statistical conclusion validity [40,42–46], enforcing adherence to guidelines and recommendations [21], transparent reporting [47], and discriminating between exploratory and confirmatory research [48], among others. Adopting these kinds of reforms can have a positive impact. For example, some recent studies [49,50] have indicated that reporting of preclinical studies can be improved when journals adapt their instructions to authors.

How much of a difference widespread uptake of these remedies would make remains unclear. Definitive evidence is lacking that robust, reliable, and reproducible in vivo modeling can, in fact, improve the prediction of success in subsequent clinical trials and the protection of patients against harm. It has to be noted, however, that the current model of drug development, as well as its regulatory framework, is based on the assumption that preclinical research regularly meets critical quality thresholds. Regardless of the model, research that lacks rigor and reports results selectively is fit neither to efficiently develop novel therapeutic strategies nor to assist RECs in weighing harms and benefits for patients in a meaningful way.

That is why the limited uptake of proposed remedies to improve the robustness of preclinical research is so troubling. It perpetuates many of the real-life consequences described above for the patients who volunteer for early trials. If, on the other hand, there were more uptake of them, the picture presented in Fig 1 could be significantly altered, because most of the vulnerabilities of the translational process it identifies could at least be mitigated, if not eliminated. Patients could then have greater trust that regulatory approval authorities and RECs consistently draw upon strong evidence when they review and approve trials investigating new drugs and devices. As a result, the negative downstream consequences that problematic preclinical research presently imposes on patients could be lessened, the current landscape we have described throughout this essay could be a bit brighter, and the path forward a bit clearer.

References

  1. Petrov D, Mansfield C, Moussy A, Hermine O. ALS Clinical Trials Review: 20 Years of Failure. Are We Any Closer to Registering a New Treatment? Front Aging Neurosci. 2017;9:68. pmid:28382000.
  2. Mitsumoto H, Brooks BR, Silani V. Clinical trials in amyotrophic lateral sclerosis: why so many negative trials and how can trials be improved? Lancet Neurol. 2014;13(11):1127–38. pmid:25316019.
  3. Garde D. Another Alzheimer’s failure: Axovant’s drug flops in late-stage trial. STAT [Internet]. 2017. https://www.statnews.com/2017/09/26/axovant-intepirdine-trial/. [cited 2017 Nov 17].
  4. Dirnagl U. Thomas Willis Lecture: Is Translational Stroke Research Broken, and if So, How Can We Fix It? Stroke. 2016;47(8):2148–53. pmid:27354221.
  5. London AJ, Kimmelman J. Why clinical translation cannot succeed without failure. Elife. 2015;4:e12844. pmid:26599839.
  6. Mathews DJ, Sugarman J, Bok H, Blass DM, Coyle JT, Duggan P, et al. Cell-based interventions for neurologic conditions: ethical challenges for early human trials. Neurology. 2008;71(4):288–93. Epub 2008/05/09. pmid:18463365.
  7. Ioannidis JP. Acknowledging and Overcoming Nonreproducibility in Basic and Preclinical Research. JAMA. 2017;317(10):1019–20. Epub 2017/02/14. pmid:28192565.
  8. Vogt L, Reichlin TS, Nathues C, Würbel H. Authorization of Animal Experiments Is Based on Confidence Rather than Evidence of Scientific Rigor. PLoS Biol. 2016;14(12):e2000598. Epub 2016/12/03. pmid:27911892.
  9. Reichlin TS, Vogt L, Würbel H. The Researchers’ View of Scientific Rigor-Survey on the Conduct and Reporting of In Vivo Research. PLoS ONE. 2016;11(12):e0165999. Epub 2016/12/03. pmid:27911901.
  10. Egan KJ, Vesterinen HM, Beglopoulos V, Sena ES, Macleod MR. From a mouse: systematic analysis reveals limitations of experiments testing interventions in Alzheimer’s disease mouse models. Evid Based Preclin Med. 2016;3(1):e00015. Epub 2016/08/01. pmid:29214041.
  11. Hartung T. Look back in anger—what clinical studies tell us about preclinical work. ALTEX. 2013;30(3):275–91. Epub 2013/07/19. pmid:23861075.
  12. Hirst JA, Howick J, Aronson JK, Roberts N, Perera R, Koshiaris C, et al. The need for randomization in animal trials: an overview of systematic reviews. PLoS ONE. 2014;9(6):e98856. Epub 2014/06/07. pmid:24906117.
  13. Macleod MR, Lawson McLean A, Kyriakopoulou A, Serghiou S, de Wilde A, Sherratt N, et al. Risk of Bias in Reports of In Vivo Research: A Focus for Improvement. PLoS Biol. 2015;13(10):e1002273. pmid:26460723.
  14. Howells DW, Sena ES, Macleod MR. Bringing rigour to translational medicine. Nat Rev Neurol. 2014;10(1):37–43. Epub 2013/11/20. pmid:24247324.
  15. O’Connor AM, Sargeant JM. Critical appraisal of studies using laboratory animal models. ILAR J. 2014;55(3):405–17. Epub 2014/12/30. pmid:25541543.
  16. Lindner MD. Clinical attrition due to biased preclinical assessments of potential efficacy. Pharmacol Ther. 2007;115(1):148–75. Epub 2007/06/19. pmid:17574680.
  17. Peers IS, South MC, Ceuppens PR, Bright JD, Pilling E. Can you trust your animal study data? Nat Rev Drug Discov. 2014;13(7):560. Epub 2014/06/07. pmid:24903777.
  18. Tsilidis KK, Panagiotou OA, Sena ES, Aretouli E, Evangelou E, Howells DW, et al. Evaluation of excess significance bias in animal studies of neurological diseases. PLoS Biol. 2013;11(7):e1001609. pmid:23874156.
  19. Sena ES, van der Worp HB, Bath PM, Howells DW, Macleod MR. Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biol. 2010;8(3):e1000344. pmid:20361022.
  20. Button KS, Ioannidis JP, Mokrysz C, Nosek BA, Flint J, Robinson ES, et al. Power failure: why small sample size undermines the reliability of neuroscience. Nat Rev Neurosci. 2013;14(5):365–76. pmid:23571845.
  21. Henderson VC, Kimmelman J, Fergusson D, Grimshaw JM, Hackam DG. Threats to validity in the design and conduct of preclinical efficacy studies: a systematic review of guidelines for in vivo animal experiments. PLoS Med. 2013;10(7):e1001489. Epub 2013/08/13. pmid:23935460.
  22. Enserink M. Sloppy reporting on animal studies proves hard to change. Science. 2017;357(6358):1337–8. Epub 2017/10/01. pmid:28963232.
  23. Stroke Therapy Academic Industry Roundtable (STAIR). Recommendations for standards regarding preclinical neuroprotective and restorative drug development. Stroke. 1999;30(12):2752–8. Epub 1999/12/03. pmid:10583007.
  24. Fisher M, Feuerstein G, Howells DW, Hurn PD, Kent TA, Savitz SI, et al. Update of the stroke therapy academic industry roundtable preclinical recommendations. Stroke. 2009;40(6):2244–50. Epub 2009/02/28. pmid:19246690.
  25. European Medicines Agency. Clinical investigation of medicinal products for the treatment of acute stroke. 2001. http://www.ema.europa.eu/ema/index.jsp?curl=pages/regulation/general/general_content_001188.jsp&mid=WC0b01ac0580034cf5. [cited 2017 Jun 7].
  26. FDA. Investigational New Drug (IND) Application. https://www.fda.gov/Drugs/DevelopmentApprovalProcess/HowDrugsareDevelopedandApproved/ApprovalApplications/InvestigationalNewDrugINDApplication/default.htm. [cited 2017 Nov 14].
  27. Kimmelman J, Federico C. Consider drug efficacy before first-in-human trials. Nature. 2017;542(7639):25–7. Epub 2017/02/06. pmid:28150789.
  28. Wieschowski S, Chin WWL, Federico C, Sievers S, Kimmelman J, Strech D. Preclinical Efficacy Studies in Investigator Brochures: Do They Enable Risk-Benefit Assessment? PLoS Biol. 2018;16(4):e2004879. Epub 2018/04/06. pmid:29621228.
  29. Habets MG, van Delden JJ, Bredenoord AL. The unique status of first-in-human studies: strengthening the social value requirement. Drug Discov Today. 2017;22(2):471–5. pmid:27894931.
  30. Emanuel EJ, Wendler D, Grady C. What makes clinical research ethical? JAMA. 2000;283(20):2701–11. Epub 2000/05/20. pmid:10819955.
  31. Appelbaum PS, Roth LH, Lidz C. The therapeutic misconception: informed consent in psychiatric research. Int J Law Psychiatry. 1982;5(3–4):319–29. Epub 1982/01/01. pmid:6135666.
  32. Kimmelman J. The therapeutic misconception at 25: treatment, research, and confusion. Hastings Cent Rep. 2007;37(6):36–42. Epub 2008/01/09. pmid:18179103.
  33. Horng S, Grady C. Misunderstanding in clinical research: distinguishing therapeutic misconception, therapeutic misestimation, and therapeutic optimism. IRB. 2003;25(1):11–6. pmid:12833900.
  34. Kamenova K, Caulfield T. Stem cell hype: media portrayal of therapy translation. Sci Transl Med. 2015;7(278):278ps4. Epub 2015/03/13. pmid:25761887.
  35. Wyse RK, Brundin P, Sherer TB. Nilotinib—Differentiating the Hope from the Hype. J Parkinsons Dis. 2016;6(3):519–22. Epub 2016/07/20. pmid:27434298.
  36. Caulfield T. Biotechnology and the popular press: hype and the selling of science. Trends Biotechnol. 2004;22(7):337–9. Epub 2004/07/13. pmid:15245905.
  37. Caulfield T, Condit C. Science and the sources of hype. Public Health Genomics. 2012;15(3–4):209–17. Epub 2012/04/11. pmid:22488464.
  38. Grankvist H, Kimmelman J. How do researchers decide early clinical trials? Med Health Care Philos. 2016;19(2):191–8. Epub 2016/02/03. pmid:26833467.
  39. Begley CG, Ellis LM. Drug development: Raise standards for preclinical cancer research. Nature. 2012;483(7391):531–3. Epub 2012/03/31. pmid:22460880.
  40. Steward O, Balice-Gordon R. Rigor or mortis: best practices for preclinical research in neuroscience. Neuron. 2014;84(3):572–81. pmid:25442936.
  41. ISSCR (International Society for Stem Cell Research). Guidelines for stem cell science and clinical translation. 2016. http://www.isscr.org/docs/default-source/guidelines/isscr-guidelines-for-stem-cell-research-and-clinical-translation.pdf?sfvrsn=2
  42. Peers IS, Ceuppens PR, Harbron C. In search of preclinical robustness. Nat Rev Drug Discov. 2012;11(10):733–4. Epub 2012/10/02. pmid:23023666.
  43. Aban IB, George B. Statistical considerations for preclinical studies. Exp Neurol. 2015;270:82–7. Epub 2015/03/01. pmid:25725352.
  44. Ioannidis JP. How to make more published research true. PLoS Med. 2014;11(10):e1001747. pmid:25334033.
  45. American Society for Cell Biology. How Can Scientists Enhance Rigor in Conducting Basic Research and Reporting Research Results? 2015. http://www.ascb.org/wp-content/uploads/2015/11/How-can-scientist-enhance-rigor.pdf
  46. FASEB. Enhancing Research Reproducibility: Recommendations from the Federation of American Societies for Experimental Biology. 2016. https://www.faseb.org/Portals/2/PDFs/opa/2016/FASEB_Enhancing%20Research%20Reproducibility.pdf
  47. Landis SC, Amara SG, Asadullah K, Austin CP, Blumenstein R, Bradley EW, et al. A call for transparent reporting to optimize the predictive value of preclinical research. Nature. 2012;490(7419):187–91. Epub 2012/10/13. pmid:23060188.
  48. Kimmelman J, Mogil JS, Dirnagl U. Distinguishing between exploratory and confirmatory preclinical research will improve translation. PLoS Biol. 2014;12(5):e1001863. pmid:24844265.
  49. The NPQIP Collaborative Group. Findings of a retrospective, controlled cohort study of the impact of a change in Nature journals’ editorial policy for life sciences research on the completeness of reporting study design and execution. bioRxiv [Internet]. 2017. http://dx.doi.org/10.1101/187245. [cited 2018 Apr 6].
  50. Minnerup J, Zentsch V, Schmidt A, Fisher M, Schabitz WR. Methodological Quality of Experimental Stroke Studies Published in the Stroke Journal: Time Trends and Effect of the Basic Science Checklist. Stroke. 2016;47(1):267–72. Epub 2015/12/15. pmid:26658439.