
The Need for Randomization in Animal Trials: An Overview of Systematic Reviews


Background and Objectives

Randomization, allocation concealment, and blind outcome assessment have been shown to reduce bias in human studies. Authors from the Collaborative Approach to Meta Analysis and Review of Animal Data from Experimental Studies (CAMARADES) collaboration recently found that these features protect against bias in animal stroke studies. We extended the scope of the CAMARADES work to include investigations of treatments for any condition.

Methods

We conducted an overview of systematic reviews. We searched Medline and Embase for systematic reviews of animal studies testing any intervention (against any control) and we included any disease area and outcome. We included reviews comparing randomized versus not randomized (but otherwise controlled), concealed versus unconcealed treatment allocation, or blinded versus unblinded outcome assessment.

Results

Thirty-one systematic reviews met our inclusion criteria: 20 investigated treatments for experimental stroke, 4 reviews investigated treatments for spinal cord diseases, while 1 review each investigated treatments for bone cancer, intracerebral hemorrhage, glioma, multiple sclerosis, Parkinson's disease, and treatments used in emergency medicine. In our sample 29% of studies reported randomization, 15% of studies reported allocation concealment, and 35% of studies reported blinded outcome assessment. We pooled the results in a meta-analysis, and in our primary analysis found that failure to randomize significantly increased effect sizes, whereas allocation concealment and blinding did not. In our secondary analyses we found that randomization, allocation concealment, and blinding reduced effect sizes, especially where outcomes were subjective.

Conclusions

Our study demonstrates the need for randomization, allocation concealment, and blind outcome assessment in animal research across a wide range of outcomes and disease areas. Since human studies are often justified based on results from animal studies, our results suggest that unduly biased animal studies should not be allowed to constitute part of the rationale for human trials.


Bias in Animal Studies

Clinical epidemiologists and proponents of evidence-based medicine (EBM) have been using methods to reduce bias in human studies for over four decades. [1]–[5] Random allocation of participants to treatment groups, concealing the allocation sequence from those assigning participants to intervention groups (allocation concealment), and blinding of investigators assessing outcomes are now viewed as fundamental ways of ensuring quality and minimizing bias in clinical trials. [6] This is because concealed random allocation reduces selection bias and blinding outcome assessors reduces detection bias. [5] Armed with these methods, researchers have exposed several common medical practices as ineffective. For example, observational studies led us to believe that sodium fluoride reduced vertebral fractures, [7] that vitamin E reduced major coronary events, [8] and that high-dose aspirin was more effective than low-dose aspirin. [9] But subsequent randomized trials exposed all these treatments as useless or harmful. [10], [11] Benefits of randomization, allocation concealment, and blinding have been confirmed in larger meta-epidemiological studies. In the earliest of these, Schulz et al. (1995) found that odds ratios were exaggerated by 30% in trials lacking allocation concealment and by 17% in studies that lacked blind outcome assessment. [12] Subsequent larger investigations have confirmed these results and also shown that adequate randomization reduces bias in human studies. [13], [14]

A growing body of evidence is beginning to suggest that randomization, allocation concealment, and blinding of outcome assessment can also reduce the risk of bias of animal studies. [15]–[25] Some researchers hypothesize that avoidable biases in animal studies contribute to the failure to translate much experimental work for human benefit. [26], [27] For example, while 503 of 835 candidate drugs for use in the management of stroke appeared effective in animal models, only one (tissue plasminogen activator) has proved sufficiently efficacious in humans. [28]

Much research into the empirical dimensions of bias in animal studies has been conducted by investigators from the Collaborative Approach to Meta Analysis and Review of Animal Data from Experimental Studies (CAMARADES) group. [29] CAMARADES researchers recently conducted an overview of systematic reviews of animal studies researching treatments for experimental stroke, and showed that failure to conceal allocation (but not failure to randomize or blind) exaggerated apparent treatment benefits in animal studies. [30] Despite this research, evidence-based principles have not yet been widely adopted in animal research; a recent study showed that only one in six controlled animal studies uses randomization and only one in five uses blind outcome assessment. [31] We therefore aimed to replicate the CAMARADES study independently and to expand its scope to include all conditions.

Methods

We conducted an overview of systematic reviews. The protocol (unpublished) was finalized by JH, CH, RP, and JA in October 2012. We modified the protocol once to add the secondary analysis (testing the “unpredictability paradox”; see below). We searched MEDLINE and Embase databases (19 April 2012) and scanned reference lists for systematic reviews of animal studies that measured effects of randomization, allocation concealment, or blinding of outcome assessment. We included reviews in any disease area, using any intervention, any control group, any outcome measure and any animal model. We limited our search to the last 20 years and excluded human studies (search strategy in Appendix S1). We also excluded conference papers, studies not reported in English, ecological studies, and epidemiological studies.

Two reviewers (JH and JAH) independently extracted data on numbers of studies, numbers of animals, disease/condition, outcomes, effect measures, and effect sizes with confidence intervals, using piloted data extraction forms. Disagreements were resolved by discussion with other authors. Authors were contacted to request data which were not reported. To enable inclusion of one review [32] we estimated the number of animals in randomized and non-randomized groups by calculating the mean number of animals per study. To test whether this estimation affected our results we carried out a sensitivity analysis by removing the study from the meta-analysis. We assessed the risk of bias of included systematic reviews using the Assessment of Multiple Systematic Reviews (AMSTAR) criteria. [33]
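
The estimation step used for that review can be illustrated with a minimal sketch; the counts below are hypothetical, not figures from the review in question:

```python
# Hypothetical counts: a review reports 24 studies and 480 animals in total,
# but not how many animals were in randomized versus non-randomized studies.
total_animals = 480
n_studies = 24
n_randomized_studies = 7   # assumed number of randomized studies

# Assume the mean number of animals per study applies to both groups.
mean_per_study = total_animals / n_studies                # 20.0
est_randomized = mean_per_study * n_randomized_studies    # estimated animals in randomized studies
est_non_randomized = mean_per_study * (n_studies - n_randomized_studies)
```

The sensitivity analysis then simply re-runs the meta-analysis with this review removed and checks whether the pooled result changes.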

We pooled results using the DerSimonian and Laird random effects model. [34] We reported outcomes for which differences between randomization/no randomization, allocation concealment/no allocation concealment, and blinding/no blinding were reported. We combined different outcomes and measurement units using standardized mean differences (SMDs), and quantified heterogeneity using the I-squared statistic. [35] We used meta-regression in a post-hoc analysis to examine whether various features influenced outcomes. Specifically, we investigated whether study size, disease state (stroke versus all other outcomes), or outcome measure were significantly associated with the effect size or could explain some of the heterogeneity.
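
As a rough illustration of this pooling approach, the sketch below computes standardized mean differences and combines them with DerSimonian and Laird random-effects weights. The three comparisons are invented for illustration, and the implementation is a simplified sketch rather than the analysis code used in the study:

```python
import math

def smd(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Cohen's d) and its approximate variance."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling.

    Returns (pooled SMD, 95% CI lower, 95% CI upper, I-squared in %).
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    sw = sum(w)
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100.0 if q > 0 else 0.0
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, i2

# Three made-up comparisons: (treated mean, SD, n, control mean, SD, n).
studies = [(12.0, 4.0, 20, 10.0, 4.0, 20),
           (30.0, 9.0, 15, 20.0, 8.0, 15),
           (5.5, 2.0, 25, 5.0, 2.2, 25)]
effects, variances = zip(*(smd(*s) for s in studies))
pooled, lo, hi, i2 = dersimonian_laird(effects, variances)
```

Combining outcomes on the SMD scale is what allows reviews with different measurement units to be pooled, and I-squared expresses the share of variability in effect estimates attributable to between-study heterogeneity rather than chance.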

For our secondary analysis we investigated the “unpredictability paradox”, which was proposed in a similar study involving human subjects. [13] The paradox states that the difference between inadequately randomized and randomized studies, although real, is unpredictable in terms of direction. This is plausible, given that the direction of bias may relate to differences in expected results. To investigate the paradox we ignored direction to see whether there was an absolute difference between results in randomized and non-randomized studies. We used the same method to investigate the unpredictability paradox for adequate allocation concealment and blinding. This approach is useful only as a guide, since with a large enough sample some absolute difference is likely to arise by chance alone.
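
The direction-ignoring comparison can be sketched as follows, using hypothetical per-review differences (randomized minus non-randomized effect sizes); when signs vary across reviews, a direction-aware average lets biases cancel, whereas the absolute average does not:

```python
import statistics

# Hypothetical per-review differences in effect size (randomized minus
# non-randomized); the sign of the difference varies across reviews.
diffs = [-0.25, 0.10, -0.05, 0.18, -0.30, 0.07]

signed_mean = statistics.mean(diffs)                      # signs partly cancel
absolute_mean = statistics.mean(abs(d) for d in diffs)    # direction ignored
```

Here the absolute mean exceeds the magnitude of the signed mean, which is the pattern the unpredictability paradox predicts; as noted above, some absolute difference is expected by chance alone, so this serves only as a guide.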

Results

We identified 238 articles from our electronic search, and a further 24 articles by hand searching references and contacting CAMARADES authors. Two authors (JH, JAH) excluded 199 articles after reading titles and abstracts. We assessed the full text of the remaining 63 articles and excluded a further 32 for not including outcome data. CAMARADES authors generously shared data from 19 reviews in which data were not included in the published reports. We were left with 31 systematic reviews involving 7339 comparisons (estimated 123,437 animals) to include in the meta-analysis (see Figure 1). Characteristics of the 31 included reviews are shown in Table 1, and our data are available freely from the authors.

Table 1. Outcome measures, interventions, diseases, and effect sizes in included studies.

Twenty systematic reviews investigated treatments for experimental stroke, [17]–[20], [24], [28], [32], [36]–[47] four reviews investigated treatments for spinal cord diseases, [48]–[51] and one review each investigated treatments for bone cancer, [52] intracerebral hemorrhage, [39] glioma, [53] multiple sclerosis, [54] Parkinson's disease, [55] and any treatments used in emergency medicine. Animal types included baboons, cats, dogs, ewes, gerbils, guinea pigs, lambs, marmosets, mice, monkeys, pigs, rabbits, rats, and sheep. In our sample 29% of studies reported randomization, 15% reported allocation concealment, and 35% reported blinded outcome assessment.

1. Randomization

Thirty reviews with 7249 comparisons (121,784 animals) reported the effects of randomization. Randomization reduced effect sizes by a small but statistically significant amount (SMD = −0.07, 95% CI −0.12 to −0.02, I2 = 89.1%, P = 0.008) (Figure 2). In a subgroup analysis examining the effect of randomization by disease (stroke versus other), we found that randomization resulted in a lower effect size in areas other than stroke (SMD = −0.18, 95% CI −0.30 to −0.06) but not in stroke itself (SMD = −0.03, 95% CI −0.08 to 0.02). However, using meta-regression we found no significant difference between stroke and non-stroke studies (P = 0.08); additionally, meta-regression could not explain more than 3% of the heterogeneity. A sensitivity analysis excluding the single review [32] in which we had to estimate the number of animals did not alter the overall result (SMD = −0.08, 95% CI −0.13 to −0.03). In our secondary analysis (where we ignored direction of effect) we found a larger difference between randomized and non-randomized studies (SMD = −0.16, 95% CI −0.21 to −0.11, I2 = 86.6%, P<0.0001) than in the analysis in which we took direction into consideration.

Figure 2. Forest plot showing the effect of random allocation on effect size.

2. Allocation concealment

Eighteen reviews with 2696 comparisons (39,405 animals) reported the effect of allocation concealment. Studies in which allocation concealment was used had slightly decreased effect sizes, but the difference was not statistically significant (SMD = −0.04, 95% CI −0.09 to 0.00, I2 = 51.6%, P = 0.059) (Figure 3). Subgroup analysis examining different diseases (stroke and non-stroke) showed that allocation concealment in studies of stroke resulted in significantly lower effect sizes (SMD = −0.07, 95% CI −0.12 to −0.02, I2 = 48.5%, P = 0.009), whereas allocation concealment in other disease areas resulted in higher effect sizes (SMD = 0.05, 95% CI −0.01 to 0.11, I2 = 0%, P = 0.128); however, the difference between these groups was not significant when tested using meta-regression (P = 0.073). Meta-regression combining disease and outcome measure did not explain more than 9% of the heterogeneity. In our secondary analysis (where we ignored direction of effect) we found a larger difference between concealed and non-concealed studies (SMD = −0.08, 95% CI −0.11 to −0.05, I2 = 13.8%, P<0.0001) than in the analysis in which we took direction into consideration.

Figure 3. Forest plot showing the effect of allocation concealment on effect size.

3. Blinding

Twenty-eight reviews involving 7140 comparisons (119,597 animals) reported the effects of blinding of outcome assessment. Effect sizes in studies that involved blind outcome assessment were not significantly different from those that did not (SMD = −0.01, 95% CI −0.04 to 0.03; I2 = 68.3%; P = 0.667) (Figure 4). A sensitivity analysis excluding one study in which some estimates were made did not change the results. [16] We did not find any differences in effect sizes when we sub-divided studies into stroke and non-stroke groups. In a post-hoc subgroup analysis, we showed that blinding in studies reporting infarct volume did not significantly change effect size (SMD = 0.03, 95% CI −0.02 to 0.08, P = 0.187), whereas blinding in those reporting neurobehavioral outcomes did (SMD = −0.06, 95% CI −0.10 to −0.02, P = 0.003), and this difference was significant when tested using meta-regression (P = 0.014). In our secondary analysis (in which effect direction was ignored) we found a larger difference between blinded and non-blinded studies (SMD = −0.08; 95% CI −0.11 to −0.06; I2 = 49.5%; P < 0.001) than in the analysis in which we took direction into consideration.

Figure 4. Forest plot showing the effect of blinding of outcome assessment on effect size.

4. Risk of bias

Using AMSTAR (Table 2), we found a moderate risk of bias. It was encouraging that all 31 reviews assessed the quality of included studies, all but two reviews clearly used appropriate methods, and all but two performed comprehensive literature searches. Yet only 9 reviews provided a protocol, and only 17 searched the grey literature.

Discussion

In this overview of systematic reviews we found that failure to randomize is likely to result in overestimation of the apparent treatment benefits of interventions across a range of disease areas and outcome measures. We also found a borderline effect of allocation concealment but no overall effect of blinding in our primary analysis. We hypothesize that the reason for an effect of randomization but not allocation concealment or blinding is that subjective judgments are less likely to influence outcomes in trials of (relatively homogeneous) animal models compared with (relatively heterogeneous) humans. While animal heart rates [56], blood flow [57], and behavior can be conditioned by human handling so that placebo controls are sometimes also used in animal studies, [58] there are no ‘patient-reported’ (subjective) outcomes in animal studies. This may make some measures of expectancy effects (for which blinding is useful [5]) smaller in animal studies. Our hypothesis is supported by our post hoc analyses, which showed that blinding reduced effect sizes for (more subjective) neurobehavioral scores, but not for (more objective) infarct volume. It may also be relevant that the comparison of allocation concealment versus no allocation concealment was reported far less frequently (about half as often as) the other comparisons, so the failure to find an effect of allocation concealment could be due to insufficient power. A future large study of individual trials is now warranted to investigate the direction, magnitude, and conditions that must hold for randomization, allocation concealment, and blinding to reduce bias in animal studies.

Our results corroborate those of the CAMARADES study, in the sense that we also identified significant bias in animal studies. However, whereas they found a borderline effect of allocation concealment, but no effect for blinding or randomization, we found an effect of randomization, a borderline effect for allocation concealment, and no effect for blinding. The differences between the two reviews could be because our review covered all disease areas, whereas theirs was limited to experimental stroke. In addition, our methods were different; we calculated standardized mean differences rather than (the less widely used and more difficult to replicate) normalized mean differences used by the CAMARADES researchers.

Our study had several potential limitations. First, outcomes, animal models, and disease types were heterogeneous. The high levels of between-study heterogeneity in our overview could not be explained using meta-regression but may result from heterogeneity of the included reviews (it was beyond the scope of our study to examine the sources of heterogeneity within the included reviews). Secondly, we relied on reports of systematic reviews; these, in turn, relied on reports of individual trials. Some trials may have failed to report randomization, allocation concealment, and blinding when in fact these were used, and vice versa. Evidence from clinical trials suggests that reporting quality is a good surrogate for actual risk of bias; if a similar relationship between reporting quality and study quality holds in animal studies, incomplete reporting may not have affected our results [59]. Based on reporting standards for clinical studies (which require, among other things, descriptions of how randomization, concealment, and blinding were achieved [60]), reporting standards for animal studies are emerging. [61] The Animal Research: Reporting In Vivo Experiments (ARRIVE) guidelines, developed in 2010, [62] arguably constitute the leading candidate for becoming a requirement, although development work in this area continues [63]. More recently, it has been suggested that until formal reporting guidelines become required: “at a minimum, authors of grant applications and scientific publications should report on randomization, blinding, sample-size estimation, and the handling of all data”. [61]

Thirdly, publication bias may have affected our results: it has been estimated that 1 in 6 animal trials remains unpublished. [64] If we assume that unpublished studies were as likely to be randomized, allocation concealed, and blinded as they were to be non-randomized, not adequately concealed, and unblinded, then publication bias may not have affected the direction of our results. As with human studies, [65] compulsory registration of preclinical studies [66] would reduce publication bias and allow more precise estimates of the empirical dimensions of bias in animal studies.

Fourthly, many of the individual trials included in the systematic reviews applied randomization, allocation concealment, and blinding together, whereas we examined these features independently. Fifthly, a disproportionate number of the included reviews (20 of 31) investigated experimental stroke, because stroke researchers have spearheaded empirical investigations of bias in animal research. If stroke studies tend to differ from other types of studies this might have influenced the results, although we explored this possibility using sub-group analysis and meta-regression. Finally, this study was restricted to an investigation of the effects of randomization, allocation concealment, and blinding. Other features, such as lack of power, publication bias, choice of animal models, choice of sex of animals, and choice of outcome may also contribute to the internal and external validity of animal studies. [22], [31], [54], [67] A future systematic review and meta-analysis of individual studies is now warranted to address these potential limitations.

Our study has implications that extend beyond the conduct of animal studies. Only animal studies that do not suffer from avoidable bias should be accepted as justification for human studies. For this reason, the United States Food and Drug Administration (FDA), [68] the Medical Research Council (MRC) in the United Kingdom, [69] and the World Health Organization (WHO) [70] insist on fair tests, often involving systematic reviews of high-quality randomized trials. Our study therefore supports requirements for adequate conduct and reporting of animal studies, including those promoted by CAMARADES and SABRE Research UK. [71]

Conclusions

Our overview of systematic reviews and meta-analyses revealed that failure to randomize leads to exaggerated effect sizes in animal studies across a wide range of disease areas. In our secondary analysis we found that failure to conceal allocation or employ blind outcome assessment exaggerates effect sizes in animal studies. Biased animal research is less likely to provide trustworthy results, is less likely to provide a rationale for research that will benefit humans, and wastes scarce resources. Requiring compulsory study registration and adherence to emerging evidence-based standards for the conduct and reporting of animal research is likely to reduce the risk of bias in animal studies and improve translatability of animal research.

Acknowledgments

Sir Iain Chalmers made comments on earlier drafts of this paper, and authors from the CAMARADES Collaboration (Al-Shahi Salman, R, Amarasingh, S, Antonic, A, Banwell, V, Batchelor, PE, Bath, PM, Battistuzzo, CR, Bennett, MI, Bernhardt, J, Briscoe, CL, Brommer, B, Carter, S, Chandran, S, Colvin, LA, Currie, GL, Delaney, A, Dickenson, AH, Dirnagl, U, Donnan, GA, Egan, KJ, Fallon, MT, ffrench-Constant, C, Forsberg, K, Frantzias, J, Gibson, C, Gray, L, Hirst, TC, Horky, LL, Howells, DW, Janssen, H, Jerndal, M, Koblar, SA, Kopp, MA, Lees, JS, Linden, T, Longley, L, Macleod, MR, Mead, GE, Mee, S, Murphy, S, Nilsson, M, O'Collins, VE, Pedder, H, Rooke, ED, Sandercock, PA, Schwab, JM, Sena, ES, Skeers, P, Speare, S, Spratt, NJ, van der Worp, HB, Vesterinen, HM, Wardlaw, JM, Watzlawick, R, Wheble, PC, Whittle, IR, Williams, A, Willmot, M, and Wills, TE) generously shared data from the studies their group had published. Malcolm Macleod was especially generous with his support in helping to gather CAMARADES data.

Author Contributions

Conceived and designed the experiments: JAH JH JKA RP CK CH. Performed the experiments: JAH JH NR CK. Analyzed the data: JAH JH CK. Contributed reagents/materials/analysis tools: RP. Wrote the paper: JAH JH JKA CK CH.

References

  1. Sackett DL (1969) Clinical epidemiology. Am J Epidemiol 89: 125–128.
  2. Sackett DL (1986) Rules of evidence and clinical recommendations on the use of antithrombotic agents. Chest 89: 2S–3S.
  3. Sackett DL, Richardson WS, Rosenberg W, Haynes B (1997) Evidence-based medicine: How to Practice & Teach EBM. London: Churchill Livingstone.
  4. Chalmers I (2007) The lethal consequences of failing to make full use of all relevant evidence about the effects of medical treatments: the importance of systematic reviews. In: Rothwell PM, editor. Treating individuals: from randomised trials to personalized medicine. London: The Lancet.
  5. Howick J (2011) The Philosophy of Evidence-Based Medicine. Oxford: Wiley-Blackwell.
  6. Juni P, Altman DG, Egger M (2001) Systematic reviews in health care: Assessing the quality of controlled clinical trials. BMJ 323: 42–46.
  7. Farley SM, Libanati CR, Odvina CV, Smith L, Eliel L, et al. (1989) Efficacy of long-term fluoride and calcium therapy in correcting the deficit of spinal bone density in osteoporosis. J Clin Epidemiol 42: 1067–1074.
  8. Knekt P, Reunanen A, Jarvinen R, Seppanen R, Heliovaara M, et al. (1994) Antioxidant vitamin intake and coronary mortality in a longitudinal population study. Am J Epidemiol 139: 1180–1189.
  9. Decousus H, Leizorovicz A, Parent F, Page Y, Tardy B, et al. (1998) A clinical trial of vena caval filters in the prevention of pulmonary embolism in patients with proximal deep-vein thrombosis. Prevention du Risque d'Embolie Pulmonaire par Interruption Cave Study Group. N Engl J Med 338: 409–415.
  10. Riggs BL, Hodgson SF, O'Fallon WM, Chao EY, Wahner HW, et al. (1990) Effect of fluoride treatment on the fracture rate in postmenopausal women with osteoporosis. N Engl J Med 322: 802–809.
  11. Yusuf S, Dagenais G, Pogue J, Bosch J, Sleight P (2000) Vitamin E supplementation and cardiovascular events in high-risk patients. The Heart Outcomes Prevention Evaluation Study Investigators. N Engl J Med 342: 154–160.
  12. Schulz KF, Chalmers I, Hayes RJ, Altman DG (1995) Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 273: 408–412.
  13. Odgaard-Jensen J, Vist GE, Timmer A, Kunz R, Akl EA, et al. (2011) Randomisation to protect against selection bias in healthcare trials. Cochrane database of systematic reviews: MR000012.
  14. Savovic J, Jones HE, Altman DG, Harris RJ, Juni P, et al. (2012) Influence of Reported Study Design Characteristics on Intervention Effect Estimates From Randomized, Controlled Trials. Annals of internal medicine.
  15. Bath PM, Macleod MR, Green AR (2009) Emulating multicentre clinical stroke trials: a new paradigm for studying novel interventions in experimental models of stroke. International journal of stroke: official journal of the International Stroke Society 4: 471–479.
  16. Bebarta V, Luyten D, Heard K (2003) Emergency medicine animal research: does use of randomization and blinding affect the results? Academic emergency medicine: official journal of the Society for Academic Emergency Medicine 10: 684–687.
  17. Jerndal M, Forsberg K, Sena ES, Macleod MR, O'Collins VE, et al. (2010) A systematic review and meta-analysis of erythropoietin in experimental stroke. Journal of cerebral blood flow and metabolism: official journal of the International Society of Cerebral Blood Flow and Metabolism 30: 961–968.
  18. Macleod MR, O'Collins T, Horky LL, Howells DW, Donnan GA (2005) Systematic review and metaanalysis of the efficacy of FK506 in experimental stroke. Journal of cerebral blood flow and metabolism: official journal of the International Society of Cerebral Blood Flow and Metabolism 25: 713–721.
  19. Macleod MR, O'Collins T, Howells DW, Donnan GA (2004) Pooling of animal experimental data reveals influence of study design and publication bias. Stroke; a journal of cerebral circulation 35: 1203–1208.
  20. Macleod MR, van der Worp HB, Sena ES, Howells DW, Dirnagl U, et al. (2008) Evidence for the efficacy of NXY-059 in experimental focal cerebral ischaemia is confounded by study quality. Stroke; a journal of cerebral circulation 39: 2824–2829.
  21. Perel P, Roberts I, Sena E, Wheble P, Briscoe C, et al. (2007) Comparison of treatment effects between animal experiments and clinical trials: systematic review. BMJ 334: 197.
  22. Sena E, Wheble P, Sandercock P, Macleod M (2007) Systematic review and meta-analysis of the efficacy of tirilazad in experimental stroke. Stroke 38: 388–394.
  23. van der Worp HB, de Haan P, Morrema E, Kalkman CJ (2005) Methodological quality of animal studies on neuroprotection in focal cerebral ischaemia. Journal of neurology 252: 1108–1114.
  24. van der Worp HB, Sena ES, Donnan GA, Howells DW, Macleod MR (2007) Hypothermia in animal models of acute ischaemic stroke: a systematic review and meta-analysis. Brain: a journal of neurology 130: 3063–3074.
  25. Vesterinen HM, Egan K, Deister A, Schlattmann P, Macleod MR, et al. (2011) Systematic survey of the design, statistical analysis, and reporting of studies published in the 2008 volume of the Journal of Cerebral Blood Flow and Metabolism. Journal of cerebral blood flow and metabolism: official journal of the International Society of Cerebral Blood Flow and Metabolism 31: 1064–1072.
  26. MacLeod M (2010) How to avoid bumping into the translational roadblock. Rodent Models of Stroke. Neuromethods 47: 7–15.
  27. Ioannidis JP (2006) Evolution and translation of research findings: from bench to where? PLoS Clin Trials 1: e36.
  28. Sena ES, Briscoe CL, Howells DW, Donnan GA, Sandercock PA, et al. (2010) Factors affecting the apparent efficacy and safety of tissue plasminogen activator in thrombotic occlusion models of stroke: systematic review and meta-analysis. Journal of cerebral blood flow and metabolism: official journal of the International Society of Cerebral Blood Flow and Metabolism 30: 1905–1913.
  29. Macleod M (2011) CAMARADES. Edinburgh: CAMARADES.
  30. Crossley NA, Sena E, Goehler J, Horn J, Van Der Worp B, et al. (2008) Empirical evidence of bias in the design of experimental stroke studies: A metaepidemiologic approach. Stroke 39: 929–934.
  31. Macleod M, van der Worp HB (2010) Animal models of neurological disease: are there any babies in the bathwater? Practical neurology 10: 312–314.
  32. Janssen H, Bernhardt J, Collier JM, Sena ES, McElduff P, et al. (2010) An enriched environment improves sensorimotor function post-ischemic stroke. Neurorehabilitation and neural repair 24: 802–813.
  33. Shea BJ, Bouter LM, Peterson J, Boers M, Andersson N, et al. (2007) External validation of a measurement tool to assess systematic reviews (AMSTAR). PLoS One 2: e1350.
  34. DerSimonian R, Laird N (1986) Meta-analysis in clinical trials. Controlled Clinical Trials 7: 177–188.
  35. Higgins JJ, Green S (2008) The Cochrane Handbook for Systematic Reviews of Interventions. Chichester: The Cochrane Collaboration.
  36. Banwell V, Sena ES, Macleod MR (2009) Systematic review and stratified meta-analysis of the efficacy of interleukin-1 receptor antagonist in animal models of stroke. Journal of stroke and cerebrovascular diseases: the official journal of National Stroke Association 18: 269–276.
  37. Bath PMW, Gray LJ, Bath AJG, Buchan A, Miyata T, et al. (2009) Effects of NXY-059 in experimental stroke: an individual animal meta-analysis. British Journal of Pharmacology 157: 1157–1171.
  38. Egan KJ, Janssen H, Sena ES, Longley L, Speare S, et al. (2014) Exercise Reduces Infarct Volume and Facilitates Neurobehavioral Recovery: Results From a Systematic Review and Meta-analysis of Exercise in Experimental Models of Focal Ischemia. Neurorehabil Neural Repair.
  39. Frantzias J, Sena ES, Macleod MR, Al-Shahi Salman R (2011) Treatment of intracerebral hemorrhage in animal models: meta-analysis. Annals of neurology 69: 389–399.
  40. Gibson CL, Gray LJ, Murphy SP, Bath PM (2006) Estrogens and experimental ischemic stroke: a systematic review. J Cereb Blood Flow Metab 26: 1103–1113.
  41. Horn J, De Haan RJ, Vermeulen M, Luiten PGM, Limburg M (2001) Nimodipine in animal model experiments of focal cerebral ischemia: A systematic review. Stroke 32: 2433–2438.
  42. Lees JS, Sena ES, Egan KJ, Antonic A, Koblar SA, et al. (2012) Stem cell-based therapy for experimental stroke: a systematic review and meta-analysis. Int J Stroke 7: 582–588.
  43. Macleod MR, O'Collins T, Horky LL, Howells DW, Donnan GA (2005) Systematic review and meta-analysis of the efficacy of melatonin in experimental stroke. Journal of pineal research 38: 35–41.
  44. Pedder H, Vesterinen HM, Macleod MR, Wardlaw JM (2014) Systematic review and meta-analysis of interventions tested in animal models of lacunar stroke. Stroke 45: 563–570.
  45. Sena E, van der Worp HB, Howells D, Macleod M (2007) How can we improve the pre-clinical development of drugs for stroke? Trends in neurosciences 30: 433–439.
  46. Wheble PCR, Sena ES, Macleod MR (2008) A systematic review and meta-analysis of the efficacy of piracetam and piracetam-like compounds in experimental stroke. Cerebrovascular Diseases 25: 5–11.
  47. Vesterinen HM, Currie GL, Carter S, Mee S, Watzlawick R, et al. (2013) Systematic review and stratified meta-analysis of the efficacy of RhoA and Rho kinase inhibitors in animal models of ischaemic stroke. Syst Rev 2: 33.
  48. Antonic A, Sena ES, Lees JS, Wills TE, Skeers P, et al. (2013) Stem cell transplantation in traumatic spinal cord injury: a systematic review and meta-analysis of animal studies. PLoS Biol 11: e1001738.
  49. Batchelor PE, Skeers P, Antonic A, Wills TE, Howells DW, et al. (2013) Systematic review and meta-analysis of therapeutic hypothermia in animal models of spinal cord injury. PLoS One 8: e71317.
  50. Batchelor PE, Wills TE, Skeers P, Battistuzzo CR, Macleod MR, et al. (2013) Meta-analysis of pre-clinical studies of early decompression in acute spinal cord injury: a battle of time and pressure. PLoS One 8: e72659.
  51. Watzlawick R, Sena ES, Dirnagl U, Brommer B, Kopp MA, et al. (2014) Effect and reporting bias of RhoA/ROCK-blockade intervention on locomotor recovery after spinal cord injury: a systematic review and meta-analysis. JAMA Neurol 71: 91–99.
  52. Currie GL, Delaney A, Bennett MI, Dickenson AH, Egan KJ, et al. (2013) Animal models of bone cancer pain: systematic review and meta-analyses. Pain 154: 917–926.
  53. Hirst TC, Vesterinen HM, Sena ES, Egan KJ, Macleod MR, et al. (2013) Systematic review and meta-analysis of temozolomide in animal models of glioma: was clinical efficacy predicted? Br J Cancer 108: 64–71.
  54. Vesterinen HM, Sena ES, ffrench-Constant C, Williams A, Chandran S, et al. (2010) Improving the translational hit of experimental treatments in multiple sclerosis. Multiple sclerosis 16: 1044–1055.
  55. Rooke ED, Vesterinen HM, Sena ES, Egan KJ, Macleod MR (2011) Dopamine agonists in animal models of Parkinson's disease: A systematic review and meta-analysis. Parkinsonism & related disorders 17: 313–320.
  56. Lynch JJ, Fregin GF, Mackie JB, Monroe RR Jr (1974) Heart rate changes in the horse to human contact. Psychophysiology 11: 472–478.
  57. 57. Newton JE, Ehrlich W (1993) Coronary blood flow in dogs: effect of person. Integrative physiological and behavioral science: the official journal of the Pavlovian Society 28: 280–286.
  58. 58. Breuer K, Hemsworth PH, Barnett JL, Matthews LR, Coleman GJ (2000) Behavioural response to humans and the productivity of commercial dairy cows. Applied animal behaviour science 66: 273–288.
  59. 59. Liberati A, Himel HN, Chalmers TC (1986) A quality assessment of randomized control trials of primary treatment of breast cancer. Journal of clinical oncology: official journal of the American Society of Clinical Oncology 4: 942–951.
  60. 60. Simera I (2008) EQUATOR Network collates resources for good research. BMJ 337: a2471.
  61. 61. Landis SC, Amara SG, Asadullah K, Austin CP, Blumenstein R, et al. (2012) A call for transparent reporting to optimize the predictive value of preclinical research. Nature 490: 187–191.
  62. 62. Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG (2010) Improving bioscience research reporting: The ARRIVE guidelines for reporting animal research. Journal of pharmacology & pharmacotherapeutics 1: 94–99.
  63. 63. Henderson VC, Kimmelman J, Fergusson D, Grimshaw JM, Hackam DG (2013) Threats to validity in the design and conduct of preclinical efficacy studies: a systematic review of guidelines for in vivo animal experiments. PLoS Med 10: e1001489.
  64. 64. Sena ES, van der Worp HB, Bath PM, Howells DW, Macleod MR (2010) Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS biology 8: e1000344.
  65. 65. International Committee of Medical Journal Editors (2009) Uniform Requirements for Manuscripts Submitted to Biomedical Journals.
  66. 66. Kimmelman J, Anderson JA (2012) Should preclinical studies be registered? Nat Biotechnol 30: 488–489.
  67. 67. Beery AK, Zucker I (2011) Sex bias in neuroscience and biomedical research. Neurosci Biobehav Rev 35: 565–572.
  68. 68. FDA (2005) CFR 314.126: Applications for FDA Approval to Market a New Drug. United States Food and Drug Administration.
  69. 69. Medical Research Council (2013) The MRC and Clinical Trials. London: MRC.
  70. 70. Vilar J, Duley L (2003) The need for large and simple randomized trials in reproductive health. Geneva: The World Health Organization Library.
  71. 71. SABRE (2012) SABRE Research UK. In: SABRE, editor.