
Measurement challenges and causes of incomplete results reporting of biomedical animal studies: Results from an interview study

  • Till Bruckner,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Project administration, Writing – original draft

    Affiliations QUEST Center for Responsible Research, Berlin Institute of Health at Charité – Universitätsmedizin, Berlin, Germany, Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany

  • Susanne Wieschowski,

    Roles Conceptualization, Funding acquisition, Methodology, Project administration, Validation, Writing – review & editing

    Affiliations QUEST Center for Responsible Research, Berlin Institute of Health at Charité – Universitätsmedizin, Berlin, Germany, Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany

  • Miriam Heider,

    Roles Conceptualization, Methodology, Validation, Writing – review & editing

    Affiliation Institute for Laboratory Animal Science, Hannover Medical School, Hannover, Germany

  • Susanne Deutsch,

    Roles Conceptualization, Methodology, Validation, Writing – review & editing

    Affiliation Institute for Laboratory Animal Science, RWTH Aachen University, Faculty of Medicine, Aachen, Germany

  • Natascha Drude,

    Roles Conceptualization, Methodology, Supervision, Validation, Writing – review & editing

    Affiliation QUEST Center for Responsible Research, Berlin Institute of Health at Charité – Universitätsmedizin, Berlin, Germany

  • Ulf Tölch,

    Roles Conceptualization, Methodology, Supervision, Validation, Writing – review & editing

    Affiliation QUEST Center for Responsible Research, Berlin Institute of Health at Charité – Universitätsmedizin, Berlin, Germany

  • André Bleich,

    Roles Conceptualization, Funding acquisition, Supervision, Writing – review & editing

    Affiliation Institute for Laboratory Animal Science, Hannover Medical School, Hannover, Germany

  • René Tolba,

    Roles Conceptualization, Funding acquisition, Methodology, Supervision, Writing – review & editing

    Affiliation Institute for Laboratory Animal Science, RWTH Aachen University, Faculty of Medicine, Aachen, Germany

  • Daniel Strech

    Roles Conceptualization, Funding acquisition, Methodology, Supervision, Writing – original draft, Writing – review & editing

    daniel.strech@bih-charite.de

    Affiliations QUEST Center for Responsible Research, Berlin Institute of Health at Charité – Universitätsmedizin, Berlin, Germany, Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany

Abstract

Background

Existing evidence indicates that a significant amount of biomedical research involving animals remains unpublished. At the same time, we lack standards for measuring the extent of results reporting in animal research. Publication rates may vary significantly depending on the level of measurement, such as an entire animal study, individual experiments within a study, or the number of animals used.

Methods

Drawing on semi-structured interviews with 18 experts and qualitative content analysis, we investigated challenges and opportunities for the measurement of incomplete reporting of biomedical animal research with specific reference to the German situation. We further investigated causes of incomplete reporting.

Results

The in-depth expert interviews revealed several reasons why incomplete reporting in animal research is difficult to measure at all levels under current circumstances. While precise quantification based on regulatory approval documentation is feasible at the level of entire studies, measuring incomplete reporting at the more granular experiment and animal levels presents formidable challenges. The interviews further identified six drivers of incomplete reporting of results in animal research. Four of these are well documented in other fields of research: a lack of incentives to report non-positive results, pressures to ‘deliver’ positive results, perceptions that some data do not add value, and commercial pressures. The fifth driver, reputational concerns, appears to be far more salient in animal research than in human clinical trials. The final driver, socio-political pressures, may be unique to the field.

Discussion

Stakeholders in animal research should collaborate to develop a clear conceptualisation of complete reporting in animal research, facilitate valid measurements of the phenomenon, and develop incentives and rewards to overcome the causes for incomplete reporting.

Introduction

The issue of incomplete reporting, when study outcomes are reported only partially or not at all, has attracted growing attention across numerous fields of natural and social science, raising questions about the rigour, efficiency, ethics and integrity of the scientific process, and the reliability, replicability and robustness of research findings [1–3].

In biomedical research, incomplete reporting and its effects on publication bias are well documented for human clinical trials [4, 5], but far less so for animal studies [6–9]. The lack of information about publication rates in animal research could be partly explained by practical barriers and conceptual uncertainties in the empirical measurement of publication rates. In principle, the measurement of incomplete reporting of results in biomedical research is facilitated by the requirement for researchers to obtain advance approval for studies involving humans or live animals. To obtain approval, researchers must pre-specify all planned experiments within a study and the number of ‘participants’ in each experiment. Reporting gaps in clinical trials have repeatedly been quantified by using ethics committee approvals or funder cohorts to establish a cohort of all studies conducted, and then searching the literature for their published outcomes [10–12]. In European Union member states, legally mandated approvals by official bodies could be used to establish similar cohorts of animal studies.

Two separate groups recently used this approach to quantify non-publication of animal research at the level of entire animal studies and at the level of approved animal numbers. One group, which included several authors of this paper, found that of 158 approved studies at two German university medical centres that had verifiably been initiated, 33% had published outcomes neither in the scientific literature nor within doctoral theses [13]. Another group found that of 67 approved studies at a Dutch university, 40% did not result in publications [14]. But does the publication rate at the level of approved animal studies have sufficient construct validity? This question is important because an entire animal study might comprise several experiments, each involving different animals. A 2011 survey among 454 Dutch animal researchers asked about the publication rate at the experiment level. The surveyed researchers estimated that about 50% of all experiments remain unpublished [15]. Furthermore, the above-mentioned study at a Dutch university also assessed the publication rate at the level of animal numbers mentioned in the 67 approved study applications and found that journal articles did not report outcomes for 74% of the mentioned animals [14].

Which of these three concepts for non-publication has the highest construct validity? The 33–40% at the study level [13, 14], the 50% at the experiment level [15], or the 74% at the animal level [14]? Furthermore, how good is the internal and external validity of these three types of measures? Because reporting on the number of animals used in specific experiments is limited both in approval documents [16] and in journal publications [6], the measurement of reporting rates at all three levels might face substantial challenges.
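How strongly these three measures can diverge on the same cohort can be illustrated with a toy calculation (all numbers below are invented for illustration, not drawn from the cited studies). A study counts as published if any of its experiments yields a publication, so the study-level rate is necessarily the most favourable of the three.

```python
# Toy illustration (hypothetical data): non-publication rates diverge
# depending on whether we count studies, experiments, or animals.
# Each approved study lists its experiments as (animals_used, published?).
cohort = {
    "study_A": [(20, True), (30, False), (10, False)],
    "study_B": [(40, True)],
    "study_C": [(25, False), (15, False)],  # nothing published at all
}

studies_unpublished = sum(
    1 for exps in cohort.values() if not any(pub for _, pub in exps)
)
total_experiments = sum(len(exps) for exps in cohort.values())
experiments_unpublished = sum(
    1 for exps in cohort.values() for _, pub in exps if not pub
)
total_animals = sum(n for exps in cohort.values() for n, _ in exps)
animals_unreported = sum(
    n for exps in cohort.values() for n, pub in exps if not pub
)

print(f"study level:      {studies_unpublished}/{len(cohort)} unpublished")       # 1/3
print(f"experiment level: {experiments_unpublished}/{total_experiments} unpublished")  # 4/6
print(f"animal level:     {animals_unreported}/{total_animals} animals unreported")    # 80/140
```

In this invented cohort the study-level rate (33%) makes reporting look far more complete than the experiment-level (67%) or animal-level (57%) rates, mirroring the pattern in the empirical figures above.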

The goal of this study was to explore qualitatively the measurement challenges and causes of incomplete reporting of the results of animal studies, differentiating between three levels of measurement: (1) overall study as approved by regulators, (2) discrete experiments nested within the study, and (3) individual animals used within experiments.

Methods

This study is reported in line with the Consolidated criteria for reporting qualitative research (COREQ) guideline.

We used purposive sampling to gain perspectives from different animal researchers and other stakeholders. The primary purpose was to obtain as complete a picture as possible of the different causes of incomplete results reporting in animal research and the opportunities and challenges in measuring incomplete reporting. Within the interviews we introduced three levels of analysis for results publication: study, experiment, and subjects. See Table 1 for examples and for how these three levels of analysis compare with clinical research.

Table 1. Incomplete publication of results: Three levels of analysis.

https://doi.org/10.1371/journal.pone.0271976.t001

Due to the sensitivity of the subject matter, initial participants were recruited from the study team’s professional networks, with further participants recruited via snowballing. As our study team focuses on responsible research, this likely biased our sample towards respondents with a high awareness of issues relevant for incomplete reporting. Interviewees were offered 150 Euros to compensate them for the time required to participate in the study. Following a purposive and iterative sampling strategy we recruited 18 interviewees (from 26 contacted, response rate 69%) until we reached thematic saturation of mentioned topics. While respondents were drawn from multiple regions in Germany and multiple levels of seniority, ranging from postdoctoral researchers to senior academics, we did not attempt to recruit a representative sample but aimed for a purposive sample with diverse backgrounds and perspectives on the topic of comprehensive reporting. While governmental competencies might differ in some details across German federal states, the overarching regulatory requirements and the concept of animal studies incorporating several animal experiments are the same throughout Germany. All interviewees met our inclusion criterion of having experience in conducting, supervising and/or publishing the findings of animal research (S1 File).

All participants were asked to sign a consent form that outlined the basic research questions, informed participants that interviews would be recorded and transcribed, and that their anonymity would be safeguarded. Participants consented to selected quotes from interviews being cited verbatim in a future publication (S2 File). All interviews were conducted via video call by the same member of the study team (TB), in 2 cases in conjunction with another team member (ND or UT), based on a written interview guide that had been developed by the team of authors in an iterative process (S3 File). Because the team of authors includes five persons with backgrounds in animal research, we did not conduct pilot interviews. The lead interviewer is a German postdoctoral researcher with extensive experience in conducting qualitative research, including on publication bias, but with no personal experience of conducting animal research; his professional background was disclosed verbally at the outset of each interview. All interviews were between 50 and 70 minutes in length, with a mean length of 60 minutes. All interviews were transcribed by a professional transcription company that had signed a non-disclosure agreement.

The lead interviewer (TB) reviewed all transcripts and manually grouped responses into thematic categories, initially broadly mirroring key items in the interview guide and subsequently further sub-categorising them until thematic saturation was reached as per the criteria elaborated by Fusch and Ness [17]. Further team members (SW, ND, UT, DS) reviewed and commented on the categorisation; disputes were resolved by consensus. Quotes cited in the paper (as “Q99”) were selected by eliminating duplicate quotes on the same topic until arriving at the quote or quotes that best summarised the tenor of all responses received on that issue. Quotes were translated by a bilingual researcher (TB) and are available in German and English (S4 File). The interview transcripts, slightly redacted to further safeguard the anonymity of participants, were archived on a password-protected server at the Charité, Berlin, Germany.

The study was preregistered on the Open Science Framework (https://osf.io/34qny/) and was approved by the Medizinische Hochschule Hannover, Hannover, Germany ethics committee (number 9504_BO_K_2020).

Results

Interviews

We conducted confidential semi-structured interviews with 18 experts (14 animal researchers, 2 methodology experts, 1 journal editor, 1 industry group representative; 16 located in Germany and 2 in the UK) during May–June 2021. The main categories identified in the interviews became increasingly saturated after approximately 10 interviews. While the next eight interviews provided further perspectives on sub-categories and particularities, no new major categories emerged. While thematic saturation was reached for main categories, we may not have captured minor or rare factors.

Quotes exemplifying the themes presented in the following sections are displayed in Table 2 and more exhaustively in S4 File.

Measurement challenges of incomplete reporting

Incomplete reporting of results can take place on three levels. Researchers can decide to not report the results of an entire study, or of discrete experiments nested within each study, or of individual animals used (see Table 1).

Study level tracking challenges.

Respondents concurred that generating animal study cohorts from regulatory approvals (including amendments) and then searching the literature for related publications is an appropriate way to measure incomplete reporting at the study level, albeit with three caveats.

First, some commercial and non-commercial funders require approvals to be in place before reviewing funding applications, and some studies that receive regulatory approval subsequently fail to secure funding (Q1). Second, there is a substantial delay between filing a project application (Tierversuchsantrag) with regional German authorities and receiving authorization; multiple respondents cited nine months as a typical time span, though this may vary by federal state (Bundesland) and individual study. Turnover of staff (Q2) or new scientific developments (Q3) during that waiting period may lead a study team to decide not to initiate an authorized project. Third, literature searches for animal study results face substantial challenges (see further below).

Experiment level tracking challenges.

Respondents concurred that in the German context, application documents for project evaluations (Tierversuchsantraege) by themselves cannot be used to meaningfully and reliably measure incomplete publication at the experiment level.

Several respondents highlighted that the German project evaluation and authorization system requires all experiments to be specified in great detail many months before work on a study begins, while the exploratory nature of their research requires flexibility to modify the study design as work progresses and new insights emerge.

Respondents concurred that German researchers routinely seek to maintain this flexibility by crafting applications that incorporate a very wide range of possible experiments and a large number of animals, to cover possible future contingencies (Q4, Q5, Q6, Q7, Q8, Q50).

In vitro studies or early stage experiments often show an initially envisaged line of enquiry to be futile, or required compounds or materials cannot be secured. When this happens, the originally planned experiments are never performed (Q9, Q10).

Conversely, when a line of enquiry appears fruitful, new research questions may emerge. To be able to address those, researchers file requests to modify the original approvals (Aenderungsantraege) by replacing predefined experiments with new ones, while keeping animal numbers constant. This way, an original application may be modified dozens of times (Q11). However, if discrete experiments are never performed, this is rarely reported back to the authorities (Q12).

Measuring incomplete reporting at the experiment level thus requires taking the original approval as the starting point, and then working sequentially through all subsequent Aenderungsantraege. While time-consuming (Q13), this approach can establish an upper limit for the number of experiments that might have been performed, but not the precise number of experiments actually performed.

Animal level tracking challenges.

Similarly, researchers may terminate experiments early, using fewer animals than planned (Q14, Q15), but such reductions are rarely reported back to the authorities (Q16, Q17). Therefore, applications for animal research (Tierversuchsantraege) in conjunction with subsequent modification requests (Aenderungsantraege) can be used to establish an upper limit for the number of animals that might have been used, but not the precise number of animals actually used.
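The upper-bound logic described by respondents can be sketched as follows. This is a minimal illustration with invented data structures and field names; real Tierversuchsantraege are unstructured documents, not machine-readable records. The original approval is taken as the starting point and each modification request is applied in sequence; the final approved set bounds, from above, the experiments and animals that could have been used.

```python
# Hypothetical sketch: derive upper bounds on experiments and animals
# from an original approval plus sequential modification requests
# (Aenderungsantraege). All names and numbers are invented.
original = {"exp1": 30, "exp2": 40, "exp3": 20}  # experiment -> approved animals

# Each amendment may replace experiments while keeping animal totals constant.
amendments = [
    {"remove": ["exp2"], "add": {"exp4": 25, "exp5": 15}},
    {"remove": ["exp3"], "add": {"exp6": 20}},
]

approved = dict(original)
for mod in amendments:
    for exp in mod["remove"]:
        approved.pop(exp, None)
    approved.update(mod["add"])

# These are upper bounds only: experiments may never be performed and
# animals may go unused, and such reductions are rarely reported back.
max_experiments = len(approved)
max_animals = sum(approved.values())
print(f"at most {max_experiments} experiments and {max_animals} animals")
```

Because post-approval reductions leave no trace in the paperwork, the actual numbers of experiments performed and animals used can lie anywhere at or below these bounds.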

Literature matching challenges.

In human clinical trials, a single journal article typically describes the outcomes of a single trial. Tracing outcome publications for animal studies is far more challenging because a scientific research project can involve multiple applications for animal research (Tierversuchsantraege), and multiple experiments nested within several applications may later be recombined into a single scientific paper (Q18, Q19). In addition, some outcomes may only get published within doctoral theses or in other formats, or kept on file indefinitely until they can be ‘fitted’ into a future broader publication (Q20, Q21).
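The matching problem this creates can be made concrete with a small sketch (identifiers invented): because publications and approvals form a many-to-many mapping, a study can only be classed as unpublished after tracing every output format back to its source approvals, rather than looking up one expected paper per study.

```python
# Hypothetical sketch: tracing approvals (Tierversuchsantraege) through a
# many-to-many mapping to publications. All identifiers are invented.
paper_sources = {
    "paper_1": ["antrag_A", "antrag_B"],  # one paper recombines two approvals
    "thesis_1": ["antrag_B"],             # results published only in a thesis
}

def published_approvals(papers):
    """Approvals with at least one traced output, in any dissemination format."""
    return {a for sources in papers.values() for a in sources}

cohort = {"antrag_A", "antrag_B", "antrag_C"}
unpublished = cohort - published_approvals(paper_sources)
print(sorted(unpublished))  # only antrag_C lacks any traceable output
```

In practice the hard part is constructing `paper_sources` at all, since papers rarely cite the approvals they draw on; the sketch only shows why a one-to-one search strategy, adequate for clinical trials, undercounts publication here.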

Causes of incomplete reporting.

According to respondents, there are six drivers of incomplete reporting of results in animal research: a lack of incentives to report non-positive results, pressures to ‘deliver’ positive results, perceptions that some data do not add value, commercial pressures, reputational concerns, and socio-political and regulatory pressures.

Lack of incentives to report negative and null results.

Respondents unanimously concurred that the lack of incentives for academic researchers to publish ‘negative’ or ‘null’ findings is a major driver of non-reporting at all levels: project, experiment and individual animal (Q22). High impact journals that are crucial to academics’ career progression and ability to attract future funding are commonly not interested (Q51) in publishing non-positive findings (commonly defined through p-values, Q23) or replications (Q28), regardless of methodological rigour or scientific merit (Q24). Some respondents mentioned that if a paper on a ‘positive’ project includes non-positive results for discrete experiments, editors or reviewers often remove these (Q25). Publication in lower impact journals is unattractive, mainly due to opportunity costs (Q26), but also because achieving tenure can hinge on a researcher having a high impact average across all publications (Q27).

However, some respondents noted that non-positive findings can be published in high-impact journals if they refute previous landmark findings in the field (Q29). While the evidence bar may be set higher in such cases, such papers can later attract many citations (Q31).

Pressures to deliver positive results.

Career pressures to deliver clearly ‘positive’ results can drive some researchers to omit the data for some animals in journal articles (Q32). Respondents believed that such selective reporting is not uncommon (Q33, Q34).

Perceptions that some data do not add value.

Furthermore, some respondents thought that reporting some data was unnecessary as it would not add any scientific value (Q52).

Examples cited included data ruined by laboratory accidents (Q35), pre-intervention and pre-measurement dropouts (Q36, Q37), unexplained failures due to unknown variables (Q38), and experiments terminated after only very few animals were used (Q39).

Commercial pressures.

When studies are funded by commercial entities, funders may sometimes object to the publication of results because they are viewed as commercially confidential (Q42) or because they reflect negatively on the product being tested (Q41).

Reputational concerns.

Some respondents also pointed out that an absence of ‘positive’ results could indicate that a study was badly conceived (Q43), and that having high drop-out rates pre-experiment could be interpreted as a lack of skills (such as surgical skills) by an individual or study team (Q44, Q45), potentially exposing researchers to criticism even when a study was well designed and implemented.

Socio-political and regulatory pressures.

Such reputational concerns are compounded by socio-political and regulatory pressures. Widespread public and political animosity towards animal research in Germany (Q46, Q47) and close monitoring by activist groups (Q48, Q49) can disincentivise the sharing of failures and ‘negative’ results.

Discussion

Measuring incomplete reporting

Generating meaningful and reliable data on the extent of incomplete publication of biomedical animal studies is challenging in the German context. At the study level, it requires verifying that approved studies were actually initiated post-approval and taking into account complex publication pathways. At the experiment level and animal level, it requires analysing approvals plus numerous modification requests (Aenderungsantraege). This time-consuming methodology can establish an upper bound for the numbers of experiments performed and/or animals used, but will typically not capture post-approval reductions in the numbers of experiments or animals. To generate precise data on experiments performed and/or animals used, additional data would be required, for example from laboratory notebooks, specific documentation of animal research facilities, or other sources.

Causes of incomplete reporting

Respondents flagged six drivers of incomplete reporting of results in biomedical animal research. Four of these drivers (lack of incentives to report certain results, pressures to ‘deliver’ positive results, perceptions that some data do not add value, and commercial pressures) closely match drivers of incomplete reporting in other areas of research [18–22]. The fifth driver, reputational concerns, may play a far greater role in animal research than in human drug trials, possibly because investigators’ technical skills can affect animal survival rates more directly.

The sixth driver of incomplete reporting, socio-political and regulatory pressures, may be a specific feature of animal studies. The lack of social and political consensus in Germany on the desirability of conducting such research in the first place, combined with the vigilance of advocacy groups, generates an environment that discourages reporting the results of ‘failed experiments’ involving animals. In contrast, there is overwhelming social and political consensus that running well-designed clinical trials in humans is desirable, and a tacit understanding that clinical equipoise dictates that some participants in clinical trials may fail to experience benefits or even suffer harms.

The finding that reputational concerns are a strong driver of incomplete reporting in this field may merit further research. For example, surgeons’ skill and experience can affect the success rates for surgery [23, 24]. Future research could explore whether reputational concerns influence the reporting of clinical trials of surgical interventions.

Publicly available information about individual animal experiments

Incomplete reporting of animal studies can be reliably quantified at the study level using approval documentation in Europe, as two previous studies have already done [13, 14]. Individual animal studies, however, mostly comprise several experiments with hundreds of animals. The reporting rate at the study level, therefore, is conceptually flawed as it only captures whether any results of any experiment with any animals were reported. For a more meaningful understanding of the extent of incomplete reporting in animal research, the measurement of results reporting at the level of individual experiments or individual animals is needed.

Efforts at precise quantification at the experiment and animal levels, however, would require additional data about which animal experiments are ultimately conducted. This information is currently not accessible for systematic evaluations. One potential source for this kind of information could be local or national documentation of the characteristics of all animal experiments started and completed at university-based animal research facilities. The comprehensive preregistration of individual animal experiments could also facilitate measurements of publication rates, as demonstrated for clinical research [25, 26]. Preregistration of animal studies, however, is still in its infancy [27, 28].

Limitations

This study has two limitations. First, there may be additional factors contributing to incomplete reporting in animal research that were not identified by respondents. The interview guide asked the open-ended question of what the most common causes of incomplete reporting were, and we have reported all causes flagged by respondents. However, it is possible that additional, less common causes were not flagged during 18 hours of interviews. Second, it is unclear whether and to what extent our findings are generalizable beyond the specific context of animal research conducted within Germany. Other EU countries conducting animal research under Directive 2010/63/EU might experience similar challenges in measuring publication rates if they apply a similar study application and approval system that integrates different animal experiments under the umbrella of one animal study.

Concept for complete results reporting in animal research

Our research indicates that the concept of complete reporting in animal research remains contested and underdefined. While reporting guidelines such as ARRIVE (Animal Research: Reporting of In Vivo Experiments) [29] address the quality of reporting, further guidance is needed to specify which data from animal studies require reporting to guarantee an unbiased knowledge gain, and which data do not merit reporting in this regard. Furthermore, which dissemination routes qualify as appropriate results reporting? Several journals, such as PLOS ONE or BMJ Open Science, explicitly invite the submission of “negative” or “undesired” results. Other journals should follow this example to facilitate unbiased results reporting. Besides peer-reviewed journal articles, preprints, data repositories, and summary results in publicly accessible registries/databases might become important formats for results dissemination. In clinical trials, for example, the reporting of summary results in trial registries has become a broadly accepted alternative to journal publication.

Future efforts to improve results reporting in animal studies should take into account socio-political pressures, because in some contexts these can be significant factors discouraging the reporting of pre-experimental dropout rates and ‘null’ and ‘negative’ outcomes. More incentives and rewards, including career incentives, for complete results reporting in animal research might help to improve the status quo. Similar to the development of reporting guidelines [29, 30] or other guidelines from the Laboratory Animal Science Association (LASA) [31], the relevant stakeholder groups in animal research, including animal researchers, animal research facilities, funders, expert networks, and regulators, should work together to develop guidance and best practice standards for comprehensive results reporting. Once such guidance is available, valid measurement of the extent and consequences of incomplete reporting in animal research should be facilitated by academic institutions and regulators.

Supporting information

S1 File. Anonymised list of interviewees.

https://doi.org/10.1371/journal.pone.0271976.s001

(DOCX)

S3 File. Interview guide in German and English.

https://doi.org/10.1371/journal.pone.0271976.s003

(DOCX)

References

  1. Fanelli D: Negative results are disappearing from most disciplines and countries. Scientometrics 2012, 90:891–904.
  2. Franco A, Malhotra N, Simonovits G: Social science. Publication bias in the social sciences: unlocking the file drawer. Science 2014, 345(6203):1502–1505. pmid:25170047
  3. Munafo MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, du Sert NP, et al.: A manifesto for reproducible science. Nat Hum Behav 2017, 1:0021. pmid:33954258
  4. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, et al.: Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One 2008, 3(8):e3081. pmid:18769481
  5. Turner EH, Cipriani A, Furukawa TA, Salanti G, de Vries YA: Selective publication of antidepressant trials and its influence on apparent efficacy: Updated comparisons and meta-analyses of newer versus older trials. PLoS Med 2022, 19(1):e1003886. pmid:35045113
  6. Holman C, Piper SK, Grittner U, Diamantaras AA, Kimmelman J, Siegerink B, et al.: Where Have All the Rodents Gone? The Effects of Attrition in Experimental Research on Cancer and Stroke. PLoS Biol 2016, 14(1):e1002331. pmid:26726833
  7. Macleod MR, Lawson McLean A, Kyriakopoulou A, Serghiou S, de Wilde A, Sherratt N, et al.: Risk of Bias in Reports of In Vivo Research: A Focus for Improvement. PLoS Biol 2015, 13(10):e1002273. pmid:26460723
  8. Sena ES, van der Worp HB, Bath PM, Howells DW, Macleod MR: Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biol 2010, 8(3):e1000344. pmid:20361022
  9. Tsilidis KK, Panagiotou OA, Sena ES, Aretouli E, Evangelou E, Howells DW, et al.: Evaluation of excess significance bias in animal studies of neurological diseases. PLoS Biol 2013, 11(7):e1001609. pmid:23874156
  10. Begum R, Kolstoe S: Can UK NHS research ethics committees effectively monitor publication and outcome reporting bias? BMC Medical Ethics 2015, 16(1):51.
  11. Denneny C, Bourne S, Kolstoe SE: Registration audit of clinical trials given a favourable opinion by UK research ethics committees. BMJ Open 2019, 9(2):e026840. pmid:30796130
  12. Driessen E, Hollon SD, Bockting CLH, Cuijpers P, Turner EH: Does Publication Bias Inflate the Apparent Efficacy of Psychological Treatment for Major Depressive Disorder? A Systematic Review and Meta-Analysis of US National Institutes of Health-Funded Trials. PLoS One 2015, 10(9):e0137864. pmid:26422604
  13. Wieschowski S, Biernot S, Deutsch S, Glage S, Bleich A, Tolba R, et al.: Publication rates in animal research. Extent and characteristics of published and non-published animal studies followed up at two German university medical centres. PLoS One 2019, 14(11):e0223758. pmid:31770377
  14. van der Naald M, Wenker S, Doevendans PA, Wever KE, Chamuleau SAJ: Publication rate in preclinical research: a plea for preregistration. BMJ Open Sci 2020, 4(1):e100051. pmid:35047690
  15. ter Riet G, Korevaar DA, Leenaars M, Sterk PJ, Van Noorden CJ, Bouter LM, et al.: Publication bias in laboratory animal research: a survey on magnitude, drivers, consequences and potential solutions. PLoS One 2012, 7(9):e43404. pmid:22957028
  16. Vogt L, Reichlin TS, Nathues C, Wurbel H: Authorization of Animal Experiments Is Based on Confidence Rather than Evidence of Scientific Rigor. PLoS Biol 2016, 14(12):e2000598. pmid:27911892
  17. Fusch PI, Ness LR: Are We There Yet? Data Saturation in Qualitative Research. The Qualitative Report 2015, 20(9):1408–1416.
  18. Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, et al.: Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess 2010, 14(8):iii, ix-xi, 1–193. pmid:20181324
  19. van der Steen JT, van den Bogert CA, van Soest-Poortvliet MC, Fazeli Farsani S, Otten RHJ, Ter Riet G, et al.: Determinants of selective reporting: A taxonomy based on content analysis of a random selection of the literature. PLoS One 2018, 13(2):e0188247. pmid:29401492
  20. Sharma H, Verma S: Is positive publication bias really a bias, or an intentionally created discrimination toward negative results? Saudi J Anaesth 2019, 13(4):352–355. pmid:31572081
  21. Dirnagl U, Lauritzen M: Fighting publication bias: introducing the Negative Results section. Journal of Cerebral Blood Flow and Metabolism 2010, 30(7):1263–1264. pmid:20596038
  22. Turner EH: Publication Bias, with a Focus on Psychiatry: Causes and Solutions. CNS Drugs 2013, 27(6):457–468. pmid:23696308
  23. Blencowe NS, Mills N, Cook JA, Donovan JL, Rogers CA, Whiting P, et al.: Standardizing and monitoring the delivery of surgical interventions in randomized clinical trials. British Journal of Surgery 2016, 103(10):1377–1384. pmid:27462835
  24. Butterworth JW, Boshier PR, Mavroveli S, Van Lanschot JBB, Sasako M, Reynolds JV, et al.: Challenges to quality assurance of surgical interventions in clinical oncology trials: A systematic review. European Journal of Surgical Oncology 2021, 47(4):748–756. pmid:33059943
  25. Riedel N, Wieschowski S, Bruckner T, Holst MR, Kahrass H, Nury E, et al.: Results dissemination from completed clinical trials conducted at German university medical centers remained delayed and incomplete. The 2014–2017 cohort. J Clin Epidemiol 2022, 144:1–7. pmid:34906673
  26. Make it Public: transparency and openness in health and social care research [https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/research-transparency/make-it-public-transparency-and-openness-health-and-social-care-research/]
  27. Bert B, Heinl C, Chmielewska J, Schwarz F, Grune B, Hensel A, et al.: Refining animal research: The Animal Study Registry. PLOS Biology 2019, 17(10):e3000463. pmid:31613875
  28. Written Evidence Submitted by the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs). 2021.
  29. Kilkenny C, Browne W, Cuthill IC, Emerson M, Altman DG, Group NCRRGW: Animal research: reporting in vivo experiments: the ARRIVE guidelines. Br J Pharmacol 2010, 160(7):1577–1579. pmid:20649561
  30. Landis SC, Amara SG, Asadullah K, Austin CP, Blumenstein R, Bradley EW, et al.: A call for transparent reporting to optimize the predictive value of preclinical research. Nature 2012, 490(7419):187–191. pmid:23060188
  31. Guiding principles on good practice for Animal Welfare and Ethical Review Bodies. A report by the RSPCA Research Animals Department and LASA Education, Training and Ethics Section [http://www.lasa.co.uk/PDF/AWERB_Guiding_Principles_2015_final.pdf]