
Ending publication bias: A values-based approach to surface null and negative results

  • Stephen Curry ,

    Roles Conceptualization, Project administration, Writing – review & editing

    s.curry@imperial.ac.uk (SC); e.mercado.lara@gmail.com (EM-L)

    Affiliations Department of Life Sciences, Imperial College London, London, United Kingdom, Research on Research Institute, London, United Kingdom

  • Eunice Mercado-Lara ,

    Roles Conceptualization, Project administration, Writing – review & editing


    Affiliation Open Research Community Accelerator, San Francisco, California, United States of America

  • Virginia Arechavala-Gomeza,

    Roles Conceptualization

    Affiliations Nucleic Acid Therapeutics for Rare Diseases, Biobizkaia Health Research Institute, Barakaldo, Spain, Ikerbasque, Basque Foundation for Science, Bilbao, Spain

  • C. Glenn Begley,

    Roles Conceptualization

    Affiliation Begley Biotech Consulting, Melbourne, Victoria, Australia

  • Christophe Bernard,

    Roles Conceptualization

    Affiliation Aix Marseille Université, INSERM, Institut de Neurosciences des Systèmes, Marseille, France

  • René Bernard,

    Roles Conceptualization

    Affiliation Excellenzcluster NeuroCure, Charité – Universitätsmedizin Berlin, Berlin, Germany

  • Stefano Bertuzzi,

    Roles Conceptualization

    Affiliation American Society for Microbiology, Washington, District of Columbia, United States of America

  • Needhi Bhalla,

    Roles Conceptualization, Writing – review & editing

    Affiliation Department of Molecular, Cell and Developmental Biology, University of California, Santa Cruz, Santa Cruz, California, United States of America

  • Dawn Bowers,

    Roles Conceptualization

    Affiliation Department of Clinical and Health Psychology, University of Florida, Gainesville, Florida, United States of America

  • Samuel Brod,

    Roles Conceptualization

    Affiliation BioMed Central, London, United Kingdom

  • Christopher Chambers,

    Roles Conceptualization

    Affiliation School of Psychology, Cardiff University, Cardiff, United Kingdom

  • Michael R. Dougherty,

    Roles Conceptualization, Writing – review & editing

    Affiliation Department of Psychology, University of Maryland, College Park, Maryland, United States of America

  • Yensi Flores Bueso,

    Roles Conceptualization, Writing – review & editing

    Affiliations Cancer Research @UCC, University College Cork, Cork, Republic of Ireland, Institute for Protein Design, University of Washington, Seattle, Washington, United States of America, Global Young Academy, Halle, Germany

  • Stefânia Forner,

    Roles Conceptualization

    Affiliation Alzheimer’s Association, Chicago, Illinois, United States of America

  • Alexandra L. J. Freeman,

    Roles Conceptualization, Writing – review & editing

    Affiliation Octopus Publishing CIC, Abingdon, United Kingdom

  • Magali Haas,

    Roles Conceptualization, Writing – review & editing

    Affiliation Cohen Veterans Bioscience, New York, New York, United States of America

  • Darla P. Henderson,

    Roles Conceptualization, Writing – review & editing

    Affiliation Federation of American Societies for Experimental Biology, Rockville, Maryland, United States of America

  • Kanika Khanna,

    Roles Conceptualization

    Affiliation Gladstone Institute of Virology, University of California, San Francisco, San Francisco, California, United States of America

  • Rebecca Lawrence,

    Roles Conceptualization, Writing – review & editing

    Affiliation F1000 Research Ltd, Taylor & Francis Group, London, United Kingdom

  • Kif Liakath-Ali,

    Roles Conceptualization

    Affiliation School of Biological Sciences, University of Southampton, Southampton, United Kingdom

  • Christine Liu,

    Roles Conceptualization

    Affiliation Department of Psychiatry, University of California, San Francisco, San Francisco, California, United States of America

  • Neil Malhotra,

    Roles Conceptualization

    Affiliation Graduate School of Business, Stanford University, Stanford, California, United States of America

  • José G. Merino,

    Roles Conceptualization, Writing – review & editing

    Affiliation Department of Neurology, Georgetown University Medical Center, Washington, District of Columbia, United States of America

  • Edward Miguel,

    Roles Conceptualization

    Affiliation Department of Economics and Center for Effective Global Action, University of California, Berkeley, Berkeley, California, United States of America

  • Rachel Miles,

    Roles Conceptualization

    Affiliation University Libraries, Virginia Tech, Blacksburg, Virginia, United States of America

  • Mary Munson,

    Roles Conceptualization

    Affiliation Department of Biochemistry and Molecular Biotechnology, University of Massachusetts Chan Medical School, Worcester, Massachusetts, United States of America

  • Shinichi Nakagawa,

    Roles Conceptualization

    Affiliation Department of Biological Sciences, University of Alberta, Edmonton, Alberta, Canada

  • Robert Nobles,

    Roles Conceptualization

    Affiliation Office of Research Administration, Emory University, Atlanta, Georgia, United States of America

  • Joy Owango,

    Roles Conceptualization

    Affiliation Training Centre in Communication, University of Nairobi, Nairobi, Kenya

  • Michel Tuan Pham,

    Roles Conceptualization, Writing – review & editing

    Affiliation Columbia Business School, Columbia University, New York, New York, United States of America

  • Gina Poe,

    Roles Conceptualization

    Affiliation Integrative Biology and Physiology Department, University of California, Los Angeles, Los Angeles, California, United States of America

  • Alexandra N. Ramirez,

    Roles Conceptualization

    Affiliation Neuroscience Program, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America

  • Sarvenaz Sarabipour,

    Roles Conceptualization

    Affiliation Department of Cell Biology, University of Connecticut School of Medicine, Farmington, Connecticut, United States of America

  • Jill L. Silverman,

    Roles Conceptualization, Writing – review & editing

    Affiliation Department of Psychiatry and Behavioral Sciences, MIND Institute, School of Medicine, University of California, Davis, Sacramento, California, United States of America

  • Laura N. Smith,

    Roles Conceptualization

    Affiliation Department of Neuroscience and Experimental Therapeutics, Texas A&M University College of Medicine, Bryan, Texas, United States of America

  • P. Sriramarao,

    Roles Conceptualization

    Affiliation Office of the Vice President for Research and Innovation, Virginia Commonwealth University, Richmond, Virginia, United States of America

  • Paul W. Sternberg,

    Roles Conceptualization

    Affiliation Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, California, United States of America

  • Geeta K. Swamy,

    Roles Conceptualization

    Affiliation Department of Obstetrics and Gynecology, Duke University, Durham, North Carolina, United States of America

  • Malú Gámez Tansey,

    Roles Conceptualization

    Affiliation McKnight Brain Institute, Department of Neuroscience, University of Florida Health System, Gainesville, Florida, United States of America

  • Gonzalo E. Torres,

    Roles Conceptualization

    Affiliation Department of Molecular Pharmacology and Neuroscience, Loyola University Chicago Stritch School of Medicine, Chicago, Illinois, United States of America

  • Erick H. Turner,

    Roles Conceptualization

    Affiliation Department of Psychiatry, Oregon Health & Science University, Portland, Oregon, United States of America

  • Lauren von Klinggraeff,

    Roles Conceptualization

    Affiliation Department of Community and Behavioral Health Sciences, Institute of Public and Preventive Health, School of Public Health, Augusta University, Augusta, Georgia, United States of America

  •  [ ... ],
  • Frances Weis-Garcia

    Roles Conceptualization

    Affiliation Antibody and Bioresource Core Facility, Memorial Sloan Kettering Cancer Center, New York, New York, United States of America


Abstract

Sharing knowledge is a basic tenet of the scientific community, yet publication bias arising from the reluctance or inability to publish negative or null results remains a long-standing and deep-seated problem, albeit one that varies in severity between disciplines and study types. Recognizing that previous endeavors to address the issue have been fragmentary and largely unsuccessful, this Consensus View proposes concrete and concerted measures that major stakeholders can take to create and incentivize new pathways for publishing negative results. Funders, research institutions, publishers, learned societies, and the research community all have a role in making this an achievable norm that will buttress public trust in science.

Introduction

Scientific progress relies on the dissemination of knowledge so that others can build upon it. Yet not all scientific knowledge is disseminated [1]. Studies with ‘nonsignificant’ or ‘unexciting’ results often remain unpublished, a phenomenon known as ‘the file drawer problem’ or publication bias [2]. This bias persists despite decades of warnings about its consequences [3–8]. By its very nature, the extent of publication bias is difficult to ascertain, but the available data clearly indicate that null findings are underreported. For instance, according to recent studies, fewer than 2 in 100 articles on prognostic markers or animal models of stroke report null findings [9,10], a result that is unlikely to arise from a system that faithfully reports the outcome of experiments regardless of statistical significance. More concretely, published meta-analyses frequently show evidence of publication bias [11], and the introduction of the ‘registered report’ format, in which journals commit to publication before experimental outcomes are known, substantially increased the proportion of null findings in some psychology journals [12]. Nevertheless, publication bias remains the subject of ongoing debate. In a wide-ranging critique of the so-called ‘crisis narrative’ that has emerged in recent years, Fanelli [13] recognizes the problem but cautions against universalizing it, citing the lack of evidence across many fields. Silvertown and McConway [14] argue that the impact of publication bias may have been overstated due to underestimates of the tendency for false positives to eventually succumb to further scrutiny.

The extent of publication bias does seem to vary across disciplines. For example, surveys of meta-analyses suggest that publication bias is greater in some social science disciplines than it is in biomedical or physical sciences [11,15]. This variation can arise from any number of factors, ranging from the choice of research methodology (e.g., observational studies, or research that is more descriptive, qualitative, or theoretical) to distinctive disciplinary norms or cultures. Importantly, both the magnitude and consequences of underreporting of null findings also vary across disciplines. In biomedicine and clinical research, unreported null results can lead to patient-care risks, whereas in fields like economics or ecology, the societal impact of underreported null findings might be less obvious than their impact on research efficiency and the advancement of knowledge [16].

Notwithstanding the modulating effects of methodological and disciplinary variations, the consequences of publication bias can be severe. Unpublished null studies waste resources, slow the pace of science, and impede career advancement. Unwitting researchers are likely to expend time and money conducting similar experiments, not realizing that prior work has yielded null results: time and money that could have been spent pursuing new and more promising ideas. The failure to publish null findings also distorts the literature, resulting in exaggerated effect sizes [17–20], biased meta-analyses [11], inaccurate clinical trial results, flawed policy interventions [21–23], and the acceptance of false claims if incorrect ideas are not challenged [20,24]. Moreover, the rise of artificial intelligence (AI) models trained on incomplete data can magnify this problem, potentially contributing to misinformation [25] and undermining public trust in science.

Publication bias is deeply rooted in scientific culture. Null studies are often perceived as less valuable, leading to harsher peer reviews [26–28] and lower citation rates [29,30]. Null studies are also widely assumed to stand little chance of acceptance at higher-impact journals [31], further limiting their visibility and reinforcing biases against reporting such findings. Tenure and promotion systems frequently prioritize journal impact factors over methodological rigor [32,33], further discouraging researchers from sharing null results. Additional discouragement arises because reviewers of funding applications may focus on the perceived prestige of the publication outlet rather than the content of investigators’ prior work [34]. As a result, many investigators are reluctant to publish null studies despite the time and effort spent conducting experiments and other empirical studies [35–37]. Even apart from external drivers, investigators may, for good reason, lose confidence in a line of inquiry without having validated or controlled their findings to the rigorous degree that might be applied to a positive result, and therefore be less likely to write them up for publication. There are many ways for an experiment to go awry, and the constant pressure for researchers to focus on what they perceive to be their most productive lines of inquiry may lead them to set aside lines of research that have not yielded positive results.

Journal practices can also exacerbate publication bias. A recent analysis (https://go.nih.gov/looOO3o) by the US National Institute of Neurological Disorders and Stroke (NINDS) found that 180 out of 215 neuroscience journals do not explicitly welcome null studies, while only 14 appeared to accept null studies without additional conditions (e.g., a higher burden of evidence than required for positive studies). Although newer scientific dissemination mechanisms such as study preregistration and Registered Reports [38–41] have shown promise in increasing the publication of null results [12,42], they are not universally applicable [43–47], despite recent efforts to extend their use to exploratory and observational studies [48,49]. Other more recent formats, such as micropublications [50] and modular publications [51], also offer promising avenues for sharing null studies, as do F1000 open research platforms and preprint and data repository platforms such as bioRxiv, arXiv, OSF Preprints, Zenodo, Figshare, and Dryad (bioRxiv even has a dedicated ‘Contradictory Results’ section), all of which can offer more frictionless avenues to dissemination than traditional journals [52,53]. However, these platforms still underrepresent null findings (https://go.nih.gov/looOO3o).

Given the many and varied factors that sustain publication bias, what scope is there to prevent it? In this Consensus View, we present a framework for practical action that seeks to enlist contributions from all the relevant stakeholders within the research ecosystem. Although our perspective is primarily biomedical, we believe there is a fresh opportunity for a wider discussion of the problem of publication bias across all disciplines and the construction of proportionate and effective measures to address it.

Methodology

The ideas and reflections presented here were generated in discussions among participants of the Novel Approaches to Preventing Publication Bias Workshop (https://go.nih.gov/Y6iYPxU), which was run by the NINDS Office of Research Quality in May 2024 in Bethesda, Maryland, USA. This meeting aimed to assess progress and identify remaining barriers to disseminating null studies across the research ecosystem.

Around 50 stakeholders were invited to ensure broad representation by sector (researchers and research institutions; traditional and nontraditional publishers; journal editors; funders; nonprofit organizations; industry; scientific societies), career stage (senior and early career researchers), and geography (United States of America, United Kingdom, Ireland, Germany, Spain, France, Mexico, Australia, plus virtual participation from Kenya). Although most attendees were drawn from biomedical fields, experts in psychology, economics, business, ecology, and public health were included to capture cross-disciplinary perspectives.

The agenda combined plenary presentations with five sector-focused breakout sessions (researchers, research institutions, traditional and nontraditional publishers, scientific societies and nonprofit organizations, and funders and policymakers), each facilitated by a domain expert and an early career participant. Sessions were followed by a closing plenary in which all attendees synthesized key insights.

Facilitators recorded breakout session notes using a shared template. These notes were consolidated in the closing plenary, then thematically coded by two independent analysts. Emergent themes were organized into the framework presented here, ensuring that domain-specific nuances and overarching patterns informed the final recommendations.

Values-based approach to system change

To address the causes of publication bias, scholars from diverse disciplinary and geographical backgrounds gathered at a NINDS-hosted meeting in May 2024. They agreed that while scientists should be free to choose which questions to pursue, whatever the results of their research, the current scientific culture clearly incentivizes the production of ‘significant’ or positive results over methodological rigor [54–57]. To transform this culture, we argue that there is a need to shift away from valuing only positive or ‘exciting’ results towards prioritizing the importance of the research question and the quality of the research process, regardless of outcome [55,58,59]. We therefore propose a values-based approach [60–66] to reforming policies, activities, and incentives to reduce or eliminate the occurrence of publication bias (Fig 1). At its core, we believe this means removing barriers to sharing all knowledge (regardless of statistical significance or perceived impact) and enabling, incentivizing, and centering the key academic values of transparency, accessibility, and openness.

Fig 1. Values-based approach to reducing publication bias.

Conceptual diagram illustrating the cyclic and iterative nature of the steps involved in seeking to end publication bias.

https://doi.org/10.1371/journal.pbio.3003368.g001

Drawing principally on experience from biomedical and related science, technology, engineering and mathematics domains, but with relevance to social science, the humanities, and beyond, we therefore call on all sectors of the scientific enterprise to commit openly to the dissemination of all knowledge and to enact specific, practical changes to accomplish this goal within their respective disciplines. We propose a framework for action that includes improved incentive structures to reward transparency and rigor, the creation of simpler mechanisms for reporting null results, and collaboration among sectors of the scientific community to achieve this goal.

Sustaining these changes will require reinforcement from a wide range of actors in the funding, delivery, and dissemination of research. Below, we outline concrete steps for different stakeholders to reduce publication bias.

Openly commit to the value of disseminating all knowledge

As a first step, we ask all scientific entities (including institutions, departments, core facilities, libraries, ethics committees, scientific societies, funders, journals, laboratory groups, and individual scientists) to reflect on the value of sharing and disseminating all knowledge and to hold internal conversations on why this is crucial for their missions. This is essential to winning internal buy-in from across organizations so that they are positioned to make credible and achievable commitments within their respective domains to sharing all knowledge and eliminating publication bias.

Identify which practices align with sharing all knowledge

Funders, research-performing organizations, and publishers should assess whether current practices facilitate or hinder transparent research processes. For example, do current practices of researcher assessment (e.g., graduation/degree requirements, promotion, and tenure policies) encourage dissemination of all high-quality research, including null results, or do they focus only on the most ‘exciting’ results (e.g., by heavily weighting bibliometrics or media coverage, which are dubious measures of research quality [67–70])? Does manuscript peer review give due weight to the questions being addressed and the methodological quality of the experiments when determining scientific merit and impact? Does grant review focus on the methodological rigor of investigators’ prior work rather than on the journals in which their papers were published? In many current research settings, the answer is no [55,71]. Explicitly outlining which practices should be modified enables a systematic approach to conceptualizing and implementing these changes, such as through new resources or incentives. Sharing research findings publicly should be a default part of the research workflow rather than an optional step that depends on study outcomes. Commitments to sharing all knowledge should be accompanied by a clear plan of action so that organizations can publicly be held accountable. Table 1 illustrates specific actions for different entities to consider.

Table 1. High-priority interventions that promote the dissemination of all knowledge, including null studies, across research fields.

https://doi.org/10.1371/journal.pbio.3003368.t001

While the ideas in Table 1 represent possible starting points for different stakeholders, we do not wish to be overly prescriptive in how to effect change. The most effective solutions are likely to be tailored to disciplinary norms and practices and will therefore require ground-level input, but we would nevertheless highlight some ideas for practical implementation by different stakeholders. Funders, for example, could pilot a ‘null-results summary’ program that requires a one-page report of null findings linked to project registries that are accessible to reviewers of subsequent funding applications; such reports could also usefully include mention of work packages or proposed research questions that were not pursued (e.g., because of unanticipated developments or shifts in priorities). Institutions could further support this effort by hosting discipline-specific ‘null-data clinics’, periodic forums where researchers present null results alongside methodological lessons learned. Publishers could place a more explicit emphasis on the review of methodological rigor when handling manuscripts and highlight high-quality papers reporting null findings to encourage submissions. They could also monitor acceptance rates of manuscripts containing positive or negative results to iteratively refine their guidelines for authors and reviewers.

Enable and reinforce change

Given their influence in the system, research funders (public and private) are uniquely positioned to lead the implementation of policies that incentivize research transparency. Funder policies that are adequately enforced provide strong but proportionate incentives to change researcher reporting behavior and institutional researcher assessment practices. Funders also have the resources to spearhead the development of new infrastructure for easy dissemination outside of traditional publishing venues, for example, by supporting platforms that enable simple and widespread sharing of concise reports of null findings or unfinished projects.

Academic institutions will no doubt want to ensure their researchers are responding to funder incentives, but can synergize in other ways with efforts to address publication bias. For example, when assessing researchers, they should prioritize methodological quality in evaluating experimental outcomes. This could encourage a shift away from reliance on publication prestige and support outlets that emphasize methodological quality and openness. Enhancing education and awareness about effective knowledge sharing will further support these changes.

More ambitious actions, such as creating new tools or platforms for publication and review of null results, will likely require careful staged implementation, piloting, and engagement with different stakeholders (e.g., funders, publishers, researchers, and reviewers). Only with proper incentives and well-designed processes will barriers to the publication of null results be overcome.

Evaluate and iteratively improve interventions over time

As with any effort to enact genuine reform, a crucial step will be to document, evaluate, and publicly share whether modifications to processes, policies, and practices increase research transparency. At the same time, reformers should be mindful to avoid any negative unintended consequences (e.g., inequity, inappropriate gaming of the system, or undue burden). There are a number of formal evaluation frameworks that may be helpful for guiding the change process [72–74]. These emphasize the importance of a systems-level perspective, collaborating with stakeholders to develop and contextualize the framework, and iterative incorporation of new insights to refine the intervention. Openness to critical evaluation throughout cycles of reform (Fig 1) will be essential if we are to make real progress in addressing publication bias.

Conclusions

While sharing knowledge is a fundamental principle within the scientific community, publication bias has proven a stubbornly persistent problem and requires renewed attention from the broader research community as part of wider debates about the health of the research and scholarly enterprise. Addressing the challenge of publication bias will not just enhance the progress of research, but also buttress the social contract that publicly funded research relies on for continued support. Our roadmap has a role for all stakeholders (funders, institutions, publishers, and researchers) in promoting knowledge sharing, research transparency, and rigor, but change will only happen if we are all willing to play our part.

Acknowledgments

The authors would like to thank the organizers of the 2024 National Institute of Neurological Disorders and Stroke (NINDS) Novel Approaches to Preventing Publication Bias workshop, including Devon Crawford, Mariah Hoye, and Shai Silberberg, who contributed to early drafts of this manuscript. The authors would also like to thank all workshop participants for lively and informative discussions that shaped this paper, including Prachee Avasthi, Anna Hatch, Erin McKiernan, and Bodo Stern, who provided insightful comments on this manuscript. The content of this publication does not reflect the views or policies of the Department of Health and Human Services or of authors’ affiliated organizations, nor does mention of trade names, commercial products, or organizations imply endorsement by the United States Government.

References

  1. Fanelli D. Negative results are disappearing from most disciplines and countries. Scientometrics. 2011;90(3):891–904.
  2. Rosenthal R. The file drawer problem and tolerance for null results. Psychol Bull. 1979;86(3):638–41.
  3. Greenwald AG. Consequences of prejudice against the null hypothesis. Psychol Bull. 1975;82(1):1–20.
  4. Coursol A, Wagner EE. Effect of positive findings on submission and acceptance rates: a note on meta-analysis bias. Prof Psychol Res Pr. 1986;17(2):136–7.
  5. Simes RJ. Publication bias: the case for an international registry of clinical trials. J Clin Oncol. 1986;4(10):1529–41. pmid:3760920
  6. Dickersin K. The existence of publication bias and risk factors for its occurrence. JAMA. 1990;263(10):1385.
  7. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan A-W, Cronin E, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One. 2008;3(8):e3081. pmid:18769481
  8. Rothstein HR. Publication bias as a threat to the validity of meta-analytic results. J Exp Criminol. 2007;4(1):61–81.
  9. Kyzas PA, Denaxa-Kyza D, Ioannidis JPA. Almost all articles on cancer prognostic markers report statistically significant results. Eur J Cancer. 2007;43(17):2559–79. pmid:17981458
  10. Sena ES, van der Worp HB, Bath PMW, Howells DW, Macleod MR. Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biol. 2010;8(3):e1000344. pmid:20361022
  11. Bartoš F, Maier M, Wagenmakers E-J, Nippold F, Doucouliagos H, Ioannidis JPA, et al. Footprint of publication selection bias on meta-analyses in medicine, environmental sciences, psychology, and economics. Res Synth Methods. 2024;15(3):500–11. pmid:38327122
  12. Scheel AM, Schijen MRMJ, Lakens D. An excess of positive results: comparing the standard psychology literature with registered reports. Adv Methods Pract Psychol Sci. 2021;4(2).
  13. Fanelli D. Is science in crisis? In: Research integrity. New York: Oxford University Press; 2022. p. 93–121.
  14. Silvertown J, McConway KJ. Does “Publication Bias” lead to biased science? Oikos. 1997;79(1):167.
  15. Fanelli D, Costas R, Ioannidis JPA. Meta-assessment of bias in science. Proc Natl Acad Sci U S A. 2017;114(14):3714–9. pmid:28320937
  16. Fanelli D. Opinion: is science really facing a reproducibility crisis, and do we need it to? Proc Natl Acad Sci U S A. 2018;115(11):2628–31. pmid:29531051
  17. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008;358(3):252–60. pmid:18199864
  18. Turner EH, Knoepflmacher D, Shapley L. Publication bias in antipsychotic trials: an analysis of efficacy comparing the published literature to the US Food and Drug Administration database. PLoS Med. 2012;9(3):e1001189. pmid:22448149
  19. Roest AM, de Jonge P, Williams CD, de Vries YA, Schoevers RA, Turner EH. Reporting bias in clinical trials investigating the efficacy of second-generation antidepressants in the treatment of anxiety disorders: a report of 2 meta-analyses. JAMA Psychiatry. 2015;72(5):500–10. pmid:25806940
  20. de Vries YA, Roest AM, de Jonge P, Cuijpers P, Munafò MR, Bastiaansen JA. The cumulative effect of reporting and citation biases on the apparent efficacy of treatments: the case of depression. Psychol Med. 2018;48(15):2453–5. pmid:30070192
  21. van der Worp HB, Howells DW, Sena ES, Porritt MJ, Rewell S, O’Collins V, et al. Can animal models of disease reliably inform human studies? PLoS Med. 2010;7(3):e1000245. pmid:20361020
  22. Marks-Anglin A, Chen Y. A historical review of publication bias. Res Synth Methods. 2020;11(6):725–42. pmid:32893970
  23. Nakagawa S, Lagisz M, Yang Y, Drobniak SM. Finding the right power balance: better study design and collaboration can reduce dependence on statistical power. PLoS Biol. 2024;22(1):e3002423. pmid:38190355
  24. Nissen SB, Magidson T, Gross K, Bergstrom CT. Publication bias and the canonization of false facts. Elife. 2016;5:e21451. pmid:27995896
  25. Brazil R. Illuminating “the ugly side of science”: fresh incentives for reporting negative results. Nature. 2024. pmid:39174776
  26. Emerson GB, Warme WJ, Wolf FM, Heckman JD, Brand RA, Leopold SS. Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial. Arch Intern Med. 2010;170(21):1934–9. pmid:21098355
  27. Muradchanian J, Hoekstra R, Kiers H, van Ravenzwaaij D. The role of results in deciding to publish: a direct comparison across authors, reviewers, and editors based on an online survey. PLoS One. 2023;18(10):e0292279. pmid:37788282
  28. von Klinggraeff L, Burkart S, Pfledderer CD, Saba Nishat MN, Armstrong B, Weaver RG, et al. Scientists’ perception of pilot study quality was influenced by statistical significance and study design. J Clin Epidemiol. 2023;159:70–8. pmid:37217107
  29. Fanelli D. Positive results receive more citations, but only in some disciplines. Scientometrics. 2012;94(2):701–9.
  30. Jannot A-S, Agoritsas T, Gayet-Ageron A, Perneger TV. Citation bias favoring statistically significant studies was present in medical research. J Clin Epidemiol. 2013;66(3):296–301. pmid:23347853
  31. Joober R, Schmitz N, Annable L, Boksa P. Publication bias: what are the challenges and can they be overcome? J Psychiatry Neurosci. 2012;37(3):149–52. pmid:22515987
  32. Schimanski LA, Alperin JP. The evaluation of scholarship in academic promotion and tenure processes: past, present, and future. F1000Res. 2018;7:1605. pmid:30647909
  33. McKiernan EC, Schimanski LA, Muñoz Nieves C, Matthias L, Niles MT, Alperin JP. Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations. Elife. 2019;8:e47338. pmid:31364991
  34. Rockey S. Changes to the biosketch. Extramural Nexus. 2014.
  35. ter Riet G, Korevaar DA, Leenaars M, Sterk PJ, Van Noorden CJF, Bouter LM, et al. Publication bias in laboratory animal research: a survey on magnitude, drivers, consequences and potential solutions. PLoS One. 2012;7(9):e43404. pmid:22957028
  34. 34. Rockey S. Changes to the biosketch. Extramural Nexus. 2014.
  35. 35. ter Riet G, Korevaar DA, Leenaars M, Sterk PJ, Van Noorden CJF, Bouter LM, et al. Publication bias in laboratory animal research: a survey on magnitude, drivers, consequences and potential solutions. PLoS One. 2012;7(9):e43404. pmid:22957028
  36. 36. Echevarría L, Malerba A, Arechavala-Gomeza V. Researcher’s perceptions on publishing “negative” results and open access. Nucleic Acid Ther. 2021;31(3):185–9. pmid:32730128
  37. 37. Herbet M, Leonard J, Santangelo MG, Albaret L. Dissimulate or disseminate? A survey on the fate of negative results. Learned Publishing. 2022;35(1):16–29.
  38. 38. Nosek BA, Ebersole CR, DeHaven AC, Mellor DT. The preregistration revolution. Proc Natl Acad Sci U S A. 2018;115(11):2600–6. pmid:29531091
  39. 39. Drax K, Clark R, Chambers CD, Munafò M, Thompson J. A qualitative analysis of stakeholder experiences with Registered Reports Funding Partnerships. Wellcome Open Res. 2021;6:230. pmid:34957336
  40. 40. Henderson EL, Chambers CD. Ten simple rules for writing a Registered Report. PLoS Comput Biol. 2022;18(10):e1010571. pmid:36301802
  41. 41. Hardwicke TE, Wagenmakers E-J. Reducing bias, increasing transparency and calibrating confidence with preregistration. Nat Hum Behav. 2023;7(1):15–26. pmid:36707644
  42. 42. Allen C, Mehler DMA. Open science challenges, benefits and tips in early career and beyond. PLoS Biol. 2019;17(12):e3000587.
  43. 43. Bakker M, Veldkamp CLS, van Assen MALM, Crompvoets EAV, Ong HH, Nosek BA, et al. Ensuring the quality and specificity of preregistrations. PLoS Biol. 2020;18(12):e3000937. pmid:33296358
  44. 44. Pham MT, Oh TT. Preregistration is neither sufficient nor necessary for good science. J Consum Psychol. 2021;31(1):163–76.
  45. 45. Chambers CD, Tzavella L. The past, present and future of Registered Reports. Nat Hum Behav. 2022;6(1):29–42. pmid:34782730
  46. 46. Anthony N, Tisseaux A, Naudet F. Published registered reports are rare, limited to one journal group, and inadequate for randomized controlled trials in the clinical field. J Clin Epidemiol. 2023;160:61–70. pmid:37245701
  47. 47. Manago B. Preregistration and registered reports in sociology: Strengths, weaknesses, and other considerations. Am Soc. 2023;54(1):193–210.
  48. 48. Leavitt VM. Strengthening the evidence: a call for preregistration of observational study outcomes. Mult Scler. 2020;26(12):1608–9. pmid:31668130
  49. 49. Dal-Ré R, Ioannidis JP, Bracken MB, Buffler PA, Chan A-W, Franco EL, et al. Making prospective registration of observational research a reality. Sci Transl Med. 2014;6(224):224cm1. pmid:24553383
  50. 50. Raciti D, Yook K, Harris TW, Schedl T, Sternberg PW. Micropublication: incentivizing community curation and placing unpublished data into the public domain. Database (Oxford). 2018;2018:bay013. pmid:29688367
  51. 51. Dhar P. Octopus and ResearchEquals aim to break the publishing mould. Nature. 2023. pmid:36949136
  52. 52. Tennant J, Bauin S, James S, Kant J. The evolving preprint landscape: introductory report for the Knowledge Exchange working group on preprints. Center for Open Science. 2018.
  53. 53. Sarabipour S, Debat HJ, Emmott E, Burgess SJ, Schwessinger B, Hensel Z. On the value of preprints: an early career researcher perspective. PLoS Biol. 2019;17(2):e3000151. pmid:30789895
  54. 54. Casadevall A, Fang FC. Causes for the persistence of impact factor mania. mBio. 2014;5(2):e00064-14. pmid:24643863
  55. 55. Moore S, Neylon C, Eve MP, O’Donnell DP, Pattinson D. Excellence R US: University research and the fetishisation of excellence. Palgrave Communications. 2017;3:16105.
  56. 56. Ellis RJ. Questionable research practices, low statistical power, and other obstacles to replicability: Why preclinical neuroscience research would benefit from registered reports. eNeuro. 2022;9(4). pmid:35922130
  57. 57. Crawford DC, Hoye ML, Silberberg SD. From methods to monographs: fostering a culture of research quality. eNeuro. 2023;10(8). pmid:37553250
  58. 58. Moher D, Naudet F, Cristea IA, Miedema F, Ioannidis JPA, Goodman SN. Assessing scientists for hiring, promotion, and tenure. PLoS Biol. 2018;16(3):e2004089. pmid:29596415
  59. 59. Laitin DD, Miguel E, Alrababa’h A, Bogdanoski A, Grant S, Hoeberling K, et al. Reporting all results efficiently: a RARE proposal to open up the file drawer. Proc Natl Acad Sci U S A. 2021;118(52):e2106178118. pmid:34933997
  60. 60. Dougherty MR, Slevc LR, Grand JA. Making research evaluation more transparent: aligning research philosophy, institutional values, and reporting. Perspect Psychol Sci. 2019;14(3):361–75. pmid:30629888
  61. 61. Agate N, Kennison R, Konkiel S, Long CP, Rhody J, Sacchi S, et al. The transformative power of values-enacted scholarship. Humanit Soc Sci Commun. 2020;7(1).
  62. 62. Schmidt R, Curry S, Hatch A. Creating SPACE to evolve academic assessment. Elife. 2021;10:e70929. pmid:34554086
  63. 63. HuMetricsHSS. Walking the talk: toward a values-aligned academy. 2022. https://hcommons.org/deposits/item/hc:44631/
  64. 64. Carter C, Dougherty MR, McKiernan EC, Tananbaum G. Promoting values-based assessment in review, promotion, and tenure processes. Commonplace. 2023.
  65. 65. Himanen L, Conte E, Gauffriau M, Strøm T, Wolf B, Gadd E. The SCOPE framework - implementing ideals of responsible research assessment. F1000Res. 2024;12:1241. pmid:38813348
  66. 66. McKiernan E, Carter C, Dougherty MR, Tananbaum G. A framework for values-based assessment in promotion, tenure, and other academic evaluations. Center for Open Science. 2024.
  67. 67. Nieminen P, Carpenter J, Rucker G, Schumacher M. The relationship between quality of research and citation frequency. BMC Med Res Methodol. 2006;6:42. pmid:16948835
  68. 68. Aksnes DW, Langfeldt L, Wouters P. Citations, citation indicators, and research quality: an overview of basic concepts and theories. Sage Open. 2019;9(1).
  69. 69. Saginur M, Fergusson D, Zhang T, Yeates K, Ramsay T, Wells G, et al. Journal impact factor, trial effect size, and methodological quality appear scantly related: a systematic review and meta-analysis. Syst Rev. 2020;9(1):53. pmid:32164791
  70. 70. Dougherty MR, Horne Z. Citation counts and journal impact factors do not capture some indicators of research quality in the behavioural and brain sciences. R Soc Open Sci. 2022;9(8):220334. pmid:35991336
  71. 71. Aubert Bonn N, Pinxten W. Rethinking success, integrity, and culture in research (part 1) - a multi-actor qualitative study on success in science. Res Integr Peer Rev. 2021;6(1):1. pmid:33441187
  72. 72. Fernandez ME, Ruiter RAC, Markham CM, Kok G. Intervention mapping: theory-and evidence-based health promotion program planning: perspective and examples. Front Public Health. 2019;7:209. pmid:31475126
  73. 73. Kidder DP, Fierro LA, Luna E, Salvaggio H, McWhorter A, Bowen S-A, et al. CDC program evaluation framework, 2024. MMWR Recomm Rep. 2024;73(6):1–37. pmid:39316770
  74. 74. Longworth GR, Goh K, Agnello DM, Messiha K, Beeckman M, Zapata-Restrepo JR, et al. A review of implementation and evaluation frameworks for public health interventions to inform co-creation: a Health CASCADE study. Health Res Policy Syst. 2024;22(1):39. pmid:38549162