
“He who pays the piper calls the tune”: Researcher experiences of funder suppression of health behaviour intervention trial findings

  • Sam McCrabb ,

    Roles Conceptualization, Formal analysis, Investigation, Writing – original draft

    Affiliation School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, Callaghan, New South Wales, Australia

  • Kaitlin Mooney,

    Roles Methodology, Writing – review & editing

    Affiliation School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, Callaghan, New South Wales, Australia

  • Luke Wolfenden,

    Roles Conceptualization, Formal analysis, Funding acquisition, Investigation, Writing – review & editing

    Affiliations School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, Callaghan, New South Wales, Australia, Hunter New England Population Health, Hunter New England Local Health District, Wallsend, New South Wales, Australia

  • Sharleen Gonzalez,

    Roles Methodology, Writing – review & editing

    Affiliation School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, Callaghan, New South Wales, Australia

  • Elizabeth Ditton,

    Roles Methodology, Writing – review & editing

    Affiliation School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, Callaghan, New South Wales, Australia

  • Serene Yoong,

    Roles Methodology, Writing – review & editing

    Affiliations School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, Callaghan, New South Wales, Australia, Hunter New England Population Health, Hunter New England Local Health District, Wallsend, New South Wales, Australia, School of Health Sciences, Swinburne University of Technology, Hawthorn, Vic, Australia

  • Kypros Kypri

    Roles Conceptualization, Formal analysis, Investigation, Writing – original draft

    Affiliation School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, Callaghan, New South Wales, Australia



Governments commonly fund research with specific applications in mind. Such mechanisms may facilitate ‘research translation’, but funders may also employ strategies that undermine the integrity of both science and government. We estimated the prevalence, and investigated correlates, of funder efforts to suppress health behaviour intervention trial findings.
Methods


Our sampling frame was lead or corresponding authors of papers (published 2007–2017) included in Cochrane reviews, reporting findings from trials of interventions to improve nutrition, physical activity, sexual health, smoking, and substance use. Suppression event items were drawn from a previous survey of public health academics. Participants answered questions concerning seven suppression events in their efforts to report the trial, e.g., [I was…] “asked to suppress certain findings as they were viewed as being unfavourable.” We also examined associations of study funder, geographical location, targeted health behaviour, country democracy rating, and age of publication with reported suppression.
Results


We received responses from 104 authors (50%) of 208 eligible trials, from North America (34%), Europe (33%), Oceania (17%), and other countries (16%). Eighteen percent reported at least one of the seven suppression events relating to the trial in question. The most commonly reported suppression event, funder reluctance to publish because the results were considered ‘unfavourable’, was reported by 9%. We found no strong associations with the subject of research, funding source, democracy, region, or year of publication.
Conclusions


One in five researchers in this global sample reported being pressured to delay, alter, or not publish the findings of health behaviour intervention trials. Regulation of funder and university practices, the establishment of study registries, and compulsory disclosure of funding conditions in scientific journals are needed to protect the integrity of public-good research.
Introduction


Generating scientific knowledge should, in principle, be a key consideration in the design of programmes to improve public health. Governments fund national agencies whose purpose is to support science (e.g., the National Institutes of Health [USA] and the National Health and Medical Research Council [NHMRC; Australia]), and researcher-initiated projects are routinely funded through such agencies. Research funding is also dedicated to addressing the priorities of funders, with objectives typically relating to informing public policy or commercial imperatives [1]. Such strategic funding aims to address knowledge gaps important to funders, thereby facilitating ‘research translation’ by ensuring relevance to end-users. However, these funding models have been shown to undermine the integrity of science by enabling funders to influence how research is done and reported [2–4].

As providers of publicly funded health and medical research, universities have a vital role in facilitating independent enquiry. The notion of academic freedom is that researchers, bound by the scholarly conventions of peer review and ethical approval, are free to do research without interference or the threat of professional disadvantage [5]. Many see the preservation of such freedom as vital to safeguarding the reflection, critique, and innovation that academia can bring to society [3, 6]. However, academic integrity is increasingly undermined by the influence of vested interests on research [2, 7] and by a reproducibility crisis [8], calling into question whether public research institutions actually serve the public interest [9]. That research funders who are also responsible for giving policy advice or implementing intervention programmes have a stake in study findings puts pressure on the impartiality of the researchers who depend on the funding. This could include subtle pressure on researchers, unconsciously conveyed hopes for ‘positive’ findings, or total suppression or censorship of reports for political advantage [4]. Various mechanisms exist to regulate researcher behaviour, including codes of conduct and ethical review [10, 11]. In Australia, many government funding agreements require researchers to obtain funder approval to publish reports [12].

The suppression of public-good research by funders or other parties is neither well understood nor coherently regulated [13]. A 2006 survey of Australian public health researchers (response rate 46%) reported that 21% of participants had experienced at least one incident in which a government funder suppressed their research in the preceding 5.5 years [14]. The most common forms of suppression reported were blocking or significantly delaying publication and requests to “sanitise” reports [14]. A survey of Australian ecologists and conservation scientists by Driscoll et al. [15] indicated that government and industry respondents reported higher rates of suppression than university respondents (34%, 30%, and 5%, respectively), mainly affecting internal communications and media reporting. A 2015 Canadian survey of federal government scientists showed that within the previous 5 years, 24% of scientists had been asked to exclude certain findings from their reports, and 37% reported being prevented from responding to media enquiries within their area of expertise [16]. In the United Kingdom (UK), an enquiry into public-good research, commissioned by a science charity, presented nine case studies outlining the impact of significant delays in the publication of findings; in several cases the delays appeared to be motivated by political considerations [4]. Knowing how often and in what circumstances the suppression of public health research occurs is important because of the potential impact of withholding, delaying, or misrepresenting findings. This is acutely apparent in the COVID-19 pandemic, where delays in releasing early research findings in China allowed significant outbreaks to occur in other countries [17–19].

The aims of this study were: (1) to ascertain the reported prevalence of efforts to suppress the findings of primary prevention trials that target nutrition, physical activity, sexual health, tobacco use, alcohol or substance use; and (2) to identify associations between trial characteristics and suppression events.
Methods



We invited the lead authors of primary prevention trials included in Cochrane reviews to complete a Computer Assisted Telephone Interview or online survey. This study was part of a larger cross-sectional study that investigated researchers’ experiences in developing, conducting and evaluating public health interventions, the effectiveness of the intervention, any knowledge translation strategies used, and reported impacts on health policy and practice (unpublished). The present study investigates the prevalence of suppression of trial findings, and how these relate to the trial characteristics. The University of Newcastle Human Research Ethics Committee approved the study protocol (H-2014-0070). Completion of the online survey was taken as implied consent.


We searched the Cochrane Library for reviews that were: (1) focused on primary prevention or included trials with setting-based primary prevention components; and (2) related to nutrition, physical activity, sexual health, tobacco use, alcohol use, or other psychoactive substance use. These risk behaviour areas were chosen because they were the focus of the larger cross-sectional study of which this sub-study is a part.

We classified primary studies from the reviews as eligible if they were randomised controlled trials (RCT) or non-randomised controlled trials investigating the effects of efforts to modify nutrition, physical activity, sexual health, tobacco use, alcohol use, or other substance use. We limited eligibility to English language reports published from 2007–2017.

Recruitment and data collection

Authors of identified articles were invited to participate if they were one of the first two authors, the last author, or the corresponding author. Contact information was sourced from the public domain. We first contacted corresponding authors to complete the survey on behalf of all authors; corresponding authors could nominate a co-author to complete the survey on their behalf. If after four weeks we had no response from the corresponding author, we invited the first, second, and/or last author of the trial manuscript, respectively (if different from the corresponding author), to participate. Authors with available telephone contact details were invited to complete the survey via a telephone interview. Before telephoning, we emailed an invitation attaching a study information sheet, a summary of the survey topics to be covered, and an opt-out form. Authors without telephone contact details were contacted via email and sent the same information together with a link to complete the same survey online via REDCap, a web survey hosting service [20]. Up to three reminder emails were sent to non-responders at intervals of approximately four weeks.
Measures


Suppression events.

We asked respondents seven questions concerning their experiences when disseminating the trial results (see Box 1). The questions, based on those used by Yazahmeidi and Holman (2007) [14], had response options “not at all”, “a little”, or “substantially”.

Box 1. Options respondents were provided regarding funder behaviour

• Reluctance to publish the findings in peer-reviewed journals as they were viewed as being unfavourable.

• Delays in reporting or publishing the findings until a more favourable time (e.g. following elections, after certain policies had been approved).

• Asked to alter your conclusions so that the impact of the intervention was framed in a way that aligned more with their interests.

• Asked to suppress certain findings as they were viewed as being unfavourable.

• Discouraged from presenting your results to certain groups or organisations that may have an interest in the intervention.

• Attempts to discredit members of the research team or other staff involved in the conduct of the study.

• Changes made to study methods or analytical procedures that would have likely resulted in an outcome that aligned more with their interests (e.g. significant finding).

Trial characteristics.

Two researchers (SG and KM) independently extracted the following information from published reports of eligible trials: year of publication, the health risk behaviour(s) targeted (physical activity, nutrition, sexual health, substance use), and the country of the first author, where the trial was assumed to have occurred. We aggregated author country into the categories North America, Europe, Oceania, and Other.

A democracy classification was added for each publication based on the country of origin and the year of study publication, using the Economist Intelligence Unit (EIU) Democracy Index reports [21–29]. The Democracy Index measures a country’s democracy using 60 indicators grouped into five categories, scored to give a total out of 10. Based on the score, countries are categorised as a full democracy (score = 8.01 to 10) or not a full democracy (0 to 8). (N.B. no reports were published for the years 2007 and 2009, so these data are missing for studies published in those years.)
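As a concrete illustration, the classification rule just described can be sketched as follows (a minimal sketch, not the authors' code; the function and variable names are our own):

```python
# Sketch of the democracy classification described above. EIU Democracy
# Index scores of 8.01-10 are coded "full democracy" and 0-8 "not a full
# democracy". No EIU report was published for 2007 or 2009, so studies
# published in those years are coded as missing (None).

MISSING_REPORT_YEARS = {2007, 2009}

def classify_democracy(eiu_score, publication_year):
    if publication_year in MISSING_REPORT_YEARS:
        return None  # no Democracy Index report for that year
    return "full democracy" if eiu_score > 8.0 else "not a full democracy"
```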

While the focus of the study is government funding, we extracted data from all eligible reports and classified them into the following mutually exclusive categories: Dedicated Research Agency (government), Other Government Agency, Industry, and Philanthropic (see Box 2 for definitions). If funding information was unavailable, we coded the data as Unknown. Where there was more than one source of funding, we coded the study as Multiple and excluded it from the regression analysis to avoid problems of attribution.

Box 2. Definitions of funding categories

Dedicated research agency: A government funded agency solely responsible for medical and public health research.

Other government agency: A government agency, dedicated to pursuits other than research, including local councils, public health and safety departments, and ministerial departments.

Industry: Companies and activities involved in the production of goods for sale.

Philanthropic: A non-government, non-profit organisation, with assets provided by donors and managed by its own officials and with income expended for socially useful purposes.

Unknown: No funding source listed.

Multiple: Reports more than one of the previous funding types.
Analysis


For each of the seven questions we calculated (aim 1) the proportion who answered “not at all” (coded as “never”) versus “a little” or “substantially” (coded as “at least once”), and then a dichotomous variable indicating the proportion who had experienced any one of the seven suppression events. We conducted a sensitivity analysis to estimate the extremes of possible non-response bias by assuming (a) that all non-respondents had experienced an act of suppression and, conversely, (b) that no non-respondents had experienced an act of suppression.
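The dichotomisation and the non-response bounds can be expressed compactly. The following is an illustrative sketch, not the authors' analysis code; only the counts 208 (eligible trials), 104 (respondents), and 18 (respondents reporting suppression) are taken from the Results:

```python
# Illustrative sketch of the dichotomisation and sensitivity analysis
# described above.

def any_suppression(responses):
    """Dichotomise the seven per-event answers: 'not at all' -> never;
    'a little' or 'substantially' -> at least once."""
    return any(r in ("a little", "substantially") for r in responses)

def nonresponse_bounds(n_eligible, n_respondents, n_suppressed):
    """Extreme bounds on prevalence under the two assumptions in the text."""
    n_nonrespondents = n_eligible - n_respondents
    low = n_suppressed / n_eligible                        # no non-respondent suppressed
    high = (n_suppressed + n_nonrespondents) / n_eligible  # all non-respondents suppressed
    return low, high

low, high = nonresponse_bounds(208, 104, 18)
print(f"{low:.0%}-{high:.0%}")  # prints 9%-59%
```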

We estimated associations (aim 2) between instances of suppression and trial characteristics, including the risk behaviour targeted, funder, geographic location, full democracy (yes vs no), and age of the publication (in years), using logistic regression, recoding year of publication as the continuous variable ‘age of publication in 2017’ (the last year in the sampling frame). We estimated adjusted odds ratios with 95% confidence intervals, and aggregated trials where groups were small: ‘sexual health/substance use (risky behaviour)’ and ‘nutrition/physical activity’, based on evidence that these behaviours cluster [30, 31].
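For intuition, the unadjusted analogue of such a comparison reduces to a 2×2 odds ratio with a Woolf confidence interval. This is a sketch with hypothetical counts, not the authors' multivariable model:

```python
# Unadjusted odds ratio for reporting any suppression event, with a Woolf
# 95% confidence interval. Counts below are hypothetical, for illustration.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a, b = suppressed / not suppressed in the exposed group;
    c, d = the same in the reference group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 6/35 trials with suppression under Other Government
# Agency funding vs 2/30 under dedicated research agency funding.
or_, lo, hi = odds_ratio_ci(6, 29, 2, 28)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With small cell counts like these, the interval is wide and crosses 1, mirroring the pattern reported in the Results.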

Results
From 42 eligible reviews we identified 208 trials and received survey responses from 104 (50%) of their corresponding authors. Papers were published from 2007 to 2016 and reported trials concerning physical activity and/or nutrition (55%) and substance use and/or sexual health (47%). Two thirds were conducted in North America (34%) or Europe/United Kingdom (33%), with the balance in Oceania (17%) and other countries (16%). The majority of studies were from full-democracy countries (61%), and the largest share received funding from Other Government Agencies (39%).

S1 Table shows that the characteristics of trials whose authors did not complete the survey were not markedly different from those who did in terms of study design, full democracy, or publication date. However, the proportion conducted in North America was higher among non-respondents (53%) than respondents (34%), and trials by non-respondents were more often funded by an Other Government Agency (49% vs 39%).

Aim 1: Prevalence of suppression events

Eighteen percent (18/98; 6 unknown) of respondents reported at least one instance of suppression. Table 1 shows the number of respondents who reported each type of suppression event having occurred at least once, by funding source. Rates of suppression were highest for studies funded by Other Government Agencies. The most commonly reported suppression event was the funder expressing reluctance to publish due to ‘unfavourable’ results, with six, two, and one such events reported for studies funded by Other Government Agencies, independent sources, and multiple funding sources, respectively. In comparison, researchers receiving industry or philanthropic funding did not report a single suppression event.

Table 1. Researcher reports of funder efforts to suppress trial findings.

Sensitivity analysis.

Under the extreme assumptions that none or all of the non-respondents had experienced a suppression event, the prevalence estimate would be as low as 9% (18/208) or as high as 59% [(18+104)/208], respectively.

Aim 2: Association between trial characteristics and suppression events

Table 2 summarises associations between trial characteristics and suppression events. Researchers receiving Other Government Agency funding, or who conducted studies in Europe, had higher odds of reporting a suppression event than those with dedicated research agency funding or studies conducted in North America, respectively. Researchers who had conducted sexual health/substance use trials more commonly reported a suppression event than those who had conducted nutrition/physical activity trials. Whether the publication came from a full-democracy country did not appear to change the odds of reporting suppression. As the age of the publication increased, so too did the odds of reporting suppression. However, the confidence intervals for the odds ratios of all comparisons were wide and included 1.

Table 2. Associations between trial characteristics and suppression events.
Discussion


In a sample of authors of prevention intervention trial reports published over a decade, of whom 50% responded, one in five reported at least one suppression event. A simple sensitivity analysis suggests that the rate of suppression could be as low as 9% or as high as 59%, depending on the proportion of non-respondents who were subjected to suppression events. Our overall estimate of 18% is similar to estimates from previous studies in Australia (21% to 34%) [15, 32] and Canada (24%) [16]. Notably, we asked specifically about what occurred in relation to a single trial, while other studies evaluated suppression events across many projects over the course of five [16] or 5.5 years [14]. As such, rates of suppression across researchers’ wider bodies of work may be much higher.

The sampling frame provides greater international representation and broader coverage of health research than previous studies [14, 32]. However, relying on published studies to identify authors means that we would not have identified authors of studies whose publication was suppressed entirely. It is also likely that some authors would not disclose suppression events, even in the confidential context of a research study, fearing repercussions from the funder or negative evaluation by the researchers; the implication is that the actual rate of suppression is higher than we observed. Finally, the small numbers in our study constrain the precision of our estimates of prevalence and association. Accordingly, we suggest that 18% is an under-estimate of the true prevalence of studies subject to some form of suppression by funders.

It is hard to determine why older publications, or those published in certain geographical regions, had greater odds of suppression (though the confidence intervals for all comparisons were wide and included 1). A possible explanation is that older studies have simply had more ‘chances’ to experience suppression of their findings. Why studies published outside North America had greater odds of suppression is harder to explain. It may be that researchers in North America report suppression only when it is ‘more severe’ than that reported in other countries, that they are more afraid to report suppression, or that tighter regulations limit the types of suppression that can be enacted on grant holders. More research would be needed to explain these differences.

Our results, along with those of previous investigations, suggest that government funders interfere with public-good research. In addition to curtailing independent scientific enquiry, such practices deny the public access to the findings of research paid for through taxation, which in some cases could have informed policy decisions. On this point, in his ‘Missing Evidence’ report, former High Court Judge Sir Stephen Sedley observed that the UK Parliament made a critical decision concerning the merits of minimum unit pricing of alcohol without the benefit of key findings whose publication had been deliberately delayed by the Department of Health [4]. In addition to the loss borne by taxpayers due to ill-informed policy, there is the damage done to democracy when such perversions come to light.

This research has limitations. For example, we did not examine the different types of suppression in depth. We found that reluctance to publish research findings due to unfavourable results was the most commonly reported type of suppression. However, we did not seek to determine what reluctance meant in this context, i.e., whether funders prevented publication or merely tried to influence it. Nor did we give participants the opportunity to describe other forms of suppression they may have experienced. Further research should investigate these questions to identify the full range of suppression experienced, and thereby develop practical ways to overcome it.

Attention is urgently needed to protect the integrity of public health research from the influence of vested interests, whether private or official in origin. Preventive actions are required of all actors involved in the generation of research findings:

  1. Government agencies must ensure that funding agreements formed with research providers include terms that protect academic freedom, e.g., the removal of clauses which require funder approval of results prior to publication [32];
  2. Research institutions must not accept funding on terms that permit funders to interfere in public-good research;
  3. Government agencies should establish a registry of government-funded studies, including the terms of the funding, to encourage openness;
  4. Research Ethics Committees must consider the source and terms of research funding to determine if there are any ethical implications of the funding source;
  5. Scientific journals must require authors to declare the terms of research funding, and potential conflicts of interest;
  6. Researchers must be held to Code of Conduct provisions concerning acceptable terms of funding;
  7. Audits of contracting and research practices of tertiary academic institutions should be undertaken by an independent body with appropriate powers (e.g. an independent government department); and
  8. Universities should consider establishing a mechanism for reporting instances of research suppression, and for managing funders or individuals who are known to attempt to suppress research findings.

Suppression of public-good research persists, with this study suggesting that rates of suppression for a single trial may be as high as one in five. Prevention is key, and these suggestions, similar to those previously described [32], need to be adopted to thwart the suppression of public-good health research. As instances of suppression are unlikely ever to be stopped entirely, further research is needed to determine how researchers can handle suppression when it does happen (e.g., what a researcher can do if someone tries to delay their publication), and what reporting procedures should be in place when instances of suppression occur.
References


  1. Viergever RF, Hendriks TCC. The 10 largest public and philanthropic funders of health research in the world: what they fund and how they distribute their funds. Health Research Policy and Systems. 2016;14(1):12. pmid:26892771
  2. Gornall J. Sugar: spinning a web of influence. BMJ. 2015;350. pmid:25673325
  3. Miller A. Academic freedom: defending democracy in the corporate university. Social Alternatives. 2019;38(3):14.
  4. Sedley SS. Missing evidence: an inquiry into the delayed publication of government-commissioned research. In: trust TJC, editor. London: Sense about Science; 2016.
  5. Hoepner J. Silencing behaviours in contested research and their implications for academic freedom. The Australian Universities’ Review. 2019;61(1):31.
  6. Beiter KD. Where have all the scientific and academic freedoms gone? And what is ‘adequate for science’? The right to enjoy the benefits of scientific progress and its applications. Israel Law Review. 2019;52(2):233–91.
  7. McCarthy J. Newcastle University protests on Tuesday over coal connections. Newcastle Herald. 2017.
  8. Ioannidis JP. Why most published research findings are false. PLoS Medicine. 2005;2(8):e124. pmid:16060722
  9. Saltelli A, Funtowicz S. What is science’s crisis really about? Futures. 2017;91:5–11.
  10. National Health and Medical Research Council. Australian Code for the Responsible Conduct of Research 2018. National Health and Medical Research Council, Australian Research Council and Universities Australia. Commonwealth of Australia, Canberra; 2018.
  11. National Health and Medical Research Council. National Statement on Ethical Conduct in Human Research 2007 (updated 2018). National Health and Medical Research Council, Australian Research Council and Universities Australia. Commonwealth of Australia, Canberra; 2007.
  12. Power J, Gilmore B, Vallières F, Toomey E, Mannan H, McAuliffe E. Adapting health interventions for local fit when scaling-up: a realist review protocol. BMJ Open. 2019;9(1):e022084. pmid:30679286
  13. National Health and Medical Research Council. Our policy on research integrity. Australian Government; 2019.
  14. Yazahmeidi B, Holman CDAJ. A survey of suppression of public health information by Australian governments. Australian and New Zealand Journal of Public Health. 2007;31(6):551–7. pmid:18081576
  15. Driscoll DA, Garrard GE, Kusmanoff AM, Dovers S, Maron M, Preece N, et al. Consequences of information suppression in ecological and conservation sciences. Conservation Letters. 2021;14(1):e12757.
  16. The Professional Institute of the Public Service of Canada (PIPSC). The big chill, silencing public interest science: a survey. Ottawa: PIPSC; 2013 [cited 4 September 2019]. Available from:
  17. Buckley C, Myers SL. As new coronavirus spread, China’s old habits delayed fight. New York Times. 2020;1.
  18. Wong A. WHO recordings show frustration with China over coronavirus information delays. ABC News. 2020.
  19. Jianjun M. China delayed releasing coronavirus info, frustrating WHO. CNBC. 2020.
  20. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics. 2009;42(2):377–81. pmid:18929686
  21. The Economist Intelligence Unit. Democracy Index 2006: A pause in democracy’s march. 2007.
  22. The Economist Intelligence Unit. Democracy Index 2008. 2008.
  23. The Economist Intelligence Unit. Democracy Index 2010: Democracy in retreat. 2010.
  24. The Economist Intelligence Unit. Democracy Index 2011: Democracy under stress. 2012.
  25. The Economist Intelligence Unit. Democracy Index 2012: Democracy is at a standstill. 2013.
  26. The Economist Intelligence Unit. Democracy Index 2013: Democracy in limbo. 2014.
  27. The Economist Intelligence Unit. Democracy Index 2014: Democracy and its discontents. 2015.
  28. The Economist Intelligence Unit. Democracy Index 2015: Democracy in an age of anxiety. 2016.
  29. The Economist Intelligence Unit. Democracy Index 2016: Revenge of the “deplorables”. Retrieved from https://www.eiu.com/public/topical_report.aspx. 2017.
  30. Xu F, Cohen SA, Lofgren IE, Greene GW, Delmonico MJ, Greaney ML. Relationship between diet quality, physical activity and health-related quality of life in older adults: findings from 2007–2014 National Health and Nutrition Examination Survey. The Journal of Nutrition, Health & Aging. 2018;22(9):1072–9. pmid:30379305
  31. Khadr S, Jones K, Mann S, Hale DR, Johnson A, Viner RM, et al. Investigating the relationship between substance use and sexual behaviour in young people in Britain: findings from a national probability survey. BMJ Open. 2016;6(6):e011961. pmid:27363820
  32. Kypri K. Suppression clauses in university health research: case study of an Australian government contract negotiation. The Medical Journal of Australia. 2015;203(2):72–4. pmid:26175239