
Extent of Non-Publication in Cohorts of Studies Approved by Research Ethics Committees or Included in Trial Registries

  • Christine Schmucker,

    Affiliation: German Cochrane Centre, Medical Center – University of Freiburg, Berliner Allee 29, 79110 Freiburg, Germany

  • Lisa K. Schell,

    Affiliation: German Cochrane Centre, Medical Center – University of Freiburg, Berliner Allee 29, 79110 Freiburg, Germany

  • Susan Portalupi,

    Affiliation: German Cochrane Centre, Medical Center – University of Freiburg, Berliner Allee 29, 79110 Freiburg, Germany

  • Patrick Oeller,

    Affiliation: German Cochrane Centre, Medical Center – University of Freiburg, Berliner Allee 29, 79110 Freiburg, Germany

  • Laura Cabrera,

    Affiliation: German Cochrane Centre, Medical Center – University of Freiburg, Berliner Allee 29, 79110 Freiburg, Germany

  • Dirk Bassler,

    Affiliation: Department of Neonatology, University Hospital Zurich, Zurich, Switzerland

  • Guido Schwarzer,

    Affiliation: Institute of Medical Biometry and Statistics, Medical Center – University of Freiburg, Freiburg, Germany

  • Roberta W. Scherer,

    Affiliation: US Cochrane Center, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, United States of America

  • Gerd Antes,

    Affiliation: German Cochrane Centre, Medical Center – University of Freiburg, Berliner Allee 29, 79110 Freiburg, Germany

  • Erik von Elm,

    Affiliation: Cochrane Switzerland, Institute of Social and Preventive Medicine (IUMSP), University Hospital Lausanne, Biopôle 2, Route de la Corniche 10, 1010 Lausanne, Switzerland

  • Joerg J. Meerpohl,

    meerpohl@cochrane.de

    Affiliation: German Cochrane Centre, Medical Center – University of Freiburg, Berliner Allee 29, 79110 Freiburg, Germany

  • on behalf of the OPEN consortium

    The complete membership of the OPEN consortium is provided in the Acknowledgements.


Abstract

Background

The synthesis of published research in systematic reviews is essential when providing evidence to inform clinical and health policy decision-making. However, the validity of systematic reviews is threatened if journal publications represent a biased selection of all studies that have been conducted (dissemination bias). To investigate the extent of dissemination bias we conducted a systematic review that determined the proportion of studies published as peer-reviewed journal articles and investigated factors associated with full publication in cohorts of studies (i) approved by research ethics committees (RECs) or (ii) included in trial registries.

Methods and Findings

Four bibliographic databases were searched for methodological research projects (MRPs) without limitations for publication year, language or study location. The searches were supplemented by handsearching the references of included MRPs. We estimated the proportion of studies published using prediction intervals (PI) and a random effects meta-analysis. Pooled odds ratios (OR) were used to express associations between study characteristics and journal publication. Seventeen MRPs (23 publications) evaluated cohorts of studies approved by RECs; the proportion of published studies had a PI between 22% and 72% and the weighted pooled proportion when combining estimates would be 46.2% (95% CI 40.2%–52.4%, I2 = 94.4%). Twenty-two MRPs (22 publications) evaluated cohorts of studies included in trial registries; the PI of the proportion published ranged from 13% to 90% and the weighted pooled proportion would be 54.2% (95% CI 42.0%–65.9%, I2 = 98.9%). REC-approved studies with statistically significant results (compared with those without statistically significant results) were more likely to be published (pooled OR 2.8; 95% CI 2.2–3.5). Phase-III trials were also more likely to be published than phase-II trials (pooled OR 2.0; 95% CI 1.6–2.5). The probability of publication within two years after study completion ranged from 7% to 30%.

Conclusions

A substantial proportion of the studies approved by RECs or included in trial registries remains unpublished. Due to the large heterogeneity, prediction of the publication probability for a future study is very uncertain. Non-publication of research is not a random process; for example, it is associated with the direction of study findings. Our findings suggest that the dissemination of research findings is biased.

Introduction

The synthesis of published research in systematic reviews is essential when providing evidence to inform both clinical and health policy decision making. However, its validity is threatened if publications represent a biased selection of all the studies that have been conducted. Publication bias occurs when some types of results (e.g., those that are statistically significant) are reported more frequently or more quickly than others. [1]–[3] Increasingly, the term dissemination bias is used. It reflects that research reporting is not limited to journal publication alone but also comprises other forms of dissemination such as posting results in a trial registry. [4] Dissemination bias, similar to publication bias, results from favoured dissemination of research findings depending on their statistical significance and direction. It may lead to preferential prescribing of newer and more expensive treatments while underestimating the potential harm of drugs that have been in use for only a limited time. Clinical decisions may, therefore, be based on erroneous information. [5] It is obvious that these selection mechanisms violate the fundamental scientific and ethical imperative that findings from all research on humans should be available to advance knowledge. Furthermore, non-publication of studies implies considerable financial investment by funders without any return. Further down the road, indirect costs incurred due to non-publication of studies include those borne by health care providers, health insurers, and patients who all continue to pay for treatments that may not be the most effective ones or may even be harmful.

In response to these concerns, the OPEN Project (To Overcome failure to Publish nEgative fiNdings; www.open-project.eu) was developed with the goal of elucidating the scope of non-publication of studies through a series of systematic reviews and to develop recommendations. [4], [6]–[8] The OPEN Project was funded by the European Commission and conducted by an international working group of methodologists and other experts (see S1 Fig.). Besides evaluating the extent of non-publication of studies, OPEN examined current publication practices of key groups in the field of biomedical research (e.g., funding agencies, pharmaceutical industry, research ethics committees [RECs], trial registries, biomedical journals and regulatory agencies) through surveys and analysis of current policies and guidelines.

Because unpublished studies are hidden from view, it is challenging to study dissemination bias. [9] One opportunity for such research is that in virtually all research settings REC approval is required before clinical studies can start. In addition, an increasing number of journals require prospective trial registration as a pre-condition for acceptance of manuscripts reporting on studies. Further, any clinical study conducted under FDA regulations in the United States needs to be registered in clinicaltrials.gov (http://clinicaltrials.gov/ct2/manage-recs/fdaaa#WhichTrialsMustBeRegistered). Thus, study protocols submitted to RECs and study data accessible in trial registries are a resource to identify unpublished studies and evaluate the extent of non-publication of clinical research.

This systematic review investigated the extent to which studies approved by RECs or included in trial registries remained unpublished. To this effect, evidence from methodological research projects (MRPs) following such studies was evaluated and summarised. In addition, we assessed study characteristics that are potentially associated with publication (dissemination bias). The review was conducted according to a protocol published previously. [6]

Methods

Systematic literature search

We searched the databases Medline (Ovid), Embase (Ovid), The Cochrane Library and Web of Science from their inception until February 2012. An update search was performed in November 2013. The search strategy was based on combinations of medical subject headings (MeSH) and keywords and was not restricted to specific languages or years of publication. The search strategy used in Medline (Ovid) is presented in S2 Fig. Search strategies for other databases were adapted to meet the requirements of each database while preserving the underlying search algorithm. The searches were supplemented by checking the bibliographies of eligible articles for additional references. In addition, several experts in the field were contacted and asked to identify additional studies.

Study selection and inclusion criteria

Titles and abstracts were reviewed using specific inclusion criteria (see below). All stages of study selection, data extraction and quality assessment were done independently by two reviewers (CS, LC, PO, LKS). Any disagreement during the selection, extraction, and assessment process was resolved by discussion and consensus.

We included MRPs which reported the proportion of studies published as journal articles after (i) REC approval or (ii) inclusion in trial registries. MRPs evaluating approved studies were included under the assumption that the majority of those studies were completed at the time of the search for peer-reviewed journal publications. In the case of multiple publications we extracted data from the MRP with the largest sample size and/or the most comprehensive information, using cross-referencing.

Outcomes

Our main outcomes were the overall proportion of studies published as journal articles and the time to journal publication after study completion. Study completion was defined as the last day of follow-up of study participants. If the last day of follow-up was not given, time to publication was calculated based on the time reported in the MRP. To calculate the overall proportion of studies published, we set a minimum follow-up time of 24 months after study completion. In addition, we aimed to identify study characteristics that were associated with an increased likelihood of journal publication and time to publication. [6] We also collated information, where available, on costs or other resources incurred by studies that were not published. For the evaluation of associations between study characteristics and publication, a minimum follow-up time after study completion was not required. Outcomes were reported separately for both types of MRPs (RECs and trial registries).

Data extraction and risk of bias assessment

Information on the main characteristics of studies was abstracted for each MRP. [6] The internal and external validity of the identified MRPs was evaluated according to pre-defined criteria, which were developed based on relevant literature investigating dissemination bias [10] and internal discussion. [6] Criteria for internal validity were: (i) follow-up time between study completion and search for journal publication, (ii) methodology used to identify journal publications, (iii) matching between study protocol or trial registry entry and retrieved journal publication and (iv) adjustment for confounders. External validity was judged based on the status of the study sample (i.e., whether the reported proportion of studies published was calculated based on a sample of completed and/or approved studies) and the sampling method used (i.e., whether a random or selected study sample was considered).

For each criterion an MRP's risk of bias was categorized as high, low or unclear.

Statistical analysis/data synthesis

For both types of MRPs (RECs and trial registries) we separately estimated the proportion of studies published as journal articles using a random-effects meta-analysis (DerSimonian-Laird method) based on logit-transformed proportions and their corresponding 95% confidence intervals (CI). Heterogeneity was assessed with the chi-squared test and by calculating the I2 statistic. [11] Given the substantial heterogeneity found, we also decided to calculate prediction intervals (PI), which were not pre-specified in the published protocol, using the method suggested by Higgins et al. [12] Pooled odds ratios (OR) were used to express associations between study characteristics and the likelihood of journal publication. Multivariate analyses of study characteristics were not feasible due to the small number of studies providing this information.
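To illustrate this approach (the analyses in the paper were done in R; the following is only a minimal Python sketch, with invented example counts, of DerSimonian-Laird pooling of logit-transformed proportions; a normal quantile stands in for the t-quantile that Higgins et al. recommend for the prediction interval):

```python
import math

def logit_meta(events, totals, z=1.96):
    """DerSimonian-Laird random-effects meta-analysis of proportions on the
    logit scale; returns pooled proportion, 95% CI, I2, and an approximate
    prediction interval (normal quantile instead of the t-quantile)."""
    k = len(events)
    y = [math.log(e / (n - e)) for e, n in zip(events, totals)]    # logit proportions
    v = [1.0 / e + 1.0 / (n - e) for e, n in zip(events, totals)]  # within-study variances
    w = [1.0 / vi for vi in v]                                     # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))         # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                             # between-study variance
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0       # I2 statistic in %
    ws = [1.0 / (vi + tau2) for vi in v]                           # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(ws, y)) / sum(ws)
    se = math.sqrt(1.0 / sum(ws))
    pi_se = math.sqrt(tau2 + se ** 2)                              # prediction-interval SE
    inv = lambda x: math.exp(x) / (1.0 + math.exp(x))              # back-transform to proportion
    return (inv(mu), inv(mu - z * se), inv(mu + z * se), i2,
            inv(mu - z * pi_se), inv(mu + z * pi_se))
```

Whenever the between-study variance tau2 is positive, the prediction interval for a future study is wider than the confidence interval for the mean, mirroring the wide PIs reported in this review despite moderately narrow CIs.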

To address potential bias due to approved rather than completed studies, the status of the study sample (completed, on-going and/or approved) was evaluated within a sensitivity analysis.

Time to publication was analysed in two ways: (i) Mean or median time was used only if the proportion of studies published as peer-reviewed journal articles was larger than 50%. Some MRPs calculated time to publication from approval of studies to journal publication, others from study completion to journal publication. Due to these differences in definitions we refrained from pooling time-to-publication estimates. (ii) The proportion of studies published up to fixed time points (e.g., 6, 12, 18, 24, 36 months) was extracted from included MRP publications (e.g., from published Kaplan-Meier curves). For each time point we performed a random-effects meta-analysis using logit-transformed proportions.

Statistical analyses were done with R using the meta package (http://cran.r-project.org/web/packages/meta/index.html).
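The pooled odds ratios used to express associations between study characteristics and publication can be sketched in the same spirit. This is a hypothetical fixed-effect (inverse-variance) example with invented 2x2 counts; the paper does not report this exact computation, and the authors' pooling may have used a random-effects model:

```python
import math

def pooled_or(tables, z=1.96):
    """Inverse-variance pooling of log odds ratios across 2x2 tables.
    Each table is (a, b, c, d): published/unpublished counts in the two
    subgroups compared (e.g., significant vs non-significant results)."""
    logors, weights = [], []
    for a, b, c, d in tables:
        lor = math.log((a * d) / (b * c))   # log odds ratio of one MRP
        var = 1/a + 1/b + 1/c + 1/d         # Woolf variance estimate
        logors.append(lor)
        weights.append(1.0 / var)
    mu = sum(w * l for w, l in zip(weights, logors)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    # back-transform pooled log OR and its 95% CI to the OR scale
    return math.exp(mu), math.exp(mu - z * se), math.exp(mu + z * se)
```

Because pooling happens on the log scale, the pooled OR always lies between the smallest and largest study-level ORs, weighted towards the larger (more informative) tables.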

Results

Results of literature search and selection process

The searches identified 8612 references, including 2468 duplicates (Fig. 1). Among the 6144 potentially relevant references, 39 MRPs (45 publications) were eligible for the systematic review: 17 reported on MRPs following studies approved by RECs [13]–[29] (23 publications) [9], [13]–[34] and 22 on MRPs [35]–[56] following studies included in trial registries (22 publications).

Characteristics of included MRPs

The main characteristics of included MRPs of both types are presented in Table 1 and Table 2.

Table 1. Main characteristics of 17 methodological research projects following studies after approval by a research ethics committee.

http://dx.doi.org/10.1371/journal.pone.0114023.t001

Table 2. Main characteristics of 22 methodological research projects following studies included in trial registries.

http://dx.doi.org/10.1371/journal.pone.0114023.t002

MRPs following studies after REC approval.

Of the 17 MRPs that followed studies approved by RECs, four focused on specific medical fields: psychology, [15] epidemiology, [16] paediatrics [24] and general medicine [29]. Eight [17], [19]–[21], [23], [26]–[28] included studies from different fields and five [13], [14], [18], [22], [25] did not provide any information. Two MRPs [14], [28] included solely randomised controlled trials and 15 allowed for a wide range of interventional and observational study designs. The RECs in charge of study approval were based in Germany, [13], [21] USA, [15], [18], [27] The Netherlands, [16] Denmark, [14] France, [17] United Kingdom, [19], [29] Canada, [20] Sweden, [22] Spain, [23], [26] Argentina, [24] Australia [25] and Switzerland. [28]

MRPs following studies after inclusion in trial registries.

Of the 22 MRPs following studies included in trial registries, 12 focused on specific medical fields: orthopaedics, [36], [46] pneumology, [38] ophthalmology, [40] oncology, [41] neurology, [44] gynecology, [47] rheumatology, [51] urology, [53] pediatrics, [52] orthopedics [54] and gastroenterology/hepatology [45]. One MRP included drug trials in internal medicine and psychiatry [35] and another was restricted to Chinese trials dealing with different medical specialties. [39] Three MRPs did not restrict their field of research [37], [42], [43] and five did not provide any information. [48]–[50], [55], [56] Seven MRPs included randomised controlled trials. [44]–[46], [50], [51], [56] The remaining MRPs either did not specify included study designs or included a wide range of designs ranging from observational studies to controlled clinical trials. Twenty [35]–[43], [45]–[54], [56] of the 22 MRPs searched www.clinicaltrials.gov. Besides clinicaltrials.gov one MRP also searched 10 WHO registries for Chinese trials, [39] two other MRPs searched the GlaxoSmithKline (GSK) trial registry (United Kingdom) [44] and the ISRCTN register, [55] respectively.

Risk of bias

Results of the methodological quality assessment are presented in Table 3 and Table 4, respectively.

Table 3. Risk of bias table for MRPs following studies after approval by a REC.

http://dx.doi.org/10.1371/journal.pone.0114023.t003

Table 4. Risk of bias table for MRPs following studies included in trial registries.

http://dx.doi.org/10.1371/journal.pone.0114023.t004

MRPs following studies after REC approval.

All of the included MRPs fulfilled the 24-month follow-up criterion. However, twelve MRPs based their follow-up time on studies which were approved but not necessarily completed. [13]–[16], [18], [20]–[25], [27] Although these MRPs fulfilled the 24-month follow-up criterion, we judged them to have an unclear risk of bias. The methodology used to identify journal publications was adequate in all but two MRPs. [23], [24] Three MRPs performed adequate matching between protocol and retrieved journal publications. [13], [26], [28] However, in most MRPs this criterion was not applicable because identification of journal publications relied solely on author contacts. None of the MRP publications reported on adjustment for confounding factors when calculating proportions of published studies in specific subgroups or calculating measures of association between likelihood of publication and subgroup characteristics.

MRPs following studies after inclusion in trial registries.

Fourteen MRPs following studies included in trial registries had a follow-up time between study completion and search for full publication of 24 months or more. [35]–[37], [40], [42], [43], [45], [46], [48]–[50], [53], [55], [56] All but one [45] of these MRPs included cohorts of completed studies. The publication status was verified by searching adequate electronic databases and/or contacting the lead investigators in all but two MRPs. [44], [52] Thirteen MRPs [35], [37], [38], [40], [41], [44], [45], [48], [49], [52], [53], [55], [56] did not comment on matching criteria between registry entry and retrieved journal publication, whereas all but one [47] of the remaining MRPs performed adequate matching. Similar to MRPs following studies after REC approval, adjustment for confounders was not considered in any of the MRPs.

Proportion of studies published

After REC approval, the proportion of studies published ranged from 26% to 76% in 17 MRPs with a follow-up of 24 months or more, including 5112 studies (Fig. 2, Table 5). The prediction interval was 22% to 72%; the heterogeneity among individual estimates was substantial (I2 = 94.4%, p<0.0001). If the individual estimates were nevertheless combined, the pooled estimate would be 46.2% (95% CI 40.2–52.4).

Figure 2. Weighted proportion of published studies for 17 MRPs following studies after REC approval.

http://dx.doi.org/10.1371/journal.pone.0114023.g002

Table 5. Pooled proportions of published studies based on methodological research projects.

http://dx.doi.org/10.1371/journal.pone.0114023.t005

After trial registration, the proportion of studies published ranged from 23% to 76% in 14 MRPs with a follow-up of 24 months or more, including 12660 studies (Fig. 3, Table 5). The prediction interval was 13% to 90%; again the heterogeneity among individual estimates was substantial (I2 = 98.9%, p<0.0001). If the individual estimates were nevertheless combined, the pooled estimate would be 54.2% (95% CI 42.0–65.9).

Figure 3. Weighted proportion of published studies for 14 MRPs following studies after trial registration.

http://dx.doi.org/10.1371/journal.pone.0114023.g003

In a sensitivity analysis we excluded those MRPs that were based on a cohort of approved [13]–[16], [18], [20]–[25], [27] or initiated [45] studies. In the resulting sample of completed studies, the pooled proportion of studies published would be similar: 46.3% (95% CI 41.0–51.6; I2 = 81.1%, p<0.0001; based on five MRPs following studies after REC approval) [17], [19], [26], [28], [29] and 53.5% (95% CI 40.9–65.7; I2 = 98.9%, p = 0.0003; based on 13 MRPs following studies after inclusion in trial registries) [35]–[37], [40], [42], [43], [46], [48]–[50], [53], [55], [56], respectively. The sensitivity analysis of MRPs which only included randomised controlled trials would yield a pooled proportion of published studies of 44.5% (95% CI 31.0–58.8; I2 = 92.9%, p = 0.0002; based on two MRPs following studies after REC approval) [14], [28] and 60.3% (95% CI 45.4–73.6; I2 = 92.5%, p<0.001; based on seven MRPs following studies after trial registration), respectively. [44]–[46], [50], [51], [54], [55] It should be noted that three of these MRPs had insufficient follow-up time for searching full publications [44], [51], [54] or included on-going studies. [45]

Factors associated with publication

Table 6 summarizes factors associated with journal publication of studies. Four MRPs following studies approved by RECs compared studies with statistically significant results (p<0.05) and studies with non-significant results. [9], [18], [25], [26] The pooled OR for publication of studies with statistically significant results (vs. non-significant) was 2.8 (95% CI 2.2–3.5). Studies with positive results (defined as experimentally better or clinically relevant) also had higher, though not statistically significant, odds of journal publication than studies with negative results (pooled OR 3.1; 95% CI 0.9–11.0; two MRPs). [25], [32]

Two of the MRPs that followed studies after REC approval [20], [30] and three of the MRPs that followed studies after registration [39], [42], [43] investigated the association of different study designs with publication (i.e., randomised controlled trials versus observational studies). In both types of MRPs, randomised controlled trials had greater odds of publication than observational studies (OR 2.0; 95% CI 1.3–3.3 and OR 1.2; 95% CI 1.0–1.5, respectively). A post-hoc analysis including MRPs that followed studies after trial registration revealed that phase-III trials were more likely to be published than phase-II trials (pooled OR 2.0; 95% CI 1.6–2.5). [37], [38], [40]–[43], [47], [50], [52], [53]

In MRPs that followed studies after REC approval, multicentre studies were more likely to be published than single-centre studies (pooled OR 1.5; 95% CI 1.0–2.4; four MRPs). [18], [28], [30], [34] We also found that research funded by governments was published more frequently than research funded by industry (pooled OR 2.2; 95% CI 1.7–2.9; eight MRPs following studies after trial registration). [37], [39], [41], [42], [46], [50], [52], [53] However, no difference in the probability of publication between basic and human research was identified (pooled OR 1.1; 95% CI 0.6–2.1; two MRPs). [24], [30] There were also no significant differences for national versus international studies (OR 1.3; 95% CI 0.5–3.8) in one MRP following studies after inclusion in trial registries [40] or for studies with sample sizes larger (versus smaller) than the cohort's median sample size (OR 1.2; 95% CI 0.8–1.6) in another such MRP. [42] Other potential factors associated with journal publication could not be derived from the included MRPs. In addition, none of the MRPs reported on costs or use of other resources due to studies that were not published.

Factors associated with time to publication

Two MRPs following studies after REC approval reported the time to full publication in peer-reviewed journals. [17], [26] One MRP [17] reported a statistically significant (p<0.001) association between the direction of results and mean time to publication: 62.4 months (95% CI 57.6–67.2) for positive (confirmatory) results, compared with 78 months (95% CI 69.6–86.4) for inconclusive results and 82.2 months (95% CI 70.8–94.8) for negative (invalidating) results. The second MRP [26] confirmed that time to publication is significantly associated with the direction of results: median time to full publication was 25 months in studies with positive outcomes and 38.5 months in those with negative results.

Probability of publication over time

Three MRPs following studies approved by RECs provided information on the time course of publication (Fig. 4): [16], [25], [30] after two years the publication probability was approximately 7%, [16], [25], [30] after three years 20%, [16], [25], [30] after five years 30%, [16], [30] and after six years 55%. [25], [30] Estimates of publication probability after trial registration were available from five MRPs (Fig. 5): [35], [38], [43], [47], [50] After two years the publication probability reached approximately 30%, [35], [38], [43], [47], [50] after three years 50%, [38], [43], [47], [50] and after five years approximately 60%. [38] Because of the low number of included MRPs with data on follow-up, these estimates have to be interpreted cautiously.

Discussion

Principal findings

Overall, only about half of clinical and preclinical studies approved by RECs or included in trial registries are published as full journal articles; however, estimates vary widely, resulting in wide prediction intervals. For randomised controlled trials the pooled overall proportion of studies published would be somewhat larger (60.3%; 95% CI 45.4–73.6). Accordingly, prediction of the probability of publication for a future study is very uncertain. We also found evidence for dissemination bias in that studies with statistically significant results were more likely to be published than those without (pooled OR 2.8; 95% CI 2.2–3.5). This association is consistent with the finding that studies with positive results (defined as experimentally better or clinically relevant) were more likely to be published than studies with negative results, although this difference did not reach statistical significance (pooled OR 3.1; 95% CI 0.9–11.0). In addition, phase-III trials, which might be more successful than early-phase trials, were more likely to be published than phase-II trials (pooled OR 2.0; 95% CI 1.6–2.5). Also, randomised controlled trials, which are considered the “gold standard” design for a clinical study, are published more often than observational studies (pooled OR 2.0; 95% CI 1.3–3.3). The reason for this finding could be that medical journals prefer to publish randomised controlled trials, but there may also be a tendency among study authors not to write up results of observational studies, in particular when they are negative.

Strengths and weaknesses of this review

The findings of our systematic review are based on a thorough and comprehensive literature search for the available evidence on dissemination bias. We considered two types of MRPs which tracked studies from time of inception, thus including 39 individual MRPs evaluating more than 20,000 studies. For both types, the evidence on dissemination bias was consistent, suggesting that publications over the last 20 years are an incomplete and biased subset of research findings. We conducted our systematic review following a pre-specified protocol, thus ensuring that no substantial post-hoc changes remained undisclosed. [6] Because a registry for methodological reviews is not yet available, this protocol was not prospectively registered, but was previously published in an open-access journal. [6]

Our systematic review may have some limitations. We identified a large number of MRPs but associations between study characteristics and journal publication had not been reported in most of these publications. Therefore, not all pre-specified subgroup analyses stated in the protocol could be performed. For example, it was not possible to collate enough data on sex and rank of lead investigator or language of publication to investigate these factors in association with non-publication. We could not determine with certainty whether the MRP authors carried out additional analyses that ultimately were not reported (as authors were not contacted personally), thus selecting outcomes for publication. Furthermore, the aggregated data for publication probability over time refer to less than five studies at most time points. Accordingly, publication probabilities at given time points have to be interpreted cautiously. The reported estimates can only give a rough picture of the publication course after REC approval or trial registration. No standard methodology is available to assess study quality of the types of research projects we considered for our review. Therefore, we devised a tool to assess internal and external validity of the identified evidence. A sensitivity analysis for MRPs with high risk of bias was planned initially [6] but not performed due to the overall low quality of the identified MRPs. When we calculated the overall proportion of studies published as journal articles we only included studies with an arbitrarily defined minimum follow-up time of 24 months after study completion. Therefore, the proportion of studies published may be underestimated in some MRPs because journal publication may have occurred later. We included MRPs which investigated approved or on-going rather than completed studies. A sensitivity analysis excluding those MRPs showed that the proportion of studies published was similar. 
In addition, limited data on potential risk factors (e.g., follow-up time, language of included studies) made it impossible to further explore the large heterogeneity observed in our data. We also acknowledge the shortcomings of random-effects meta-analysis with a small number of included studies (such as inaccurate estimation of heterogeneity), as carried out in our systematic review.

Comparison with other systematic reviews

In a Cochrane Methodology Review, full publication of results initially presented in abstracts was examined by combining data from 79 MRPs; the weighted mean full publication proportion was 44.5% (95% CI 43.9–45.1). [10] In this review, survival analyses combining aggregated data resulted in an estimated publication rate at nine years of 52.6% for all types of studies, 63.1% for randomised controlled trials and 49.3% for other types of study designs. In addition, the review showed a significant association of positive study results (defined as any statistically significant result) with full publication. Other factors associated with full publication were a randomised trial study design and funded research. Despite the different criteria for inclusion of MRPs (REC approval/trial registration versus meeting presentation), their findings were consistent with our results.

The extent of dissemination bias in different types of research projects was also investigated by Song et al. 2009. [57] The authors identified 12 MRPs that followed up research from inception (studies approved by RECs or registered by research funding bodies), four MRPs that included trials submitted to regulatory authorities, 28 MRPs that assessed the fate of studies presented as conference abstracts, and four MRPs that followed manuscripts submitted to journals. This review concluded that dissemination bias related to direction of study results mainly occurs before the presentation of findings at conferences and the submission of manuscripts to journals. [57] A recent systematic review of studies limited to randomised trials confirmed the existence of dissemination bias and outcome reporting bias, although meta-analysis was not conducted due to the differences between included studies. [58] In addition, a Cochrane Review concluded that trials with positive findings are published more often and more quickly than trials with negative findings. [59] Despite differences in types of study cohorts or MRPs included, all these reviews were consistent with our body of evidence in concluding that a study with positive findings is more likely to be published than a study with negative results. One might speculate that journals prefer publishing reports with positive rather than non-positive results or that investigators do not submit reports of studies with negative results.

Implications for policy makers and further research

Overall, the scientific literature represents an incomplete subset of research findings. Because of the large heterogeneity, the probability of publication for a single study cannot be predicted with any certainty. Our findings clearly confirm that (non-)publication is not a random process and that the likelihood of publication is associated with the direction of study findings. When results are not published, or are published selectively depending on the direction or strength of the findings, healthcare professionals and consumers of healthcare cannot base their decisions on the full body of current evidence. This lack of information can lead to the use of ineffective or harmful interventions and to the waste of scarce healthcare resources. For example, when unpublished studies were included in a meta-analysis, the antidepressant reboxetine was shown to have more adverse effects but no better efficacy than placebo for the treatment of major depression – a different finding from that obtained when only published studies were considered. [60]

The inability to make evidence-informed decisions affects the healthcare system at various levels. First, dissemination bias is at odds with the ethical responsibility towards patients to use all research to advance medical knowledge and improve their care. Second, if treatment effects are overestimated, patients may receive treatments that are more harmful or less efficacious than previously believed. Finally, non-publication of study results is deleterious because a considerable part of the funds available for research is spent without return. Additional costs are incurred by healthcare systems and individual patients who continue to pay for treatments that may not be as effective or efficient as commonly thought. Although the full financial impact of non-publication of studies is currently unknown, the waste of funds is likely to be substantial. [61], [62]

The creation of clinical trial registries and the prospective publication of detailed study protocols, with explicit outcome descriptions and analysis plans, should help to combat dissemination bias. The recent AllTrials campaign has proposed that “all trials past and present should be registered, and the full methods and the results reported” (http://www.alltrials.net/). In addition, researchers should be encouraged and supported to present their studies at conferences and to pursue full publication.

Nevertheless, dissemination bias exists and currently invalidates the findings of systematic reviews and meta-analyses that consider only published studies. There is no excuse for study results to go unpublished, and obtaining a complete picture of what all studies to date have found would yield a substantial public health benefit.

Supporting Information

S1 Fig.

The OPEN Consortium.

doi:10.1371/journal.pone.0114023.s001

(DOCX)

S2 Fig.

Search Strategy for OvidSP MEDLINE.

doi:10.1371/journal.pone.0114023.s002

(DOCX)

S1 PRISMA Checklist.

PRISMA checklist.

doi:10.1371/journal.pone.0114023.s003

(DOC)

Acknowledgments

The authors thank the members of the OPEN consortium Vittorio Bertele, Xavier Bonfill, Marie-Charlotte Bouesseau, Isabelle Boutron, Silvano Gallus, Silvio Garattini, Karam Ghassan, Carlo La Vecchia, Britta Lang, Jasper Littmann, Jos Kleijnen, Michael Kulig, Mario Malicki, Ana Marusic, Katharina Felicitas Mueller, Hector Pardo, Matthias Perleth, Philippe Ravaud, Andreas Reis, Daniel Strech, Ludovic Trinquart, Gerard Urrútia, Elizabeth Wager, Alexandra Wieland and Robert Wolff.

The authors also thank Edith Motschall for conducting the comprehensive systematic literature search.

Author Contributions

Conceived and designed the experiments: JM CS SP. Performed the experiments: CS LKS PO LC JM. Analyzed the data: GS CS JM. Contributed reagents/materials/analysis tools: GS. Wrote the paper: CS. Involved in data interpretation and commented on drafts of the manuscript: GA DB RWS JM SP LKS GS EvE.

References

  1. Simes RJ (1986) Publication bias: the case for an international registry of clinical trials. J Clin Oncol 4:1529–1541.
  2. Dickersin K (1990) The existence of publication bias and risk factors for its occurrence. JAMA 263:1385–1389. doi: 10.1001/jama.263.10.1385
  3. Ioannidis JP (1998) Effect of the statistical significance of results on the time to completion and publication of randomized efficacy trials. JAMA 279:281–286. doi: 10.1001/jama.279.4.281
  4. Muller KF, Briel M, D'Amario A, Kleijnen J, Marusic A, et al. (2013) Defining publication bias: protocol for a systematic review of highly cited articles and proposal for a new framework. Syst Rev 2:34. doi: 10.1186/2046-4053-2-34
  5. Ioannidis JP (2005) Why most published research findings are false. PLoS Med 2:e124. doi: 10.1371/journal.pmed.0020124
  6. Portalupi S, von Elm E, Schmucker C, Lang B, Motschall E, et al. (2013) Protocol for a systematic review on the extent of non-publication of research studies and associated study characteristics. Syst Rev 2:2. doi: 10.1186/2046-4053-2-2
  7. Schmucker C, Bluemle A, Briel M, Portalupi S, Lang B, et al. (2013) A protocol for a systematic review on the impact of unpublished studies and studies published in the gray literature in meta-analyses. Syst Rev 2:24. doi: 10.1186/2046-4053-2-24
  8. Briel M, Muller KF, Meerpohl JJ, von Elm E, Lang B, et al. (2013) Publication bias in animal research: a systematic review protocol. Syst Rev 2:23. doi: 10.1186/2046-4053-2-23
  9. Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR (1991) Publication bias in clinical research. Lancet 337:867–872. doi: 10.1016/0140-6736(91)90201-y
  10. Scherer RW, Langenberg P, von Elm E (2007) Full publication of results initially presented in abstracts. Cochrane Database Syst Rev 18:MR000005. doi: 10.1002/14651858.mr000005.pub2
  11. Higgins JP, Thompson SG, Deeks JJ, Altman DG (2003) Measuring inconsistency in meta-analyses. BMJ 327:557–560. doi: 10.1136/bmj.327.7414.557
  12. Higgins JPT, Thompson SG, Spiegelhalter DJ (2009) A re-evaluation of random-effects meta-analysis. Journal of the Royal Statistical Society: Series A (Statistics in Society) 172:137–159. doi: 10.1111/j.1467-985x.2008.00552.x
  13. Blümle A, Meerpohl JJ, Schumacher M, von Elm E (2014) Fate of clinical research studies after ethical approval – follow-up of study protocols until publication. PLoS One 9:e87184. doi: 10.1371/journal.pone.0087184
  14. Chan AW, Hrobjartsson A, Haahr MT, Gotzsche PC, Altman DG (2004) Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA 291:2457–2465. doi: 10.1001/jama.291.20.2457
  15. Cooper H, DeNeve K, Charlton K (1997) Finding the missing science: The fate of studies submitted for review by a human subjects committee. Psychol Methods 2:447–452. doi: 10.1037//1082-989x.2.4.447
  16. de Jong JP, Ter Riet G, Willems DL (2010) Two prognostic indicators of the publication rate of clinical studies were available during ethical review. J Clin Epidemiol 63:1342–1350. doi: 10.1016/j.jclinepi.2010.01.018
  17. Decullier E, Lheritier V, Chapuis F (2005) Fate of biomedical research protocols and publication bias in France: retrospective cohort study. BMJ 331:19–22. doi: 10.1136/bmj.38488.385995.8f
  18. Dickersin K, Min YI, Meinert CL (1992) Factors influencing publication of research results. Follow-up of applications submitted to two institutional review boards. JAMA 267:374–378. doi: 10.1001/jama.1992.03480030052036
  19. Easterbrook PJ, Matthews DR (1992) Fate of research studies. J R Soc Med 85:71–76.
  20. Hall R, de Antueno C, Webber A, Canadian Research Ethics Board (2007) Publication bias in the medical literature: a review by a Canadian Research Ethics Board. Can J Anaesth 54:380–388. doi: 10.1007/bf03022661
  21. Menzel S, Uebing B, Hucklenbroich P, Schober O (2007) Evaluation of clinical trials following an approval from a research ethics committee. Dtsch Med Wochenschr 132:2313–2317. doi: 10.1055/s-2007-991648
  22. Olofsson BO, Lynoe N, Mjorndal T (2000) [Unnecessary work of the research ethics committees? A lot of work is done with studies which are never to be published]. Lakartidningen 97:1974–1976.
  23. Pich J, Carne X, Arnaiz JA, Gomez B, Trilla A, et al. (2003) Role of a research ethics committee in follow-up and publication of results. Lancet 361:1015–1016. doi: 10.1016/s0140-6736(03)12799-7
  24. Rodriguez SP, Vassallo JC, Berlin V, Kulik V, Grenoville M (2009) Factors related to the approval, development and publication of research protocols in a paediatric hospital. Arch Argent Pediatr 107:504–509.
  25. Stern JM, Simes RJ (1997) Publication bias: evidence of delayed publication in a cohort study of clinical research projects. BMJ 315:640–645. doi: 10.1136/bmj.315.7109.640
  26. Sune P, Sune JM, Montoro JB (2013) Positive outcomes influence the rate and time to publication, but not the impact factor of publications of clinical trial results. PLoS One 8:e54583. doi: 10.1371/journal.pone.0054583
  27. Turer AT, Mahaffey KW, Compton KL, Califf RM, Schulman KA (2007) Publication or presentation of results from multicenter clinical trials: evidence from an academic medical center. Am Heart J 153:674–680. doi: 10.1016/j.ahj.2007.01.005
  28. von Elm E, Rollin A, Blumle A, Huwiler K, Witschi M, et al. (2008) Publication and non-publication of clinical trials: longitudinal study of applications submitted to a research ethics committee. Swiss Med Wkly 138:197–203.
  29. Wise P, Drury M (1996) Pharmaceutical trials in general practice: the first 100 protocols. An audit by the clinical research ethics committee of the Royal College of General Practitioners. BMJ 313:1245–1248. doi: 10.1136/bmj.313.7067.1245
  30. Blümle A, Antes G, Schumacher M, Just H, von Elm E (2008) Clinical research projects at a German medical faculty: follow-up from ethical approval to publication and citation by others. Journal of Medical Ethics 34.
  31. Blümle A, Kasenda B, Saccilotto R, Briel M, von Elm E (2011) Epidemiology and publication history of randomized controlled trials that were discontinued. Poster presentation at the 19th Cochrane Colloquium; 2011 Oct 19–22; Madrid, Spain [abstract]. Cochrane Database of Systematic Reviews Suppl:126–127.
  32. Decullier E, Chapuis F (2007) Oral presentation bias: a retrospective cohort study. Journal of Epidemiology and Community Health 61:190–193. doi: 10.1136/jech.2006.048603
  33. Dickersin K, Meinert CL (1990) Risk factors for publications bias: results of a follow-up study [abstract]. Controlled Clinical Trials 11:255. doi: 10.1016/0197-2456(90)90046-5
  34. Sune-Martin P, Montoro-Ronsano JB (2003) Role of a research ethics committee in follow-up and publication of results. Lancet 361:2245–2246. doi: 10.1016/s0140-6736(03)13751-8
  35. Bourgeois FT, Murthy S, Mandl KD (2010) Outcome reporting among drug trials registered in ClinicalTrials.gov. Annals of Internal Medicine 153:158–166. doi: 10.7326/0003-4819-153-3-201008030-00006
  36. Gandhi R, Jan M, Smith HN, Mahomed NN, Bhandari M (2011) Comparison of published orthopaedic trauma trials following registration in Clinicaltrials.gov. BMC Musculoskeletal Disorders 12:278. doi: 10.1186/1471-2474-12-278
  37. Gopal RK, Yamashita TE, Prochazka AV (2012) Research without results: inadequate public reporting of clinical trial results. Contemporary Clinical Trials 33:486–491. doi: 10.1016/j.cct.2012.02.001
  38. Hurley MN, Prayle AP, Smyth AR (2012) Delayed publication of clinical trials in cystic fibrosis. Journal of Cystic Fibrosis 11:14–17. doi: 10.1016/j.jcf.2011.08.004
  39. Liu X, Li Y, Yin S, Song S (2010) Result publication of Chinese trials in World Health Organization primary registries. PLoS One 5:e12676. doi: 10.1371/journal.pone.0012676
  40. Prenner JL, Driscoll SJ, Fine HF, Salz DA, Roth DB (2011) Publication rates of registered clinical trials in macular degeneration. Retina 31:401–404. doi: 10.1097/iae.0b013e3181eef2ad
  41. Ramsey S, Scoggins J (2008) Commentary: Practicing on the Tip of an Information Iceberg? Evidence of Underpublication of Registered Clinical Trials in Oncology. Oncologist 13:925–929. doi: 10.1634/theoncologist.2008-0133
  42. Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM (2009) Trial Publication after Registration in ClinicalTrials.Gov: A Cross-Sectional Analysis. PLoS Medicine 6.
  43. Ross JS, Tse T, Zarin DA, Xu H, Zhou L, et al. (2012) Publication of NIH funded trials registered in ClinicalTrials.gov: cross sectional analysis. BMJ 344:d7292. doi: 10.1136/bmj.d7292
  44. Tfelt-Hansen PC (2011) Published and not fully published double-blind, randomised, controlled trials with oral naratriptan in the treatment of migraine: a review based on the GSK Trial Register. Journal of Headache and Pain 12:399–403. doi: 10.1007/s10194-011-0327-3
  45. Wildt S, Krag A, Gluud L (2011) Characteristics of randomised trials on diseases in the digestive system registered in ClinicalTrials.gov: a retrospective analysis. BMJ Open 1:e000309. doi: 10.1136/bmjopen-2011-000309
  46. Chahal J, Tomescu SS, Ravi B, Bach BR Jr, Ogilvie-Harris D, et al. (2012) Publication of sports medicine-related randomized controlled trials registered in ClinicalTrials.gov. American Journal of Sports Medicine 40:1970–1977. doi: 10.1177/0363546512448363
  47. Guo SW, Evers JL (2013) Lack of transparency of clinical trials on endometriosis. Obstetrics & Gynecology 121:1281–1290. doi: 10.1097/aog.0b013e318291f299
  48. Huser V, Cimino JJ (2012) Precision and negative predictive value of links between ClinicalTrials.gov and PubMed. AMIA Annual Symposium Proceedings/AMIA Symposium 2012:400–408.
  49. Huser V, Cimino JJ (2013) Linking ClinicalTrials.gov and PubMed to Track Results of Interventional Human Clinical Trials. PLoS One 8:9. doi: 10.1371/journal.pone.0068409
  50. Jones CW, Handler L, Crowell KE, Keil LG, Weaver MA, et al. (2013) Non-publication of large randomized clinical trials: cross sectional analysis. BMJ 347:f6104. doi: 10.1136/bmj.f6104
  51. Khan NA, Lombeida JI, Singh M, Spencer HJ, Torralba KD (2012) Association of industry funding with the outcome and quality of randomized controlled trials of drug therapy for rheumatoid arthritis. Arthritis & Rheumatism 64:2059–2067. doi: 10.1002/art.34393
  52. Shamliyan T, Kane RL (2012) Clinical research involving children: registration, completeness, and publication. Pediatrics 129:e1291–1300. doi: 10.1542/peds.2010-2847
  53. Shamliyan TA, Kane RL, Wyman J, Sainfort F (2012) Results availability from clinical research of female urinary incontinence. Neurourol Urodyn 31:22–29. doi: 10.1002/nau.21220
  54. Smith HN, Bhandari M, Mahomed NN, Jan M, Gandhi R (2012) Comparison of arthroplasty trial publications after registration in ClinicalTrials.gov. Journal of Arthroplasty 27:1283–1288. doi: 10.1016/j.arth.2011.11.005
  55. Thorn JC, Noble SM, Hollingworth W (2013) Timely and complete publication of economic evaluations alongside randomized controlled trials. Pharmacoeconomics 31:77–85. doi: 10.1007/s40273-012-0004-7
  56. Vawdrey DK, Hripcsak G (2013) Publication bias in clinical trials of electronic health records. Journal of Biomedical Informatics 46:139–141. doi: 10.1016/j.jbi.2012.08.007
  57. Song FJ, Parekh-Bhurke S, Hooper L, Loke YK, Ryder JJ, et al. (2009) Extent of publication bias in different categories of research cohorts: a meta-analysis of empirical studies. BMC Medical Research Methodology 9.
  58. Dwan K, Gamble C, Williamson PR, Kirkham JJ, Reporting Bias Group (2013) Systematic review of the empirical evidence of study publication bias and outcome reporting bias - an updated review. PLoS One 8:e66844. doi: 10.1371/journal.pone.0066844
  59. Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K (2009) Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database Syst Rev 21.
  60. Eyding D, Lelgemann M, Grouven U, Harter M, Kromp M, et al. (2010) Reboxetine for acute treatment of major depression: systematic review and meta-analysis of published and unpublished placebo and selective serotonin reuptake inhibitor controlled trials. BMJ 341:c4737. doi: 10.1136/bmj.c4737
  61. Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, et al. (2014) Increasing value and reducing waste in research design, conduct, and analysis. Lancet 383:166–175. doi: 10.1016/s0140-6736(13)62227-8
  62. Al-Shahi Salman R, Beller E, Kagan J, Hemminki E, Phillips RS, et al. (2014) Increasing value and reducing waste in biomedical research regulation and management. Lancet 383:176–185. doi: 10.1016/s0140-6736(13)62297-7