The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias and outcome reporting bias have been recognised as potential threats to the validity of meta-analysis and can make the readily available evidence unreliable for decision making.
In this update, we review and summarise the evidence from cohort studies that have assessed study publication bias or outcome reporting bias in randomised controlled trials. Twenty studies were eligible, of which four were newly identified in this update. Only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Fifteen of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40–62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis because of the differences between studies.
This update does not change the conclusions of the review in which 16 studies were included. Direct empirical evidence for the existence of study publication bias and outcome reporting bias is shown. There is strong evidence of an association between significant results and publication; studies that report positive or significant results are more likely to be published and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of the problems of both types of bias and efforts should be concentrated on improving the reporting of trials.
Citation: Dwan K, Gamble C, Williamson PR, Kirkham JJ, the Reporting Bias Group (2013) Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias — An Updated Review. PLoS ONE 8(7): e66844. https://doi.org/10.1371/journal.pone.0066844
Editor: Isabelle Boutron, University Paris Descartes, France
Received: January 25, 2013; Accepted: May 9, 2013; Published: July 5, 2013
Copyright: © 2013 Dwan et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This updated review was funded by the Medical Research Council (MRC) hub for trial methodology. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Study publication bias arises when studies are published or not depending on their results; it has received much attention. Empirical research consistently suggests that published work is more likely to be positive or statistically significant (P<0.05) than unpublished research. Study publication bias leads to overestimation of treatment effects; it has been recognised as a threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. There is additional evidence that research without statistically significant results takes longer to achieve publication than research with significant results, further biasing evidence over time. This "time lag bias" (or "pipeline bias") tends to compound the problem, since results from the earliest available evidence tend to be inflated and exaggerated.
Within-study selective reporting bias relates to studies that have been published. It has been defined as the selection, on the basis of the results, of a subset of the original variables recorded for inclusion in a publication. Several different types of selective reporting may occur within a study. For example, selective reporting of analyses may include intention-to-treat versus per-protocol analyses, endpoint score versus change from baseline, or different time points or subgroups. Here we focus on the selective reporting of outcomes from those that were originally measured within a study: outcome reporting bias (ORB).
Randomised controlled trials (RCTs) are planned experiments, involving the random assignment of participants to interventions, and are seen as the gold standard of study designs for evaluating the effectiveness of a treatment in medical research in humans. The likely effect of selective outcome reporting is to overestimate the effect of the experimental treatment.
The original version of this systematic review summarised the empirical evidence for the existence of study publication bias and outcome reporting bias. It found that 12 of the 16 included empirical studies demonstrated consistent evidence of an association between positive or statistically significant results and publication, and that statistically significant outcomes have higher odds of being fully reported.
The ORBIT (Outcome Reporting Bias In Trials) study, conducted by authors of this review, found that a third of Cochrane reviews contained at least one trial with high suspicion of outcome reporting bias for a single review primary outcome. Work has also been published showing how to identify outcome reporting bias within a review and in relevant trial reports.
Studies comparing trial publications to protocols or trial registries are also accumulating evidence on the proportion of studies in which at least one primary outcome was changed, introduced, or omitted.
Thus, the bias from missing outcome data that may affect a meta-analysis operates on two levels: non-publication due to lack of submission or rejection of study reports (a study-level problem) and the selective non-reporting of outcomes within published studies on the basis of the results (an outcome-level problem). While much effort has been invested in trying to identify the former, it is equally important to understand the nature and frequency of missing data at the latter level.
The aim of this study was to update the original review and summarise the evidence from empirical cohort studies that have assessed study publication bias and/or outcome reporting bias in RCTs approved by a specific ethics committee or in other inception cohorts of RCTs.
Study Inclusion Criteria
We included research that assessed an inception cohort of RCTs for study publication bias and/or outcome reporting bias. We focussed on inception cohorts in which study protocols were registered before the start of the study, as this type of prospective design was deemed more reliable. We excluded cohorts based on prevalence archives, in which a protocol is registered after a study is launched or completed, since such cohorts can already be affected by publication and selection bias.
Cohorts containing exclusively RCTs and those containing a mix of RCTs and non-RCTs were both eligible. For studies where it was not possible to identify the study type (i.e. whether any included studies were RCTs), we attempted to contact the authors to resolve this. In cases where it could not be resolved, studies were excluded. Studies containing exclusively non-RCTs were excluded.
The assessment of RCTs in the included studies had to involve comparison of the protocol against all publications (for outcome reporting bias) or information from trialists (for study publication bias).
The search strategy from the original version of this review was used in this update. In the original review, screening of titles was carried out by one author (KD); in this update, two authors (KD and JJK) screened both titles and abstracts. No masking was used during the screening of abstracts. MEDLINE (1946 to 2012), SCOPUS (1960 to 2012) and the Cochrane Methodology Register (1898 to 2012) were searched without language restrictions (see Appendix S1 for all search strategies). SCOPUS is a much larger database than EMBASE; it offers more coverage of scientific, technical, medical and social science literature than any other database, and over 90% of the sources indexed by EMBASE are also indexed by SCOPUS, along with many additional sources.
Additional steps were taken to complement the electronic database searches: the lead or contact authors of all identified studies were asked to identify further studies, and the references of included studies were checked for further eligible studies.
To assess the methodological quality of the included studies, the same criteria were applied as in the original version of this review.
1. Was there an inception cohort?
Yes = a sample of clinical trials registered at onset or on a roster (e.g. approved by an ethics committee) during a specified period of time.
No = anything else.
2. Was there complete follow up (after data-analysis) of all the trials in the cohort?
3. Was publication ascertained through personal contact with the investigators?
Yes = personal contact with investigators, or searching the literature and personal contact with the investigator.
No = searching the literature only.
4. Were positive and negative findings clearly defined?
Yes = clearly defined.
No = not clearly defined.
5. Were protocols compared to publications?
Yes = protocols were compared to publications.
No = protocols were not considered in the study.
A flow diagram (Figure 1, text S1) to show the status of approved protocols was completed for each empirical study by the first author only (KD) in the original version of the review and by two authors in the update (KD and JJK) using information available in the publication or further publications. Disagreements were resolved through discussion. Lead or contact authors of the empirical studies were then contacted by email and sent the flow diagram for their study to check the extracted data along with requests for further information or clarification of definitions if required. No masking was used and disagreements were resolved through discussion between KD and the lead or contact author of the empirical studies. Where comments from the original author were not available, PRW reviewed the report and discussed queries with KD in the original version of the review.
Characteristics of the cohorts were extracted by the first author in the original version of the review for each empirical study, and issues relating to the methodological quality of the study were noted. This process was undertaken by two authors (JJK and KD) for newly identified studies in the update of this review. We recorded the definitions of 'published' employed in each empirical study. Further, we looked at the way the significance of the results of the studies in each cohort was investigated (i.e. the direction of results, whether the study used a p-value ≤0.05 as the definition of significance and, where no statistical tests were reported, whether the results were categorised as negative, positive, important or unimportant). We extracted data on the number of positive and negative trials that were published in each cohort, and we extracted all information on the main objectives of each empirical study and separated these according to whether they related to study-level or outcome-level bias.
The search of MEDLINE, SCOPUS and the Cochrane Methodology Register led to 2525, 2090 and 832 references, respectively. Titles were checked by the two authors (KD and JJK) in this update and abstracts obtained for 86 potentially relevant studies. Abstracts were assessed for eligibility by both authors; 40 were excluded and full papers were obtained when available for 46.
Nineteen empirical studies were deemed eligible, sixteen of which were included in the original version of this review.
References from the included empirical studies led to another eligible study.
Thus in total, the search strategy identified 20 eligible empirical studies (Figure 2), of which four were newly included in this update. Two studies that should have been included in the original review were added in this update: one was missed as a result of single-author study selection in the original review, and a second was identified through the reference list of a newly identified study. All previously identified studies were found again.
Twenty-five studies were excluded: eight were not inception cohorts; in two studies, the authors of included RCTs were not contacted for information on publication; in six, only published studies were included in the cohort; in six studies, trial registries were investigated and these were not considered to be inception cohorts; in one, we could not confirm whether any of the included studies were RCTs; in one case, the author of a letter confirmed that the study mentioned had never begun; and a further study was an analysis of oral presentations from one of the included studies.
Study publication bias.
Fifteen empirical studies considered the process up to the point of publication. However, six of these empirical studies did not consider whether a study was submitted for publication.
Five cohorts included only RCTs; in the remaining ten cohorts, the proportion of included RCTs ranged from 14% to 56%. The results presented in the flow diagrams relate to all studies within each cohort because it was not possible to separate information for different types of studies (RCTs versus other).
Outcome reporting bias.
Five empirical studies covered the entire process from the study protocol to the publication of study outcomes. However, three of these empirical studies did not consider whether a study was submitted for publication. Four cohorts included only RCTs; in the remaining cohort, the proportion of included RCTs was 13%.
Two studies are currently being updated, and data on outcomes are being analysed for publication.
Table 1 contains information on empirical study characteristics. The majority of the empirical study objectives related to study publication bias and publication rates or outcome reporting bias.
Study publication bias.
Four of the empirical studies investigating study publication bias also assessed time lag bias; four assessed the outcome of protocols submitted to a research ethics committee (for example, whether trials were started and whether they were published); and another considered whether the absence of acknowledged funding hampered implementation or publication. Eleven of the empirical studies assessed protocols approved by ethics committees, one assessed those approved by health institutes, one assessed trials processed through a hospital pharmacy, one assessed studies funded by the NHS and commissioned by the North Thames Regional Office, and one assessed trials conducted by NIH-funded clinical trials groups. The time period between protocol approval and assessment of publication status varied widely (less than one year to 34 years).
Outcome reporting bias.
Four of the empirical studies assessed protocols approved by ethics committees and one empirical study assessed those approved by a health institute. The time period between protocol approval and assessment of publication status varied from four to eight years.
Details of the methodological quality are presented in Table 2. The overall methodological quality of included empirical studies was good, with more than half of studies meeting all criteria.
Study publication bias.
Seven of the fifteen empirical studies met all four of the criteria for studies investigating study publication bias (inception cohort; complete follow-up of all trials; publication ascertained through personal contact with the investigator; and positive and negative findings clearly defined). In six empirical studies there was less than 90% follow-up of trials, and in two empirical studies the definition of positive and negative findings was unclear.
Outcome reporting bias.
All five empirical studies met all five criteria for studies investigating ORB (inception cohort; complete follow-up of all trials; publication ascertained through personal contact with the investigator; positive and negative findings clearly defined; and comparison of protocol to publication).
As some studies may have several specified primary outcomes and others none, we looked at how each of the empirical studies dealt with this. Hahn et al looked at the consistency between protocols and published reports with regard to the primary outcome; only one study was stated to have two primary outcomes. In both of their empirical studies, Chan et al distinguished harm and efficacy outcomes, but also considered the consistency of primary outcomes between protocols and publications and stated how many studies had more than one primary outcome. Ghersi et al included studies with more than one primary outcome and included all primary outcomes in the analysis, but excluded studies whose primary outcomes were not identifiable or included more than two time points, since such complex outcomes are more prone to selective reporting. von Elm et al considered harm and efficacy outcomes as well as primary outcomes.
The flow diagrams (Figures 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22) show the status of approved protocols in included empirical studies based on available publications and additional information obtained such as number of studies stopped early or never started.
Study publication bias.
No information other than the study report was available for one empirical study, due to its age. Information could not be located for four empirical studies. Only a conference abstract and poster were available for one empirical study presented over 10 years ago. Extra information from lead or contact authors was available for nine empirical studies, including data to complete flow diagrams, information on definitions and clarifications.
Outcome reporting bias.
Extra information from lead or contact authors was available for four empirical studies, including data to complete flow diagrams, information on definitions, clarifications and extra information on outcomes. The original flow diagrams and the questions asked are available on request.
Figure 3 shows, for illustrative purposes, the completed flow diagram for the empirical study conducted by Chan et al on the status of 304 protocols approved by the Scientific-Ethical Committees for Copenhagen and Frederiksberg in 1994–1995. The empirical study was conducted in 2003, which allowed sufficient time for trial completion and publication. Thirty studies were excluded as the files were not found. Surveys were sent to trial investigators, with a response rate of 151 out of 274 (55%); of these, two were ongoing, 38 had stopped early, 24 had never started and 87 were completed. Information from the survey responses (151) and the literature search alone (123) indicated that 120 studies had been submitted for publication and 154 had not. Of the 120 submitted studies, 102 had been fully published, 16 had been submitted or were under preparation, and two had not been accepted for publication. This resulted in 156 studies not being published.
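The arithmetic of this cohort flow can be checked directly; a minimal sketch (all counts taken from the description above, variable names are ours):

```python
# Arithmetic check of the Chan et al. cohort flow; counts come from the text.
approved = 304                 # protocols approved in 1994-1995
files_not_found = 30           # excluded: files not found
assessed = approved - files_not_found

survey_responses = 151         # 55% of 274 investigators responded
literature_search_only = assessed - survey_responses

submitted = 120                # submitted for publication
not_submitted = assessed - submitted

fully_published = 102
under_preparation = 16         # submitted or under preparation
rejected = 2                   # not accepted for publication
not_published = assessed - fully_published - under_preparation

print(assessed, literature_search_only, not_submitted, not_published)
# → 274 123 154 156
```

The check confirms internal consistency: the 120 submitted studies split exactly into 102 published, 16 under preparation and 2 rejected, and the 156 unpublished studies are the 154 never submitted plus the 2 rejected.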
Publication and Trial Findings
Study publication bias.
Table 3 shows the proportion of studies published in each cohort, which varied widely from 21% to 93%. Nine of the cohorts considered what proportion of trials with positive and negative results were published (60% to 98% and 19% to 85%, respectively). Only four cohorts considered what percentage of studies with null results (no difference observed between the two study groups, p>0.10, or inconclusive) were published (32% to 44%). The results consistently show that positive studies are more likely to be published than negative studies.
Table 4 shows general consistency in the definition of 'published'. However, two empirical studies considered grey literature in their definition of 'published', although information on full publications and grey literature publications is presented separately (Figures 5, 6). Although not considered in the definition of 'published', seven empirical studies gave information on grey literature or reports in preparation. Three empirical studies gave no information on their definition of 'published'. In addition, results are presented for the percentage of studies not submitted for journal publication (7% to 58%), the percentage submitted but not accepted for publication (0 to 20%) by the time of analysis of the cohort, and the percentage of unpublished studies that were never submitted (63% to 100%). This implies that studies remain unpublished largely because of failure to submit rather than rejection by journals.
The main findings of the empirical studies are shown in Table 5, separated into study-level and outcome-level results. Nine of the included cohort studies investigated results in relation to their statistical significance. One empirical study considered the importance of the results as rated by the investigator, and another considered confirmatory versus inconclusive results. Five of the empirical studies that examined the association between publication and statistical significance found that studies with statistically significant results were more likely to be published than those with non-significant results. Stern et al reported that this finding was even stronger for their subgroup of clinical trials (hazard ratio (HR) 3.13 (95% confidence interval (CI) 1.76, 5.58), p = 0.0001) than for all quantitative studies (HR 2.32 (95% CI 1.47, 3.66), p = 0.0003). One empirical study found that studies with statistically significant results were more likely to be submitted for publication than those with non-significant results. Easterbrook et al also found that study publication bias was greater for observational and laboratory-based experimental studies (odds ratio (OR) 3.79, 95% CI 1.47, 9.76) than for RCTs (OR 0.84, 95% CI 0.34, 2.09). Hall et al found no difference in publication success in high-impact journals (impact factor >5) between trials reporting statistically significant and non-significant results (RR 0.929; 95% CI 0.759–1.137; P = 0.537). However, two empirical studies found no statistically significant evidence of study publication bias (RR 4 (95% CI 0.6, 32), p = 0.1 and OR 0.53 (95% CI 0.25, 1.1), p = 0.1).
Ioannidis et al found that positive trials were submitted for publication more rapidly after completion than negative trials (median 1 vs 1.6 years, p<0.001) and were published more rapidly after submission (median 0.8 vs 1.1 years, p<0.04). Stern et al and Decullier et al also considered time to publication and found that studies with positive results were published faster than those with negative results (median 4.8 vs 8.0 years, and HR 2.48 (95% CI 1.36, 4.55), respectively). However, for the 53 trials where data were available, Hall et al found no difference in time to publication between trials reporting statistically significant and non-significant results (32±16 vs 36±24 months; mean ± SD; P = 0.869).
Pich et al looked at whether studies in their cohort were completed and published: 64% (92/143) of initiated trials were finished in accordance with the protocol and 31% (38/123) were published (or in press) in peer-reviewed journals. The main objective of the study by Blumle et al was to consider how the eligibility criteria stated in protocols were reported in subsequent publications; in doing so, they noted that 52% of studies in their cohort were published, decreasing to 48% for RCTs only. Turer et al looked at publication rates and found that 47% of studies in their cohort had been published. de Jong et al aimed to identify prognostic indicators of the publication rate of clinical studies and found that 29% of studies had been published, although some had been approved only 6 months previously.
Seven empirical studies described reasons, as reported by the trialists, why a study was not published. Reasons related to trial results included: unimportant or null results; results not interesting; and results not statistically significant.
Outcome reporting bias.
The proportion of studies published in each cohort varied from 37% to 67% (Table 3). However, none of the empirical studies investigating ORB considered the proportions of published trials with positive, negative, or null overall results.
Table 4 shows that three of the empirical studies defined 'published' as a journal article; one empirical study considered grey literature in its definition of 'published', although information on full publications and grey literature publications is presented separately (Figure 15). Although not considered in the definition of 'published', one empirical study gave information on grey literature or reports in preparation. Only two empirical studies present results for the percentage of studies not submitted (31% to 56%), the percentage submitted but not accepted (1 to 2%) by the time of analysis of the cohort, and the percentage of unpublished studies that were never submitted (97% to 99%).
All four empirical studies that examined the association between outcome reporting bias (outcome-level bias) and statistical significance found that statistically significant outcomes were more likely to be completely reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7; Table 5).
Five empirical studies compared the protocol and the publication with respect to the primary outcome (Table 5). Only two empirical studies looked at the different types of discrepancy that can arise, concluding that 40–62% of trials had major discrepancies between the primary outcomes specified in protocols and those defined in the published articles. Four of the included empirical studies found that in 47–74% of studies the primary outcome stated in the protocol was the same as in the publication; between 13% and 31% of primary outcomes specified in the protocol were omitted from the publication; and between 10% and 18% of reports introduced a primary outcome in the publication that was not specified in the protocol.
Chan et al also looked at efficacy and harm outcomes. In their Canadian empirical study they found that a median of 31% of efficacy outcomes and 59% of harm outcomes were incompletely reported, and that statistically significant efficacy outcomes had higher odds than non-significant efficacy outcomes of being fully reported (OR 2.7; 95% CI 1.5, 5). In their Danish empirical study they found that 50% of efficacy and 65% of harm outcomes per trial were incompletely reported, and that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes for both efficacy (OR 2.4, 95% CI 1.4, 4) and harm (OR 4.7, 95% CI 1.8, 12) data.
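The odds ratios quoted here compare the odds of full reporting between significant and non-significant outcomes. As a rough illustration of how such a figure is derived, the sketch below computes an odds ratio and Wald 95% confidence interval from a 2×2 table; the counts are hypothetical, not taken from any of the included studies:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a/b = significant outcomes fully / incompletely reported,
    c/d = non-significant outcomes fully / incompletely reported."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: 40 of 50 significant outcomes fully reported
# versus 25 of 50 non-significant outcomes fully reported.
or_, lo, hi = odds_ratio_ci(40, 10, 25, 25)
print(f"OR {or_:.1f}, 95% CI {lo:.2f} to {hi:.2f}")  # → OR 4.0, 95% CI 1.65 to 9.72
```

An OR of 4.0 in this toy table means the odds of a significant outcome being fully reported are four times the odds for a non-significant outcome, comparable in magnitude to the upper end of the 2.2 to 4.7 range reported across the four studies.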
von Elm et al considered efficacy and harm outcomes as well as primary outcomes overall and found that 32% (223/687) of outcomes were reported in the publication but not specified in the protocol, and 42% (227/546) were specified in the protocol but not reported; however, these are preliminary data.
Two empirical studies describe reasons why outcomes go unreported even though the study is published; these include lack of clinical importance and lack of statistical significance.
The four newly identified empirical studies examined only study publication bias. Outcome reporting bias was considered in one of the cohorts, but those results have only recently been submitted for publication.
Very few of the 20 empirical studies examined both study publication bias and outcome reporting bias in the same cohort. Twelve of the included empirical studies demonstrate consistent evidence of an association between positive or statistically significant results and publication. They suggest that studies reporting positive/statistically significant results are more likely to be published and that statistically significant outcomes have higher odds of being fully reported.
In this review we focused on empirical studies that included RCTs, since they provide the best evidence of the efficacy of medical interventions. RCTs are prone to study publication bias, although other types of studies have been shown to be even more prone to it. The main limitation of this review was that, for eleven of the 20 included cohorts, information on RCTs could not be separated from information on other studies. Because of this, and because of the variability across empirical studies in the time lapse between protocol approval and censoring of the data for analysis, we felt it was not appropriate to combine the results from the different cohorts statistically. In addition, the fact that in six empirical studies follow-up of trials was less than 90% could mean that the problem of study publication bias is underestimated in these cohorts.
It is difficult to assess the current state of the literature with respect to study publication bias, as even the most recently published empirical evaluations included in the review considered RCTs that began 10 years ago. Nevertheless, the empirical studies published within the last ten years show that, on average, fewer than 50% of studies were published.
None of the empirical studies explored the distinction between all outcomes being non-significant and only those deemed most important being non-significant. In the reasons given for non-publication, it was not stated which, or how many, outcomes were non-significant. Some empirical studies imply that all results were non-significant, but this is an artefact of how the reason was phrased (e.g. 'no significant results'); it is not explained whether this applies to all outcomes, or to primary and secondary, or harm and efficacy outcomes. 'No significant results' is therefore potentially ambiguous. It is not clear whether studies remain unpublished because all outcomes are non-significant, while those that are published appear positive because significant results are selectively reported. This is where study publication bias and outcome reporting bias overlap.
Dubben et al looked at whether study publication bias exists in studies that investigate the problem of study publication bias. Although they found no evidence of such bias, it is notable that two of the cohorts included in this review have not been published. The study conducted by Wormald et al concluded that 'there was limited evidence of study publication bias', whereas the authors of the other study have not submitted their study for publication. There may be other unpublished studies of study publication bias or outcome reporting bias that were not located by the search; however, contact with experts in the field reduces the likelihood of these issues introducing bias.
Submission is an important aspect of investigating study publication bias, as it indicates whether reports go unpublished because they are not submitted or because they are submitted but not accepted. Studies that are not submitted obviously cannot be published, and Dickersin et al found that non-publication was primarily a result of failure to write up and submit the trial results rather than rejection of submitted manuscripts. This is confirmed for the cohorts identified here, with the percentage of unpublished studies that were never submitted ranging from 63% to 100%. Olson et al also found no evidence that study publication bias occurred once manuscripts had been submitted to a medical journal. However, that study examined a high-impact general journal, which is unlikely to be representative of the specialist journals that publish the majority of clinical trials.
Eleven studies assessed the impact of funding on publication, in several ways. Three studies found that external funding led to a higher rate of publication. von Elm et al. found that the probability of publication decreased with commercial funding and increased with non-commercial funding. Easterbrook et al. found that, compared with unfunded studies, government-funded studies were more likely to yield statistically significant results, although government sponsorship had no statistically significant effect on the likelihood of publication; company-sponsored trials were less likely to be published or presented. Dickersin et al. found no difference by funding mechanism (grant versus contract), and Ioannidis et al. found no difference according to whether data were managed by the pharmaceutical industry or by federally sponsored organisations. Chan et al. (2004b) found that 61% of the 51 trials with major discrepancies were funded solely by industry sources, compared with 49% of the 51 trials without discrepancies. Ghersi examined the effect of funding on the reporting of, and discrepancies in, outcomes, but no information about the results is currently available. Hahn et al. compared the funder stated in the protocol with that in the publication. Hall et al. found that studies sponsored by the pharmaceutical industry were less likely to be published than those sponsored by federal granting agencies (RR 0.50; 95% CI 0.39–0.65; P = 0.0045) but more likely to be published than studies funded by the local health authority (RR 1.94; 95% CI 1.09–3.44; P = 0.011). These studies indicate that funding is an important factor to consider when investigating publication bias and outcome reporting bias; however, more work examining common questions is needed before conclusions about the relationship between funding and outcome reporting bias can be drawn.
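Risk ratios of the kind quoted above (e.g. RR 0.50; 95% CI 0.39–0.65) compare publication rates between funding groups. The following is a minimal sketch of how such a figure is derived, using the standard Wald interval on the log scale; the publication counts are hypothetical and illustrative, not taken from any included study.

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio of publication with a Wald 95% CI on the log scale.

    a/n1: published / total trials in group 1 (e.g. industry-sponsored)
    b/n2: published / total trials in group 2 (e.g. federally funded)
    """
    rr = (a / n1) / (b / n2)
    # Standard error of log(RR) for two independent binomial proportions
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 30/120 industry-sponsored trials published
# versus 60/120 federally funded trials published
rr, lo, hi = risk_ratio_ci(30, 120, 60, 120)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 0.50 (95% CI 0.35-0.72)
```

The Wald interval is the usual large-sample approach; exact or score-based intervals may be preferable for the small cohorts typical of these empirical studies.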
Our review examined inception cohorts only; however, other authors have investigated aspects of study publication bias and outcome reporting bias using different study designs, with similar conclusions. Since the original version of this review was published, a Cochrane methodology review of publication bias has become available. That review included five studies, only one of which was not included in our review, because it was not an inception cohort. Hopewell et al. concluded that trials with positive findings are published more often, and more quickly, than trials with negative findings. The Cochrane review by Scherer et al., investigating the full publication of results initially presented in abstracts, found that only 63% of results from abstracts describing randomised or controlled clinical trials are published in full, and that 'positive' results were more frequently published than non-'positive' results. Several studies investigated cohorts of trials submitted to drug licensing authorities, and all found that many of these trials remain unpublished, with one study demonstrating that trials with positive outcomes more often resulted in submission of a final report to the regulatory authority. Olson et al. conducted a prospective cohort study of manuscripts submitted to JAMA and assessed whether submitted manuscripts were more likely to be published if they reported positive results; they found no statistically significant difference in publication rates between those with positive and negative results. None of the inception cohorts addressed whether statistical significance determined acceptance of a submitted paper, with the exception of one that found that 'positive' trials were published significantly more rapidly after submission than 'negative' trials.
Finally, a comparison of the published versions of RCTs in a specialist clinical journal with the original trial protocols found that important changes between protocol and published paper are common; the published primary outcome was exactly the same as in the protocol in only six of 26 trials (23%). This was also highlighted in a recent Cochrane methodological review, which included 12 studies comparing protocols with published reports and four studies comparing trial registry entries with published reports.
We recommend that researchers adopt the flow diagram presented in this work as the standard for reporting future studies of study publication bias and ORB, as it clearly shows what happens to every trial in the cohort.
Reviewers should scrutinise trials with missing outcome data and always attempt to contact trialists when a study does not report results. An outcome matrix generator has now been developed as a tool to help identify missing outcome data at the study level within a review (http://ctrc.liv.ac.uk/orbit/). The lack of reporting of a specified outcome should not be an automatic reason for excluding a study. Statisticians should be involved in the data extraction of more complex outcomes, for example time-to-event outcomes. Methods that have been developed to assess the robustness of systematic review conclusions to ORB should be used. Meta-analyses of outcomes for which several relevant trials have missing data should be viewed with extra caution. Overall, the credibility of clinical research findings may decrease when wide flexibility in the choice of outcomes and analyses in a field is coupled with selective reporting biases.
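The outcome matrix idea can be pictured as a simple trials-by-outcomes grid. The sketch below uses hypothetical trial and outcome names and is not the ORBIT tool's actual format or API; it simply illustrates how such a grid flags the study-level gaps that reviewers should follow up with trialists.

```python
# Hypothetical review data: for each trial, the set of outcomes it fully reports.
reported = {
    "Trial A": {"mortality", "pain score"},
    "Trial B": {"mortality"},
    "Trial C": {"pain score", "adverse events"},
}
outcomes = ["mortality", "pain score", "adverse events"]

# Build the matrix: 'F' = fully reported, '?' = not reported in the publication.
matrix = {trial: {o: ("F" if o in outs else "?") for o in outcomes}
          for trial, outs in reported.items()}

# Cells marked '?' are the trial/outcome pairs to chase before assuming
# the outcome was never measured.
missing = [(trial, o) for trial, row in matrix.items()
           for o, cell in row.items() if cell == "?"]
```

A '?' cell does not by itself demonstrate ORB; it only identifies where contact with the trialists is needed to distinguish 'not measured' from 'measured but not reported'.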
The establishment of clinical trial registers and the advance publication of detailed protocols, with an explicit description of outcomes and analysis plans, should help combat these problems, although other work has shown that there can be discrepancies between protocols or trial registry entries and published reports. Trialists should be encouraged to describe legitimate changes to the outcomes stated in the protocol.
For empirical evaluations of selective reporting biases, the definition of significance is important, as is whether the direction of the results is taken into account, i.e. whether the results are significant in favour of or against the experimental intervention. Only one study took this into account. The forces driving selective publication may also change over time. For example, studies favouring the treatment are often initially more likely to be published, and those favouring the control suppressed; as time passes, contradicting trials that favour the control may become attractive for publication because they are 'different'. The majority of cohorts included in this review do not consider this possibility.
Another recommendation is to conduct empirical evaluations of both ORB and study publication bias in RCTs, to investigate their relative importance, i.e. which type of bias is the greater problem. The effects of factors such as funding, i.e. the influence of pharmaceutical industry trials versus non-pharmaceutical trials, should also be factored into these empirical evaluations.
PRISMA 2009 Checklist.
Explanation of flow diagram.
The authors would like to thank the authors of all included studies for further information on their studies.
The members of the Reporting Bias Group are:
Douglas G Altman, Centre for Statistics in Medicine, The University of Oxford (helped conceive and design the systematic review)
Juan A Arnaiz, Clinical Pharmacology Unit, UASP Hospital Clinic, Barcelona, Spain
Jill Bloom, Moorfields Eye Hospital, London, United Kingdom
An-Wen Chan, Women's College Research Institute Skin Surgery Centre, Women's College Hospital, Toronto, Canada
Mike Clarke, Centre for Public Health, Queen's University, Belfast, UK
Eugenia Cronin, Healthier Communities/Public Health, Greenwich Council, London, England
Evelyne Decullier, Clinical Epidemiology Unit, DIM-Hospices Civils de Lyon, Lyon, France
Philippa J Easterbrook, Department of HIV/GUM, King's College London, London, United Kingdom
Erik Von Elm, Cochrane Switzerland, Institute of Social and Preventive Medicine (IUMSP), Lausanne University Hospital, Lausanne, Switzerland and German Cochrane Centre, Institute of Medical Biometry and Medical Informatics, University Medical Centre Freiburg, Freiburg, Germany
Davina Ghersi, NHMRC Clinical Trials Centre, The University of Sydney, Australia
Julian P T Higgins, School of Social and Community Medicine, The University of Bristol, UK
John P A Ioannidis, Stanford University School of Medicine, Stanford University, Stanford, California and Institute for Clinical Research and Health Policy Studies, Department of Medicine, Tufts Medical Center, Tufts University School of Medicine, Boston, Massachusetts, United States of America
John Simes, National Health and Medical Research Council (NHMRC) Clinical Trials Centre, University of Sydney, Sydney, Australia
Jonathan A C Sterne, School of Social and Community Medicine, The University of Bristol, UK
Analyzed the data: KD. Contributed reagents/materials/analysis tools: RBG. Wrote the paper: KD CG PRW JJK. Conceived and designed the systematic review: KD CG PRW.
Song F, Parekh S, Hooper L, Loke YK, Ryder J, et al. (2010) Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess 14.
Rothstein HR, Sutton AJ, Borenstein M (2005) Publication Bias in Meta-Analysis. Prevention, Assessment and Adjustments: Wiley.
Dickersin K, Min YI (1993) NIH clinical trials and publication bias. Online J Curr Clin Trials Doc No 50.
Stern JM, Simes RJ (1997) Publication bias: evidence of delayed publication in a cohort study of clinical research projects. BMJ 315: 640–645.
Ioannidis JP (1998) Effect of the statistical significance of results on the time to completion and publication of randomized efficacy trials. JAMA 279: 281–286.
Scherer RW, Langenberg P, von Elm E (2007) Full publication of results initially presented in abstracts. Cochrane Database Syst Rev: MR000005.
Decullier E, Lheritier V, Chapuis F (2005) Fate of biomedical research protocols and publication bias in France: retrospective cohort study. BMJ 331: 19.
Ioannidis J, Lau J (2001) Evolution of treatment effects over time: empirical insight from recursive cumulative metaanalyses. Proc Natl Acad Sci U S A 98: 831–836.
Trikalinos TA, Churchill R, Ferri M, Leucht S, Tuunainen A, et al. (2004) Effect sizes in cumulative meta-analyses of mental health randomized trials evolved over time. J Clin Epidemiol 57: 1124–1130.
Hutton JL, Williamson PR (2000) Bias in meta-analysis due to outcome variable selection within studies. Applied Statistics 49: 359–370.
Williamson PR, Gamble C, Altman DG, Hutton JL (2005) Outcome selection bias in meta-analysis. Stat Methods Med Res 14: 515–524.
Kane RL, Wang J, Garrard J (2007) Reporting in randomized clinical trials improved after adoption of the CONSORT statement. J Clin Epidemiol 60: 241–249.
Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, et al. (2008) Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias. PLoS ONE 3: e3081.
Kirkham JJ, Dwan K, Altman DG, Gamble C, Dodd S, et al. (2010) The impact of outcome reporting bias in a cohort of systematic reviews. BMJ 340: c365.
Dwan K, Gamble C, Kolamunnage-Dona R, Mohammed S, Powell C, et al. (2010) Assessing the potential for outcome reporting bias in a review: a tutorial. Trials 11.
Dwan K, Altman DG, Cresswell L, Blundell M, Gamble C, et al. (2011) Comparison of protocols and registry entries to published reports for randomised controlled trials. Cochrane Database of Systematic Reviews: MR000031.
Chan A-W, Hrobjartsson A, Haahr MT, Gotzsche PC, Altman DG (2004) Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA 291: 2457–2465.
Chan AW, Krleza-Jeri K, Schmid I, Altman DG (2004) Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ 171: 735–740.
Cooper H, DeNeve K, Charlton K (1997) Finding the missing science: the fate of studies submitted for review by a human subjects committee. Psychol Methods 2: 447–452.
Cronin E, Sheldon T (2004) Factors influencing the publication of health research. Int J Technol Assess Health Care 20: 351–355.
de Jong JP, Ter Riet G, Willems DL (2010) Two prognostic indicators of the publication rate of clinical studies were available during ethical review. J Clin Epidemiol 63: 1342–1350.
Decullier E, Chapuis F (2006) Impact of funding on biomedical research: a retrospective cohort study. BMC Public Health 6: 165.
Dickersin K, Min YI, Meinert CL (1992) Factors influencing publication of research results. Follow-up of applications submitted to two institutional review boards. JAMA 267: 374–378.
Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR (1991) Publication bias in clinical research. Lancet 337: 867–872.
Ghersi D (2006) Issues in the design, conduct and reporting of clinical trials that impact on the quality of decision making: University of Sydney.
Hahn S, Williamson PR, Hutton JL (2002) Investigation of within-study selective reporting in clinical research: follow-up of applications submitted to a local research ethics committee. J Eval Clin Pract 8: 353–359.
Pich J, Carne X, Arnaiz JA, Gomez B, Trilla A, et al. (2003) Role of a research ethics committee in follow-up and publication of results. Lancet 361: 1015–1016.
Turer AT, Mahaffey KW, Compton KL, Califf RM, Schulman KA (2007) Publication or presentation of results from multicenter clinical trials: evidence from an academic medical center. American Heart Journal 153: 674–680.
Von Elm E, Röllin A, Blümle A, Huwiler K, Witschi M, et al. (2008) Publication and non-publication of clinical trials: longitudinal study of applications submitted to a research ethics committee. Swiss Med Wkly 138: 197–203.
Wormald R, Bloom J, Evans J, Oldfield K (1997) Publication bias in eye trials; Amsterdam.
Blümle A, Antes G, Schumacher M, Just H, von Elm E (2008) Clinical research projects at a German medical faculty: follow-up from ethical approval to publication and citation by others. Journal of Medical Ethics 34: e20.
Hall R, de Antueno C, Webber A (2007) Publication bias in the medical literature: a review by a Canadian Research Ethics Board. Canadian Journal of Anesthesia 54: 380–388.
McKenzie JE, Herbison GP, Roth P, Paul C (2010) Obstacles to researching the researchers: A case study of the ethical challenges of undertaking methodological research investigating the reporting of randomised controlled trials. Trials 11.
Mhaskar R, Kumar A, Soares H, Gardner B, Djulbegovic B (2009) Treatment related harms: what was planned and what was reported? An analysis of Southwest Oncology Group phase III trials. Singapore.
Soares HP, Daniels S, Kumar A, Clarke M, Scott C, et al. (2004) Bad reporting does not mean bad methods for randomised trials: observational study of randomised controlled trials performed by the Radiation Therapy Oncology Group. BMJ 328: 22–24.
Matthews GA, Dumville JC, Hewitt CE, Torgerson DJ (2011) Retrospective cohort study highlighted outcome reporting bias in UK publicly funded trials. J Clin Epidemiol 64: 1317–1324.
Djulbegovic B, Kumar A, Magazin A, Schroen AT, Soares H, et al. (2011) Optimism bias leads to inconclusive results–an empirical study. J Clin Epidemiol 64: 583–593.
Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R (2008) Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy. N Engl J Med 358: 252–260.
Vedula SS, Bero L, Scherer RW, Dickersin K (2009) Outcome reporting in industry-sponsored trials of gabapentin for off-label use. N Engl J Med 361: 1963–1971.
Bardy AH (1998) Bias in reporting clinical trials. Br J Clin Pharmacol 46: 147–150.
Melander H, Ahlqvist-Rastad J, Meijer G, Beermann B (2003) Evidence b(i)ased medicine–selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications. BMJ 326: 1171–1173.
Jureidini JN, McHenry LB, Mansfield PR (2008) Clinical trials and drug promotion: Selective reporting of study 329. The International Journal of Risk and Safety in Medicine 20: 73–81.
Menzel S, Uebing B, Hucklenbroich P, Schober O (2007) Evaluation of clinical trials following an approval from a research ethics committee. Dtsch Med Wochenschr 132: 2313–2317.
Suñe-Martin P, Montoro-Ronsano JB (2003) Role of a research ethics committee in follow-up and publication of results. The Lancet 361: 2245–2246.
Reveiz L, Bonfill X, Glujovsky D, Pinzon CE, Asenjo-Lobos C, et al. (2012) Trial registration in Latin America and the Caribbean's: study of randomized trials published in 2010. J Clin Epidemiol 65: 482–487.
Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P (2009) Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 302: 977–984.
Ewart R, Lausen H, Millian N (2009) Undisclosed Changes in Outcomes in Randomized Controlled Trials: An Observational Study. The Annals of Family Medicine 7: 542–546.
Rasmussen N, Lee K, Bero L (2009) Association of trial registration with the results and conclusions of published trials of new oncology drugs. Trials 10: 116.
Huić M, Marušić M, Marušić A (2011) Completeness and Changes in Registered Data and Reporting Bias of Randomized Controlled Trials in ICMJE Journals after Trial Registration Policy. PLoS ONE 6: e25258.
Chappell L, Alfirevic Z, Chien P, Jarvis S, Thornton JG (2005) A comparison of the published version of randomized controlled trials in a specialist clinical journal with the original trial protocols; Chicago.
Ramsey S, Scoggins J (2008) Commentary: practicing on the tip of an information iceberg? Evidence of underpublication of registered clinical trials in oncology. Oncologist 13: 925–929.
Bourgeois FT, Murthy S, Mandl KD (2010) Outcome Reporting Among Drug Trials Registered in ClinicalTrials.gov. Annals of Internal Medicine 153: 158–166.
Reveiz L, Chan AW, Krleža-Jerić K, Granados CE, Pinart M, et al. (2010) Reporting of Methodologic Information on Trial Registries for Quality Assessment: A Study of Trial Records Retrieved from the WHO Search Portal. PLoS ONE 5: e12484.
Tharyan P, George AT, Kirubakaran R, Barnabas JP (2013) Reporting of methods was better in the Clinical Trials Registry-India than in Indian journal publications. J Clin Epidemiol 66: 10–22.
Viergever RF, Ghersi D (2011) The Quality of Registration of Clinical Trials. PLoS ONE 6: e14701.
Xuemei L, Li Y, Shangqi S (2010) Transparency of Chinese trials: the results are fully published after registered in WHO primary registries? Keystone, Colorado, USA.
Misakian AL, Bero LA (1998) Publication bias and research on passive smoking: comparison of published and unpublished studies. JAMA 280: 250–253.
McCormack J, Loewen P, Jewesson P (2005) Dissemination of results needs to be tracked as well as the funding is. BMJ 331: 456.
Decullier E, Chapuis F (2007) Oral presentation bias: a retrospective cohort study. Journal of Epidemiology and Community Health 61: 190–193.
Egger M, Davey Smith G, Altman DG (2001) Systematic Reviews in Health Care. Meta-analysis in context: BMJ Publishing Group.
Dubben HH, Beck-Bornholdt HP (2005) Systematic review of publication bias in studies on publication bias. BMJ 331: 433–434.
Dickersin K, Chan S, Chalmers TC, Sacks HS, Smith H Jr (1987) Publication bias and clinical trials. Control Clin Trials 8: 343–353.
Olson CM, Rennie D, Cook D, Dickersin K, Flanagin A, et al. (2002) Publication bias in editorial decision making. JAMA 287: 2825–2828.
Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K (2009) Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database Syst Rev: MR000006.
Hemminki E (1980) Study of information submitted by drug companies to licensing authorities. Br Med J 280: 833–836.
Williamson PR, Gamble C (2005) Identification and impact of outcome selection bias in meta-analysis. Stat Med 24: 1547–1561.
Williamson PR, Gamble C (2007) Application and investigation of a bound for outcome reporting bias. Trials 8: 9.
Kirkham JJ, Riley RD, Williamson PR (2012) A multivariate meta-analysis approach for reducing the impact of outcome reporting bias in systematic reviews. Stat Med 31: 2179–2195.