
Non-Publication Is Common among Phase 1, Single-Center, Not Prospectively Registered, or Early Terminated Clinical Drug Trials

  • Cornelis A. van den Bogert,

    Affiliations Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, Utrecht University, TB Utrecht, The Netherlands, Central Committee on Research involving Human Subjects (CCMO), BH The Hague, the Netherlands, National Institute for Public Health and the Environment (RIVM), Division of Public Health and Health Services, BA Bilthoven, The Netherlands

  • Patrick C. Souverein,

    Affiliation Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, Utrecht University, TB Utrecht, The Netherlands

  • Cecile T. M. Brekelmans,

    Affiliation Central Committee on Research involving Human Subjects (CCMO), BH The Hague, the Netherlands

  • Susan W. J. Janssen,

    Affiliation National Institute for Public Health and the Environment (RIVM), Division of Public Health and Health Services, BA Bilthoven, The Netherlands

  • Gerard H. Koëter,

    Affiliation Central Committee on Research involving Human Subjects (CCMO), BH The Hague, the Netherlands

  • Hubert G. M. Leufkens,

    Affiliation Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, Utrecht University, TB Utrecht, The Netherlands

  • Lex M. Bouter

    Affiliation VU University Medical Center, Department of Epidemiology and Biostatistics, MB Amsterdam, the Netherlands


Abstract

The objective of this study was to investigate the occurrence and determinants of non-publication of clinical drug trials in the Netherlands. All clinical drug trials reviewed by the 28 Institutional Review Boards (IRBs) in the Netherlands in 2007 were followed up from approval to publication. Candidate determinants were the sponsor, phase, applicant, centers, therapeutic effect expected, type of trial, approval status of the drug(s), drug type, participant category, oncology or other disease area, prospective registration, and early termination. The main outcome was publication as a peer-reviewed article. The percentage of trials that were published, crude and adjusted odds ratios (ORs), and 95% confidence intervals (CIs) were used to quantify the associations between determinants and publication. In 2007, 622 clinical drug trials were reviewed by IRBs in the Netherlands. By the end of follow-up, 19 of these had been rejected by the IRB, another 19 never started inclusion, and 10 were still running. Of the 574 trials remaining in the analysis, 334 (58%) were published as a peer-reviewed article. The multivariable logistic regression model identified the following determinants with a robust, statistically significant association with publication: phase 2 (60% published; adjusted OR 2.6, 95% CI 1.1–5.9), phase 3 (73% published; adjusted OR 4.1, 95% CI 1.7–10.0), and trials not belonging to phase 1–4 (60% published; adjusted OR 3.2, 95% CI 1.5–6.5) compared to phase 1 trials (35% published); trials with a company or investigator as applicant (63% published) compared to trials with a Contract Research Organization (CRO) as applicant (50% published; adjusted OR 1.7; 95% CI 1.1–2.8); and multicenter trials also conducted in other EU countries (68% published; adjusted OR 2.2, 95% CI 1.1–4.4) or also outside the European Union (72% published; adjusted OR 2.0, 95% CI 1.0–4.0) compared to single-center trials (45% published).
Trials that were not prospectively registered (48% published) had a lower likelihood of publication than prospectively registered trials (75% published; adjusted OR 0.5, 95% CI 0.3–0.8), as did trials that were terminated early (33% published) compared to trials that were completed as planned (64% published; adjusted OR 0.2, 95% CI 0.1–0.3). The non-publication rate of clinical trials seems to have decreased compared to previous inception cohorts, but is still far from optimal, in particular among phase 1, single-center, not prospectively registered, and early terminated trials.


Introduction

For decades, non-publication of trial results has been a major concern in clinical research, as non-publication causes research waste [1,2] and can bias evidence-based treatment guidelines and clinical decision making [3,4,5]. Chalmers and Glasziou defined research waste as avoidable waste of investments in research due to inadequate production and reporting, with non-publication being one of its four stages [1]. In 2009, the magnitude of research waste in clinical research was estimated at 85% [1]. Moreover, non-publication is unethical because the burdens and risks imposed on study participants do not contribute to the body of knowledge.

The waste and bias that non-publication has caused in clinical research over the past years [3,6,7,8,9,10,11,12,13,14,15,16,17,18] have strengthened the view of several organizations and governments that all clinical trials must be published [19,20,21,22,23]. Previous studies focused specifically on publication of randomized controlled trials (RCTs) [24], covered only trials within one medical specialty [25], examined a limited selection of determinants, or used incomplete trial cohorts that depended on public registrations [26,27] or interview response rates [10]. The best-known determinant of non-publication is having a ‘negative’ outcome [28], but other reasons for non-publication have been proposed as well [29]. Thus, there are limited data on the occurrence of non-publication and its determinants that are both recent and complete. Investigating determinants of non-publication can identify the areas where the problem of research waste and bias is most persistent and suggest specific solutions. Therefore, the aim of our study was to investigate the occurrence and determinants of non-publication of clinical drug trials in a country-wide inception cohort of clinical drug trials.

Methods and Data Collection

The design of our study and the characteristics of the included trials have been published elsewhere [30]. In short, the inception cohort consisted of all clinical drug trials reviewed by IRBs in the Netherlands between 1 January and 31 December 2007. We used ToetsingOnline [31], the database of the competent authority of the Netherlands (the Central Committee on Research Involving Human Subjects, abbreviated in Dutch as CCMO) and the only source containing a complete record of all trials that underwent IRB-review, to identify the cohort, the determinants, and the stages of progress of the included trials. In addition, we searched the ClinicalTrials.gov and ISRCTN trial registries for the candidate determinant prospective registration, and for the availability of trial results in public registries. We originally defined prospective registration as registration before the first patient is recruited [30]. Because start-of-trial dates were missing in the database, we changed the definition of prospective registration to registration within one month of IRB-approval. In our experience, most trials start recruitment more than one month after IRB-approval, so registration within this threshold will nearly always have preceded recruitment. Sensitivity analyses were performed using two less strict thresholds of prospective registration: registration within 1 year of IRB-approval, and registration at any moment.

The search algorithm for publications used the platforms PubMed, Embase, and Google Scholar. More details are reported in the protocol [30]. We conducted the final search for publications and availability of results in January and February 2016; the follow-up since IRB-approval was therefore at least 8 and at most 9 years. Questionnaires were e-mailed to the principal investigators (PIs) of the trials, asking for the reasons for non-publication. If the PI had left the company or hospital that conducted the trial, we tried to contact the PI at his or her current affiliation, or otherwise we attempted to contact colleagues of the PI who were involved in the same trial. After identification of the right person, at most two reminders were sent. The Dutch accredited IRBs were asked for permission to send the questionnaire to the PIs. All IRBs consented and provided a signed letter of endorsement, which we attached to the questionnaire. The list of 23 Dutch accredited IRBs can be found on the website of the CCMO [32].

Candidate determinants were trial characteristics that the PI filled out on a form at the time of submission of the trial application for IRB-review. This form is mandatory and identical for all IRBs in the Netherlands. Prospective registration in the ClinicalTrials.gov or ISRCTN registries, and whether the trial was completed as planned or terminated early, were also candidate determinants.

To be consistent with the literature referred to above, and for linguistic clarity, we used publication rather than non-publication as the outcome. A publication was defined as a peer-reviewed article (i.e., the reciprocal of non-publication). Percentages of published trials were calculated for each determinant category. Logistic regression was used to calculate crude and adjusted odds ratios (ORs) and 95% CIs for the associations between determinants and publication. The final multivariable model included the determinants retained after backward stepwise elimination based on the likelihood ratio, using p>0.2 as the elimination rule. The original published study protocol prescribed Cox regression for the multivariable analysis instead of logistic regression [30]. However, the hazard ratios of the determinants were not proportional during the observation period. Moreover, the end-of-trial dates were missing for 186 trials. Therefore, the date of IRB-approval was used as the starting point of follow-up instead of the end-of-trial date prescribed by the protocol [30]. Because we were unable to control for the duration of the trials, interpretation of the hazard ratios would have been challenging, and we decided to use logistic regression instead. Kaplan–Meier analysis was used to visualize the cohort from its starting point (date of IRB-approval) until the endpoint (publication or non-publication), stratified by trial phase, a determinant that also discriminates between longer- and shorter-duration trials [33].
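As an illustration of the univariable measure used here, a crude OR with a Wald-type 95% CI can be computed directly from a 2×2 table of publication counts. The sketch below is ours (the authors analyzed the data in SPSS, not Python) and plugs in the phase 1 counts reported later in the Results (28/100 other-disease-area vs. 13/19 oncology trials published):

```python
import math

def crude_or_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI for a 2x2 table:
        a = exposed & published,    b = exposed & unpublished
        c = unexposed & published,  d = unexposed & unpublished
    """
    or_ = (a / b) / (c / d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Other-disease-area phase 1 trials (28 published, 72 not) vs.
# oncology phase 1 trials (13 published, 6 not), counts from the Results.
or_, lo, hi = crude_or_ci(28, 72, 13, 6)
print(round(or_, 1), round(lo, 1), round(hi, 1))  # -> 0.2 0.1 0.5
```

This reproduces the crude OR 0.2 (95% CI 0.1–0.5) reported for the post-hoc oncology comparison; the adjusted ORs in Table 2 additionally condition on the other retained determinants via multivariable logistic regression.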

We also stratified by oncology versus other disease areas (pre-specified in the protocol), and further stratified the oncology trials into phase 1 versus other phases (post hoc). Oncology phase 1 trials differ from phase 1 trials in other disease areas in that they are usually restricted to patients, whereas phase 1 trials in most other disease areas include healthy volunteers [34].

In a second post hoc analysis, we investigated the association between the direction of the results and publication. We categorized the direction of the conclusions as positive, negative, or descriptive. For published trials, this categorization was based on the conclusion paragraph of the publication (e.g., the investigated treatment was superior or equivalent to, and/or safer than, the comparator); for unpublished trials, it was based on the primary outcome measurement reported in the registry (positive if the primary outcome was in favor of the investigated treatment, negative if not, and descriptive if no statistical test was provided in the registry). All data analyses were performed in IBM SPSS Statistics, version 23.


Results

Of the 622 trials reviewed by the Dutch IRBs, 19 (3.0%) were rejected and, after obtaining IRB-approval, another 19 trials never started the inclusion of patients (Fig 1). Thus, before any patients were included, 6% of the trials had reached their final stage of progress. Of the 574 trials that started, 334 (58.2%) were published within the observation period of 8–9 years after IRB-approval.

Fig 1. Stages of progress of the inception cohort.

IRB = institutional review board. The end-of-trial form was missing for 186 of the 574 (32%) trials included in the analysis. Principal investigators of 73 of these trials responded to our questionnaire, completing the end-of-trial information. Of the remaining 113 trials, for 87 we found documents other than the end-of-trial form indicating that the trial had started (for example, e-mails from the IRB or amendments), or we found that the trial was published.

For 26 trials included in the analysis we had no follow-up information. The 113 trials with missing information about completion were assumed to be completed as planned.

Table 1 shows all candidate determinants and the percentage of publication for each determinant category. Nine of these candidate determinants were included in the multivariable logistic regression model (Table 2). In this model, phase 2 (adjusted OR 2.6; 95% CI 1.1–5.9), phase 3 (adjusted OR 4.1; 95% CI 1.7–10.0), and other-phase trials (adjusted OR 3.2; 95% CI 1.5–6.5) had a significantly higher likelihood of publication than phase 1 trials. Trials for which the investigator or a company was the applicant had a significantly higher likelihood of publication than trials for which a contract research organization was the applicant (adjusted OR 1.7; 95% CI 1.1–2.8). Furthermore, international multicenter trials within the EU (adjusted OR 2.2; 95% CI 1.1–4.4) or also outside the EU (adjusted OR 2.0; 95% CI 1.0–4.0) were more likely to be published than single-center trials. Invasive observational trials had a lower likelihood of publication than intervention trials (adjusted OR 0.4; 95% CI 0.2–0.9). Trials that were not prospectively registered had a lower likelihood of publication than prospectively registered trials (adjusted OR 0.5; 95% CI 0.3–0.8). Sensitivity analyses showed that the magnitude of this association increased when the threshold for prospective registration was changed to registration within one year of IRB-approval, or to registration at any moment (data not shown). Finally, trials that were terminated early had a substantially lower likelihood of publication than trials that were completed as planned (adjusted OR 0.2; 95% CI 0.1–0.3).

Table 1. Frequencies and publication percentages of candidate determinants.

Table 2. Associations between determinants and publication, expressed as crude and adjusted odds ratios (OR), and 95% confidence intervals (CI) of the crude and adjusted ORs.

Based on visual inspection of the Kaplan–Meier analysis, the curves of all phases seemed to approach their plateau after 8–9 years of follow-up since IRB-approval (Fig 2). The overall median time to publication since IRB-approval was 53 months (interquartile range (IQR) 39–65) and did not differ between trial phases.
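Curves of this kind come from the product-limit (Kaplan–Meier) estimator, which can be sketched in a few lines. The function and the toy follow-up times below are ours for illustration (not the study data or the authors' SPSS code), with publication as the event and trials still unpublished at the final search treated as censored:

```python
def kaplan_meier(times, events):
    """Product-limit estimate of S(t) = P(still unpublished at t months).

    times:  months from IRB-approval to publication or to censoring
    events: 1 = published at that time, 0 = censored (still unpublished)
    Returns the step curve as a list of (t, S(t)) pairs.
    """
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        n = sum(1 for ti in times if ti >= t)   # trials still at risk at t
        if d:
            surv *= 1 - d / n                   # multiply in this step's factor
        curve.append((t, surv))
    return curve

# Toy data: five trials, three published at 10, 20, and 30 months,
# two censored (unpublished at the final search) at 20 and 40 months.
curve = kaplan_meier([10, 20, 20, 30, 40], [1, 1, 0, 1, 0])
# A publication-rate curve like Fig 2 plots 1 - S(t) at each step.
```

Because censored trials leave the risk set without counting as events, the plateau of such a curve reflects trials that remained unpublished throughout follow-up.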

Fig 2. Kaplan Meier analysis of the publication rates of trial phases.

Overall, non-oncology trials had a lower likelihood of publication than oncology trials; however, this association was not significant in the multivariable analysis (Table 2, adjusted OR 0.7, 95% CI 0.4–1.1, S1 Fig). No significant difference was observed in the median time to publication between other disease area and oncology trials (52 months (IQR 41–69) vs. 57 months (IQR 39–63), respectively). Post hoc analysis showed that only 28 out of 100 (28%) phase 1 trials in other disease areas were published, significantly fewer than the 13 out of 19 (68%) published oncology phase 1 trials (OR 0.2, 95% CI 0.1–0.5; S2 Fig). Among the other phases, we observed no difference in publication between other disease area and oncology trials (64% vs. 66%, respectively; S3 Fig).

Substantially more published trials (113/334, 34%) had uploaded a summary of results to the ClinicalTrials.gov or ISRCTN registries than unpublished trials (23/240, 10%). Post hoc analyses showed that of the published trials, the direction of the conclusions was positive for 42%, negative for 19%, and descriptive for 39%. Of the unpublished trials that reported results in their registry, 5 (22%) reported a positive primary outcome, 2 (9%) reported a negative primary outcome, and 16 (70%) were descriptive or missing (primarily due to missing statistical information needed to infer a direction of the results).

The principal investigators of only 55 of the 240 (23%) unpublished trials responded to the questionnaire and provided their reason(s) for non-publication (S1 Table). The most frequently reported reason for non-publication among the responders was that the investigators had other priorities than writing a manuscript (18.2%). Other reasons included no statistically significant or clinically relevant results (14.5%), rejection of the manuscript by a journal (12.7%), the article not being finished yet (10.9%), and the study being underpowered due to poor inclusion of participants (10.9%).


Discussion

Of the clinical drug trials approved by the Dutch IRBs in 2007, 42% had not been published as a peer-reviewed article by January/February 2016. The publication rates approximated their plateau at the time of our final search, suggesting that only a few more publications can be expected. The observed publication rate of 58% is relatively high compared with other studies investigating older cohorts [3,6,7,8,9,11,12,13,14,16,35,36]. This suggests that the publication rate of clinical trials has somewhat improved, but is still far from ideal. In particular, the publication percentage of the phase 3 trials (mainly RCTs) in our cohort (73%) was higher than in previous cohorts investigating RCTs (overall, 54% published) [16,37]. Other recent research also supports that publication of phase 3 trials has improved [17]. The regularly mentioned figure of 50% non-publication [38] therefore probably needs to be updated with regard to phase 3 trials. Awareness-raising public campaigns [39], incorporation of publication requirements in clinical trial legislation [40], and advocacy by influential organizations [20] over the past decade may have contributed to this improvement. However, it is uncertain whether the identified publications have adequately reported all relevant aspects of the trials [41]. We are investigating this in the next phase of our cohort study [30].

The implicated research waste is considerable. Starting from the inception cohort of 622 IRB-reviewed trials, at least 140 (23%) failed to be completed as planned (Fig 1, Table 1). If we consider the published trials and the trials that are still running as not (yet) wasted, waste is implicated in 50% of the trials. This percentage should not be compared to the research waste estimate of 85% (of which 50% was due to non-publication) suggested by Chalmers and Glasziou [1], as we did not factor in research waste due to poor design, conduct, and data analysis, or selective reporting within the publications. Some waste is probably unavoidable (for example, trials are sometimes terminated early for ethical reasons). However, the need for better solutions is urgent, considering the large public and private investments involved in the unpublished trials. Furthermore, 42% non-publication implies that publication bias in clinical drug trials is likely still substantial, despite many years of attention to this topic [42].

A limitation of our study is that we did not include the direction, magnitude, and statistical significance of the trial results as determinants in our analysis. Previous studies included this determinant [10,15] by interviewing the PIs [10] or using trial reports submitted to the IRB [15]. However, this approach excludes trials for which no such data are available, potentially introducing selection bias. This would have excluded 113 of the 240 (77%) unpublished trials from our cohort. Furthermore, it is questionable how objectively investigators can judge the direction of the results of their own research [43], and definitions of ‘positive’ and ‘negative’ results are heterogeneous [28]. Despite the attached endorsement letters from the local IRBs, the response rate to our questionnaire was low. Among the responders, only 14.5% of the PIs reported that a lack of significance or relevance of the results was a reason for non-publication. Having other priorities was the most common reason, and rejection by a journal was also among the most common reasons for non-publication. Both reasons have been reported previously in the literature [16,44]. The post hoc analysis of the results of the unpublished trials that were uploaded to their registry demonstrated that these results sections are often incomplete and therefore provide little information on the influence of the direction of the results on the likelihood of publication. Furthermore, in line with other studies, this finding suggests that uploading results to trial registries should be done more often and that the quality of these uploads needs improvement [45,46].

The publication rate of phase 1 trials was substantially lower than that of the other phases, as has been shown before [8]. However, the percentage of phase 1 trials published in our cohort (35%) was substantially higher than in the previous study (17%) [8], suggesting that progress has also been made for phase 1 trials, although it is still insufficient. Publication of phase 1 trials may be considered less interesting because their direct impact on clinical practice is limited while the drug is still far from marketing approval. Yet, phase 1 trials are an important source for the clinical pharmacology of drugs. Furthermore, data from previous phase 1 trials of similar drugs are essential for determining the risk of phase 1 (first-in-man) trials upfront [47]. Increasing transparency in this field of clinical research should be high on the agenda of regulators and the pharmaceutical industry, as emphasized by the slow release of information after the recent tragic events in a phase 1 trial in France [48].

Our post hoc finding that oncology phase 1 trials are more likely to be published than phase 1 trials in other disease areas suggests that the inclusion of patients, who are typically very ill [49], may positively influence publication of phase 1 trials. Or, argued differently, oncology phase 1 trials are in fact phase 2 trials, as phase 2 trials in most other disease areas are usually the ‘first-in-patient’ trials. The publication percentage of oncology phase 1 trials in our cohort was indeed similar to that of the phase 2 trials (68% and 60%, respectively).

The lower likelihood of publication of single-center trials compared to multicenter trials has been shown in previous research [10]. In our cohort, this trend was visible, but it was only statistically significant for multicenter trials also conducted outside the Netherlands. Opportunities for increasing the incentive to publish exist at the level of the trial center. Publication metrics (including, but not limited to, the number of trials published divided by the total number of trials conducted) should be reported on the center’s website as well as on the website of the local IRB for all trials conducted in the center [50]. Transparency about local publication practices may stimulate stakeholders to require publication of all trials.

Invasive observational trials had a lower likelihood of being published than intervention trials. This association was not observed between non-invasive observational trials and intervention trials. Findings of other studies regarding this determinant are inconsistent [51], and the poor precision makes this determinant difficult to interpret.

We found that prospective registration in a trial registry was associated with publication. The idea of prospectively registering all trials was proposed many years ago [4], but in our cohort only 37% of the trials were prospectively registered. The sensitivity analyses showed that the significant association with publication remained when using the less strict definition of prospective registration as registration within 1 year of IRB-approval. Since 2007, prospective registration has become increasingly mandatory, and higher registration rates have been reported [52]. But given the changes in the requirements for prospective registration since the inception of this cohort, higher publication rates cannot be predicted from this rise in prospective registration. Furthermore, there is no evidence that registries in their current state can adequately replace journal articles as the primary source for clinical guidelines, decision making, and the design of future trials. Until the issues with registries, such as the completeness and quality of uploaded trial results, are solved, the peer-reviewed journal article remains the gold standard for reporting the results of clinical trials, and all clinical trials should be published as such.


Conclusion

Our study shows a non-publication rate of clinical trials of 42%, which seems to be an improvement over previous inception cohorts but is still far from optimal. Determinants of non-publication are early termination, lack of prospective registration, phase 1, and single-center status. Considerable research waste is implicated, and the likelihood of publication bias is high.

Supporting Information

S1 Table. Reasons for non-publication as reported by the responding principal investigators (PIs) to our questionnaire.

In total, PIs of 55 out of 240 unpublished trials responded. PIs could provide more than one reason.


S1 Fig. Publication rate of all trials stratified by oncology versus non-oncology.


S2 Fig. Publication rate of phase 1 trials stratified by oncology versus non-oncology.


S3 Fig. Publication rate of non-phase 1 trials stratified by oncology versus non-oncology.


S1 File. Anonymized dataset used for the analyses.


S3 File. Questionnaires.

Based on our initial search, we sent 4 different questionnaires, depending on whether or not we found that the trial was published, and depending on whether or not we had information on the end of trial (completed as planned or terminated early).



Acknowledgments

We thank our colleagues at the national competent authority of the Netherlands, the Central Committee on Research Involving Human Subjects (CCMO), for access to the data. We also thank the local IRBs in the Netherlands for their support and collaboration, and the Association for Innovative Medicines in the Netherlands for their help with the distribution of the questionnaire. Finally, we thank all clinical researchers who responded to our questionnaire for their time and effort and for sharing their experiences.

Author Contributions

  1. Conceptualization: CAB PCS CTMB SWJJ GHK HGML LMB.
  2. Data curation: CAB.
  3. Formal analysis: CAB PCS LMB.
  4. Investigation: CAB.
  6. Project administration: PCS SWJJ.
  7. Resources: SWJJ GHK HGML.
  8. Supervision: PCS CTMB SWJJ GHK HGML LMB.
  9. Validation: CAB CTMB GHK.
  10. Visualization: CAB.
  11. Writing – original draft: CAB.
  12. Writing – review & editing: CAB PCS CTMB SWJJ GHK HGML LMB.


  1. 1. Chalmers I, Glasziou P (2009) Avoidable waste in the production and reporting of research evidence. Lancet 374: 86–89. pmid:19525005
  2. 2. Ioannidis JP (2014) Clinical trials: what a waste. BMJ 349: g7089. pmid:25499097
  3. 3. Bardy AH (1998) Bias in reporting clinical trials. British journal of clinical pharmacology 46: 147–150. pmid:9723823
  4. 4. Simes RJ (1986) Publication bias: the case for an international registry of clinical trials. Journal of clinical oncology: official journal of the American Society of Clinical Oncology 4: 1529–1541.
  5. 5. Tam VC, Tannock IF, Massey C, Rauw J, Krzyzanowska MK (2011) Compendium of unpublished phase III trials in oncology: characteristics and impact on clinical practice. Journal of clinical oncology: official journal of the American Society of Clinical Oncology 29: 3133–3139.
  6. 6. Blumle A, Antes G, Schumacher M, Just H, von Elm E (2008) Clinical research projects at a German medical faculty: follow-up from ethical approval to publication and citation by others. Journal of medical ethics 34: e20. pmid:18757621
  7. 7. de Jong JP, Ter Riet G, Willems DL (2010) Two prognostic indicators of the publication rate of clinical studies were available during ethical review. J Clin Epidemiol 63: 1342–1350. pmid:20558034
  8. 8. Decullier E, Chan AW, Chapuis F (2009) Inadequate dissemination of phase I trials: a retrospective cohort study. PLoS medicine 6: e1000034. pmid:19226185
  9. 9. Decullier E, Lheritier V, Chapuis F (2005) Fate of biomedical research protocols and publication bias in France: retrospective cohort study. BMJ 331: 19. pmid:15967761
  10. 10. Dickersin K, Min YI, Meinert CL (1992) Factors influencing publication of research results. Follow-up of applications submitted to two institutional review boards. JAMA 267: 374–378. pmid:1727960
  11. 11. Easterbrook PJ, Matthews DR (1992) Fate of research studies. Journal of the Royal Society of Medicine 85: 71–76. pmid:1538384
  12. 12. Kasenda B, Schandelmaier S, Sun X, von Elm E, You J, et al. (2014) Subgroup analyses in randomised controlled trials: cohort study on trial protocols and journal publications. BMJ 349: g4539. pmid:25030633
  13. 13. Pich J, Carne X, Arnaiz JA, Gomez B, Trilla A, et al. (2003) Role of a research ethics committee in follow-up and publication of results. Lancet 361: 1015–1016. pmid:12660062
  14. 14. Stern JM, Simes RJ (1997) Publication bias: evidence of delayed publication in a cohort study of clinical research projects. BMJ 315: 640–645. pmid:9310565
  15. 15. Sune P, Sune JM, Montoro JB (2013) Positive outcomes influence the rate and time to publication, but not the impact factor of publications of clinical trial results. PLoS One 8: e54583. pmid:23382919
  16. 16. von Elm E, Rollin A, Blumle A, Huwiler K, Witschi M, et al. (2008) Publication and non-publication of clinical trials: longitudinal study of applications submitted to a research ethics committee. Swiss medical weekly 138: 197–203. pmid:18389392
  17. 17. Massey PR, Wang R, Prasad V, Bates SE, Fojo T (2016) Assessing the Eventual Publication of Clinical Trial Abstracts Submitted to a Large Annual Oncology Meeting. The oncologist 21: 261–268. pmid:26888691
  18. 18. Dwan K, Gamble C, Williamson PR, Kirkham JJ (2013) Systematic review of the empirical evidence of study publication bias and outcome reporting bias—an updated review. PloS one 8: e66844. pmid:23861749
  19. 19. ICMJE Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Update December 2015. Last accessed on 2 November 2016.
  20. 20. Moorthy VS, Karam G, Vannice KS, Kieny MP (2015) Rationale for WHO's new position calling for prompt reporting and public disclosure of interventional clinical trial results. PLoS medicine 12: e1001819. pmid:25874642
  21. 21. Food and Drug Administration Amendments Act of 2007, Title VIII—Clinical trial databases. Public Law 110–85. September 27, 2007.
  22. 22. Amendment of the Medical Research Involving Human Subjects Act with regard to the evaluation of the act and recovery of incomplete implementation of guideline no. 2001/20/EG. Dossier 31452. Enacted as per 1 July 2012 (full text in Dutch only).
  23. EFPIA Position—Transparency of Information on Clinical Trials included in the Proposed EU Database (Article 78 of the Proposal for a Regulation on Clinical Trials). Last accessed on 2 November 2016.
  24. Jones CW, Handler L, Crowell KE, Keil LG, Weaver MA, et al. (2013) Non-publication of large randomized clinical trials: cross sectional analysis. BMJ 347: f6104. pmid:24169943
  25. Lampert A, Hoffmann GF, Ries M (2016) Ten Years after the International Committee of Medical Journal Editors' Clinical Trial Registration Initiative, One Quarter of Phase 3 Pediatric Epilepsy Clinical Trials Still Remain Unpublished: A Cross Sectional Analysis. PLoS ONE 11: e0144973. pmid:26735955
  26. Ramsey S, Scoggins J (2008) Commentary: practicing on the tip of an information iceberg? Evidence of underpublication of registered clinical trials in oncology. The Oncologist 13: 925–929. pmid:18794216
  27. Shamliyan T, Kane RL (2012) Clinical research involving children: registration, completeness, and publication. Pediatrics 129: e1291–1300. pmid:22529271
  28. Song F, Parekh-Bhurke S, Hooper L, Loke YK, Ryder JJ, et al. (2009) Extent of publication bias in different categories of research cohorts: a meta-analysis of empirical studies. BMC Medical Research Methodology 9: 79. pmid:19941636
  29. Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR (1991) Publication bias in clinical research. Lancet 337: 867–872. pmid:1672966
  30. van den Bogert CA, Souverein PC, Brekelmans CT, Janssen SW, van Hunnik M, et al. (2015) Occurrence and determinants of selective reporting of clinical drug trials: design of an inception cohort study. BMJ Open 5: e007827. pmid:26152325
  31. ToetsingOnline. Last accessed on 5 August 2016.
  32. Last accessed on 5 August 2016.
  33. Food and Drug Administration. Code of Federal Regulations 21CFR312.21: Phases of an investigation. Title 21, Volume 5, revised as of April 1, 2015.
  34. Nurgat ZA, Craig W, Campbell NC, Bissett JD, Cassidy J, et al. (2005) Patient motivations surrounding participation in phase I and phase II clinical trials of cancer chemotherapy. British Journal of Cancer 92: 1001–1005. pmid:15770219
  35. Hole OP, Nitter-Hauge S, Cederkvist HR, Winther FO (2009) An analysis of the clinical development of drugs in Norway for the year 2000: the completion of research and publication of results. European Journal of Clinical Pharmacology 65: 315–318. pmid:19104790
  36. Mattila T, Stoyanova V, Elferink A, Gispen-de Wied C, de Boer A, et al. (2011) Insomnia medication: do published studies reflect the complete picture of efficacy and safety? European Neuropsychopharmacology 21: 500–507.
  37. Kasenda B, von Elm E, You J, Blumle A, Tomonaga Y, et al. (2014) Prevalence, characteristics, and publication of discontinued randomized trials. JAMA 311: 1045–1051. pmid:24618966
  38. Moher D, Glasziou P, Chalmers I, Nasser M, Bossuyt PM, et al. (2015) Increasing value and reducing waste in biomedical research: who's listening? Lancet.
  39. 2016 AChwanLaoA.
  40. Regulation (EU) No 536/2014 of the European Parliament and of the Council of 16 April 2014 on clinical trials on medicinal products for human use, and repealing Directive 2001/20/EC. European Commission. Official Journal of the European Union 2014;158:1–76.
  41. Chan AW, Hrobjartsson A, Haahr MT, Gotzsche PC, Altman DG (2004) Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA 291: 2457–2465. pmid:15161896
  42. Hemminki E (1980) Study of information submitted by drug companies to licensing authorities. British Medical Journal 280: 833–836. pmid:7370687
  43. Hewitt CE, Mitchell N, Torgerson DJ (2008) Listen to the data when results are not significant. BMJ 336: 23–25. pmid:18174597
  44. Stewart LA, Parmar MK (1996) Bias in the analysis and reporting of randomized controlled trials. International Journal of Technology Assessment in Health Care 12: 264–275. pmid:8707499
  45. Law ignored, patients at risk. December 13, 2015. Last accessed on 29 March 2016.
  46. Chen R, Desai NR, Ross JS, Zhang W, Chau KH, et al. (2016) Publication and reporting of clinical trial results: cross sectional analysis across academic medical centers. BMJ 352: i637. pmid:26888209
  47. van den Bogert CA, Cohen AF (2015) Need for a proactive and structured approach to risk analysis when designing phase I trials. BMJ 351: h3899. pmid:26201349
  48. Royal Statistical Society statement on publication of the study protocol BIA-102474-101 for the French "first-in-man" trial in healthy volunteers. 22 January 2016. Last accessed on 29 March 2016.
  49. Salzberg M (2012) First-in-Human Phase 1 Studies in Oncology: The New Challenge for Investigative Sites. Rambam Maimonides Medical Journal 3: e0007. pmid:23908831
  50. Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, et al. (2014) Increasing value and reducing waste in research design, conduct, and analysis. Lancet 383: 166–175. pmid:24411645
  51. Dickersin K (1997) How important is publication bias? A synthesis of available data. AIDS Education and Prevention 9: 15–21.
  52. Huser V, Cimino JJ (2013) Evaluating adherence to the International Committee of Medical Journal Editors' policy of mandatory, timely clinical trial registration. Journal of the American Medical Informatics Association 20: e169–174. pmid:23396544