
Systematic Evaluation of the Patient-Reported Outcome (PRO) Content of Clinical Trial Protocols

  • Derek Kyte,

    Affiliation Primary Care and Clinical Sciences, University of Birmingham, Birmingham, United Kingdom

  • Helen Duffy,

    Affiliation Primary Care and Clinical Sciences, University of Birmingham, Birmingham, United Kingdom

  • Benjamin Fletcher,

    Affiliation The Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, United Kingdom

  • Adrian Gheorghe,

    Affiliation Department of Global Health and Development, London School of Hygiene & Tropical Medicine, London, United Kingdom

  • Rebecca Mercieca-Bebber,

    Affiliation Quality of Life Office, Psycho-oncology Co-operative Research Group, School of Psychology, University of Sydney, Sydney, Australia

  • Madeleine King,

    Affiliation Quality of Life Office, Psycho-oncology Co-operative Research Group, School of Psychology, University of Sydney, Sydney, Australia

  • Heather Draper,

    Affiliations Medicine, Ethics, Society and History, University of Birmingham, Birmingham, United Kingdom, MRC Midland Hub for Trials Methodology Research, University of Birmingham, Birmingham, United Kingdom

  • Jonathan Ives,

    Affiliation Medicine, Ethics, Society and History, University of Birmingham, Birmingham, United Kingdom

  • Michael Brundage,

    Affiliation Queens University, Kingston, Ontario, Canada

  • Jane Blazeby,

    Affiliation Medical Research Council ConDuCT II Hub for Trials Methodology Research, School of Social & Community Medicine, University of Bristol, Bristol, United Kingdom

  • Melanie Calvert

    Affiliations Primary Care and Clinical Sciences, University of Birmingham, Birmingham, United Kingdom, MRC Midland Hub for Trials Methodology Research, University of Birmingham, Birmingham, United Kingdom



Abstract

Background

Qualitative evidence suggests patient-reported outcome (PRO) information is frequently absent from clinical trial protocols, potentially leading to inconsistent PRO data collection and risking bias. Direct evidence regarding PRO trial protocol content is lacking. The aim of this study was to systematically evaluate the PRO-specific content of UK National Institute for Health Research (NIHR) Health Technology Assessment (HTA) programme trial protocols.

Methods and Findings

We conducted an electronic search of the NIHR HTA programme database (inception to August 2013) for protocols describing a randomised controlled trial including a primary/secondary PRO. Two investigators independently reviewed the content of each protocol, using a specially constructed PRO-specific protocol checklist, alongside the ‘Standard Protocol Items: Recommendations for Interventional Trials’ (SPIRIT) checklist. Disagreements were resolved through discussion with a third investigator. 75 trial protocols were included in the analysis. Protocols included a mean of 32/51 (63%) SPIRIT recommendations (range 16–41, SD 5.62) and 11/33 (33%) PRO-specific items (range 4–18, SD 3.56). Over half (61%) of the PRO items were incomplete. Protocols containing a primary PRO included slightly more PRO checklist items (mean 14/33 (43%)). PRO protocol content was not associated with general protocol completeness; thus, protocols judged as relatively ‘complete’ using SPIRIT were still likely to have omitted a large proportion of PRO checklist items.


Conclusions

The PRO components of HTA clinical trial protocols require improvement. Information on the PRO rationale/hypothesis, data collection methods, training and management was often absent. This low compliance is unsurprising; evidence shows existing PRO guidance for protocol developers remains difficult to access and lacks consistency. Study findings suggest there are a number of PRO protocol checklist items that are not fully addressed by the current SPIRIT statement. We therefore advocate the development of consensus-based supplementary guidelines, aimed at improving the completeness and quality of PRO content in clinical trial protocols.


Introduction

The value of assessing patient-reported outcomes (PROs) in clinical trials has been emphasized by major international health-policy and regulatory authorities, and by patients [1]–[3]. PROs are increasingly selected as primary, secondary or exploratory outcomes within clinical trials, as they provide the patient's perspective on the physical, functional and psychological consequences of treatment and on the degree and impact of disease symptoms (Table 1) [4]. If captured in a scientifically rigorous way, PRO results may aid clinical decision-making [5], support labelling claims [6] and influence healthcare policy [7]. It is important, therefore, that details regarding PRO assessment are included in the trial protocol, to ensure that PRO data are collected and managed appropriately.

The trial protocol is a key document, which should provide sufficient detail to facilitate understanding of the study design and administration, and enable appraisal of the trial's scientific, methodological and ethical rigour by funders and ethics committees [8], [9]. However, important information relating to study design, implementation and dissemination is often omitted from trial protocols [10]–[12]. This has led to the development of international guidance for protocol developers and reviewers, in the form of the SPIRIT 2013 statement (Standard Protocol Items: Recommendations for Interventional Trials), which is aimed at enhancing general study design, conduct, reporting and external review [8], [9]. PRO-specific information within trial protocols has received little scrutiny to date; however, recent qualitative evidence suggests that it is sub-optimal [13]. This may lead to variations in PRO measurement across trial sites, potentially degrading data quality and biasing trial results [13]. Our objective was to systematically review randomised controlled trial (RCT) protocols including either a primary or secondary PRO outcome, evaluating the completeness of their PRO-specific content using a specially developed PRO protocol checklist. We also used the SPIRIT tool to measure how complete the protocols were in broad terms, to investigate whether levels of PRO content were associated with general protocol completeness.



Methods

The University of Birmingham ethical review board approved this study (ERN_13-0047).

Protocol Selection

We reviewed protocols submitted to the National Institute for Health Research (NIHR) Health Technology Assessment (HTA) programme, reasoning that they would provide a representative snapshot of such documentation within the domain of healthcare research. The NIHR-HTA programme is the largest such funding stream in the UK (comparable to the National Institutes of Health in the US and the Australian New Zealand Clinical Trials Registry in Australasia) and, as a public interest funder, promotes the inclusion of patient-centred outcomes in its research [14]. Two investigators (BF, HDu) independently reviewed the NIHR-HTA database (inception to August 2013) for RCTs with a primary or secondary PRO endpoint. Disagreements regarding trial eligibility were resolved through discussion with a third reviewer (DK/MC). The most up-to-date trial protocols were retrieved for review, either from the HTA database, the trial website, or via the named trial representative (contacted by email, followed by one email reminder after 2 weeks).

Data Extraction

Two investigators (DK, HDu) independently extracted the following data from each protocol using a predesigned data extraction form: year of protocol publication, the name(s) of the PRO(s) used in the trial, whether the PRO was a primary or secondary outcome, the trial setting (primary or secondary care) and the clinical specialty.

Protocol Checklists

The completeness of the PRO-specific content of trial protocols was assessed using a PRO protocol checklist (Table 2), generated from 162 recommendations identified in our systematic review of PRO-specific guidance for trial protocol writers [15]. Recommendations were grouped into major categories comprising 33 PRO-specific items for inclusion in a trial protocol. Individual recommendations were retained under each item as sub-categories (illustrated in Figure 1). MC and DK constructed the initial framework of the PRO protocol checklist, which was then reviewed, amended where necessary, and subsequently approved by an international expert external advisory group (MB, JB, RMB, MK) (see Appendix S1 for the full checklist). The completeness of general sections within each protocol was assessed using SPIRIT, as a proxy measure of the overall strength of the protocol [8], [9]. The SPIRIT resources include a checklist [8] containing 51 individual recommended protocol items spread over 33 categories, an accompanying explanatory paper [9], and a website.

Figure 1. PRO protocol checklist item ‘P8’ and associated sub-categories.

Protocol Review

Two investigators (DK, HDu) independently assessed the content of the included protocols using the PRO and SPIRIT checklists. For each trial protocol assessed, items on each checklist were either described as ‘present’ or ‘absent’. One point was assigned for each item ‘present’, giving a total score (maximum achievable, 51 for SPIRIT and 33 for the PRO checklist). In addition, for the PRO protocol checklist, the investigators also determined whether all sub-categories were satisfied for each item categorized as ‘present’. Therefore, PRO items that were marked as ‘present’, but that failed to satisfy all of the appropriate sub-categories were additionally tagged as ‘incomplete’. Levels of investigator agreement were determined for both checklists. Disagreements were resolved through discussion with a third investigator (MC) if required.
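The scoring scheme above can be sketched in a few lines of Python. This is an illustrative sketch only: the item identifiers and sub-categories below are invented, not taken from the actual SPIRIT or PRO checklists.

```python
# Hypothetical sketch of the per-protocol scoring described above.
# Item names and sub-categories are invented for illustration; the real
# checklists contain 51 SPIRIT items and 33 PRO items.

def score_protocol(checklist, protocol_content):
    """One point per checklist item judged 'present'; a present item
    missing any of its sub-categories is additionally tagged 'incomplete'.

    checklist: {item_id: [sub_category, ...]}
    protocol_content: {item_id: set_of_satisfied_sub_categories}
                      (an item missing from this dict is 'absent')
    """
    score = 0
    incomplete = []
    for item, subcats in checklist.items():
        satisfied = protocol_content.get(item)
        if satisfied is None:
            continue                   # 'absent': no point
        score += 1                     # 'present': one point
        if not set(subcats) <= satisfied:
            incomplete.append(item)    # present but missing sub-categories
    return score, incomplete

# Toy example: two invented PRO items
checklist = {"P1": ["rationale", "hypothesis"], "P2": ["timing"]}
content = {"P1": {"rationale"}}        # P2 absent; P1 present but incomplete
print(score_protocol(checklist, content))  # -> (1, ['P1'])
```

Summing such scores across the 75 protocols yields the totals reported in the Results (maximum achievable 51 for SPIRIT, 33 for the PRO checklist).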

Data Analysis

Analyses were performed using SAS V9.2 (SAS Institute, Cary, NC). Descriptive analyses were conducted on the number of PRO-specific and SPIRIT checklist items present in the included protocols. To explore factors associated with the inclusion of PRO-specific protocol items, we performed a pre-specified multiple regression analysis in which the dependent variable was the PRO-specific protocol checklist score and the independent variables were: whether the PRO was named as a primary or secondary outcome, the year of protocol publication, the trial setting, the clinical specialty and the SPIRIT checklist score. Seventy-five protocols were required to satisfy the sample size requirement for this regression analysis (15 per covariate [16]). The relationship between the PRO-specific protocol checklist score and the candidate explanatory variables was assessed using a backward stepwise selection process, with α = 0.05 as the criterion for retention in the model.
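The backward stepwise procedure can be illustrated structurally (the authors used SAS; this Python sketch is not their code). The mock p-values below reuse the figures later reported for the dropped covariates, but note that a real stepwise procedure would refit the model and update the p-values after each removal.

```python
# Structural sketch of backward stepwise elimination at alpha = 0.05.
# fit(covariates) must return {covariate: p_value} for a model refitted
# on those covariates; here it is mocked with a fixed table.

ALPHA = 0.05

def backward_eliminate(covariates, fit, alpha=ALPHA):
    remaining = list(covariates)
    while remaining:
        pvalues = fit(remaining)                        # (re)fit the model
        worst = max(remaining, key=lambda c: pvalues[c])
        if pvalues[worst] < alpha:
            break                      # every remaining covariate retained
        remaining.remove(worst)        # drop the least significant; refit
    return remaining

# Mock fit with fixed p-values (a real refit would change these each step)
mock_p = {"primary_pro": 0.0005, "setting": 0.08,
          "specialty": 0.14, "spirit_score": 0.17, "year": 0.18}
selected = backward_eliminate(mock_p, lambda cov: {c: mock_p[c] for c in cov})
print(selected)  # -> ['primary_pro']
```

With five candidate covariates, the 15-observations-per-covariate rule [16] gives the 75-protocol sample size requirement stated above.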


Results

At the time of the review (August 2013), 459 studies were listed on the HTA database, of which 284 fulfilled the inclusion criteria. As our sample size requirement was 75, we restricted our review to the 75 most recent trial protocols to provide an up-to-date picture of the PRO-specific content of such documentation. Levels of investigator agreement for both checklists were high (85.77% for SPIRIT and 86.11% for the PRO checklist) and all disagreements were resolved through discussion. Characteristics of the included protocols are presented in Table 3. A PRO was the primary outcome in 41% of trials; 38% were conducted in a primary care setting, 51% in secondary care and 11% in both. In total, 251 different PRO measures were used across the included trials (Appendix S2), the most common being the European Quality of Life five-dimension instrument (EQ-5D), the 12-item (SF-12) and 36-item (SF-36) Short-Form Health Survey questionnaires and the Hospital Anxiety and Depression Scale (HADS).
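The inter-rater agreement figures reported here are simple percent agreement: the proportion of protocol-by-item judgements on which both investigators made the same present/absent call. A minimal sketch, with invented rater data:

```python
# Percent agreement between two raters over binary present/absent
# judgements (toy data, not the study's actual ratings).

def percent_agreement(rater_a, rater_b):
    """Percentage of positions where the two raters agree."""
    assert len(rater_a) == len(rater_b), "raters must judge the same items"
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

a = [1, 1, 0, 1, 0, 0, 1, 1]   # rater A: 1 = 'present', 0 = 'absent'
b = [1, 0, 0, 1, 0, 1, 1, 1]   # rater B
print(percent_agreement(a, b))  # -> 75.0
```

Unlike Cohen's kappa, percent agreement does not correct for chance agreement, which is worth bearing in mind when interpreting the 85–86% figures.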

Table 3. Characteristics of included protocols (N = 75).

Adherence to SPIRIT and PRO Checklists

Protocols included a mean of 32/51 (63%) SPIRIT recommendations (range 16–41, SD 5.62) and 11/33 (33%) PRO-specific items (range 4–18, SD 3.56). Protocol adherence to individual SPIRIT and PRO checklist items is presented in Figures 2 and 3, summarized in Table 4, and discussed below.

Figure 2. Protocol adherence to individual SPIRIT items.

*Denominator adjusted as n = 46 blinded trials included in sample.

Figure 3. Protocol adherence to individual PRO items.

*Denominator adjusted as n = 46 blinded trials included in sample.

Table 4. Protocol adherence to individual SPIRIT and PRO checklist items (Sample, n = 75).

Administrative information


Protocols routinely included general administrative information including: the project title (97% of protocols), protocol version (99%), trial sponsor (88%) and coordinating centre/steering committee details (84%). Just under two-thirds presented information regarding trial registration (57%) or sources of funding (64%). Few (8%) made it clear who had contributed to the production of the protocol.


Five protocols (7%) included administrative information regarding the roles and responsibilities of trial personnel involved in the design and collection of PRO data.



Almost all protocols (99%) included general background information in the introduction and outlined the trial rationale or included specific trial objectives or hypotheses (97%).


Just under half of the protocols (49%) provided background details regarding the relevant existing PRO research (or lack thereof) in the area of interest, but very few (8%) included a rationale for the collection of PRO data within the trial. Over two-thirds included PRO-specific objectives (77%); however, over one-third of these (39%) were incomplete: for example, details regarding the PRO dimensions under investigation or the timeframe of interest were often missing. In addition, less than one-fifth of protocols (19%) provided a PRO-specific hypothesis.

Methods: Participants, Interventions and Outcomes


Just over two-thirds of protocols (68%) included a description of the study setting(s), whilst 100% included general eligibility criteria. Protocols routinely included information on trial recruitment methods (87%), interventions (97%), outcomes (83%) and sample size requirements (97%). Half of the protocols (50%) presented criteria for discontinuing or modifying interventions, described strategies to improve adherence to intervention protocols, and included a participant time schedule. Less than one-third (29%) discussed relevant concomitant care and interventions.


Just under half of the included protocols (45%) discussed PRO-specific eligibility considerations. None provided a description or rationale addressing which trial participants were eligible for PRO analysis. The timing of PRO assessments was routinely reported (97%), but justification for PRO timings was rarely provided (7%). PRO endpoints were described in nearly all protocols (97%); however, in more than one-third (35%) the information provided was incomplete: for example, the primary time-point for analysis, or an outline of the constructs used to evaluate the intervention (e.g. overall quality of life, or a specific domain/symptom), was frequently absent. Similarly, whilst PRO sample size requirements were provided in approximately half of the included protocols (51%), 20% of these failed to justify the assumptions underpinning the PRO analyses outlined.

Methods: Assignment of Interventions (for controlled trials)


All of the included trials were controlled and 61% employed some form of blinding. Most protocols detailed methods of allocation sequence generation and concealment (87% and 81% respectively), but few outlined who would assign participants to interventions (35%). Almost all protocols (96%) identified who would be blinded to the trial interventions, but less than one-third (28%) discussed the circumstances under which un-blinding was permissible.

Methods: Data Collection


Most protocols (96%) provided general plans for the assessment and collection of trial outcomes, and four-fifths (80%) described proposed strategies for the promotion of participant retention.


PRO measures (PROMs) were always named (100%), but details regarding the measures were frequently missing, for example, the number of items/domains, methods for instrument scaling/scoring and estimated average completion time. The choice of PROM was rarely justified, whether in relation to the study hypothesis (justified in 41% of protocols), measurement properties (justified in 37%), or in relation to participant acceptability/burden (justified in 15%). Where some justification (of any type) was present (n = 33 protocols, 44%), it was commonly incomplete, for example, often information was not provided regarding the evidence-base (or lack of) for all measurement properties for a given tool, or for all tools used within a trial, and references were regularly absent. Brief information surrounding the plans for PRO data collection was included in 84% of protocols, but again elements were often absent, for example, there was a lack of information on who should administer the PROM and the level of assistance allowed during assessment, whether proxy assessment was permissible and where PRO assessment would take place. Just under half of the protocols (47%) detailed plans to minimize levels of avoidable missing PRO data. Finally, only 8% of protocols provided information surrounding PRO data collection guidelines and/or training for trial personnel.

Methods: Management and Analysis


Data management issues were discussed in 87% of protocols. Statistical methods for analysing (non-PRO) primary and secondary outcomes were routinely included in almost all (99%) protocols and over two-thirds discussed methods of additional analysis (71%) (e.g. subgroup analysis) and the handling of protocol non-adherence (72%).


PRO-specific quality assurance issues were discussed in 60% of protocols. A PRO statistical analysis plan was provided in 96% of protocols; however, very few (1%) provided plans to address the multiplicity of PRO data or were explicit about PRO clinical significance levels, and less than half (45%) detailed statistical methods to deal with missing PRO data.



Information regarding the Data Monitoring Committee, interim analysis and stopping guidelines, and trial auditing arrangements was included in 85%, 67% and 55% of protocols respectively. Plans for monitoring and managing adverse events/harms were included in 85% of protocols.


PRO-specific data monitoring issues were discussed in 1% of protocols. Plans for the identification and management of ‘PRO Alerts’ (instances where trial personnel encounter ‘concerning’ individual participant PRO data that may require a prompt response [17]) were included in 11% of protocols.

Ethics and Dissemination


Inclusion of ethics approval information (88%), informed consent/assent procedures (89%) and a dissemination policy (75%) was common. Just under two-thirds of protocols discussed confidentiality (63%) and ancillary and post-trial care (63%). There was, however, little consideration of authorship eligibility (36%), access to trial data (3%) or declaration of interests (0%).


A third of protocols discussed PRO-specific dissemination (33%), but few (1%) tackled PRO consent or confidentiality issues.



Fifty-one (68%) of the included protocols included patient information and consent materials in an appendix.


PRO-specific information was included in 59% of patient information sheets. An exact version of the PROM(s) employed by the study was included in 11% of appendices; none included a PRO assessment checklist/flowchart.

Determinants of Differences in PRO-specific Protocol Content

Table 5 summarizes the findings from our exploratory multiple regression analysis, which investigated predictors of differences in the PRO-specific checklist score between protocols. In the final model, only the nature of the PRO endpoint (primary versus secondary) was significant (P<.001), suggesting that protocols describing trials with a primary PRO include on average 5.00 (95% CI 3.79 to 6.21) additional recommended PRO-specific items compared to those employing a secondary PRO endpoint. There were no significant associations between the PRO checklist score and the year of protocol publication (P = .18), the trial setting (P = .08), the clinical specialty (P = .14) or the SPIRIT checklist score (P = .17). The full (first) model is presented in Appendix S3.
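The reported interval is consistent with a normal-approximation 95% confidence interval (estimate ± 1.96 × SE). The standard error below is back-calculated from the published interval as a sanity check; it is an inference for illustration, not a figure taken from the paper.

```python
# Normal-approximation 95% CI: estimate +/- 1.96 * SE.
# SE is back-calculated from the reported interval (3.79 to 6.21);
# this is an inference for illustration, not a value from the paper.

beta = 5.00                        # reported coefficient (primary PRO)
half_width = (6.21 - 3.79) / 2     # = 1.21
se = half_width / 1.96             # ~0.617 (back-calculated)
ci = (round(beta - 1.96 * se, 2), round(beta + 1.96 * se, 2))
print(ci)  # -> (3.79, 6.21), matching the reported interval
```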

Table 5. Regression model investigating predictors of PRO-specific checklist score.a


Discussion

Summary of Findings

To our knowledge, this is the first study to evaluate the PRO-specific content of trial protocols. We found that routine inclusion of PRO information was poor (33%) and that over half (61%) of included PRO items were incomplete. Trials with a primary PRO endpoint tended to routinely include slightly more PRO information in their protocols (mean 43%). PRO protocol content was not associated with general protocol completeness; thus, protocols judged as relatively ‘complete’ using SPIRIT were still likely to have omitted a large proportion of PRO checklist items.

Our findings are concordant with the prevailing empirical evidence that important general methodological details are often missing from protocols [10]–[12], [18], [19]. On average, the reviewed protocols failed to include over one-third (37%) of the recommended protocol items outlined in SPIRIT [8] and over two-thirds (67%) of PRO checklist items. Our results also concur with qualitative data drawn from UK-based trial personnel, suggesting a widespread lack of PRO-specific information in clinical trial protocols and training [13].

Omission of recommended PRO content in trial protocols could lead to inconsistent assessment of important patient-centred outcomes [13], risking biased and unreliable trial results, and lessening the impact of PROs on routine clinical care. This practice may mislead clinical or health policy decision-making, reduce the value of patient participation in trials and waste limited healthcare and research resources: this is unethical [20].

The particularly low PRO checklist compliance we observed is unsurprising, as evidence suggests existing PRO guidance for protocol writers is difficult to access and lacks consistency [15]. Until this guidance improves, it may be difficult for researchers to incorporate PRO information effectively into their protocols. Unfortunately, our findings also suggest that PRO-specific protocol items are either not addressed by the current SPIRIT checklist (for example, the management of ‘PRO Alerts’ [17]), or are addressed only partially, such that fuller explanation is warranted to provide meaningful guidance to protocol developers who may not be familiar with PRO methodology (for example, approaches to minimise avoidable missing PRO data). The scope and number of additional PRO items, and the current lack of coherence in the guidance literature, justify the need for supplementary PRO-specific guidelines. The PRO protocol checklist developed for this study could be incorporated into such guidelines.

It is important to note, however, that in designing the PRO checklist we deliberately sought to retain all PRO protocol guidance extracted in our review [15], without making a judgment on which items might be essential and which optional, or on whether the essential versus optional items might differ depending on whether a PRO was a primary or secondary outcome. The checklist therefore provides the research community with a comprehensive starting point rather than a definitive tool; it does not amount to an international consensus, but rather represents an approximation of one for illustrative purposes.
The next step would be for the PRO protocol checklist to be subjected to a formal international consensus process, to ensure that it provides appropriate and consistent guidance to protocol developers and focuses only on those PRO-specific protocol items deemed most important by the scientific community and other relevant stakeholders, including patients. Following this process, the checklist may prove a valuable addition to formal PRO protocol guidelines aimed at improving the completeness and quality of PRO content in clinical trial protocols.

Strengths and Weaknesses

The major strength of this study is its use of systematic methods and multiple reviewers at all stages. The SPIRIT 2013 statement was developed with comprehensive stakeholder involvement using rigorous and systematic methodology [21]. The PRO-specific checklist used in this study was developed by experts in the field, is supported by a systematic review of existing guidance [15] and demonstrated high levels of inter-rater agreement; however, it has yet to undergo a formal consensus process or validation. Both the PRO and SPIRIT checklists are very recent and would not have been available to the developers of many of the included protocols; validation of our findings in a contemporary sample of protocols is therefore required. Our protocol sample is relatively small, and all of the included protocols describe UK-led trials within a single funding stream, restricting generalizability. Nevertheless, the sample includes studies focusing on a range of clinical specialties, conducted in a variety of healthcare settings and employing a broad spectrum of PROs, thus enhancing external validity. Finally, it is possible that trial protocols from other funding bodies are more advanced, in PRO terms, than those included in our review; although this is unlikely given the stature and nature of the HTA programme, further work would be needed to test this hypothesis.


Conclusions

The PRO components of HTA clinical trial protocols require improvement. Detailed instructions on the PRO rationale/hypothesis, data collection methods, training and management were often absent from protocols, even where the PRO was the primary outcome. This low compliance is unsurprising as existing PRO guidance for protocol writers lacks consistency and is difficult to access, whilst PRO-specific protocol items are not fully addressed by the current SPIRIT statement. There is a need for consensus-based supplementary guidelines outlining recommended standard PRO content for inclusion within trial protocols.

Supporting Information

Appendix S1.

Full PRO-specific protocol checklist.

Appendix S2.

Full list of PROMs used across included protocols.

Appendix S3.

Full (first) regression model.



Acknowledgments

Thanks to Andrea Roalfe for her statistical advice during the analysis phase.

Author Contributions

Conceived and designed the experiments: DK H. Duffy BF AG RMB MK H. Draper JI MB JB MC. Analyzed the data: DK H. Duffy MC. Wrote the paper: DK H. Duffy BF AG RMB MK H. Draper JI MB JB MC. Obtained funding: MC DK AG H. Draper JI MB JB RMB MK. Conducted the database search: H. Duffy BF.


References

1. Ouwens Ml, Hermens R, Hulscher M, Vonk-Okhuijsen S, Tjan-Heijnen V, et al. (2010) Development of indicators for patient-centred cancer care. Support Care Cancer 18: 121–130.
2. Ahmed S, Berzon RA, Revicki DA, Lenderking WR, Moinpour CM, et al. (2012) The Use of Patient-reported Outcomes (PRO) Within Comparative Effectiveness Research: Implications for Clinical Practice and Health Care Policy. Medical Care 50: 1060–1070.
3. Department of Health (2010) Equity and excellence: Liberating the NHS.
4. Calvert MJ, Freemantle N (2003) Use of health-related quality of life in prescribing research. Part 1: why evaluate health-related quality of life? Journal of Clinical Pharmacy and Therapeutics 28: 513–521.
5. Higginson IJ, Carr AJ (2001) Measuring quality of life - Using quality of life measures in the clinical setting. BMJ 322: 1297–1300.
6. FDA (2009) Guidance for Industry: Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM193282.pdf.
7. NICE (2002) Guidance on the use of trastuzumab for the treatment of advanced breast cancer.
8. Chan A, Tetzlaff J, Altman DG, Laupacis A, Gøtzsche PC, et al. (2013) SPIRIT 2013 Statement: Defining Standard Protocol Items for Clinical Trials. Annals of Internal Medicine.
9. Chan A, Tetzlaff JM, Gøtzsche PC, Altman DG, Mann H, et al. (2013) SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials. BMJ 346: 1–42.
10. Chan A-W, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG (2004) Empirical Evidence for Selective Reporting of Outcomes in Randomized Trials: Comparison of Protocols to Published Articles. JAMA 291: 2457–2465.
11. Hróbjartsson A, Pildal J, Chan A-W, Haahr MT, Altman DG, et al. (2009) Reporting on blinding in trial protocols and corresponding publications was often inadequate but rarely contradictory. Journal of Clinical Epidemiology 62: 967–973.
12. Pildal J, Chan A-W, Hróbjartsson A, Forfang E, Altman DG, et al. (2005) Comparison of descriptions of allocation concealment in trial protocols and the published reports: cohort study. BMJ doi:10.1136/bmj.38414.422650.8F (published 7 April 2005).
13. Kyte DG, Ives J, Draper H, Keely T, Calvert M (2013) Inconsistencies in Quality of Life Data Collection in Clinical Trials: A Potential Source of Bias? Interviews with Research Nurses and Trialists. PLoS ONE 8: e76625.
14. Raftery J, Powell J (2013) Health Technology Assessment in the UK. Lancet 382: 1278–1285.
15. Calvert M, Kyte D, Duffy H, Gheorghe A, Mercieca-Bebber R, et al. (in press) Patient-Reported Outcome (PRO) Assessment in Clinical Trials: A Systematic Review of Guidance for Trial Protocol Writers. PLoS ONE.
16. Babyak MA (2004) What you see may not be what you get: a brief, nontechnical introduction to overfitting in regression-type models. Psychosomatic Medicine 66: 411–421.
17. Kyte DG, Draper H, Calvert M (2013) Patient-Reported Outcome Alerts: Ethical and Logistical Considerations in Clinical Trials. JAMA 310: 1229–1230.
18. Chan A-W, Hróbjartsson A, Jørgensen KJ, Gøtzsche PC, Altman DG (2008) Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols. BMJ 337: a2299.
19. Gøtzsche PC, Hróbjartsson A, Johansen HK, Haahr MT, Altman DG, et al. (2007) Ghost authorship in industry-initiated randomised trials. PLoS Med 4: e19.
20. Chalmers I, Glasziou P (2009) Avoidable waste in the production and reporting of research evidence. Lancet 374: 86–89.
21. Tetzlaff JM, Moher D, Chan AW (2012) Developing a guideline for clinical trial protocol content: Delphi consensus survey. Trials 13: 176.