Health Canada’s Summary Basis of Decision (SBD) documents outline the clinical trial information that was considered in approving a new drug. We examined the ability of SBDs to inform clinician decision-making. We asked if SBDs answered three questions that clinicians might have prior to prescribing a new drug: 1) Do the characteristics of patients enrolled in trials match those of patients in their practice? 2) What are the details concerning the drug’s risks and benefits? 3) What are the basic characteristics of trials?
Fourteen items of clinical trial information were identified from all SBDs published on or before April 2012. Each item received a score of 2 (present), 1 (unclear) or 0 (absent). The unit of analysis was the individual SBD, and an overall SBD score was derived as the sum of points across all items. Scores were expressed as a percentage of the maximum possible points, and then classified into five descriptive categories based on that score. Additionally, three overall ‘component’ scores were tallied for each SBD: “patient characteristics”, “benefit/risk information” and “basic trial characteristics”.
A total of 161 documents, spanning 456 trials, were analyzed. The majority (126/161) were rated as having information sometimes present (score of >33 to 66%). No SBD had either no information on any item or 100% of the information. Items in the patient characteristics component scored poorest (mean component score of 40.4%), while items corresponding to basic trial information were most frequently provided (mean component score of 71%).
Because of the significant omissions of clinical trial information in SBDs, these documents provide little to aid clinicians in their decision-making. Clinicians’ preferred source of information is scientific knowledge, but in Canada, access to such information is limited. Consequently, we believe that clinicians are being denied crucial tools for decision-making.
Citation: Habibi R, Lexchin J (2014) Quality and Quantity of Information in Summary Basis of Decision Documents Issued by Health Canada. PLoS ONE 9(3): e92038. https://doi.org/10.1371/journal.pone.0092038
Editor: Sarah Pett, University of New South Wales, Australia
Received: August 22, 2013; Accepted: February 19, 2014; Published: March 20, 2014
Copyright: © 2014 Habibi, Lexchin. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: Funding for Roojin Habibi was provided by the Pharmaceutical Policy Research Collaboration through an Emerging Team Grant from the Canadian Institutes of Health Research. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: In 2008 Joel Lexchin was an expert witness for the Canadian federal government in its defence against a lawsuit challenging the ban on direct-to-consumer advertising. In 2010 he was an expert witness for a law firm representing the family of a plaintiff who allegedly died from an adverse reaction from a product made by Allergan. He is currently on the Management Board of Healthy Skepticism Inc. and is the Chair of the Health Action International – Europe Association Board. Joel Lexchin is a PLOS ONE Editorial Board member. This does not alter the authors’ adherence to all the PLOS ONE policies on sharing data and materials.
When pharmaceutical companies want to market a new drug, or obtain a new indication for an existing drug in Canada, they are required to submit the entire clinical trial portfolio for that drug to Health Canada. Pivotal trials are regarded as being most important in showing that the product is efficacious and safe in the context in which it will be used. However, Health Canada considers all company-submitted clinical trial data as confidential and will not release it to outside parties, even through an Access to Information request, unless the company submitting the data agrees to its release.  This creates a situation where Health Canada possesses information that may be critical to the proper use of a medication but cannot share that information with the people most in need of it – clinicians and patients.
The transparency of Health Canada’s drug review process has been criticized on separate occasions by its Science Advisory Board, by the House of Commons Standing Committee on Health, and by the Auditor General of Canada. The Canadian agency is not alone in this regard: both the European Medicines Agency (EMA) and the United States (US) Food and Drug Administration (FDA) have been criticized in the past for similar shortcomings in transparency. These agencies, however, have responded with substantial efforts to improve: the FDA already publishes analyses and other materials related to the evaluation of an approved drug on its website, and the EMA’s recently drafted policy on clinical trial transparency (Policy 70) indicates that the agency is moving towards full disclosure of all submitted clinical trial data by 2014.
In 2004, Health Canada announced the Summary Basis of Decision (SBD) project. Phase I of this initiative began on January 1, 2005. The SBD is a document issued after a new drug or medical device is approved and explains the scientific and benefit/risk information that was considered prior to approving the product. Technical writers draft the documents based on the reviewers’ reports, and revise them based on input and comments from the review team and the sponsoring company. A general template drafted by Health Canada outlines the information to be included in the four major sections of the SBD (Figure 1): 1) Product and Submission Information, 2) Notice of Decision, 3) Scientific and Regulatory Basis for Decision, and 4) Submission Milestones. Of particular interest to healthcare professionals is the third section, which contains a description of the premarket clinical trials examined by Health Canada, and a summary of the final benefit/risk assessment for the product. Health Canada’s position is that, as a result of this initiative, “Canadian healthcare professionals and patients will have more information at their disposal to support informed treatment choices.” SBDs released until the end of August 2012 constituted the first phase of the project.
An examination of the strengths and weaknesses of regulatory transparency in one jurisdiction is valuable not only to those in that jurisdiction but also to regulators, clinicians and consumers in other countries. Learning from mistakes and successes can be of significant benefit in understanding how to expand access to information necessary for clinical decision-making.
The only published analysis of the contents of the SBDs looked at three pilot documents that were released prior to the launch of Phase I, and found that, compared to the FDA approval package for these drugs, the information contained in the SBD could not alert the reader to the potential problems that would later emerge in Health Canada’s warning letters.  In this study, we adopt the perspective of clinicians in analyzing the practical utility of SBDs. Specifically, we ask whether SBDs clearly report information that clinicians would want to know when prescribing a new drug: are the patients in the trials described in enough detail that clinicians would know if they resemble their own patients, and is there enough detail about the trials that they can gain a sufficient understanding of the risks and benefits of the drug? Secondarily, we investigate whether basic clinical trial characteristics are documented.
SBDs are available on-line at <http://www.hc-sc.gc.ca/dhp-mps/prodpharma/sbd-smd/drug-med/index-eng.php>. All documents produced between January 1, 2005 and April 30, 2012 were coded. SBDs that did not describe any trials were excluded. Reliability of coding was ensured through duplicate independent coding of a subset of SBDs. RH abstracted information from the first 5 SBDs, taken in alphabetical order by brand name. JL subsequently did the same, and the two sets of data were compared, with differences resolved by consensus. JL then did duplicate data abstraction on every 10th document. Consensus was reached on all information extracted. We looked specifically at the Notice of Decision, Clinical Efficacy, Clinical Safety, and Benefit/Risk Assessment and Recommendation sections of each document, as these were the areas where clinical trial information was found. We searched for the following general items for each product: brand and generic name, name of company marketing the product, date of notice of compliance (NOC) and indication(s) for use. Although our aim was to extract information from the pivotal trials described in the SBD, it was not always possible to identify which trials were pivotal due to unclear or ambiguous wording. As such, we extracted information from all trials described in the SBD, unless it was specifically stated that a trial was “supportive”, in which case it was excluded.
The following 14 items of clinical trial information were recorded: whether the trial was identified as “pivotal”, the number of pivotal trials per SBD, trial identifiers, trial inclusion requirements, whether the trial was single or multisite, whether the trial was conducted in an inpatient or outpatient setting, the use of placebo or active control, number of patients in each arm, sex distribution in each arm, length of trial, age of patients, results, statistical significance of results, number of withdrawals from each arm of the trial and statistically significant difference, if any, between withdrawal rates.
Each item in a trial was given a score of 2 if it was described completely based on an a priori set of definitions (Table 1). If there was some information about the item, but it was incompletely or inaccurately described, it was scored as 1 and if there was no information it was scored as 0. Some trials were observational studies and for these, certain items were expected to be absent (e.g., statistical significance of results). In these cases, items were scored as “not applicable”. For readability, we hereafter refer to both “trials” and “studies” collectively as trials.
The unit of analysis was the SBD, as this represents the totality of information for each medication. We totaled the number of points that each SBD received and expressed it as a percentage of the total number of points attainable. For example, if there were 4 individual trials and 12 scoreable items per trial, the maximum number of points would be 96 (4 trials × 12 scoreable items × 2 points per item). If the SBD received 72 of these points, its score was 75%. We grouped SBDs into categories based on the quantity of information present: information always absent (SBD score of 0%), information usually absent (1 to 33%), information sometimes present (>33 to 66%), information usually present (>66 to 99%) and information always present (100%).
Besides assigning a total score to each SBD, we also identified three component scores, based on the three questions we posed earlier. Items were assigned to these different components based on face validity. For clinicians to be able to know if trial participants resembled their own patients we totaled scores for: age, sex, inpatient or outpatient setting, and inclusion criteria. For clinicians to gain a sufficient understanding of the risks and benefits of the drug we totaled scores for: study length, results, statistical significance of results, placebo or active control, study arm withdrawal rate and statistically significant difference, if any, between withdrawal rates. Finally, basic characteristics were: trial identified as pivotal, number of trials, number of patients per trial arm, single or multisite, and unique trial identifier. SBDs were grouped into the same categories described above for these components as well.
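The scoring and categorization scheme described above can be sketched in a few lines of code. This is a minimal illustration only: the item names and scores below are hypothetical, not data from any actual SBD.

```python
# Per-item scores for the trials in one hypothetical SBD:
# 2 = completely described, 1 = unclear/incomplete, 0 = absent,
# None = not applicable (e.g., statistical significance in an observational study).
trials = [
    {"age": 2, "sex_per_arm": 0, "results": 1, "comparator": 2},
    {"age": 0, "sex_per_arm": 0, "results": 2, "comparator": 2, "stat_sig": None},
]

def sbd_score(trials):
    """Total points over all scoreable (non-None) items, expressed as a
    percentage of the maximum attainable (2 points per scoreable item)."""
    points = sum(v for t in trials for v in t.values() if v is not None)
    scoreable = sum(1 for t in trials for v in t.values() if v is not None)
    return 100 * points / (2 * scoreable)

def category(score):
    """Map a percentage score to the five descriptive categories."""
    if score == 0:
        return "information always absent"
    if score <= 33:
        return "information usually absent"
    if score <= 66:
        return "information sometimes present"
    if score < 100:
        return "information usually present"
    return "information always present"

score = sbd_score(trials)  # (2+0+1+2) + (0+0+2+2) = 9 of 16 points -> 56.25
print(f"{score:.2f}% -> {category(score)}")  # 56.25% -> information sometimes present
```

The component scores work the same way, except the sums run over only the items assigned to that component (e.g., age, sex, setting and inclusion criteria for “patient characteristics”).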
Excluding four SBDs that did not describe any clinical trials, we analyzed the full population of SBDs available: 161 SBDs covering 456 trials, with a range of 1 to 26 and a mean of 3 trials per SBD (see Table S1 for a full list of the drugs). Overall analysis of the points earned by each SBD is shown in Table 2. Information was neither always absent nor always present in any SBD. The majority of SBDs (126/161) were rated as having information sometimes present, while 30 had information usually present and 5 information usually absent. Examples of SBDs that scored especially poorly as a percentage of the maximum possible score included tositumomab (Bexxar Therapy) (14.3%), ciclesonide (Alvesco) (24.2%), sulesomab (Leukoscan) (27.4%), and abatacept (Orencia) (31.2%). These SBDs failed to describe basic items such as whether trials were controlled or observational, or risk/benefit items such as the outcomes of primary efficacy endpoints. Some SBDs also mentioned trials in passing, without further elaborating upon their characteristics and results. The SBD for the pneumococcal conjugate vaccine (Synflorix), for example, indicated that 11 studies were used to render a decision, but went on to describe only 2. Conversely, the SBDs with the highest percentage of the maximum possible score were alemtuzumab (Mabcampath) (83.3%), denosumab (Prolia) (78.6%), cabazitaxel (Jevtana) (78.6%) and ceftobiprole (Zeftera) (78.6%). The highest scoring SBDs described only 1 or 2 trials each, whereas the lowest scoring SBDs described between 3 and 21 trials each (the SBD for tositumomab never disclosed the number of trials reviewed).
Of the three components examined in this study, patient characteristics were the least adequately described. For two items in this component, information was completely absent for the large majority of SBDs: sex per arm (129), and age (108). Eligibility criteria earned the most points in this component, with 114 SBDs containing 100% of the information. The overall patient characteristics score, based on the mean scores on all four related items, was 40.1%.
The second component, drug risks and benefits, fared slightly better (mean component score 53.2%). The comparator(s) used in trials were fully disclosed by 135 SBDs, and the trial length by 90. The discussion of trial results was less complete, however, with 110 SBDs having the information only sometimes present. Of the 154 applicable SBDs, the majority (85) had the statistical significance of results always absent to sometimes present. Even less information was provided on the withdrawal rate per arm, with 157 SBDs having the information absent to sometimes present. In 153 of the 154 applicable SBDs, no information was provided on whether withdrawal rates differed significantly between patients in the treatment and control arms.
Basic trial characteristics were most often described in their entirety (mean component score of 71.0%): 122 SBDs clearly described whether the trials were pivotal or supplementary, 105 gave the study name or identifier, 99 the study site and 53 the number of patients per arm.
While the unit of analysis in this study was the entire SBD, an analysis of individual trials provided additional insights (Table 3): the items age, sex per arm, withdrawal rate and statistical significance of differences between withdrawal rates all had a median score of 0 (interquartile range 0, 1). The type of comparator was the only item that attained a median score of 2 (interquartile range 2, 2). Other items that attained a median score of 2 albeit with more dispersed interquartile ranges included study ID, study site, pivotal status, length of study, and eligibility criteria.
The stated aims of the SBD initiative are to improve transparency in the drug review process, and to provide physicians and the public with access to unbiased information regarding authorized products. While the initiative is a laudable departure from a complete lack of data, our findings point to significant room for improvement: overall, clinical trial information in SBDs is presented haphazardly, with no consistent structure. The majority of SBDs (126 of 161) obtained overall scores of less than or equal to 66% of the total number of available points, meaning that at least one-third of the potential information about patient trial characteristics and the benefits and risks of tested treatments is missing. While basic details of clinical trials were more frequently described, any omissions or ambiguities in this component were especially troubling given the straightforward nature of the information that needed to be conveyed. In its Phase I form, the SBD offered only a very modest quantity and quality of information to aid in clinical decision-making.
Physicians have good reason to regard newly approved drugs with caution: between 1990 and 2009, 4.2% of the drugs approved by Health Canada were subsequently withdrawn due to safety issues.  Overall, new drugs have almost a 25% chance of acquiring a serious safety warning or being removed from the market and for drugs approved through the priority review process (180 days compared to 300 days for the standard process) that number climbs to 34%.  Efforts in the US to meet deadlines for reviews have also been associated with an increased likelihood of drug withdrawals for safety reasons .
Safety issues discovered through post-market surveillance can help clarify the benefit-to-harm ratio of drugs but knowledge of these problems is not available early on in the lifecycle. Additionally, access to more complete information from the premarket trials would enable clinicians to better contextualize post-market safety signals by asking, for example, whether safety problems were entirely new discoveries or further evidence of a signal that was evident in the premarket trials.
Physicians prefer to use scientific knowledge in making prescribing decisions, but when drugs first appear on the market there is little peer-reviewed published information. Reliance on the eventual publication of the clinical trials is not enough: in the US, almost one-quarter of the pivotal trials for FDA-approved drugs remained unpublished more than 5 years after approval. Moreover, the preferential publication of studies with positive results (publication bias) and the reporting of outcomes with the most impressive findings (outcome reporting bias) can significantly alter the apparent efficacy of drugs, misleading clinicians.
Additionally, there are often marked discrepancies between the results of trials submitted as clinical study reports (CSRs) to regulatory agencies, and the results that appear in publications. While this observation may be partly due to the restrictions imposed by journals, the bias has been consistently in favor of the company funding the research: Vedula et al., for instance, found that publications did not accurately reflect the efficacy results reported in internal company documents, and Wieseler and colleagues showed that CSRs provided considerably more information on harms than did publicly available sources, including journal publications and registry reports.
It is also important to consider that new drugs are typically first approved on the basis of clinical trials with stringent eligibility criteria and relatively homogeneous patient populations. The extrapolation of trial results to the diversity of patients in a physician’s practice is a detail-driven process, and without in-depth knowledge of the characteristics of trial participants, extrapolation becomes much more difficult. Our analysis indicates that Phase I SBDs cannot provide the depth of information that physicians need to translate the results of clinical trials into treatment for the patients that they see in their offices.
Although basic trial information was most frequently described, nearly one quarter (24.2%) of the SBDs failed to indicate whether one or more of the trials that were described were pivotal. It is difficult to understand why this information should be so frequently absent in a decision summary document, since pivotal trials are key to making decisions about approval. Another item in this category, the study identifier, was even more rarely found (54 SBDs had no identification for any of their associated trials). Study identifiers are useful for determining whether clinical trials have been published, and for checking whether the trial has been registered on clinical trial registries such as ClinicalTrials.gov. Physicians’ access to this type of information would enhance their appraisal of the quality of evidence available for a newly approved drug.
Health Canada’s own evaluation of Phase I SBDs, based on the 93 SBDs published up to September 2008, complemented our analysis by revealing that a little over half of respondents to a workbook published on the SBD website found SBDs “useful in helping them make informed treatment choices (for themselves or their patients)”. However, the report did not disclose either the percentage or the absolute number of respondents who were clinicians. The evaluation also acknowledged the varying quality of information in the documents, but attributed it to causes such as the quality of the review report, the skill of the technical writer and the nature of the drug. In Phase II, SBDs have been restyled into a web-based, question-and-answer format with inter-document links to improve navigation. Asked whether the content of the SBDs will be the same as in Phase I, Health Canada responded that the new SBDs would have more information on risk/benefit analyses. It is unclear whether this means that there will also be more in-depth information about the results and characteristics of the clinical trials submitted by the companies.
The SBD initiative drew inspiration from the European Public Assessment Report (EPAR) started in 1995 by the European Medicines Evaluation Agency (now EMA). But the quality of information contained in EPARs has also been criticized in the past. In their analysis of the quality of information in EPARs for psychiatric drugs, Barbui and colleagues described an erratic reporting style and revealed that under 50% of the 70 trials described in the EPARs disclosed information about the number of patients allocated to each arm, the number withdrawn in each arm, or the number included in the analysis of the primary outcome (with effect size and precision). These findings are consistent with our analysis of the SBDs, and as Barbui et al. point out, such irregular and unreliable styles of reporting render it impossible to use these documents for analyses of treatment effect. We agree with the authors that a minimum first step towards improving information quality in these documents would be the adoption of a table to systematically organize trial information and results. Tabular presentations, however, would not obviate the need for more commitment on the part of Health Canada, as at the EMA, to disclose clinical trial information pertaining to newly approved drugs. In this regard, disclosure of the complete CSR would provide access to additional important data.
Calls for data transparency are steadily accruing from stakeholder groups across the globe, and there is now a realization, even amongst some pharmaceutical companies, that the era of data secrecy may be nearing its end. Viewed in this international context, and sandwiched between the considerably more transparent policies of the FDA and the EMA, Health Canada’s initiative appears conspicuously opaque.
There are several limitations to this paper. First and foremost, our assumption about the type and quantity of information that physicians need in order to make informed clinical decisions has only face validity. There are many factors that influence how doctors use new drugs, but doctors seem to seek information about the products’ safety and effectiveness from all sources. Therefore, we feel that our focus on these areas in the SBDs is justified. Second, we accorded every item in this study equal weight, though clinicians might not necessarily value all facets of clinical trial information equally (e.g., the statistical significance of a result might be more important to a physician than the numerical result itself). It is quite likely that doctors stratify the amount of information they desire based on the perceived risk of drugs and that consultants and general practitioners behave differently. It should also be noted that other information missing from the SBD could have significant clinical importance. It was only by examining the full CSRs that the authors of a recent Cochrane review of neuraminidase inhibitors were able to determine that the increased incidence of gastrointestinal side effects in the group taking the placebo may have been due to ingredients in the placebo. There is an increasing recognition of the value of regulatory documents such as CSRs in evidence synthesis and review studies. SBDs also have the potential to play a greater role in bolstering the findings of systematic reviews and evidence synthesis documents specific to the Canadian clinical context. Finally, we understand that even if the amount of information in the SBD were significantly expanded, clinicians may not consult these documents for guidance in day-to-day prescribing decisions. However, even if they do not, the information in them would be of significant value to those who formulate clinical practice guidelines and assemble drug formularies.
Health Canada claims that its SBD project has two goals: 1) to improve the transparency of the review process, and 2) to provide Canadians with unbiased information. Part of the latter goal is to help healthcare professionals make unbiased decisions. To date, the evidence suggests that the SBDs are ineffective in reaching that goal. There are minimal legal barriers preventing Health Canada from disclosing more data, and without that information we believe clinicians are being denied crucial tools for decision-making.
Conceived and designed the experiments: JL RH. Performed the experiments: JL RH. Analyzed the data: JL RH. Contributed reagents/materials/analysis tools: JL RH. Wrote the paper: JL RH.
- 1. Herder M (2012) Unlocking Health Canada’s cache of trade secrets: mandatory disclosure of clinical trial results. CMAJ 184: 194–199.
- 2. Science Advisory Board Committee on the Drug Review Process (2000) Report to Health Canada. Ottawa.
- 3. House of Commons Standing Committee on Health (2004) Opening the medicine cabinet: first report on health aspects of prescription drugs. Ottawa.
- 4. Office of the Auditor General of Canada (2011) Report of the Auditor General of Canada to the House of Commons: Chapter 4: regulating pharmaceutical drugs - Health Canada. Ottawa.
- 5. Gøtzsche PC, Jørgensen AW (2011) Opening up data at the European Medicines Agency. BMJ 342: d2686.
- 6. Lurie P, Zieve A (2006) Sometimes the silence can be like the thunder: access to pharmaceutical data at the FDA. Law Contemp Probl 69: 85–97.
- 7. European Medicines Agency (2013) Draft policy 70: publication and access to clinical-trial data. London.
- 8. Health Canada (2004) Issue analysis summary: Summary Basis of Decision. Ottawa.
- 9. Health Canada (2012) Frequently asked questions: Summary Basis of Decision (SBD) project: phase II. Ottawa.
- 10. Health Canada (2012) Launch of phase II of the Summary Basis of Decision project. Ottawa.
- 11. Lexchin J, Mintzes B (2004) Transparency in drug regulation: mirage or oasis? CMAJ 171: 1363–1365.
- 12. Lexchin J (2014) How safe are new drugs? Market withdrawal of drugs approved in Canada between 1990 and 2009. Open Med 8: e14–e19.
- 13. Lexchin J (2012) New drugs and safety: what happened to new active substances approved in Canada between 1995 and 2010? Arch Intern Med 172: 1680–1681.
- 14. Carpenter D, Zucker EJ, Avorn J (2008) Drug-review deadlines and safety problems. N Engl J Med 358: 1354–1361.
- 15. Prosser H, Walley T (2006) New drug prescribing by hospital doctors: the nature and meaning of knowledge. Soc Sci Med 62: 1565–1578.
- 16. Lee K, Bacchetti P, Sim I (2008) Publication of clinical trials supporting successful new drug applications: a literature analysis. PLoS Med 5: e191.
- 17. Lexchin J (2002) New drugs with novel therapeutic characteristics. Have they been subject to randomized controlled trials? Can Fam Physician 48: 1487–1492.
- 18. Dwan K, Gamble C, Williamson PR, Kirkham JJ; the Reporting Bias Group (2013) Systematic review of the empirical evidence of study publication bias and outcome reporting bias – an updated review. PLoS ONE 8: e66844.
- 19. Eyding D, Lelgemann M, Grouven U, Härter M, Kromp M, et al. (2010) Reboxetine for acute treatment of major depression: systematic review and meta-analysis of published and unpublished placebo and selective serotonin reuptake inhibitor controlled trials. BMJ 341: c4737.
- 20. Whittington CJ, Kendall T, Fonagy P, Cottrell D, Cotgrove A, et al. (2004) Selective serotonin reuptake inhibitors in childhood depression: systematic review of published versus unpublished data. Lancet 363: 1341–1345.
- 21. Vedula SS, Li T, Dickersin K (2013) Differences in reporting of analyses in internal company documents versus published trial reports: comparisons in industry-sponsored trials in off-label uses of gabapentin. PLoS Med 10: e1001378.
- 22. Wieseler B, Wolfram N, McGauran N, Kerekes MF, Vervölgyi V, et al. (2013) Completeness of reporting of patient-relevant clinical trial outcomes: comparison of unpublished clinical study reports with publicly available data. PLoS Med 10: e1001526.
- 23. Hoertel N, Le Strat Y, Blanco C, Lavaud P, Dubertret C (2012) Generalizability of clinical trial results for generalized anxiety disorder to community samples. Depress Anxiety 29: 614–620.
- 24. Health Canada (2010) Evaluation of phase I of the Summary Basis of Decision project. Ottawa.
- 25. Barbui C, Baschirotto C, Cipriani A (2011) EMA must improve the quality of its clinical trial reports. BMJ 342: d2291.
- 26. International Society of Drug Bulletins (1998) ISDB assessment of nine European Public Assessment Reports published by the European Medicines Evaluation Agency (EMEA). Paris.
- 27. AllTrials (n.d.) All trials registered and all results reported. Available: http://www.alltrials.net/. Accessed 2014 February 10.
- 28. Nisen P, Rockhold F (2013) Access to patient-level data from GlaxoSmithKline clinical trials. N Engl J Med 369: 475–478.
- 29. Jones MI, Greenfield SM, Bradley CP (2001) Prescribing new drugs: qualitative study of influences on consultants and general practitioners. BMJ 323: 378–381.
- 30. McGettigan P, Golden J, Fryer J, Chan R, Feely J (2001) Prescribers prefer people: the sources of information used by doctors for prescribing suggest that the medium is more important than the message. Br J Clin Pharmacol 51: 184–189.
- 31. Schumock GT, Walton SM, Park HY, Nutescu EA, Blackburn JC, et al. (2004) Factors that influence prescribing decisions. Ann Pharmacother 38: 557–562.
- 32. Jefferson T, Jones MA, Doshi P, Del Mar CB, Heneghan CJ, et al. (2012) Neuraminidase inhibitors for preventing and treating influenza in healthy adults and children. Cochrane Database Syst Rev.
- 33. Doshi P, Jones M, Jefferson T (2012) Rethinking credible evidence synthesis. BMJ 344: d7898.
- 34. Doshi P, Jefferson T (2013) Clinical study reports of randomised controlled trials: an exploratory review of previously confidential industry reports. BMJ Open 3: e002496.