
Disease Specific Productivity of American Cancer Hospitals



Background

Research-oriented cancer hospitals in the United States treat and study patients with a range of diseases. Measures of disease specific research productivity, and comparison to overall productivity, are currently lacking.


Hypothesis

Different institutions specialize in research of particular diseases.


Objective

To report disease specific productivity of American cancer hospitals, and propose a summary measure.


Methods

We conducted a retrospective observational survey of the 50 highest ranked cancer hospitals in the 2013 US News and World Report rankings. We performed an automated search of PubMed and ClinicalTrials.gov for published reports and registrations of clinical trials, respectively, addressing specific cancers between 2008 and 2013. We calculated the summed impact factor for the publications. We generated a summary measure of productivity based on the number of Phase II clinical trials registered and the impact factor of Phase II clinical trials published for each institution and disease pair. We generated rankings based on this summary measure.


Results

We identified 6076 registered trials and 6516 published trials with a combined impact factor of 44,280.4, involving 32 different diseases over the 50 institutions. Using a summary measure based on registered and published clinical trials, we ranked institutions in specific diseases. As expected, different institutions were highly ranked in disease-specific productivity for different diseases. 43 institutions appeared in the top 10 ranks for at least 1 disease (vs 10 in the overall list), while 6 different institutions were ranked number 1 in at least 1 disease (vs 1 in the overall list).


Conclusions

Research productivity varies considerably across the sampled institutions. Overall cancer productivity conceals great variation between diseases. Disease specific rankings identify sites of high academic productivity, which may be of interest to physicians, patients and researchers.


Introduction
Academic productivity of individuals, institutions, and nations is widely measured, compared, and discussed [1], [2], [3], [4]. In these measurements, two primary metrics are used: 1) bibliometric, i.e. measuring publications or citations, and 2) funding. Within academic medical centers, funding from the National Institutes of Health (NIH) and the institutional h-index (a measure of publications and citations) have been used to boost morale, allocate resources, and judge leadership [4], [5], [6]. However, within the field of clinical cancer research, a broad overview of productivity is lacking.

The measurement of clinical trial productivity poses special problems. Clinical trials serve a dual role as vehicles for patient care and units of academic productivity. Cancer treatment, and therefore clinical research, is multifaceted, frequently involving surgical, medical, and radiological oncologists, as well as support from diagnosticians, general internists and surgeons. Specific diagnoses and their treatment rely on different specialists and subspecialists to differing degrees. The goal of this work is to provide an overview of clinical research productivity of leading academic cancer hospitals in the United States from 2008–2013, and to reflect the differences in productivity specific to particular diseases. Specifically, we hypothesize that different institutions specialize in research in particular diseases.

Materials and Methods


Data acquisition and analysis were done using the Python programming language with the pandas, numpy, scipy, and matplotlib extensions. Please see below for a more detailed explanation of the programs’ functions. Code is available at


We used the US News and World Report top 50 hospitals in cancer. These listings are widely discussed, and the overall scores and reputations have been reported to correlate with measures of academic productivity [7], [8]. Institutional groupings, e.g. Cornell University, New York Presbyterian Hospital, and Weill-Cornell Medical College, were based on the US News rankings and extended to include relevant affiliated institutions. These affiliations are represented in the search terms and dictionary and presented in S1 and S2 Tables.

Published clinical trials and cumulative impact factor

Information on clinical trial publication and impact factor was determined by automated searching of PubMed using the BioEntrez and BioMedline packages and PubMed syntax (search conducted 6/1/2014). We considered 50 institutions and 27 diseases. For each institution / disease pair (e.g. Washington University (St. Louis) / urothelial cancers), we searched for published clinical trials, either all or restricted to Phase I or Phase II. We used the institutional and disease synonyms listed in S1 and S2 Tables. For our overall cancer results, we used the major MeSH category of cancer. As an example, the search for phase II cervical cancer clinical trials based at Washington University was formatted as:

(Barnes-Jewish Hospital[AD] OR Washington University[AD] OR Alvin J. Siteman Cancer Center[AD]) AND 2008:2013[DP] AND Clinical trial, Phase II[PT] AND uterine cervical neoplasms[MESH]

We counted the number of publications. For each publication, we identified the journal, and cross-referenced it with a published list of impact factors for 2012. We summed those impact factors. For example, if there were 3 trials published in journals with impact factors of 1, 2, and 3, respectively, the summed impact factor was 6.
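The query construction and impact-factor summation can be sketched as follows. The helper names are ours, and the journal names and impact factors are illustrative placeholders rather than the published 2012 impact-factor list; the actual searches were run through the BioEntrez package.

```python
# Sketch of the per-pair PubMed query and summed-impact-factor step.
# Helper names are ours; journals and impact factors below are invented.

def build_query(affiliations, mesh_term,
                phase="Clinical trial, Phase II", years="2008:2013"):
    """Assemble the PubMed query for one institution/disease pair."""
    aff = " OR ".join(f"{a}[AD]" for a in affiliations)
    return f"({aff}) AND {years}[DP] AND {phase}[PT] AND {mesh_term}[MESH]"

def summed_impact_factor(journals, impact_factors):
    """Cross-reference each publication's journal with the IF table, then sum."""
    return sum(impact_factors.get(journal, 0.0) for journal in journals)

query = build_query(
    ["Barnes-Jewish Hospital", "Washington University",
     "Alvin J. Siteman Cancer Center"],
    "uterine cervical neoplasms",
)
# Three publications in journals with impact factors 1, 2, and 3 sum to 6.
ifs = {"Journal A": 1.0, "Journal B": 2.0, "Journal C": 3.0}
print(summed_impact_factor(["Journal A", "Journal B", "Journal C"], ifs))
```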

Clinical trials listed at ClinicalTrials.gov

We searched for all trials at ClinicalTrials.gov with the search term 'cancer', yielding 43,339 studies. These trials were downloaded in XML format (5/24/2014), and provided data on each trial: start date, study id, phase of drug development, source of funding, number of participants, completion status, and lead institution. We performed an automated search through the trials. For each trial, we determined the disease(s) studied by reading in the title, conditions studied, and description, then searching for keywords specific to a particular disease, for instance, “urothelial cancer,” “bladder cancer,” and “ureteral cancer.” We searched each trial for each of the 27 diagnoses. For consideration of the overall cancer score, we used both classifiable and non-classifiable trials.
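A minimal sketch of this classification step, assuming a ClinicalTrials.gov-style XML record; the tag names and the keyword table are abbreviated for illustration.

```python
# Classify one trial by keyword-matching its title, condition, and
# description; the XML record and keyword list are illustrative.
import xml.etree.ElementTree as ET

KEYWORDS = {
    "urothelial cancers": ["urothelial cancer", "bladder cancer",
                           "ureteral cancer"],
}

record = """<clinical_study>
  <brief_title>A Phase II Trial in Bladder Cancer</brief_title>
  <condition>Bladder Cancer</condition>
  <brief_summary>Patients with urothelial cancer receive therapy.</brief_summary>
</clinical_study>"""

def classify(xml_text):
    root = ET.fromstring(xml_text)
    # Concatenate all text fields, lower-cased, then scan for keywords.
    blob = " ".join(e.text.lower() for e in root.iter() if e.text)
    return [d for d, kws in KEYWORDS.items() if any(kw in blob for kw in kws)]

print(classify(record))  # ['urothelial cancers']
```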

In this manner, 1 or more diseases were assigned to 31,164 trials. To check our assignments, we manually reviewed 100 of the most recently started trials for which no disease was identified. This check found 8 studies of our cancers of interest. Of the remaining 92 studies, there were 24 studies of side effects, e.g. mucositis, 15 of risk factors, e.g. psoriasis, 16 of cancers not covered, e.g. neuroblastoma, 13 of advanced solid cancers of unspecified type, 9 of non-cancer, non-risk factor conditions, e.g. diabetes, 5 basic research studies, e.g. drug-drug pharmacokinetic interactions between dabrafenib, rosuvastatin and midazolam, and 10 studies that could not be so grouped. We counted the number of trials in each phase that were administered by each institution. To handle institutions with multiple names, we combined institutions using a dictionary of institutions and common synonyms, which is presented as S2 Table.

Statistical Analysis

To measure the degree to which clinical trial registration and clinical trial publication are redundant, we conducted regression analyses between two related measures, Phase II clinical trial registrations and Phase II clinical trial summed impact factors, using the linear regression function from scipy. This function takes as input an array of values for x and one for y, and determines the slope, intercept, r value, p value, and standard error using a least-squares regression. We used the counts of registered phase II trials for the 50 institutions for each disease as the x and the summed impact factor of phase II trials for the same 50 institutions as the y. We ran a separate regression for each of the 25 diseases with non-trivial numbers of trials. The results for these regressions are shown in S3 Table. The slopes averaged 5.97 +/- 3.11 IF/registration (range 0.87–13.18), while the correlations (r2) averaged 0.328 +/- 0.183 (range 0.013–0.758). While this correlation was significant (p < 0.05) for 22 of 25 diseases, it is sufficiently low to justify consideration of both as independent factors in disease specific productivity.

We sought to create a summary measure of disease-specific academic productivity at particular institutions. Our choice to focus on Phase II trials is based on the evidence of patient benefit from participation in these trials, as well as under-reporting of Phase I trials and the multi-centric nature of Phase III trials [9], [10]. While there are many measures of productivity based on publications, we sought to create a measure that accounted for clinical trial registrations as well. To this end, we generated a summary measure based on Phase II trials registered at ClinicalTrials.gov and the summed impact factor of published Phase II clinical trials. This score for a given institution for a given disease was generated in the following manner:

Score = 50 × (SIF / maxSIF(disease)) + 50 × (Registrations / maxRegistrations(disease))

where SIF is the summed impact factor for Phase II trials, Registrations is the number of trials registered at ClinicalTrials.gov, maxSIF(disease) is the highest SIF among the 50 institutions for that disease, and maxRegistrations(disease) is the highest number of trials registered for that disease. This gives a maximum score of 100. For example, between 2008–2013, Barnes-Jewish Hospital published Phase II trials on cervical cancer with a summed impact factor of 7.993, and registered 1 Phase II trial at ClinicalTrials.gov. The University of Texas MD Anderson Cancer Center had the highest impact factor in cervical cancer at 10.329, while the University of Iowa Hospitals and Clinics registered the most trials, 2. Therefore, the score for Barnes-Jewish for cervical cancer is:

Score = 50 × (7.993 / 10.329) + 50 × (1 / 2) ≈ 38.7 + 25 = 63.7

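Assuming equal 50-point weighting of the two normalized components (consistent with the stated maximum score of 100), the Barnes-Jewish cervical-cancer example works out as follows; the function name is ours.

```python
# Summary score: 50 points each for the normalized summed impact factor
# and the normalized registration count, giving a maximum of 100.
def disease_score(sif, registrations, max_sif, max_registrations):
    return 50 * sif / max_sif + 50 * registrations / max_registrations

# Barnes-Jewish Hospital, cervical cancer: SIF 7.993 (max 10.329),
# 1 registration (max 2).
print(round(disease_score(7.993, 1, 10.329, 2), 1))  # 63.7
```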

Results

Overall productivity

We identified 6076 registered trials and 6516 published trials with a combined impact factor of 44,280.4, involving 32 different diseases over the 50 institutions. For any disease under study at any institution, there were 11 different variables that could be measured, including 5 reflecting clinical trial registrations and 6 reflecting publication. The full data set is available in S4 and S5 Tables. We calculated an overall cancer productivity score for each institution, with those results presented in Table 1.

Disease specific productivity

We collected publication and clinical trial data for each institution, produced disease-specific scores as described above and ranked them. There was minimal information on anal, vulvar, testicular, small intestinal, and penis cancers, so we did not analyze them further. We plotted the ranked scores from each disease and overlaid them (Fig. 1). For most diseases, one institution had the most clinical trials and the highest combined impact factor, for a score of 100. The scores for subsequently ranked institutions rapidly dropped.

Fig 1. Rank-ordered scores for 25 cancer diagnoses, overlay.

For most diseases the highest ranking institution (Rank = 1) has a score of 100, i.e. registering the highest number of clinical trials and publishing papers with the highest summed impact factor. As rank increases, the score rapidly declines, such that the institution with the 10th highest score (Rank = 10) shows a score of 16.5 +/- 7.9 (average +/- standard deviation).

Cancer specific rankings

Different cancers are treated and studied by different physicians in different departments using a variety of techniques. To capture this diversity, we generated ranked lists over 25 different conditions. The 10 institutions with the highest score in each category, including ties, are presented as Table 2. M.D. Anderson Cancer Center appeared on the most top-10 lists, 24/25, as well as having the highest score in 13/25. However, 43 of the 50 institutions made at least 1 appearance on a top-10 list, and 6 different organizations were top ranked in at least one area. A full accounting of these appearances is presented as Table 3.


Discussion

This paper describes the landscape of clinical research productivity in cancer and 25 of the most common specific diseases within highly ranked academic hospitals in the United States. The main finding is a granular description of what diseases are studied where.

Multiple scales of academic productivity have been proposed and utilized in an academic hospital setting, with varying focus on feasibility, validity, reliability and acceptability [11]. The institutional h-index, defined as h, where an institution has published at least h papers which have each been cited at least h times, has been used to compare academic departments between hospitals [5]. While papers published, number cited, impact factor, and h-index are predictive of future funding and future publication in academic surgery and neurosurgery departments, h-index was found to be superior to the other measures [6], [12]. Since the description of the h-index in 2005, there have been numerous modifications proposed and difficulties identified (discussed in [12]). Funding has also been used, both as a measure of academic productivity and to validate the predictive value of other measures [5], [11].

While it is not our intention to propose yet another metric of research productivity for general use, the specific problems in the area of clinical trial productivity motivated our choice of measurements and summary measure. In terms of bibliometrics, Google Scholar, Web of Science, and Scopus, the 3 publication search engines that allow measurement of the h-index, do not allow restriction to clinical trials, or to specific phases of clinical trials, a key feature of PubMed. Since publication of clinical trials makes up the minority of departmental output, this poses significant problems for their reliability. Similarly, the inability to restrict to MeSH terms means that a search for a particular cancer will identify some articles making comparisons to that cancer or discussing drugs used to treat that cancer. For example, a search for “breast cancer” could return discussions of ovarian cancer or colon cancer, due to the association of these diseases in the BRCA1 and BRCA2 syndromes, or discussions of trastuzumab (Herceptin, Roche/Genentech) used for a variety of conditions, given trastuzumab's primary indication in breast cancers over-expressing HER2. From a technical standpoint, automated PubMed searches can be conducted using the BioEntrez package in Python, while no similar capacity exists for the 3 proprietary databases.

Previous work has shown a high correlation between several different measures of academic productivity and USN&WR reputation [8]. We generated a composite score for each institution based on all phase II clinical trials registered at ClinicalTrials.gov as well as the impact factor of phase II clinical trials published in MEDLINE. We present the overall scores compared to reputation as Table 1.

Several factors influence patient selection of a cancer hospital. Only 7.3% of patients seek care at an NCI-designated cancer center (NCI-CC)[13]. Although some groups have found an association between NCI-CC attendance and decreased mortality[14], patient characteristics differ. NCI-CC patients are younger, with fewer comorbidities and more advanced disease[13].

An obvious and validated factor in hospital choice is distance [15], [13]. For patients whose disease has established treatment standards, and who can expect a good outcome with standard-of-care treatment, the downsides of travelling farther may outweigh any benefits.

The benefit of treatment at an NCI-CC is thought to derive from improved process of care, potentially explaining the reduced mortality from both cancer and non-cancer causes[14]. There are also mortality improvements from seeking care at a high-volume facility[16]. We must accept the potential for confounding variables, as in the striking demographic and mortality differences that separate relatively well and well-off travelling patients from relatively ill patients for whom the NCI-CC happens to be their closest center[17],[18].

Most of the institutions in this study are NCI-CC, and all have a high volume of cancer patients. Therefore, patients presenting to any of them can expect the benefits described above. However, defining the marginal benefit of seeking care at a higher ranked hospital is more difficult. The ‘Survival’ subscore given by US News for all of the top 50 hospitals is 8, 9, or 10. The weighting necessary to generate this score means that raters other than US News give different mortality scores for the same hospital[19]. This raises the question of whether any such measurements are feasible.

Our metric focuses on registration of phase II clinical trials and publishing them in high-impact journals. These activities differ from other academic ventures in that they involve potential benefit to patients. A review of phase II trials of molecularly targeted drugs indicated an average overall response rate of 6.4% [9]. This is consistent with the 4% response rate found more generally for phase I cancer trials [20], [21]. This is a small degree of benefit; however, it is attributable to the investigational agent and the investigator that administers it.


This analysis is subject to several objections and limitations. Errors of inclusion and exclusion of relevant trials are a potential concern; however, we performed manual checks in ClinicalTrials.gov and PubMed on a limited number of unusual values. Automated and manual search both rely on proper initial curation of information. The error most likely to alter our rankings is failure to identify an institutional synonym, since this would present as an isolated drop for that institution. For that reason we present our list of synonyms and search terms (S1 and S2 Tables). It might seem odd that, since these measures are so similar, the correlations between them display a wide range of values over different diseases. This speaks to the value of considering both measures, since productivity may be missed if only one is considered. Clinical trial registration is forward looking, while publications are retrospective. Nonetheless, it is probable that some trials contribute to both components of our score. We would consider this a positive, since an institution that registers what it publishes and publishes what it registers is preferable to the alternatives.

We chose to focus on phase II trial counts from ClinicalTrials.gov and impact factors of published phase II clinical trials from PubMed. We did not use NCI funding as a metric, because of the relatively low correlation with reputation, and the lack of direct patient benefit from the basic studies that form the bulk of NCI grants. We chose phase II trials because many phase I trials are not reported [10], while phase III trials tend to be multi-centric, making excellence difficult to attribute at the single-institution level. Patient counts in trials are heavily distorted by a small number of very large biobanking and prevention trials. Impact factor allows us to take into account the likelihood of a paper being read and cited. In an environment in which every clinical trial is increasingly expected to be published, small, poorly-designed, or less novel trials may be more likely to be published in lower-tier journals.


Conclusions

We provide a view of the landscape of disease specific academic productivity in highly reputed American cancer hospitals. These hospitals show varying academic productivity across diseases. Whether this translates into differences in patient care is unknown, and should be the subject of further study.

Supporting Information

S1 Table. Search terms.

This table lists the disease-specific search terms used to classify trials. For clinical trial entries from ClinicalTrials.gov, the title, condition, and descriptive text of each trial were searched for each of the keywords (kw1–8), as well as the regular expression. The formatting of the regular expressions is such that, for example, ‘gastric.{1,100}cancer’ will detect any instance of the word ‘cancer’ within 100 characters after the word ‘gastric’, while ‘gastric cancer’ will only detect instances of that exact phrase. Detection of any of the keywords or the regular expression causes a trial to be classified as studying that disease. The MeSH terms are the terms used in PubMed with the ‘[mesh]’ tag to detect publications pertaining to that diagnosis.
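The difference between the two forms can be demonstrated directly; the trial description below is invented for illustration.

```python
# The regular expression tolerates intervening words between 'gastric'
# and 'cancer'; the exact phrase does not. 'text' is a made-up example.
import re

text = "a trial of chemotherapy for gastric and esophageal cancer"
print(bool(re.search(r"gastric.{1,100}cancer", text)))  # True
print("gastric cancer" in text)                          # False
```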


S2 Table. Institution dictionary.

This is the list of sub-institutions that are combined in our analysis. For example, Barnes-Jewish Hospital, the Alvin J. Siteman Cancer Center (which is at Barnes-Jewish Hospital), and Washington University (which houses Barnes-Jewish Hospital) are all renamed Barnes-Jewish Hospital. In this case, Barnes-Jewish Hospital is used to reduce confusion with the University of Washington (Seattle). The synonyms were determined by manual search of the ClinicalTrials.gov database, the eponymous NCI cancer centers, and frequently encountered abbreviations.
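The renaming step amounts to a dictionary lookup applied before counting trials; the mapping below shows only the Barnes-Jewish grouping from the example above.

```python
# Collapse sub-institution synonyms onto one canonical name before
# counting trials; only the example grouping from the text is shown.
SYNONYMS = {
    "Alvin J. Siteman Cancer Center": "Barnes-Jewish Hospital",
    "Washington University": "Barnes-Jewish Hospital",
}

def canonical(name):
    """Return the canonical institution name; unknown names pass through."""
    return SYNONYMS.get(name, name)

print(canonical("Washington University"))  # Barnes-Jewish Hospital
print(canonical("Mayo Clinic"))            # Mayo Clinic (unchanged)
```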


S3 Table. ClinicalTrials.gov – PubMed correlations.

This list shows the correlation of the number of clinical trials registered at ClinicalTrials.gov and the summed impact factor of PubMed publications for specific diseases and for cancer overall.


S4 Table. Clinical trial registrations by institution, disease, and phase.

In combination with S5 Table, this represents a full accounting of the data underlying the summary measure used in the main body of the paper.


S5 Table. Clinical trial publications and summed impact factors by institution, disease, and phase.


Author Contributions

Conceived and designed the experiments: JAG VP. Performed the experiments: JAG. Analyzed the data: JAG VP. Contributed reagents/materials/analysis tools: JAG. Wrote the paper: JAG VP.


References

  1. Huang MH, Lin CS. Counting methods & university ranking by H-index. Am Soc Inform Sci Annu Meet Proc 2011;48: 1–2.
  2. Eckhouse S, Lewison G, Sullivan R. Trends in the global funding and activity of cancer research. Mol Oncol 2008;2: 20–32. pmid:19383326
  3. Bornmann L, Daniel HD. The state of h index research. EMBO Rep 2009;10: 2–6. pmid:19079129
  4. Lane J. Let's make science metrics more scientific. Nature 2010;464: 488–489. pmid:20336116
  5. Ponce FA, Lozano AM. Academic impact and rankings of American and Canadian neurosurgical departments as assessed using the h index: Clinical article. J Neurosurg 2010;113: 447–457. pmid:20380531
  6. Sharma B, Boet S, Grantcharov T, Shin E, Barrowman NJ, Bould MD. The h-index outperforms other bibliometrics in the assessment of research performance in general surgery: a province-wide study. Surgery 2013;153: 493–501. pmid:23465942
  7. Green J, Wintfeld N, Krasner M. In search of America's best hospitals. The promise and reality of quality assessment. JAMA 1997;277: 1152–5. pmid:9087471
  8. Prasad V, Goldstein JA. US News and World Report Cancer Hospital Rankings: Do they reflect measures of research productivity? PLoS One 2014;9: e107803. pmid:25247921
  9. El-Maraghi RH, Eisenhauer EA. Review of phase II trial designs used in studies of molecular targeted agents: outcomes and predictors of success in phase III. J Clin Oncol 2008;26: 1346–54. pmid:18285606
  10. Decullier E, Chan AW, Chapuis F. Inadequate dissemination of phase I trials: a retrospective cohort study. PLoS Med 2009;6: e1000034. pmid:19226185
  11. Patel VM, Ashrafian H, Ahmed K, Arora S, Jiwan S, Nicholson JK, et al. How has healthcare research performance been assessed? A systematic review. J R Soc Med 2011;104: 251–261. pmid:21659400
  12. Aoun SG, Bendok BR, Rahme RJ, Dacey RG Jr, Batjer HH. Standardizing the evaluation of scientific and academic performance in neurosurgery—critical review of the “h” index and its variants. World Neurosurg 2013;80: e85–e90. pmid:22381859
  13. Onega T, Duell EJ, Shi X, Demidenko E, Goodman D. Determinants of NCI Cancer Center attendance in Medicare patients with lung, breast, colorectal, or prostate cancer. J Gen Intern Med 2009;24: 205–10. pmid:19067086
  14. Onega T, Duell EJ, Shi X, Demidenko E, Gottlieb D, Goodman DC. Influence of NCI cancer center attendance on mortality in lung, breast, colorectal, and prostate cancer patients. Med Care Res Rev 2009;66: 542–60. pmid:19454624
  15. Pope DG. Reacting to rankings: evidence from "America's Best Hospitals". J Health Econ 2009;28: 1154–65. pmid:19818518
  16. Hillner BE, Smith TJ, Desch CE. Hospital and physician volume or specialization and outcomes in cancer treatment: importance in quality of cancer care. J Clin Oncol 2000;18: 2327–40. pmid:10829054
  17. Lamont EB, Hayreh D, Pickett KE, Dignam JJ, List MA, Stenson KM, et al. Is patient travel distance associated with survival on phase II clinical trials in oncology? J Natl Cancer Inst 2003;95: 1370–5. pmid:13130112
  18. Muñoz A, Samet J. Re: Is patient travel distance associated with survival on phase II clinical trials in oncology? J Natl Cancer Inst 2004;96: 411; author reply 411–2. pmid:14996865
  19. Rothberg MB, Morsi E, Benjamin EM, Pekow PS, Lindenauer PK. Choosing the best hospital: the limitations of public quality reporting. Health Aff (Millwood) 2008;27: 1680–7. pmid:18997226
  20. Horstmann E, McCabe MS, Grochow L, Yamamoto S, Rubinstein L, Budd T, et al. Risks and benefits of phase 1 oncology trials, 1991 through 2002. N Engl J Med 2005;352: 895–904. pmid:15745980
  21. Roberts TG Jr, Goulart BH, Squitieri L, Stallings SC, Halpern EF, Chabner BA, et al. Trends in the risks and benefits to patients with cancer participating in phase 1 clinical trials. JAMA 2004;292: 2130–40. pmid:15523074