
A Cluster-Randomized Trial of Two Strategies to Improve Antibiotic Use for Patients with a Complicated Urinary Tract Infection



Background

Up to 50% of hospital antibiotic use is inappropriate, and improvement strategies are therefore urgently needed. We compared the effectiveness of two strategies to improve the quality of antibiotic use in patients with a complicated urinary tract infection (UTI).


Methods

In a multicentre, cluster-randomized trial, 19 Dutch hospitals (Internal Medicine and Urology departments) were allocated to either a multi-faceted strategy, including feedback, educational sessions, reminders and additional/optional improvement actions, or a competitive feedback strategy, i.e. providing professionals with non-anonymous comparative feedback on the department’s appropriateness of antibiotic use. Retrospective baseline and post-intervention measurements were performed in 2009 and 2012 in 50 patients per department, resulting in 1,964 and 2,027 patients, respectively. Principal outcome measures were nine validated guideline-based quality indicators (QIs) that define appropriate antibiotic use in patients with a complicated UTI, and a QI sum score that summarizes, for each patient, the appropriateness of antibiotic use.


Results

Performance scores on several individual QIs improved from baseline to post-intervention measurements, but no significant differences were found between the two strategies. The mean patient QI sum score improved significantly in both strategy groups (multi-faceted: 61.7% to 65.0%, P = 0.04; competitive feedback: 62.8% to 66.7%, P = 0.01). Compliance with the strategies was suboptimal, but better compliance was associated with more improvement.


Conclusions

The effectiveness of both strategies was comparable, and better compliance with the strategies was associated with more improvement. To increase effectiveness, improvement activities should be rigorously applied, preferably by a locally initiated multidisciplinary team.

Trial Registration

Nederlands Trial Register 1742


Introduction

Complicated urinary tract infections (UTIs) are among the most prevalent infectious diseases [1,2], contributing substantially to antibiotic use in the hospital setting. According to the medical literature, up to 50% of hospital antibiotic use is inappropriate [3–5]. Inappropriate antibiotic use has been associated with increased morbidity, mortality, length of stay and hospital costs, and with rising bacterial resistance [6–10]. In a previous study, we defined appropriate antibiotic use for patients with a complicated UTI with a validated set of nine guideline-based quality indicators (QIs) (Table 1) and showed in a pilot study that there was large room for improvement on most QIs [11].

To improve appropriate antibiotic use, Antimicrobial Stewardship Programs (ASPs) are propagated [12], which can be considered as ‘a menu of interventions that can be designed and adapted to fit the infrastructure of any hospital’ [13]. This menu suggests various improvement interventions that have been shown to effectively improve antibiotic prescribing in hospitals [3]. Unfortunately, direct comparisons of the effectiveness of these different improvement interventions in a methodologically powerful design are scarce [3,14]. Such head-to-head comparisons, for which a cluster-randomized trial design is considered to be the ideal [3], are urgently needed to extend the current evidence for effectively improving antibiotic prescribing [3,15].

Therefore, we conducted a cluster-randomized trial of two interventions, or strategies, to improve antibiotic use in patients with a complicated UTI. We aimed to assess the effectiveness, measured as the before-and-after-intervention performance on the QIs, of two improvement strategies: 1) a Multi-Faceted Strategy, comparable to an effective strategy to improve antibiotic use in patients with lower respiratory tract infections [16] and 2) a ‘Competitive Feedback Strategy’, i.e. providing professionals with non-anonymous comparative feedback on the department’s appropriateness of antibiotic use in patients with complicated UTIs. Additionally, we aimed to identify determinants of successful improvement.


Methods

Design and setting

We conducted a multicentre, cluster-randomized trial to compare the effectiveness of two different strategies to improve the appropriateness of antibiotic use in patients with a complicated UTI (QUality of ANtibiotic use in uTI patients (QUANTI) trial; NTR 1742).

Between February and November 2009, a retrospective baseline measurement was performed at the Internal Medicine and Urology departments of 19 university and non-university hospitals located throughout the Netherlands. By February 2010, hospitals were randomly allocated to one of the improvement strategies. Between April 15 and October 15, 2010, the allocated improvement strategy was implemented in each hospital. From six months later onwards, patients were included in a post-intervention measurement (the exact starting point differed by hospital, see Fig 1b). For both the baseline and post-intervention measurements the minimal sample size was 50 patients per department. Enrolment of this number of patients took on average 1.5 years, depending on the department, so the (retrospective) post-intervention measurement could not start before April 2012.

Fig 1. Flow charts of study design and participants; MFS = Multi-Faceted Strategy, CFS = Competitive Feedback Strategy.

* CFS: patients enrolled from October 15, 2010: enrolment in post-intervention measurement at least 6 months after providing the feedback reports. MFS: patient enrolment in post-intervention measurement depended on the individual implementation schedule: at least 3 months after the hospital’s kick-off meeting, and at least 6 months after providing the first feedback report.

The medical ethical committee of the Academic Medical Centre Amsterdam reviewed our study and deemed it exempt from formal approval (ref 08.17.1775). No informed consent was obtained from patients, because no interventions were performed at the patient level and patient data were analysed anonymously in a retrospective design, with the aim of improving quality of healthcare.

Informed consent was obtained from the contact persons of the participating hospitals.

Patient selection

From the hospital diagnosis registration system patients were screened for nationally defined categories: cystitis, pyelonephritis, prostatitis and bacteremia. Included were adults (≥ 16 years) who were referred to the hospital (inpatient/outpatient clinic) and diagnosed by an internist or urologist with a complicated UTI (including catheter-associated UTIs) as main diagnosis and treated as such. We defined a complicated UTI as a UTI with one of the following characteristics: male gender, pregnancy, any functional or anatomical abnormality of the urinary tract, immunocompromising disease or medication, or a UTI with symptoms of tissue invasion or systemic infection (pyelonephritis, urosepsis, prostatitis) [17].

Multi-faceted strategy (MFS)

This strategy was comparable to an improvement strategy developed by Schouten et al. [16] that effectively improved antibiotic use in patients with lower respiratory tract infections in a cluster-randomized trial in six Dutch hospitals. Their strategy contained standardized elements that were used in all hospitals and additional interventions that were locally adjusted to the needs and wishes of that specific hospital. The antibiotic use QIs that were most in need of improvement were given priority. All interventions were initiated and coordinated by the researcher and a project quality improvement officer.

Our MFS consisted of comparable standardized elements, but more strongly involved local professionals in the design and performance of the locally tailored interventions. The intervention consisted of three separate phases (see Figs 2 and 3).

Fig 2. Description of standardized (bold) and optional (italic) elements of the multi-faceted strategy (MFS) [30–33].

Fig 3. Activities and compliance scores per strategy; MFS = Multi-Faceted Strategy, CFS = Competitive Feedback Strategy.

Elements in bold are standardized strategy elements; elements in italics are optional. LOC: Local Organizing Committee. * more than the median score of all departments.

Competitive feedback strategy (CFS)

In this era of transparency, it is becoming increasingly common to release performance data, either individually or publicly, with the aim of improving healthcare performance [18]. Individual audit and feedback is widely used to improve practice by giving professionals insight into their own performance [15]. Simultaneously, performance data are released into the public domain (i.e. public reporting), aiming to improve quality by ranking the performance of different providers [19–21]. We combined the best of these two approaches by providing individual feedback to professionals in which the various departments were ranked non-anonymously. In this manner we aimed to include the competitive element of public reporting in the individual feedback strategy, resulting in a so-called competitive feedback strategy.

Competitive feedback reports contained, for each QI, a list of all 38 departments’ performance scores, in which the names of the MFS departments were blinded, but the others were visible. For reasons of confidentiality, feedback reports were sent by regular mail and receipt of the reports was verified.

Variables and data collection

Effect parameters: Quality indicators for appropriate antibiotic use.

The appropriateness of antibiotic use was scored using QIs, which are based on the treatment recommendations of the Dutch national, evidence-based guideline for the antimicrobial treatment of complicated UTIs [22]. In an earlier study, a 3-step modified RAND Delphi approach among experts was used to systematically develop a set of nine QIs, which was subsequently validated [11] (Table 1). All QIs are dichotomous variables, distinguishing appropriate from inappropriate antibiotic use in each individual patient. Data were collected by retrospective chart review by the study researcher together with one (baseline) or two (post-intervention) trained research assistants. The assistants were blinded to the strategy status of the hospitals. QI performance was calculated for each patient using previously constructed algorithms [10].

To summarize each patient’s performance on the different QIs in one outcome measure, we calculated, at both the baseline and post-intervention measurements, a total QI set performance for each patient, defined as the patient’s QI sum score divided by the number of QIs that applied to that specific patient [10].
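This per-patient calculation can be sketched as follows. The sketch is illustrative only: the QI names and the True/False/None encoding (appropriate/inappropriate/not applicable) are assumptions, not the authors' actual scoring algorithms [10].

```python
def total_qi_set_performance(qi_results):
    """Per-patient total QI set performance: the number of QIs scored as
    appropriate, divided by the number of QIs applicable to that patient.

    qi_results maps QI name -> True (appropriate), False (inappropriate),
    or None (QI not applicable to this patient)."""
    applicable = {qi: ok for qi, ok in qi_results.items() if ok is not None}
    if not applicable:
        return None  # no QI applied; the patient contributes no score
    return sum(applicable.values()) / len(applicable)

# Hypothetical patient: 5 of the 9 QIs apply, of which 3 are scored appropriate.
patient = {
    "perform_urine_culture": True,
    "tailor_to_culture_results": True,
    "switch_iv_to_oral": False,
    "treat_male_uti_per_guideline": False,
    "empirical_therapy_per_guideline": True,
    "use_fluoroquinolones_selectively": None,
    "replace_catheter_after_treatment_start": None,
    "adapt_dose_to_renal_function": None,
    "treatment_duration_per_guideline": None,
}
print(total_qi_set_performance(patient))  # 3/5 = 0.6
```

Dividing by the number of applicable QIs, rather than by nine, keeps patients with few applicable QIs comparable to patients with many.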

Determinants of improvement.

We aimed to identify determinants of successful improvement, because insight into the mechanisms responsible for the results could enhance the validity of the findings and might help to understand the potential generalizability of the strategies [23,24]. We examined whether the following variables were associated with improved total QI set performance: compliance with the strategies, department’s baseline performance on the total QI set, inpatient versus outpatient, and Internal Medicine versus Urology.

Compliance with improvement strategies: To measure compliance with the various elements of the improvement strategies, we assessed for all departments which improvement activities were actually performed, e.g. whether feedback reports and the improvement plan were disseminated and/or presented by the local contact person and whether/what additional improvement actions were carried out. Compliance data were collected using a questionnaire filled in by the contact persons (February 2011). The attendance of local professionals at the kick-off meetings was registered by the study researcher. A sum score was calculated reflecting the compliance per department (Fig 3).

Department’s baseline performance on the total QI set: For each department, a baseline performance score was assessed by calculating the mean total QI set performance of all patients of that department during the baseline measurement.

Sample size

We anticipated a difference between the two strategies in QI adherence of 15% in favour of the MFS after the intervention (70% versus 55%). Intraclass correlations (ICCs) calculated in our QI development study [11] for three of our QIs showed a mean ICC of 0.10. Using alpha = 0.05, two-sided testing, power = 0.80 and ICC = 0.10, we needed 18 clusters of 250 individuals per strategy if only one indicator was measured per individual. Since we measured more indicators per individual, and assuming a correlation of 0.5 between the indicator values from the same individual, the number of individuals per cluster could be reduced by a factor of five, requiring a sample size of 18 clusters of 50 patients each per strategy. The total number of patients to be included in the trial was therefore 3,600 (18 clusters × 50 patients × 2 strategies × 2 measurements).
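The sample-size reasoning above can be reproduced as back-of-the-envelope arithmetic. This is a reconstruction, not the authors' exact calculation: it assumes the standard two-proportion sample-size formula and the usual cluster design effect 1 + (m − 1)ρ, with the normal quantiles hardcoded.

```python
import math

# Assumed inputs from the text: 70% vs 55% adherence, alpha = 0.05 (two-sided),
# power = 0.80, intraclass correlation rho = 0.10, clusters of m = 250.
z_alpha, z_beta = 1.96, 0.8416
p1, p2, rho, m = 0.70, 0.55, 0.10, 250

# Unadjusted per-arm sample size for comparing two proportions.
n_unadj = math.ceil(
    (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
)

# Inflate for clustering: design effect 1 + (m - 1) * rho.
deff = 1 + (m - 1) * rho
clusters_per_arm = math.ceil(n_unadj * deff / m)

# Final design after reducing cluster size from 250 to 50 (five correlated
# indicators per patient): 18 clusters x 50 patients x 2 strategies x 2
# measurements.
total_patients = 18 * 50 * 2 * 2

print(n_unadj, deff, clusters_per_arm, total_patients)
```

This rough version yields 160 patients per arm unadjusted, a design effect of 25.9, and about 17 clusters of 250 per arm; the reported 18 clusters plausibly reflect rounding or a continuity correction in the original calculation.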

Statistical analysis

Randomization of the hospitals was balanced by minimization on hospital’s baseline individual QI performance scores and was performed by a statistician who was blinded to the composition of the groups.

As descriptive statistics, we report the percentage of appropriate antibiotic use per QI and the mean total QI set performance. Not every QI was applicable to all included patients; therefore, the sample sizes of the QIs varied. For a QI to be included in the analyses, we required a minimum sample size of, on average, 15 patients per department in the baseline measurement.

The effectiveness of the strategies on the individual QIs was assessed by multilevel logistic regression analysis, with clusters determined by the unique hospital-department combinations. We allowed the improvement from baseline to post-intervention measurement to differ by strategy. In an adjusted analysis we controlled for the patient characteristics gender, age, urological comorbidity, diabetes and being in- or outpatient.

To test the effectiveness of the strategies on the total QI set performance, multilevel linear regression analysis was performed, in which the residual variance was weighted for the number of QIs that applied to the individual patient. Hence, we assumed a normal distribution of the total QI set performance. Although this assumption does not hold for the individual total QI values, its distorting effect on the parameter estimates is likely limited given the large sample size. As an alternative, we performed a logistic regression analysis in which all individual QIs were included. In this model, we assumed that the average value was equal for every QI and that there was no correlation between QI performance scores within a patient.
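The weighting idea can be illustrated with a deliberately simplified, single-level sketch. The actual analysis was a multilevel model (fitted with R's lme4); this version ignores the hospital-level clustering and uses hypothetical data, and exists only to show why each patient's score is weighted by the number of applicable QIs: the variance of a mean of k QI scores scales roughly as 1/k, so inverse-variance weights are proportional to k.

```python
def weighted_slope_intercept(x, y, w):
    """Weighted least squares for y = a + b*x. Each patient's total QI set
    performance y is weighted by w, the number of QIs applicable to that
    patient (more applicable QIs -> less noisy score -> larger weight)."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y)) / \
        sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    a = ybar - b * xbar
    return a, b

# Hypothetical data: x = 0 (baseline) or 1 (post-intervention),
# y = patient's total QI set performance, w = number of applicable QIs.
x = [0, 0, 0, 1, 1, 1]
y = [0.50, 0.60, 0.70, 0.60, 0.75, 0.80]
w = [4, 6, 5, 5, 4, 6]
a, b = weighted_slope_intercept(x, y, w)
print(a, b)  # b estimates the weighted baseline-to-post improvement
```

With a binary predictor, the weighted slope b equals the difference between the weighted group means, which is why the model recovers the (weighted) improvement from baseline to post-intervention.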

We also used multilevel linear regression analysis to test for each strategy the association between possible determinants of successful improvement (compliance with strategies, department’s baseline total QI set performance, inpatient/outpatient, Internal Medicine/Urology) and the total QI set performance. For the effect of baseline performance, we regressed the individual post-intervention total QI set performance on the mean value per department at baseline. We performed the analyses using R version 3.0.1 and the lme4 package [25,26]. A value of P < 0.05 was considered statistically significant.


Results

Study population

The baseline population consisted of 1,964 patients, the post-intervention population of 2,027 patients (Fig 1b). Their characteristics are described in Table 2. Of the 19 included hospitals 4 were university hospitals, of which 3 were randomized to the CFS group and 1 to the MFS group.

Table 2. Patient characteristics at baseline and post-intervention; MFS = Multi-Faceted Strategy, CFS = Competitive Feedback Strategy.

Effectiveness of strategies

Table 3 shows for each strategy the baseline (T0) and post-intervention (T1) performance on the individual QIs and total QI set performance. Three QIs did not reach the minimum sample size to be included in the analyses, because they were not applicable in enough patients: ‘Use fluoroquinolones selectively’, ‘Replace catheter after initiation of treatment’, and ‘Adapt antibiotic dose according to renal function’.

Table 3. Performance on quality indicators before (T0) and after intervention (T1).

For the total sample, several performance scores on the individual QIs improved from baseline to post-intervention measurements. Performing a urine culture increased in both strategy groups (MFS: 72.6% to 80.0%, P = 0.01; CFS: 76.9% to 83.5%, P = 0.008). Tailoring antibiotic treatment on the basis of culture results (74.1% to 80.9%, P = 0.03) and treating UTI in men in accordance with the national guideline (33.2% to 38.3%, P = 0.05) improved significantly in the CFS group. No significant differences were found between the two strategies in the improvement of individual QI scores (Table 3). Correcting for gender, age, urological comorbidity, diabetes and inpatient/outpatient status did not significantly change these results.

The total QI set performance improved significantly in both strategy groups (Table 3), but the difference between the strategies was not statistically significant, neither in the multilevel linear regression analysis nor in the logistic analysis (see Methods).

Determinants of improvement

Compliance with improvement strategies.

In the MFS group, the first feedback report was both disseminated and presented in 33% (6/18) of departments, either disseminated or presented in 44% (8/18), and neither disseminated nor presented in 22% (4/18). For the improvement plan, dissemination and presentation occurred in 28% (5/18), dissemination or presentation in 44% (8/18), and neither in 28% (5/18). For the second feedback report these percentages were 11%, 27% and 62%, respectively. Additional LOC meetings were organized in 28% (5/18) of departments and pocket cards were distributed in 56% (10/18). Additional improvement actions were performed in 39% (7/18) of departments (S1 Table). The maximum compliance sum score (Fig 3) was 12, with a median of 4. Fig 4a demonstrates a non-significant association between compliance with the MFS and total QI set improvement (P = 0.095).

Fig 4. Improvement of the total QI set performance in relation to department’s compliance with improvement strategies.

Difference in performance between T1 and T0, 95% CIs are shaded.

In the CFS group, the first feedback report was both disseminated and presented in 30% (6/20) of departments, either disseminated or presented in 40% (8/20), and neither disseminated nor presented in 30% (6/20). Additional improvement actions were performed in 40% (8/20) of departments (S1 Table). The maximum compliance sum score (Fig 3) was 6, with a median of 1. Fig 4b shows that better compliance with the CFS was significantly associated with total QI set improvement (P = 0.04).

Department’s baseline performance on the total QI set.

For both strategies, a lower department's mean baseline performance on the total QI set was associated with more total QI set improvement. For both strategies, total QI set performance significantly improved at departments with a mean baseline total QI set performance ≤60% (Fig 5a+b).

Fig 5. Improvement of the total QI set performance in relation to department’s mean baseline performance on the total QI set.

Difference in performance between T1 and T0, 95% CIs are shaded. Dots indicate individual department’s score.

Inpatients versus outpatients.

Improvement on the total QI set performance was comparable for inpatients and outpatients in both strategies (MFS: P = 0.81, CFS: P = 0.21) (Table 4).

Table 4. Possible determinants of successful improvement.

Internal medicine versus urology.

In both strategies, improvement on the total QI set performance did not differ significantly between patients treated at Internal Medicine or Urology departments (MFS: P = 0.34, CFS: P = 0.21) (Table 4).


Discussion

In this study, we compared the effectiveness of two strategies to improve antibiotic use in patients with a complicated UTI. We found that performance scores on several individual QIs improved from baseline to post-intervention measurements, but no systematic differences between the two strategies were found. The total QI set performance improved statistically significantly in both strategy groups. Better compliance with the strategies was associated with more improvement on the total set of QIs. A low departmental baseline performance on the total set of QIs was associated with a larger effect of both improvement strategies.

In our study, the multi-faceted strategy was less effective than the original strategy by Schouten et al., which was developed to improve antibiotic use in patients with lower respiratory tract infections [16]. A possible explanation for this difference might be that in our strategy, after effectuating the standardized intervention elements, performance of the optional and additional improvement activities depended strongly on local professionals, whereas in the original strategy the study researcher and project quality improvement officer actively initiated and coordinated these flexible activities. We considered involvement of local professionals to be a crucial element of our strategy, as this has been shown to be successful in achieving sustainable improvement [3,27] and generally contributes to the local performance of improvement strategies. However, we probably overestimated the persuasive power of our strategy to involve local professionals. In this light, evaluation of departments’ compliance with the strategy was relevant [24], as it confirmed suboptimal compliance at many departments and a trend towards more improvement with better compliance. Our results suggest that it is difficult for improvement strategies initiated from outside the hospital to actually engage professionals on the ‘inside’, resulting in a less effective strategy. This emphasizes the need for a locally initiated multidisciplinary team, as recommended in the development of antimicrobial stewardship programs [12].

Concerning the competitive feedback strategy, better compliance with the strategy was significantly related to more improvement, and this relation was stronger than for the MFS. Our compliance evaluation showed that feedback reports were often disseminated and/or presented, and that additional improvement actions were performed in 40% of departments. This contrasts with literature suggesting that physicians are sceptical about public data and consider them of minimal use [21,28,29]. In addition, there is no consistent evidence that the public release of performance data improves care [20]. Possibly, our CFS found the appropriate balance between acting as a stimulus by creating accountability [20] and ensuring enough confidentiality.

Besides compliance with the various elements of the improvement strategies, other variables turned out to be determinants of successful improvement. A low departmental baseline total QI set performance seemed to be associated with a larger effect of both improvement strategies, which is in line with previous studies on improving professional practice [15,23]. However, regression to the mean may be partially responsible for these trends.

Furthermore, subgroup analyses showed that for inpatients and outpatients and for the different departments (Internal Medicine or Urology) the size of improvement was comparable.

The major strength of our study is the large study sample including hospitals from all over the Netherlands and the large patient populations, as required by a cluster-randomized trial with different intervention arms. Second, the design of the study and the collection of data regarding multiple aspects of care that were derived from the medical records of each individual patient contribute to the validity of our results. Finally, we not only assessed effectiveness of two strategies, but also provided insight into what variables determine effectiveness. Future improvement studies can build on this insight.

Our study has a few limitations. First, no control group was included, because the major goal was to compare two improvement strategies; indeed, two recently updated Cochrane reviews state that such head-to-head intervention trials are urgently needed [3,15]. Consequently, differences between baseline and post-intervention measurements might be (partially) explained by the Hawthorne effect, where behaviour changes as a result of being under study. Second, we summarized compliance in a sum score, adding all activities together without taking their more detailed content into account. Nevertheless, we demonstrated a positive relationship between better compliance scores and improvement of the total QI set performance. We earlier demonstrated that better performance on the total set of QIs as such was associated with a shorter length of hospital stay [10].

In conclusion, our results are relevant because they reflect the difficulties encountered in daily practice when introducing currently popular public reporting strategies or multi-faceted stewardship programs. We think that these improvement strategies and programs should be developed and carried out by a locally initiated multidisciplinary team to improve effectiveness. Equally important, our study shows that the effectiveness of an intervention strategy increases when it is more rigorously applied: a comprehensive set of interventions ultimately results in more improvement. Finally, we introduced a competitive feedback strategy that appeared to be less time-consuming and equally effective in improving QI performance scores, compared with a multi-faceted strategy. This competitive feedback strategy should be tested in a broader setting.

Supporting Information

S1 Table. Additional improvement actions, performed by the MFS and CFS departments.



The authors thank all medical specialists and staff of the participating hospitals: in the Urology departments G.A.J. Swinnen (Academic Medical Centre), M. Bekker (Antonius Hospital Sneek), R. Vleeming (BovenIJ Hospital), B. Meijer (Flevo Hospital), F. M. J. A. Froeling (Haga Hospital), J. H. van der Veen (Kennemer Hospital), S. D. Bos (Medical Centre Alkmaar), G. van Andel (Onze Lieve Vrouwe Hospital), P. C. Weijerman (Rijnstate Hospital), H. F. M. Karthaus (Canisius-Wilhelmina Hospital), A. Claessen (Rode Kruis Hospital), P. L. M. Vijverberg (St Antonius Hospital Nieuwegein), E. P. van Haarst (Sint Lucas Andreas Hospital), P. J. M. Kil (St Elisabeth Hospital), J. A. Witjes (Radboud University Nijmegen Medical Centre), M. T. W. T. Lock (University Medical Centre Utrecht), R. J. A. van Moorselaar (VU University Medical Center), Y. Reisman (Amstelland Hospital), and K. C. van Dalen (Diaconessen Hospital); and in the Internal Medicine departments P. Speelman (Academic Medical Centre), G. J. Veldhuis (Antonius Hospital Sneek), M. G. W. Barnas, A. R. Jonkhoff and N. Posthuma (BovenIJ Hospital), J. Branger (Flevo Hospital), R. M. Valentijn and E.F. Schippers (Haga Hospital), R. Soetekouw (Kennemer Hospital), W. Bronsveld and G. van Twillert (Medical Centre Alkmaar), K. Brinkman (Onze Lieve Vrouwe Hospital), E. H. Gisolf (Rijnstate Hospital), A. S. M. Dofferhoff (Canisius-Wilhelmina Hospital), E. D. Kerver (Rode Kruis Hospital), H. S. Biemond-Moeniralam and A. J. Meinders (St Antonius Hospital Nieuwegein), J. Veenstra (Sint Lucas Andreas Hospital), M. E. E. van Kasteren (St Elisabeth Hospital), J. W. M. van der Meer (Radboud University Nijmegen Medical Centre), I. M. Hoepelman and J. J. Oosterheert (University Medical Center Utrecht), M. A. van Agtmael (VU University Medical Center), L. A. Noach (Amstelland Hospital), and P. R. J. Gallas and S. U. C. Sankatsing (Diaconessen Hospital). 
All of these individuals were salaried employees of their respective institutions and no one received additional compensation for their efforts in this study.

The authors also thank R. Akkermans (Scientific Institute for Quality of Healthcare, Radboud University Nijmegen Medical Centre) for his statistical support and Melissa Verkaik, José de Koning and Evelien Bouman for data collection.

Author Contributions

Conceived and designed the experiments: VS MEJLH TMR JMP SEG. Performed the experiments: VS MEJLH JMP SEG. Analyzed the data: VS MEJLH RBG BCO JMP SEG. Wrote the paper: VS MEJLH JMP SEG. Edited, read and approved the manuscript: VS MEJLH RBG TMR BCO JMP SEG.


  1. 1. Foxman B. The epidemiology of urinary tract infection. Nat Rev Urol 2010; 7:653–60. pmid:21139641
  2. 2. Stamm WE, Norrby SR. Urinary tract infections: disease panorama and challenges. J Infect Dis 2001; 183 Suppl 1:S1–S4. pmid:11171002
  3. 3. Davey P, Brown E, Charani E, Fenelon L, Gould IM, Holmes A, et al. Interventions to improve antibiotic prescribing practices for hospital inpatients. Cochrane Database Syst Rev 2013; 4:CD003543. pmid:23633313
  4. 4. Hulscher ME, Grol RP, van der Meer JW. Antibiotic prescribing in hospitals: a social and behavioural scientific approach. Lancet Infect Dis 2010; 10:167–75. pmid:20185095
  5. 5. Zarb P, Amadeo B, Muller A, Drapier N, Vankerckhoven V, Davey P, et al. Identification of targets for quality improvement in antimicrobial prescribing: the web-based ESAC Point Prevalence Survey 2009. J Antimicrob Chemother 2011; 66:443–9. pmid:21084362
  6. 6. Costelloe C, Metcalfe C, Lovering A, Mant D, Hay AD. Effect of antibiotic prescribing in primary care on antimicrobial resistance in individual patients: systematic review and meta-analysis. BMJ 2010; 340:c2096. pmid:20483949
  7. de Kraker ME, Wolkewitz M, Davey PG, Koller W, Berger J, Nagler J, et al. Burden of antimicrobial resistance in European hospitals: excess mortality and length of hospital stay associated with bloodstream infections due to Escherichia coli resistant to third-generation cephalosporins. J Antimicrob Chemother 2011; 66:398–407. pmid:21106563
  8. Fraser A, Paul M, Almanasreh N, Tacconelli E, Frank U, Cauda R, et al. Benefit of appropriate empirical antibiotic treatment: thirty-day mortality and duration of hospital stay. Am J Med 2006; 119:970–6. pmid:17071166
  9. Tacconelli E. Antimicrobial use: risk driver of multidrug resistant microorganisms in healthcare settings. Curr Opin Infect Dis 2009; 22:352–8. pmid:19461514
  10. Spoorenberg V, Hulscher MEJL, Akkermans RP, Prins JM, Geerlings SE. Appropriate antibiotic use for patients with urinary tract infections reduces length of hospital stay. Clin Infect Dis 2014; 58:164–169. pmid:24158412
  11. Hermanides HS, Hulscher ME, Schouten JA, Prins JM, Geerlings SE. Development of quality indicators for the antibiotic treatment of complicated urinary tract infections: a first step to measure and improve care. Clin Infect Dis 2008; 46:703–11. pmid:18230045
  12. Dellit TH, Owens RC, McGowan JE Jr., Gerding DN, Weinstein RA, Burke JP, et al. Infectious Diseases Society of America and the Society for Healthcare Epidemiology of America guidelines for developing an institutional program to enhance antimicrobial stewardship. Clin Infect Dis 2007; 44:159–77. pmid:17173212
  13. Septimus EJ, Owens RC Jr. Need and potential of antimicrobial stewardship in community hospitals. Clin Infect Dis 2011; 53 Suppl 1:S8–S14. pmid:21795728
  14. Ramsay C, Brown E, Hartman G, Davey P. Room for improvement: a systematic review of the quality of evaluations of interventions to improve hospital antibiotic prescribing. J Antimicrob Chemother 2003; 52:764–71. pmid:14563901
  15. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev 2012; 6:CD000259. pmid:22696318
  16. Schouten JA, Hulscher ME, Trap-Liefers J, Akkermans RP, Kullberg BJ, Grol RP, et al. Tailored interventions to improve antibiotic use for lower respiratory tract infections in hospitals: a cluster-randomized, controlled trial. Clin Infect Dis 2007; 44:931–41. pmid:17342644
  17. Rubenstein JN, Schaeffer AJ. Managing complicated urinary tract infections: the urologic view. Infect Dis Clin North Am 2003; 17:333–51. pmid:12848473
  18. Chassin MR, Loeb JM, Schmaltz SP, Wachter RM. Accountability measures—using measurement to promote quality improvement. N Engl J Med 2010; 363:683–8. pmid:20573915
  19. Webometrics Ranking of World Hospitals. Cybermetrics Lab. Available: Accessed March 2015.
  20. Ketelaar NA, Faber MJ, Flottorp S, Rygh LH, Deane KH, Eccles MP. Public release of performance data in changing the behaviour of healthcare consumers, professionals or organisations. Cochrane Database Syst Rev 2011;(11):CD004538. pmid:22071813
  21. Fung CH, Lim YW, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med 2008; 148:111–23. pmid:18195336
  22. Geerlings SE, van den Broek PJ, van Haarst EP, Vleming LJ, van Haaren KM, Janknegt R, et al. Optimisation of the antibiotic policy in the Netherlands. X. The SWAB guideline for antimicrobial treatment of complicated urinary tract infections. Ned Tijdschr Geneeskd 2006; 150:2370–6. pmid:17100128
  23. Huis A, Holleman G, van Achterberg T, Grol R, Schoonhoven L, Hulscher M. Explaining the effects of two different strategies for promoting hand hygiene in hospital nurses: a process evaluation alongside a cluster randomised controlled trial. Implement Sci 2013; 8:41. pmid:23566429
  24. Hulscher ME, Laurant MG, Grol RP. Process evaluation on quality improvement interventions. Qual Saf Health Care 2003; 12:40–6. pmid:12571344
  25. R Core Team (2013). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. Available: Accessed March 2015.
  26. Bates D, Maechler M, Bolker B (2013). lme4: Linear mixed-effects models using S4 classes. R package version 0.999999–2. Available: Accessed March 2015.
  27. Weinberg M, Fuentes JM, Ruiz AI, Lozano FW, Angel E, Gaitan H, et al. Reducing infections among women undergoing cesarean section in Colombia by means of continuous quality improvement methods. Arch Intern Med 2001; 161:2357–65. pmid:11606152
  28. Lindenauer PK, Remus D, Roman S, Rothberg MB, Benjamin EM, Ma A, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med 2007; 356:486–96. pmid:17259444
  29. McKibben L, Horan TC, Tokars JI, Fowler G, Cardo DM, Pearson ML, et al. Guidance on public reporting of healthcare-associated infections: recommendations of the Healthcare Infection Control Practices Advisory Committee. Infect Control Hosp Epidemiol 2005; 26:580–7. pmid:16018435
  30. Schouten JA, Hulscher ME, Natsch S, Kullberg BJ, van der Meer JW, Grol RP. Barriers to optimal antibiotic use for community-acquired pneumonia at hospitals: a qualitative study. Qual Saf Health Care 2007; 16:143–9. pmid:17403764
  31. Majumdar SR, Simpson SH, Marrie TJ. Physician-perceived barriers to adopting a critical pathway for community-acquired pneumonia. Jt Comm J Qual Saf 2004; 30:387–95. pmid:15279503
  32. Cabana MD, Rand CS, Powe NR, Wu AW, Wilson MH, Abboud PA, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA 1999; 282:1458–65. pmid:10535437
  33. Flottorp S, Oxman AD. Identifying barriers and tailoring interventions to improve the management of urinary tract infections and sore throat: a pragmatic study using qualitative methods. BMC Health Serv Res 2003; 3:3. pmid:12622873