Effects of quality-based procedure hospital funding reform in Ontario, Canada: An interrupted time series study

Background: The Government of Ontario, Canada, announced hospital funding reforms in 2011, including Quality-based Procedures (QBPs) involving pre-set funds for managing patients with specific diagnoses/procedures. A key goal was to improve quality of care across the jurisdiction.

Methods: Interrupted time series analyses evaluated the policy change, focusing on four QBPs (congestive heart failure, hip fracture surgery, pneumonia, prostate cancer surgery), in patients hospitalized 2010–2017. Outcomes included return to hospital or death within 30 days, acute length of stay (LOS), volume of admissions, and patient characteristics.

Results: At 2 years post-QBPs, the percentage of hip fracture patients who returned to hospital or died was 3.13% higher in absolute terms (95% CI: 0.37% to 5.89%) than if QBPs had not been introduced. There were no other statistically significant changes for return to hospital or death. For LOS, the only statistically significant change was an increase for prostate cancer surgery of 0.33 days (95% CI: 0.07 to 0.59). Volume increased for congestive heart failure admissions by 80 patients (95% CI: 2 to 159) and decreased for hip fracture surgery by 138 patients (95% CI: -183 to -93) but did not change for pneumonia or prostate cancer surgery. The percentage of patients who lived in the lowest neighborhood income quintile increased slightly for those diagnosed with congestive heart failure (1.89%; 95% CI: 0.51% to 3.27%) and decreased for those who underwent prostate cancer surgery (-2.08%; 95% CI: -3.74% to -0.43%).

Interpretation: This policy initiative involving a change to hospital funding for certain conditions was not associated with substantial, jurisdictional-level changes in access or quality.


Introduction
Policymakers worldwide are experimenting with hospital funding models to improve system performance [1][2][3]. Although such reforms may contribute to improvements in resource allocation and patient outcomes, they may also produce unintended consequences [4][5][6].
In April 2011, the Government of Ontario, Canada, announced a multi-year, phased-in implementation of "patient-based" hospital funding [7]. These hospital funding reforms reduced reliance on global hospital budgets (i.e., a fixed annual amount based largely on historical spending) by introducing two new components to hospital funding: the Health Based Allocation Model (HBAM), organizational-level funding based on service and patient characteristics; and Quality-Based Procedures (QBPs), a novel approach to hospital funding sharing some characteristics with activity-based funding (ABF) [8]. QBPs consist of pre-set reimbursement rates for managing patients with specific diagnoses or those undergoing specific procedures, coupled with best-practice clinical handbooks for each QBP [7]. Between April 2012 and April 2016, 19 priority QBPs were implemented for a range of medical and surgical conditions [9].
The stated goal of QBPs was to "facilitate adoption of best clinical evidence-informed practices" and appropriately reduce "variation in costs and practice across the province while improving outcomes" [10]. The provincial government's rationale and assumed mechanism of action for QBPs was as follows: "QBPs are specific clusters of patient services that offer opportunities for health care providers to share best practices and will allow the system to provide even better quality care, while increasing system efficiencies. By promoting the adoption of clinical evidence-informed practices, clinical practice variation should be reduced across the province while improving patient outcomes to ensure that patients receive the right care, in the right place, at the right time" [11].
Like all "patient-based" activity-based funding systems, QBPs established a prospective payment rate based on service type and volume. Funding was carved out of hospitals' global budgets and then reallocated to hospitals at the start of the relevant fiscal year as a fixed fee and fixed volume for each QBP procedure or diagnosis. The fixed volume of QBP-funded cases per hospital was based on historical volume levels at each hospital. The fixed fee was adjusted for each hospital based on its unique case-mix index (CMI) to account for the complexity of its overall patient population.
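The allocation mechanism described above amounts to a fixed, case-mix-adjusted price multiplied by a fixed volume. A minimal sketch follows, with entirely hypothetical figures and a deliberately simplified adjustment; Ontario's actual rate-setting methodology is more elaborate than a single multiplication:

```python
# Illustrative sketch only: the function name, the figures, and the simple
# multiplicative CMI adjustment are assumptions for exposition, not the
# Ministry's actual rate-setting formula.

def qbp_allocation(base_rate: float, case_mix_index: float, fixed_volume: int) -> float:
    """Prospective QBP funding for one diagnosis/procedure at one hospital:
    a fixed fee (base rate adjusted by the hospital's case-mix index)
    multiplied by a fixed volume of cases set from historical levels."""
    adjusted_fee = base_rate * case_mix_index
    return adjusted_fee * fixed_volume

# A hypothetical hospital: $9,000 base rate per episode, CMI of 1.1,
# and 200 historically funded cases.
allocation = qbp_allocation(9000.0, 1.1, 200)
```

Because both the fee and the volume are fixed in advance, a hospital treating more than its funded volume receives no additional QBP revenue for the extra cases, which is one way QBPs differ from open-ended activity-based funding.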
However, QBPs differ from most ABF reforms in that funding applies only to a very limited set of diagnoses and procedures, and they rely on the use of handbooks to encourage incorporation of best practices [12][13][14][15]. To create these handbooks for each QBP, the Ministry of Health and Long Term Care, in collaboration with partners such as Health Quality Ontario, Cancer Care Ontario, and the Cardiac Care Network, established expert advisory panels with leading clinicians, scientists, and patients. They defined episodes of care for selected diagnoses or procedures, developed best practice recommendations for patient care, and suggested indicators to monitor for ongoing quality improvement. The resulting QBP Clinical Handbooks serve as a compendium of evidence and clinical consensus [11].

PLOS ONE
There is no mechanism in place to enforce or measure adherence to the clinical pathways in the handbooks; hospitals are paid via QBPs whether or not they follow the pathways, but the intent was that following the pathways would enable hospitals to deliver care for the amount paid by QBPs [7,16].
To date, there has been no peer-reviewed evaluation of the overall effects of QBPs on key indicators of patient care. We took advantage of Ontario's data infrastructure to evaluate this new hospital payment model, focusing on system-level changes in measures of quality of care, access to care, and hospital coding behaviour for four QBPs, spanning planned and unplanned surgical procedures and medical diagnoses, selected a priori by our research team: (1) congestive heart failure, (2) hip fracture, (3) pneumonia, and (4) prostate cancer surgery.

Setting, context, and design
Hospital-based care in Ontario, Canada is publicly funded. Ontario's 141 publicly funded hospital corporations comprise 262 hospital sites [17], of which a majority receive QBP funding. Small hospitals (n = 55; typically fewer than 2,700 inpatient or day surgery cases per year in two of the last three years) and specialty hospitals (such as those for mental health, children, chronic care, and rehabilitation) primarily receive funding through global budgets and have implemented only select QBPs (e.g., tonsillectomy) depending on their specific patient populations (e.g., children). These hospitals were excluded from our analyses because they perform a very small number, if any, of the diagnoses and/or procedures performed by QBP-funded hospitals [7].
Using population-based interrupted time series (ITS) analyses, based on a dated, prespecified protocol and dataset creation plans held at ICES, we evaluated patients admitted to Ontario hospitals for four pre-specified QBPs. We selected these QBPs with input from health system decision makers and hospital leaders to represent a range of acute versus elective and surgical versus medical issues. This selection was further informed by our prior qualitative work, which identified sources of potential variation in the extent to which, and the ways in which, hospitals responded to QBPs [7,16]. We chose a priori to incorporate a 3-month transition period to allow time for any clinical changes in response to the funding model change to be implemented. ITS is a robust quasi-experimental design that can be used to evaluate policy changes at the whole-system and population level when randomization is infeasible [18-22]. The study interval depended on data availability and varied by QBP.

Ethics approval
The use of data in this project was authorized under section 45 of Ontario's Personal Health Information Protection Act, which does not require review by a research ethics board.

Study patients
We separately identified patients for each QBP cohort using inclusion and exclusion criteria detailed in each clinical handbook [12][13][14][15]. In short, cohorts for congestive heart failure and pneumonia were defined using specific qualifying hospital discharge diagnoses; hip fracture, using a combination of discharge diagnoses and procedures; and prostate cancer surgery, using specific procedure codes. We considered only admissions to hospitals that received funding for one of the QBPs under evaluation. Episodes of care had to be separated by at least 30 days (to exclude 30-day hospital readmissions). We excluded patients without a valid Ontario Health Insurance Plan (OHIP) number who could not be accurately followed in our data sets, and patients with missing demographic information (<0.1%).
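The episode-separation rule above can be sketched as follows. This is a hypothetical simplification that considers only admission dates for a single patient and ignores the handbooks' full inclusion/exclusion criteria:

```python
from datetime import date, timedelta

def qualifying_episodes(admissions: list[date], min_gap_days: int = 30) -> list[date]:
    """Keep only admissions separated by at least `min_gap_days` from the
    previous qualifying admission, so that 30-day readmissions are excluded
    from the cohort. Assumes one patient's admission dates; a simplified
    sketch of the cohort logic, not the full handbook criteria."""
    episodes: list[date] = []
    for admit in sorted(admissions):
        if not episodes or (admit - episodes[-1]) >= timedelta(days=min_gap_days):
            episodes.append(admit)
    return episodes

admits = [date(2013, 1, 5), date(2013, 1, 20), date(2013, 3, 1)]
# The Jan 20 admission falls within 30 days of Jan 5, so only the
# Jan 5 and Mar 1 admissions qualify as separate episodes.
```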

Data sources and quality measures
We used multiple linked health administrative databases to describe study patients and ascertain outcomes. Patient demographic information and vital status were obtained from the Registered Persons Database. Hospital diagnoses and procedures were obtained from the Canadian Institute for Health Information (CIHI) Discharge Abstract Database (CIHI-DAD). Emergency department admissions were captured using CIHI's National Ambulatory Care Reporting System (NACRS).
We described patients according to age at hospital admission, sex, neighbourhood income quintile, rurality of residence, Deyo-Charlson Comorbidity Index [23], and number of emergency department visits and hospitalization days in the year preceding qualifying admission.
Outcomes were assessed for each QBP in three domains:

1. Quality of care: i) death or return to hospital (i.e., unplanned presentation to emergency department or hospital admission within thirty days, among patients discharged alive and not transferred); ii) mean acute hospital length of stay (LOS); and iii) mean total LOS for the entire episode of care, including transfers.

2. Access to care: i) total volume of admissions; ii) proportion of patients aged 65 years or older; and iii) proportion of patients living in the lowest neighborhood income quintile.

3. Coding behaviour: hospital discharge coding behavior as assessed by mean HBAM Inpatient Group (HIG) resource intensity weight. The HIG weight is the Ontario-specific acute inpatient grouping methodology used to account for patients' clinical and resource-utilization characteristics [24].
We selected these measures because policymakers hoped that QBPs would reduce length of stay in settings where it was longer than optimal without decreasing quality (i.e., without increasing deaths, return to hospital, or inappropriate coding). The expectation was also that shorter lengths of stay, as typically seen in other countries implementing ABF-like reforms, would facilitate greater throughput to increase total patient volume across the system, and that access to care would not be compromised by inequity across age and income [8,25]. Socioeconomic status (SES) may contribute to inequalities in access to care, so we used neighborhood income quintile as an indicator of SES [26][27][28]. Prior research has shown that financial incentives associated with hospital funding reforms, such as QBPs, may alter coding behaviour to maximize reimbursement [29][30][31][32]. HIG weight is a measure of coding behaviour because it incorporates both case mix and the resource intensity of each patient care episode, adjusted for patient characteristics. If upcoding were occurring, we would expect to see changes in HIG weight. Thus, to the extent that changes in HIG weight do not represent true abrupt changes in patient case mix, the HIG weight is one potential measure by which to evaluate effects on coding.

Statistical analysis
For each outcome, we calculated monthly summaries aggregated across hospitals (percent, mean, or raw count) and plotted them over time. For each QBP, we excluded three months of data following the start of the funding change to account for a policy "transition" period [33]. We chose the three-month transition a priori, postulating that it would take a fiscal quarter for any policy effects to occur. We accounted for seasonality by decomposing the data into trend, seasonal, and random components, and then removing the seasonal component [34].
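The seasonal adjustment step can be illustrated with a minimal additive sketch. The study itself used R's decomposition tools; this assumption-laden, standard-library-only version estimates each season's mean deviation from the overall mean and subtracts it:

```python
# A minimal additive seasonal adjustment, sketching the approach described
# above. The function name and method are illustrative, not the study's
# actual R decomposition code.

def seasonally_adjust(values: list[float], period: int = 12) -> list[float]:
    """Estimate each season's average deviation from the overall mean and
    subtract it, leaving trend plus random components (simple additive
    model; assumes the series length is a multiple of `period`)."""
    overall = sum(values) / len(values)
    seasonal = []
    for s in range(period):
        obs = values[s::period]  # all observations for this season
        seasonal.append(sum(obs) / len(obs) - overall)
    return [v - seasonal[i % period] for i, v in enumerate(values)]
```

For monthly hospital admission counts, `period=12` removes the within-year pattern (e.g., winter pneumonia peaks) before the regression is fitted.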
We used segmented linear regression analysis of the seasonally adjusted data. We used the forecast library in the R statistical software package to fit the model, with an automated stepwise selection procedure based on the Akaike Information Criterion (AIC) to include autoregressive terms accounting for serial correlation [35,36]. We used visual inspection of the observed and fitted data, as well as residual plots, to verify goodness of fit. Our model included fixed terms for the pre-policy intercept and slope, the intercept change at the time of the policy (immediate difference in level following implementation of QBPs, accounting for the postulated three-month transition period), and the post-policy trend change (difference in slope following implementation of QBPs). For the main results, we expressed the effect of QBPs on each outcome as the counterfactual difference after 2 years, that is, the difference between the observed rate and the rate that would have occurred had QBPs not been implemented. This was estimated as the difference at 2 years post-implementation between the fitted post-implementation rates and the projected rates estimated from the pre-intervention intercept and slope. All analyses were performed at ICES (www.ices.on.ca) using linked, coded data. We used SAS v. 9.3 to prepare the monthly time series data for each outcome measure, and R version 3.4.4 to perform the regression analyses (nlme, car) and to compute and plot the 95% confidence intervals for the counterfactual [37].
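The segmented regression and counterfactual calculation can be sketched in outline. This ordinary-least-squares version omits the autoregressive error terms and confidence intervals that the study's R analysis included; the function names and synthetic example are illustrative assumptions:

```python
# Sketch of a segmented ITS regression: y_t = b0 + b1*t + b2*after_t
# + b3*after_t*(t - policy_month), where after_t indicates post-policy
# months. Pure-Python OLS via the normal equations; no AR terms.

def solve(A, b):
    """Solve the square linear system A x = b by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_its(y, policy_month):
    """OLS fit returning [pre-intercept, pre-slope, level change, slope change]."""
    X = [[1.0, float(t), float(t >= policy_month), float(max(0, t - policy_month))]
         for t in range(len(y))]
    XtX = [[sum(row[r] * row[c] for row in X) for c in range(4)] for r in range(4)]
    Xty = [sum(X[i][r] * y[i] for i in range(len(y))) for r in range(4)]
    return solve(XtX, Xty)

def counterfactual_difference(coef, policy_month, horizon):
    """Fitted minus projected (pre-trend extrapolated) outcome `horizon`
    months after the policy: the level change plus the accumulated slope change."""
    b0, b1, b2, b3 = coef
    return b2 + b3 * horizon
```

With noiseless synthetic data containing a known jump and slope change, `fit_its` recovers the coefficients and `counterfactual_difference(coef, policy_month, 24)` gives the 2-year effect, mirroring how the paper reports its main results.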

Patient characteristics
S1-S4 Tables describe the overall characteristics for each cohort. The patient characteristics remained largely unchanged throughout the study period.

Results from segmented regression analysis
The counterfactual estimates from the segmented regression analyses, representing the effect of QBPs on outcomes at 2 years post-implementation, are presented in Table 1. Figs 1-3 and S1-S4 Figs present the observed data and fitted values from the segmented regression analyses. The full results from the segmented regression analyses are presented in S5 and S6 Tables.
Quality of care. At 2 years post-implementation, the estimated percentage of hip fracture patients who returned to hospital or died within 30 days was higher by an absolute 3.13% (95% CI: 0.37% to 5.89%) than if QBPs had not been introduced. There was no change in LOS for hip fracture patients in comparison to the counterfactual (Table 1). For prostate cancer surgery patients, the increase in mean acute LOS over the counterfactual was 0.33 days (95% CI: 0.07 to 0.59) and in mean total LOS, 0.34 days (95% CI: 0.06 to 0.61). There were no other statistically significant changes observed for the LOS outcomes (Fig 2 and S1 Fig).

Access to care. At 2 years post-implementation, the volume of patients admitted with congestive heart failure was higher by 80 patients (95% CI: 2 to 159) than if QBPs had not been introduced (Table 1). The volume of hip fracture admissions was lower by 138 patients (95% CI: -183 to -93). The percentage of admitted patients living in the lowest income quintile was higher (1.89%; 95% CI: 0.51% to 3.27%) for those diagnosed with congestive heart failure and lower (-2.08%; 95% CI: -3.74% to -0.43%) for those who underwent prostate cancer surgery.
Hospital coding behaviour. We observed no statistically significant changes in mean HIG weight for any of the four cohorts (S4 Fig).

Summary of findings
For the seven outcomes across the four diagnoses and/or procedures we studied, we compared observed findings against the expected counterfactual in the domains of quality, access, and coding behaviour, and found an inconsistent and generally weak response to the QBP funding reform at the system level. In general, QBPs did not appear to change prevailing trends in return to hospital or death. Contrary to expectations, LOS increased slightly for prostate cancer surgery. Counterintuitively, despite no change in LOS, QBP funding was associated with a decrease in the overall volume of admissions for hip fracture surgery and a small absolute increase (of 3%) in the monthly percentage of hip fracture patients who returned to hospital or died within 30 days of discharge. We observed small changes (<2%) in the percentage of patients residing in the lowest neighborhood income quintile: an increase for those admitted with congestive heart failure and a decrease for those undergoing prostate cancer surgery.

Explanation of findings
To our knowledge, there is no published quantitative research on the broad effects of the implementation of the QBP funding reform policy. Prior qualitative analyses of QBPs have revealed challenges associated with implementation of this complex hospital funding reform policy [7,16,38].
Previous studies have found mixed results following other types of hospital funding reform [25,39]. For example, an evaluation of a limited experiment with activity-based funding in British Columbia, Canada, showed small decreases in volume, small increases in patients' length of stay, and no changes in measures of quality (i.e., unplanned readmissions and in-hospital mortality) [25]. Conversely, a systematic review of ABF found that the transition to activity-based funding initially decreased length of stay in the US and internationally, and also identified important policy- and clinically-relevant changes, including substantial increases in admissions to post-acute care following hospitalization [8].

Unintended consequences typically associated with ABF-like reforms include, for example, patients being discharged "sicker and quicker" to post-acute care facilities or home, and "upcoding", which may be appropriate if it represents more accurate coding, or inappropriate [8]. We did not see the decrease in LOS that might have been precipitated by accelerated discharge. Nor did we observe changes in coding behaviour for the QBPs studied.
The slight variation we observed in effects, some positive and others negative, may be partly explained by our prior qualitative work, in which we observed variation in response to the reform related to the complexity of changes required, internal capacity for organizational change, and the availability and appropriateness of supports to manage change [16]. It may also simply represent noise rather than signal. Our goal in this paper was not to understand quantitatively the variation in responsiveness by hospital, but to evaluate the system-level effects of the jurisdictional policy change, as this is the level at which 'success' of the policy reform must ultimately be judged.
The lack of large-scale meaningful changes in association with Ontario's shift to QBP funding is perhaps not surprising. Funding reforms may not be necessary or effective when desirable changes are already occurring. For example, hospitals were already under long-standing pressure to reduce length of stay and may have reached a floor, which may explain why further financial pressure from QBPs had little effect. Similarly, hospitals may also have lacked effective incentives or supports to address readmissions, since, unlike activity-based funding reforms in other countries, Ontario's QBP funding reform neither financially disincentivized return to hospital nor linked funding to care outcomes.


Limitations
Our study has several important limitations that are common to observational studies of policy changes. Teasing apart the effect of QBPs in the presence of multiple system-level changes is challenging. First, other initiatives to improve patient care and/or control costs may have overlapped with the timing of QBP implementation. Specific initiatives that we are aware of included passage in 2010 of the Excellent Care for All Act (ECFA) [40], the introduction of Health Based Allocation Model hospital funding reforms in April 2012 [24], and the introduction of Community Health Links in December 2012 [41] (S5-S7 Figs, S7 Table). Visual inspection of data points around the timing of introduction of these initiatives, however, suggests that they are unlikely to have had a major impact on the outcomes studied in our analyses. A possible exception is the introduction of Community Health Links in December 2012: because its timing was close to that of the QBP for congestive heart failure (CHF) in April 2013, it is difficult to independently assess the effect of the QBPs for this condition. Undetected confounding is always possible in any uncontrolled study. Policies aimed at improving health care are constantly being tinkered with, which may influence any particular intervention, such as QBPs, in ways not easily detected. Second, given the nearly ubiquitous implementation of QBPs in Ontario, we did not identify suitable contemporaneous comparators in this study, which could have strengthened the inferences drawn. Although one way to optimize an ITS is to add negative controls, we did not do so because the goal of the reform was to effect broad change across the entire system, leaving only a small number of unique and, therefore, non-comparable hospitals exempted from implementing QBPs (i.e., very small hospitals with few beds and/or those with unique targeted populations). Third, because our analyses were restricted to QBP-funded hospitals rather than all hospitals, we cannot be certain that our results are generalizable to the whole system; however, the proportion of QBP procedures occurring outside of QBP-funded hospitals is low (<11%). Fourth, examining a broader range of outcomes (e.g., the extent to which patient care is aligned with evidence-based care processes described in QBP clinical pathways, or reductions in inter-hospital variation in care, cost, and wait times) might reveal different effects of QBPs on patient care and outcomes, care providers, and the health care system as a whole. Fifth, QBPs are unique to Ontario, making generalizability to other jurisdictions (both within and outside Canada) difficult to assess.
Although QBPs are somewhat similar in design to ABF reforms elsewhere, critical differences include the absence of financial disincentives for readmission and a smaller, less ubiquitous funding scope limited to fewer priority diagnoses and procedures. This study is nevertheless relevant to health system funding reforms that attempt to improve quality while also cutting costs, though contextual factors that influence the linkage between quality and cost are difficult to capture in relatively simple funding reforms [42]. Sixth, we did not assess how QBPs affected hospitals' finances, so we cannot make inferences about whether increases or decreases in hospitals' budgets affected patient care and/or outcomes for the QBPs we evaluated. Seventh, there may be benefits or harms of QBPs that we did not measure, or other policy objectives that may have been met, such as those related to total cost per episode of care or cost to the system overall. We were careful to limit our conclusions to only the QBPs and outcomes we evaluated to avoid overreach.

Conclusion
We found mixed and generally very small effects on quality of care, access to care, and coding behaviour, across the four QBPs we studied. We speculate that challenges with implementing the best practice pathways featured in the QBP handbooks, together with progressive controls on hospital expenditures, and a worsening overall fiscal picture in Ontario coincident with QBP implementation, may have led to inconsistent and weak signals. Further experimentation with funding reform as a potential mechanism to improve outcomes might yield greater impact if focused on specific diagnoses and procedures in which suboptimal process or outcome measures are well-established and for which efforts to improve outcomes by other means have been inadequate.