Abstract
Introduction
Readmissions after an acute care hospitalization are relatively common, costly to the health care system, and associated with a significant burden for patients. As one way to reduce costs and simultaneously improve quality of care, hospital readmissions have received increasing interest from policy makers. It is only relatively recently that strategies were developed with the specific aim of reducing unplanned readmissions using prediction models to identify patients at risk. Epic’s Risk of Unplanned Readmission model promises superior performance. However, it has only been validated for the US setting. Therefore, the main objective of this study is to externally validate Epic’s Risk of Unplanned Readmission model and to compare it to the internationally widely used LACE+ index and the SQLape® tool, a Swiss national quality of care indicator.
Methods
A monocentric, retrospective, diagnostic cohort study was conducted. The study included inpatients who were discharged between 1 January 2018 and 31 December 2019 from the Lucerne Cantonal Hospital, a tertiary-care provider in Central Switzerland. The study endpoint was an unplanned 30-day readmission. Models were replicated using the original intercept and beta coefficients as reported; otherwise, score generators provided by the developers were used. For external validation, the discrimination of the scores under investigation was assessed by calculating the area under the receiver operating characteristic curve (AUC). Calibration was assessed with the Hosmer-Lemeshow χ2 goodness-of-fit test. This report adheres to the TRIPOD statement for the reporting of prediction models.
Citation: Hwang AB, Schuepfer G, Pietrini M, Boes S (2021) External validation of EPIC’s Risk of Unplanned Readmission model, the LACE+ index and SQLape as predictors of unplanned hospital readmissions: A monocentric, retrospective, diagnostic cohort study in Switzerland. PLoS ONE 16(11): e0258338. https://doi.org/10.1371/journal.pone.0258338
Editor: Michele Provenzano, Magna Graecia University of Catanzaro: Universita degli Studi Magna Graecia di Catanzaro, ITALY
Received: January 16, 2021; Accepted: September 24, 2021; Published: November 12, 2021
Copyright: © 2021 Hwang et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: A minimal anonymized data set necessary to replicate the study findings was uploaded to the platform DRYAD (https://doi.org/10.5061/dryad.70rxwdbxw).
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Background
Readmissions after acute care hospitalization are relatively common, costly to the health care system and associated with a significant burden for patients [1–5]. A readmission increases the risk of dependence and functional or psychosocial decline [5]. Moreover, readmission increases the risk of decompensation of other comorbid conditions, thus increasing the frailty of elderly patients [6].
The belief that readmission rates are a valid indicator of quality of care has led to their inclusion in hospital quality surveillance [6, 7]. In December 2020, the Swiss National Association for Quality Development in Hospitals and Clinics (ANQ) reported its most recent findings. Accordingly, based on 2018 figures, the number of Swiss hospitals reporting more readmissions than expected according to their patient mix declined from a high in 2016. In total, 26 out of 193 hospitals reported rates (observed/expected) outside the norm, i.e., significantly higher than 1 [8].
In the last few years, the observed increase in costs has posed major challenges for the healthcare system. Healthcare costs in Switzerland have risen by a third within the last decade [9]. As one way to reduce costs and simultaneously improve quality of care, unplanned hospital readmissions have received increasing interest from policy makers. It is only relatively recently that policies were developed with the specific aim of reducing unplanned readmissions. In Switzerland, the readmission policy involves financial penalties, i.e., that patient records of the first admission and the relevant readmission are merged into a single case if certain criteria are met. Consequently, hospitals receive only one DRG-based payment for both admissions [10]. As a result, although some readmissions cannot be avoided and the proportion of potentially avoidable readmissions (PARAs) remains debatable, health care organizations invest considerable resources in efforts to reduce unplanned hospital readmissions [11–13].
To most efficiently reduce unplanned readmissions, hospitals need to target effective discharge and post-discharge interventions at those who need them the most. One of the more recent strategies is the application of prediction models. As systematic reviews have shown, there are many models aimed at identifying those at greater risk of readmission [14, 15]. The majority include readily available predictors such as demographic and administrative data, as well as comorbidities, laboratory results, and medications [14]. Among these models, the Epic Risk of Unplanned Readmission model, developed in 2015 for the U.S. acute care hospital setting, promises superior calibration and discriminatory ability. The model was developed by Epic Systems Corporation based on data from 26 Epic community member hospitals, including more than 275,000 inpatient hospital admission encounters, to determine a patient’s risk of unplanned readmission within 30 days of being discharged from an index admission.
Rationale
Rising awareness of the importance of electronic health records (EHRs) for enhancing the efficiency and quality of patient care has augmented the global electronic health records industry’s growth. In fall 2019, the Lucerne Cantonal Hospital rolled out Epic’s EHR system as the first hospital in a German-speaking country. This created the conditions for applying more complex prediction models (in terms of the number and type of included predictors). With the intention of routine application, the Epic Risk of Unplanned Readmission model was externally validated. Although the Epic model was developed for the acute care hospital setting, variations in demographic features, disease prevalence, and differences in test conditions (e.g., defining criteria of relevant readmissions, time frame of measurement) necessitated external validation prior to routine application in the Swiss acute care hospital setting. External validation means applying the model with its predictors and assigned weights, as estimated from the development study, to a new population; measuring the predictor and outcome values; and quantifying the model’s predictive performance (calibration and discrimination) [16].
For comparison, the SQLape® tool (Striving for Quality Level and Analyzing of Patient Expenditures), a Swiss national quality of care indicator that has become a quasi-standard in Switzerland and allows the prediction of hospital readmissions, was included [17]. Based on a systematic review of models to predict unplanned hospital readmissions from 2016 [14] and a literature search on PubMed for validation studies published after 2015, the LACE+ model was included as the second comparative model. The LACE+ score is easily producible and has been analyzed in various prospective and retrospective studies, including studies with medical inpatient cohorts from Swiss tertiary care providers [18–22].
Methods
Design
This monocentric, retrospective, diagnostic cohort study included inpatient hospitalization cases from the Lucerne Cantonal Hospital (LUKS), which is the largest tertiary healthcare provider in Central Switzerland, with a beneficiary population of ~800,000. The LUKS is a three-site, 800-bed hospital with all medical and surgical specialties present, four Level 3 intensive care units, and four 24 h/7 days per week emergency departments (EDs). This study was approved by the Ethics Committee Northwest- & Central Switzerland (October 7, 2019, project-ID 2019–01861). Informed consent was not obtained because this study was conducted as a quality control project that used anonymized data. This study was conducted according to the principles of the Declaration of Helsinki.
Participants
All inpatients between one and 100 years old who were discharged between 1 January 2018 and 31 December 2018 were included as Cohort A. Inpatients discharged between 23 September 2019 and 31 December 2019 were included as Cohort B. Inpatients were excluded as follows: (a) admissions/transfers from another psychiatric, rehabilitative, or acute care ward of the same institution; (b) discharge destinations other than the patient’s home, considered treatment continuation; (c) foreign or unknown residence; and (d) deceased before discharge. For individuals with multiple hospitalizations, only the first hospital stay was included in the analysis.
Outcome
The study outcome was unplanned 30-day readmission to the same hospital. An unplanned readmission was defined as an urgent readmission, i.e., not scheduled in advance and requiring treatment within 12 hours [23]. No more than one readmission for each discharge was considered.
Prediction models
The following paragraphs provide a brief description of the prediction models evaluated in this study. It should be noted that the Epic Risk of Unplanned Readmission and the SQLape® model are commercially distributed products. Due to copyright restrictions, not all information about the prediction models required to replicate this validation study was disclosed in sufficient detail; replicating this study requires licensing.
Epic Risk of Unplanned Readmission model: The Epic Risk of Unplanned Readmission model is a logistic regression model that predicts the risk of unplanned readmissions within 30 days of the index hospital discharge date. An unplanned readmission was defined by the Centers for Medicare & Medicaid Services (CMS) in the 2015 Measure Information About the 30-Day All-Cause Hospital Readmission Measure, Calculated For the Value-Based Payment Modifier Program [24]. Adaptations were made and included patients aged between 1 and 100 years at the time of admission, patients of any payer, and hospital encounters for which patients left against medical advice. The development data set included more than 275,000 hospital inpatient encounters from 26 different hospitals. Three of these hospitals were large academic medical centers (1000+ beds each), while the others were either smaller regional or community hospitals. All included sites were chosen from very distinct geographic regions in the US to ensure as diverse a population as possible. Selection by specialties/medical disciplines was not applied.
After feature selection using a least absolute shrinkage and selection operator (LASSO) penalty, the final model consisted of 27 predictive parameters [25]. The internal and external validation of the model showed acceptable discrimination at predicting unplanned readmission within 30 days post discharge, with an area under the receiver operating characteristic curve (AUC of the ROC curve) ranging from 0.69 to 0.74 [26].
LACE+: The LACE+ risk index is a point score derived from a logistic regression model that was developed to predict the risk of 30-day postdischarge death or urgent readmission [27, 28]. It was developed and internally validated based on a large, randomly selected, population-based sample from Ontario, Canada in 2012. The development sample excluded patients who underwent same-day surgeries and psychiatric and obstetric admissions. Backward feature selection was performed (with a significance level of α = 0.05) and resulted in 11 significant parameters [29]. The final point score ranges from -15 to 114, and a score greater than 90 is considered to indicate a high risk for urgent readmission or death within 30 days after discharge. The internal validation of the 11-item index, excluding the Canada-specific case-mix group (CMG) score, showed acceptable discrimination with an AUC of 0.743 for urgent readmission only but poor calibration (H-L statistic 58.93, p < 0.0001) [27].
SQLape®: The SQLape® model (Striving for Quality Level and Analyzing of Patient Expenditures), a computerized validated algorithm, was developed in 2002 in Switzerland [6]. The SQLape® model predicts potentially avoidable hospital readmissions within 30 days after hospital discharge. An unplanned readmission was defined according to the “Algorithm for the Identification of Potentially Avoidable Rehospitalizations” [8]. The development sample consisted of 131,809 inpatient stays from 49 Swiss acute care hospitals (including the Lucerne Cantonal Hospital), of which 12 hospitals were located in the French-speaking part of Switzerland. Among others, healthy newborns, residents outside of Switzerland, and elective surgical patients who could usually receive same-day surgery were excluded. After backward elimination was performed, the Poisson regression model consisted of six variable groups. Of the 131,809 inpatient stays mentioned above, 66,069 were used for internal validation. Discrimination was measured by Harrell’s C statistic, which is also referred to as the estimated area under the receiver operating characteristics (ROC) curve (AUC). A value of 0.72 showed acceptable discriminative ability [6].
Descriptive and predictive variables
The following data were retrospectively extracted from an enterprise data repository that integrates routinely collected information from multiple clinical information (CIS) and enterprise resource planning systems (ERPs):
- Socio-demographic data: Date of birth, gender, nationality, postal code, type of medical insurance
- Hospital administrative data: Patient origin (home, nursing home, or other institution), admission date, length of stay (LOS), admission type (elective, urgent, etc.), discharge date, discharge destination (home, nursing home, or other institution), discharge decision (initiated by the physician, initiated by the patient, etc.), cost weight, diagnostic-related group (DRG), primary diagnosis, procedure codes, readmission date, readmission reason, final discharge date, major diagnostic category, admission and discharge medical specialty, and admission and discharge ward
- Clinical data: Charlson Comorbidity Index (CCI), imaging orders, electrocardiogram, specific laboratory results, and medications (clinical data were only extracted for Cohort A)
- Risk of Unplanned Readmission score: 8 a.m. and 12 a.m. scores (scores were only extracted for Cohort B)
All prediction model input parameters (predictors) are detailed in Table 1 (Model Predictors). While the SQLape® model was developed within the Swiss context, the LACE+ and Epic models were designed for and trained on patient populations outside of Switzerland. For this reason, in preparation for model validation and implementation, certain model input parameters were adapted to alleviate setting-specific discrepancies. In regard to the LACE+ model, only minor adaptations were carried out. First, the case-mix group (CMG) score was excluded because CMG scores can only be calculated for hospital admissions inside Canada (CMGs aggregate acute care inpatients with similar clinical and resource-utilization characteristics) [27]. Second, all codes from the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) used by the Charlson Comorbidity Index (CCI) [30] to quantify patient burden of disease were mapped onto ICD-10 codes of the German Modification (GM) version, which is used in Switzerland. This was done by a clinical expert with extensive working experience in medical coding. Epic’s Risk of Unplanned Readmission model required much more comprehensive adjustments due to the higher number and nature of its predictors. Analogous to LACE+, all ICD-10-CM codes were mapped onto ICD-10-GM codes (for the Charlson Comorbidity Index and diagnoses). The mapping table is available from the corresponding author upon request. Five medication subclasses contribute to the Risk of Unplanned Readmission score. An experienced pharmacist developed Swiss-specific therapeutic subgroups of the Anatomical Therapeutic Chemical Classification System (ATC) based on the original specifications and with regard to local regulations and classifications. All subgroups and the codes included therein are listed in the S1 Appendix.
With respect to the biological data, which are captured differently depending on the site’s laboratory system, a laboratory analyst matched all required input parameters with the local system’s laboratory components (lab components and applied reference ranges are detailed in S2 Appendix). Finally, the input parameter “restraining order” was excluded because restraining measures were not performed in the period under consideration; remaining order variables were matched with their corresponding “Tarif Medical” (TARMED) service codes (catalog version 1.09, valid from 01.01.2018) as a means of data collection through service recording and billing information [31]. The SQLape® tool was used without any adaptations to produce predictions. Variable specifications can be found online [32, 33]. All adaptations and modifications, as well as differences in variable definition between the original derivation study and this external validation study, are described in detail in S3 Appendix (detailed description of prediction model variables).
This study is based on routinely collected patient data. Therefore, the number of data points per patient available for extraction was dependent on the characteristics of the preceding hospitalization (i.e., main diagnosis, disease severity, course of disease, and length of stay) and restricted by documentation standards and/or compliance with them, the structural quality of the electronic health record systems (EHRS), and their management [34]. For each predictor, either several input data points (documented throughout the hospital stay) or a single input data point (documented at admission or discharge) existed (Table 1). In general, the last data point was carried forward [35]. Missing input data required to compute the prediction models were interpreted as follows: missing biological input data (hemoglobin, sodium, etc.) were coded as normal values. Missing comorbidity and medication input data were considered absence of the condition and no active medication, respectively. Missing order input data (imaging or electrocardiogram) were considered as an intervention not ordered. Last, missing records of utilization of healthcare resources (e.g., ED visits, scheduled future admissions, etc.) were considered as nonutilization. This way of dealing with missing values was justified by the common documentation method “Charting by Exception” and is in line with the approach used during model development. Missing data required to describe patient characteristics (demographics and administrative data) led to patient exclusion from the analysis.
Sample size
Sample size calculations resulted in an aspired sample size of at least 1000 participants (500 cases and 500 controls) for each site. The proposed sample size is based on precision: 500 cases and 500 controls ensure that the half-width of a 95% confidence interval for sensitivity and specificity (using frequencies of predicted vs. actual outcome) does not exceed 5%; even for a point estimate of 50%, leading to the widest possible confidence interval, the half-width remains slightly below 4.5%. This can be considered an appropriate target precision for the purpose of this study.
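The precision argument above can be verified with a short calculation (an illustrative sketch; the normal approximation and z-value of 1.96 are standard assumptions, not taken from the study protocol):

```python
import math

def ci_half_width(p, n, z=1.96):
    """Half-width of a normal-approximation 95% confidence interval
    for a proportion p estimated from n observations."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case: a point estimate of 50% with 500 cases (or controls)
print(round(ci_half_width(0.5, 500), 4))  # 0.0438, i.e., slightly below 4.5%
```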
Statistical analysis methods
Model Validation and Comparison: Descriptive analysis was performed for all variables. Categorical variables were described as frequencies (percentages), and continuous variables were described as the means (standard deviations [SDs]) or medians (interquartile ranges [IQRs]), as appropriate. Baseline characteristics (index hospitalization) were compared between unplanned 30-day readmitted and nonreadmitted patients. Differences between groups were tested using binomial, Pearson’s χ2, or Wilcoxon rank-sum tests, as appropriate. The unit of analysis was hospitalization.
For Cohort A, the LACE+ and Epic Unplanned Readmission model predictions were calculated as the inverse logit of the linear predictor, i.e., p = 1 / (1 + exp(−(β0 + Σk=1..K βk xk))), where βk is the regression coefficient of each covariate xk and K is the total number of covariates. SQLape® readmission probabilities were calculated by the SQLape® tool [33]. All information on model variables was either collected from published derivation studies or provided by the developers directly.
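For a logistic model, this computation is simply the inverse logit of the linear predictor. A minimal sketch (the intercept, coefficients, and covariate values below are hypothetical, not the published Epic or LACE+ weights):

```python
import math

def predict_risk(intercept, coefs, covariates):
    """Inverse logit of the linear predictor: beta0 + sum(beta_k * x_k)."""
    lp = intercept + sum(b * x for b, x in zip(coefs, covariates))
    return 1.0 / (1.0 + math.exp(-lp))

# Hypothetical example: intercept -3.0 and two covariates
print(round(predict_risk(-3.0, [0.8, 0.05], [1, 10]), 3))  # 0.154
```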
To assess performance, the traditional statistical approach is to quantify how close predictions are to the actual outcome. As an overall performance measure, composed of discrimination and calibration, the Brier score was calculated [36]. Assessed separately, the discriminative ability was measured using Harrell’s C-statistic, which, for binary outcomes, is identical to the area under the receiver operating characteristic (ROC) curve (AUC). Calibration was assessed with the Hosmer-Lemeshow goodness-of-fit test and graphically illustrated by plotting the predicted outcomes by decile against the observations [29]. Furthermore, another, more novel performance measure, the category-based net reclassification improvement (NRI), was computed [37, 38]. The NRI separately considers individuals who develop and who do not develop the event of interest and therefore provides additional information not available from the AUC. The NRI is defined as the sum of the net proportions of correctly reclassified patients with and without the outcome [37]. To enable comparison with the results of the derivation studies, the performance analysis was based on the latest input data points available, but no later than 8 a.m. on the day of discharge.
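For a binary outcome, Harrell’s C equals the AUC and can be computed as the proportion of (readmitted, non-readmitted) patient pairs in which the readmitted patient received the higher score, with ties counted half. A sketch with made-up scores:

```python
def c_statistic(event_scores, nonevent_scores):
    """Harrell's C for a binary outcome: the probability that a randomly
    chosen event outranks a randomly chosen nonevent (ties count half).
    Identical to the AUC of the ROC curve."""
    concordant = 0.0
    for e in event_scores:
        for n in nonevent_scores:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(event_scores) * len(nonevent_scores))

print(c_statistic([0.9, 0.6], [0.4, 0.6, 0.2]))  # 5.5 of 6 pairs concordant
```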
Cohort B analysis: The descriptive analysis was performed analogous to the analysis run for Cohort A. In addition, to test the comparability of Cohort A and Cohort B, a set of patient characteristics was compared using binomial, Pearson’s χ2, or Wilcoxon rank-sum tests, as appropriate.
For Cohort B, the Epic Unplanned Readmission model predictions were calculated by the Epic Electronic Health Record “AI & Analytics” module. The performance assessment was conducted analogous to the evaluation performed for Cohort A but it focused only on the Epic prediction model. To provide better insight into the predictive ability at different times throughout the hospital stay, subgroup analyses were performed. The Epic prediction model’s performance was assessed for the admission day, first to fifth day of hospitalization, day before discharge, and discharge day. For each day, the predictive ability was assessed for input data points available at 8 a.m. and 12 a.m. and presented in a forest plot. All statistical analyses were performed in RStudio, Version 1.2.5019 and STATA/SE, Version 16.0.
Risk groups
Based on the recommendation of the developers of the Epic model, patients were divided into four risk groups using the organization’s current unplanned readmission rate as baseline risk: “no risk = 0 –baseline”, “low risk = baseline– 2 x baseline”, “medium risk = 2 x baseline– 3 x baseline”, and “high risk = > 3 x baseline”.
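The grouping rule can be sketched as follows (an illustration only; the developers’ exact handling of the cut-point boundaries is not reported, so the use of inclusive thresholds here is an assumption):

```python
def risk_group(predicted_risk, baseline):
    """Assign Epic-style risk groups relative to the organization's baseline
    unplanned readmission rate. Boundary handling (<= vs <) is an assumption."""
    if predicted_risk <= baseline:
        return "no risk"
    if predicted_risk <= 2 * baseline:
        return "low risk"
    if predicted_risk <= 3 * baseline:
        return "medium risk"
    return "high risk"

# With this cohort's ~5.1% readmission rate as the baseline
print(risk_group(0.12, 0.051))  # medium risk
```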
Development vs. validation
The characteristics of the derivation cohorts are summarized in Table 2 (Model transportability–summary characteristics). The readmission rate of this study’s cohort was approximately 4.7%, whereas it ranged between 5.2 and 16.9% in the derivation cohorts. With regard to the outcome, all derivation studies distinguished between planned and unplanned readmissions. Knowing that some treatments require repeated hospitalization (e.g., multicourse treatments such as chemotherapy), readmissions per se are not a true indicator of quality. On that account, the LACE+ study excluded readmissions that were foreseeable, i.e., nonurgent. The SQLape® and Epic models operationalized the exclusion of planned readmissions by applying a more complex decision rule. This rule checks for specific procedure and diagnosis categories that are usually considered planned, acute, or a complication of care [8, 24]. A distinction was also made between avoidable and nonavoidable readmissions. This validation study followed the approach taken by the LACE+ derivation study. All predictors were defined in line with the derivation studies, except for the following: the “Current length of hospital stay” considered the time spent in the ED, while the derivation study did not; the “Numbers of active medication orders” considered all active medications throughout the hospitalization, while the derivation study considered only the active medications on the day of score calculation; the “Diagnoses and comorbidities” were based on the discharge diagnosis instead of the diagnoses known on the day of score calculation; see Table 1 (Model predictors), column “Differences”, and S3 Appendix (Detailed description of prediction model variables). All aforementioned differences hold true only for the baseline cohort. Cohort B was not affected by actual differences but by setting-specific adaptations.
Results
Participants
During the study period, a total of 53,497 discharges were recorded. All discharges of 2018 were grouped as Cohort A, including 42,381 records; discharges of the last quarter in 2019 were grouped as Cohort B, including 11,116 records. After exclusions, Cohorts A and B comprised 28,112 and 7071 records, respectively (see Fig 1. Flow Chart). For 23,116 records (82.2%) of Cohort A, data sufficed to replicate the LACE+, SQLape®, and Epic Risk of Unplanned Readmission scores for the day of discharge at 8 a.m. (for 28,112 records, LACE+ and Epic scores were available). For each of the 7071 records in Cohort B, numerous Epic risk scores were exported from the EHR “AI & Analytics” module. The number of scores was dependent on the LOS, admission, and discharge time. In total, 30,187 discharges with scores were included in the analysis (Cohorts A + B). Of Cohort A, out of 23,116 inpatients, 1181 (5.1%) were readmitted within 30 days of the index discharge date. Of Cohort B, 303 inpatients were readmitted. This corresponds to 4.3% of the 7071 inpatients (see Table 3. Score schedule).
(A) After the exclusion of all hospitalizations but the index hospitalization, discharge numbers equal the number of distinct inpatients.
The baseline characteristics of both cohorts are summarized in Table 4 (Baseline characteristics) according to the occurrence of the event of interest and eligibility. After exclusions, the mean age (SD) was 51 years (24) in both cohorts; between 53 and 54% were female; and the average LOS (SD) was 4.8 (5.3) and 4.6 (5.3) days, respectively. Compared to Cohort A, Cohort B consisted of a higher proportion of surgical and fewer medical inpatients, and its average LOS was shorter. Overall, inpatients with an unplanned readmission were older, had an urgent index admission more often, had a longer LOS, were more severely sick (according to the CMI), and had higher risk scores. All these differences were statistically significant (P < 0.05). Finally, Table 5 shows the characteristics of the predictor variables of Cohort A, grouped by patients with and without an unplanned readmission.
Overall performance
To quantify the overall performance, the Brier score was used. It is a quadratic scoring rule in which the squared difference between the actual binary outcome and the prediction is calculated. The Brier score can range from 0 for a perfect model to 0.25 for a noninformative model (the lower, the better) [37]. The Brier scores were as follows: Epic model– 0.0484, LACE+– 0.0474, and SQLape® – 0.0473 (based on the maximum number of available scores of Cohort A, N = 28,112, the Brier scores were Epic model– 0.0457 and LACE+– 0.0437, respectively). The Brier score for the Epic model on the day of discharge at 8 a.m., based on Cohort B, was 0.0414 (other Brier scores are detailed in S4 Appendix, Cohort B–Brier scores). According to the Student’s t-test results, only the Epic model yielded a significantly different Brier score (p<0.001) compared to LACE+ and SQLape®.
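The definition above translates directly into code (a sketch; the toy outcome vector is illustrative only):

```python
def brier_score(outcomes, predictions):
    """Mean squared difference between binary outcomes (0/1) and predicted risks."""
    return sum((y - p) ** 2 for y, p in zip(outcomes, predictions)) / len(outcomes)

# A noninformative model that predicts the 50% event rate for every patient
# attains the worst useful score of 0.25; a perfect model attains 0.
print(brier_score([1, 0, 1, 0], [0.5, 0.5, 0.5, 0.5]))  # 0.25
```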
Calibration
Calibration was assessed with the Hosmer-Lemeshow goodness-of-fit test and graphically illustrated by plotting the predicted risk by deciles (and risk group thresholds) against the observations. The diagonal line is the line of perfect calibration, described by an intercept alpha of 0 and a slope of 1. The graph indicates that the Epic model had a poor fit and generally overestimated the observed probability, especially at higher deciles of risk (Table 6). The intercept, which relates to the calibration-in-the-large (CITL), was -0.542, and the slope was 1.105. The SQLape® and LACE+ showed very similar results but underestimated at higher deciles of risk. The SQLape® intercept was 0.550, and the slope was 0.759; the intercept and slope of LACE+ were 0.605 and 0.798, respectively. The p-values of the Hosmer-Lemeshow χ2 statistic were p<0.001 for all three models. Calibration plots by decile are presented in Fig 2 (calibration plots Cohort A), and calibration plots by risk group thresholds are accessible as S5 Appendix. Calibration plots based on Cohort B were computed and are illustrated in S6 Appendix.
(A) Abbreviations: AUC–Area under the curve; CITL–Calibration-in-the-large; E:O–Expected: Observed. (B) Notification: Associated 95% CI were too narrow to be clearly displayed.
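The decile grouping underlying such calibration plots can be sketched as follows (an illustration of the general method, not the exact binning performed by the study’s statistical software):

```python
def calibration_deciles(outcomes, predictions, groups=10):
    """Mean predicted risk vs observed event rate per risk-sorted group,
    i.e., the point pairs plotted in a decile calibration plot."""
    paired = sorted(zip(predictions, outcomes))
    n = len(paired)
    rows = []
    for g in range(groups):
        chunk = paired[g * n // groups:(g + 1) * n // groups]
        if chunk:
            mean_pred = sum(p for p, _ in chunk) / len(chunk)
            obs_rate = sum(y for _, y in chunk) / len(chunk)
            rows.append((mean_pred, obs_rate))
    return rows
```

A well-calibrated model yields points close to the diagonal, i.e., mean predicted risk approximately equals the observed event rate in every group.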
Discrimination
In theory, the AUC ranges between 0.5 and 1.0. The AUCs for the risk scores based on Cohort A on the day of discharge at 8 a.m. were as follows: Epic AUC 0.692 (95% CI 0.676–0.708), LACE+ index AUC 0.703 (95% CI 0.687–0.719), and SQLape® AUC 0.705 (95% CI 0.690–0.720). Neither the LACE+ nor the Epic model yielded a significantly different AUC than that of the SQLape® (p>0.05). The ROC curves are presented in Fig 3. Using the maximum number of available scores of Cohort A (N = 28,112) did not lead to a significant change in AUC (Epic model: 0.680, 95% CI 0.664–0.696; LACE+: 0.693, 95% CI 0.677–0.709; p<0.05).
(A) Red graph line = LACE+, Green = Epic model, Blue = SQLape®.
The predictive ability of the Epic Risk of Unplanned Readmission model was also assessed at different times throughout the hospital stay based on all records of Cohort B. The AUCs ranged between 0.527 and 0.677. Using the AUC on the day of discharge at 8 a.m. as a reference, the Epic model yielded significantly different AUCs only compared to the scores computed on the admission day (p<0.024). All AUCs are presented in Fig 4.
(A) Abbreviations: P = prevalence of the event of interest–unplanned readmissions.
Reclassification
Using SQLape® as a quasi-standard for predicting the risk of unplanned readmission in Swiss inpatient populations, the category-based net reclassification improvement (NRI) for the LACE+ and the Epic Risk of Unplanned Readmission model was computed. For LACE+, the NRI for events was 1.01%, and the NRI for nonevents was 3.27%; for the Epic model, the NRI for events was 71.54%, and the NRI for nonevents was -67.24%. The sum of both components resulted in an overall NRI of 0.042 for LACE+ and 0.043 for the Epic model. The category-based NRI components can be interpreted as net percentages of persons with or without events correctly reclassified. Negative percentages are interpreted as a net worsening in risk classification (NRI components range between -100% and +100%). However, the overall category-based NRI is a statistic that is implicitly weighted by the event rate and cannot be interpreted as a percentage; its theoretical range is -2 to +2 [38]. Reclassification tables are accessible as S7 Appendix.
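The two NRI components can be computed from reclassification tables as net proportions of up- and down-classified patients. A sketch of the standard category-based NRI (variable names are illustrative; risk categories are encoded as ordinal integers):

```python
def category_nri(old_groups, new_groups, outcomes):
    """Category-based net reclassification improvement.
    Risk groups are ordinal integers (e.g., 0=no, 1=low, 2=medium, 3=high).
    For events, moving to a higher category counts as an improvement;
    for nonevents, moving to a lower category does."""
    up_e = down_e = up_n = down_n = n_events = n_nonevents = 0
    for old, new, y in zip(old_groups, new_groups, outcomes):
        if y == 1:
            n_events += 1
            up_e += new > old
            down_e += new < old
        else:
            n_nonevents += 1
            up_n += new > old
            down_n += new < old
    nri_events = (up_e - down_e) / n_events
    nri_nonevents = (down_n - up_n) / n_nonevents
    return nri_events, nri_nonevents, nri_events + nri_nonevents
```

Each component ranges from -1 to +1 (i.e., -100% to +100%), and the overall NRI from -2 to +2, matching the interpretation given above.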
Discussion
Limitations
This study has several important limitations that need to be addressed. All limitations primarily but not exclusively relate to Cohort A. First, this study is a single-center study shaped by patient characteristics, local practice patterns, and the EHR systems in place. Therefore, the findings may not be generalizable to all Swiss hospitals, particularly university hospitals, whose patient populations often differ in their characteristics, or to organizations outside of Switzerland, where certain data points might be captured differently or not at all. Even when specific data points exist, their distribution might vary (e.g., emergency department utilization, medications, etc.). Second, although this study primarily concerned the validation of the Epic model, its comparison with the quasi-standard SQLape® required the application of exclusion criteria that differed from the criteria used in each individual derivation study but were appropriate for the Swiss setting (see Table 2 Model transportability–summary characteristics). Discrepancies were minor for the LACE+ model but more pronounced for the Epic model. This could have introduced selection bias, resulting in lower model accuracy (compared to the derivation study), with model predictions over- or underestimating the actual risk. Third, only readmissions to the same hospital were considered. According to a survey of the Swiss National Association for Quality Development in Hospitals and Clinics, external readmissions account for approximately 10% of all readmissions [8]. This may have contributed to the readmission rate being at the lower end (5.1%) compared to the derivation studies (ranging from 5.2 to 16.9%). In addition, patients who died after index hospital discharge were not excluded (e.g., by contacting each discharged patient 30 days after discharge). Both limitations may have led to over- or underestimations.
Fourth, this study applied a different endpoint than the one originally investigated in the derivation studies of the Epic and SQLape® models. As described earlier, the Epic and SQLape® models targeted the same basic endpoint (compared to LACE+) but used a more sophisticated definition. Although such approaches may help overcome the lack of precision of the endpoint used in the LACE+ derivation study (i.e., using unplanned readmissions as a proxy for potentially avoidable readmissions), they have not yet become standard in research studies and thus encumber benchmarking. Consequently, there is a chance of reduced performance for the Epic and SQLape® models. Last, it must be acknowledged that this study was conducted based on the assumption that if a specific condition, order, or test result was not documented in the medical records, it was absent, not prescribed, or negative. This design is less powerful than a prospective study in which each variable would be collected and documented (positive and negative answers).
Interpretation
In this study, EMR data were used to externally validate the Epic Risk of Unplanned Readmission model and to compare it with the LACE+ and SQLape® models. As of the date of submission, this was the first external study scientifically validating the Epic model as a predictor of unplanned readmission within 30 days of hospital discharge. The principal findings can be summarized as follows. The Brier score indicates that SQLape® had the best overall performance of the three models (for Cohort A, scores were 0.0484 for the Epic model, 0.0474 for LACE+, and 0.0473 for SQLape®).
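The Brier score is the mean squared difference between the predicted probability and the observed binary outcome, so lower values are better and 0 would be perfect. A minimal sketch with hypothetical numbers:

```python
def brier_score(y_true, y_prob):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(y_prob, y_true)) / len(y_true)
```

Note that the Brier score is sensitive to outcome prevalence: with a 5.1% readmission rate, even a constant prediction at the base rate scores about 0.051 × 0.949 ≈ 0.048, so the small absolute differences between the three models should be read against that baseline.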
The Epic Risk of Unplanned Readmission model showed poor discrimination and calibration in predicting unplanned readmissions at a Swiss tertiary-care hospital. In comparison, LACE+ and SQLape® yielded acceptable discrimination and calibration for the same study population (Cohort A). The discriminatory ability of the Epic model (AUC: 0.692) was at the lower end of the results provided by the developers (AUC ranged from 0.69 to 0.74). In terms of calibration, reference results were not available for the derivation study. The observed discrepancies can be explained by the differences between the study populations and outcomes. According to Epic Systems, the development sample had a much higher readmission rate (16.9 vs. 5.1%), a larger sample size (203,500 vs. 23,116), and younger subjects (mean age 48 vs. 51 years). Using current national survey data as a reference, it is most likely that variables describing the utilization of healthcare resources (particularly the number of ED visits and hospital discharges), as well as practice patterns (e.g., imaging order rates), diverged as well [44–47].
Similarly, the discriminatory ability of LACE+ (AUC: 0.703) proved to be lower than in the original derivation study, where the AUC ranged from 0.749 to 0.757 [27]. Admittedly, this comparison has one limitation: the reported 95% CI was computed including the CMG variable, which can only be calculated for hospitals in Canada. An AUC of 0.743 was reported for LACE+ without CMG (all patients were assigned 0 points for the CMG score). Following a general rule of thumb, LACE+ falls into the category of more discriminative models (0.7 ≤ AUC < 0.8; “acceptable”), marginally outperforming the Epic model [29]. Unfortunately, distribution and calibration data were reported only for the combined 30-day death or unplanned readmission outcome. The calibration graph of this study showed very similar rates (observed vs. expected) but demonstrated a tendency to overestimate at higher risk values. In the derivation study, these high-risk patients represented only approximately 1.6% of the validation set. Although there was no difference in the outcome and the exclusion criteria were fairly similar, the patient characteristics differed substantially. The original derivation study, compared to this study, included older subjects (mean 58 vs. 51 years) and significantly higher rates of ED visits (37.7% with ≥ 1 ED visit in the previous 6 months vs. 11.4%) and hospitalizations (14.0% with ≥ 1 urgent admission in the previous year vs. 1.7%). Additionally, 64.4% of patients included in the original derivation study were admitted urgently compared to 53.1% in this study. LACE+ has recently been investigated in various retrospective studies. However, samples were highly fragmented and ill-suited for comparison [48–52].
The SQLape® model, which was developed based on a large Swiss development sample, deviated from its derivation results to a lesser extent. The data shown in this study indicate that SQLape® has acceptable discriminatory ability. Compared to the original derivation study, the AUC was only slightly lower (0.705 vs. 0.720), and the 95% CI (0.690–0.720) overlapped with the original estimate [6]. In terms of calibration, no reference values were available. Although SQLape® was designed specifically to focus on readmissions that are potentially avoidable, model performance did not appear to be affected substantially in this study. Presumably, this can be attributed to the overall small methodological differences. Ultimately, with regard to the SQLape® model, this study resembles a temporal validation more than an external validation study, i.e., it includes new individuals from the same institution but in a different time period [16].
For the assessment of discriminative ability, the NRI provides supplementary information that needs to be interpreted in light of the known limitations of the NRI [38, 53–55]. The main criticisms are that the NRI is highly sensitive to the number of risk categories and thresholds and is unstable when used to compare miscalibrated models. According to the net reclassification improvement (NRI), the LACE+ and Epic models yielded minimally better performance than the reference standard SQLape®. For LACE+, the NRI components (events and nonevents) indicate that LACE+ is slightly better at the detection of subjects with and without the outcome of interest. With regard to the Epic model, the results are inconsistent: a strong improvement in the detection of subjects with the outcome is indicated, whereas the detection of subjects without the outcome worsens.
In summary, the Epic Risk of Unplanned Readmission model performs similarly to its direct comparators (LACE+ and SQLape®) and other commonly used prediction models. As a broader comparison, the well-known LACE (the predecessor of LACE+), the HOSPITAL, and the PAR-Risk scores, all validated in the Swiss population, have a C-statistic of 0.73 or less in external validation studies [18–21]. Compared to the promotional information of Epic Systems, it must be noted that the performance did not meet expectations. Presumably, this can be explained by the differences between the patient populations used for model fitting and for external validation. Despite a rather comprehensive approach (27 variables from various domains), the Epic model was outperformed by both comparators. In contrast, LACE+ and SQLape® do not consider medications but focus on admission characteristics (index admission type, existing comorbidities, health service utilization, etc.). In this regard, a recent study showed that prescription of corticosteroids and antidepressants was associated with a greater risk of unplanned readmission [56]. In conclusion, it may be assumed with caution that the Epic model's relative complexity (compared to its comparators) has hampered its generalizability to different patient populations and settings; model updating is therefore warranted. An important advantage of the Epic model is that risk scores can be generated at any time throughout the hospital stay. Admittedly, the main caveat in this regard is its unstable predictive ability depending on day and time, as was pointed out during the subgroup analysis of discriminative ability.
Implications for clinical practice
As the digital transformation has reached the healthcare sector, hospitals increasingly invest in IT systems and competencies to turn immense volumes of data into actionable insights. Accordingly, predictive analytics and machine learning have become some of the most discussed disruptive innovations in healthcare. In the context of unplanned hospital readmissions, the introduction of EHR systems has alleviated major limitations, such as the restricted accessibility of appropriate predictors and delayed reporting capabilities. Medical facilities adopting Epic's EHR system are advised to look at its “AI & Analytics” module, which contains the Epic Risk of Unplanned Readmission model among others. Despite its underperformance in both respects (compared to the investigated competitors and the promotional information), for organizations switching from paper-based documentation or from clinical data housed in multiple disconnected systems, the Epic prediction model holds great potential to improve efficiency in service provision and patient outcomes. The model allows real-time identification of patients at high risk for unplanned hospital readmission. Another noteworthy feature is the incorporation of clinically actionable variables, such as medications, that could be used to triage patients to different types of interventions. According to Leppin and colleagues, effective interventions are complex and seek to enhance patient capacity to reliably access and enact postdischarge transitional care. Their findings also suggest that providing comprehensive and context-sensitive support reduces the risk of hospital readmission within 30 days; the overall pooled relative risk of readmission was 0.82 (95% CI, 0.73–0.91; p<0.001) [12]. In general, interventions included anywhere from 1 to 7 unique activities, including case management, patient education, medication intervention, and timely follow-ups.
With regard to the optimal moment to identify patients at highest risk, waiting for additional data on disease progression and in-hospital complications (i.e., for the best predictive performance) may not be worth the resulting loss of lead time. Speaking of trade-offs relevant to the application in routine care, a value- or “utility”-based decision is also indicated with regard to the risk threshold, with the costs of misclassification (e.g., expressed in terms of mortality and morbidity) on one side and the allocation of resources (e.g., for subsequent detailed assessments or interventional measures) on the other. Since the update of Epic's predictive analytics platform, which has since been cloud-powered, Epic-developed models can be updated with setting-specific data prior to routine application. Although this component is subject to charge, the improvement in performance may warrant the expenditure.
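One established way to formalize such a threshold trade-off is decision-curve analysis, which weighs true positives against false positives by the odds of the chosen risk threshold. It was not part of this study; the Python sketch below, with hypothetical inputs, only illustrates the arithmetic:

```python
def net_benefit(y_true, y_prob, threshold):
    """Net benefit of intervening on all patients at or above a risk
    threshold: true-positive rate minus false-positive rate weighted by
    the threshold odds (decision-curve analysis)."""
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_prob) if y and p >= threshold)
    fp = sum(1 for y, p in zip(y_true, y_prob) if not y and p >= threshold)
    return tp / n - (fp / n) * threshold / (1 - threshold)
```

A higher threshold implicitly states that a missed readmission is less costly relative to an unnecessary intervention, which is exactly the value-based judgment described above.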
Implications for research
This study raises a number of opportunities for future research, both in terms of model validation and updating. First, it has become common practice to compare prediction models based on their performance on the day of discharge. However, given that most transitional care interventions need lead time, predictive risk scores should ideally provide information early enough during hospitalization [12]. Therefore, to support clinicians' choice of the best prediction model, validation studies should be based on sample sizes adequate for comprehensive subgroup analyses, i.e., they should provide information on the predictive ability at different times throughout the hospital stay. Second, the Epic model showed reduced predictive performance; in particular, calibration was rather disappointing. Different sources may have distorted the calibration, including discrepancies in patient characteristics, outcome prevalence and definition, and systematic differences in measurement errors. When a prediction model performs inadequately during external validation, it has been shown that the model can often be updated using data from the validation setting. Updating of regression-based algorithms may range from simply changing the intercept (for differences in outcome frequency), to adjusting the relative weights of the predictors (to represent setting-specific associations of the predictors), to adding new predictors [16, 57]. Third, despite verified associations, only a few prediction tools for unplanned readmission include environmental and/or functional status [14, 58–61], as these data are rarely readily available (e.g., they are often housed in multiple disconnected or paper-based systems). With the advancing adoption of commercial EHR systems and their potential to provide clinically granular data from the entire course of hospitalization, factors such as living situation and nursing scores warrant further investigation.
Fourth, a potential path to developing more comprehensive patient-risk models is machine learning, which has proven able to process extremely large numbers of input features and is typically more predictive than standard logistic regression methods [62–65]. Last, interventional research is now needed to better understand the effects of risk prediction scores followed by available transitional care measures.
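The simplest of the updating strategies mentioned above, refitting only the intercept so that the mean predicted risk matches the observed event rate in the validation setting ("calibration-in-the-large"), can be sketched as follows. This is an illustrative Python snippet that assumes access to the original model's linear predictors (log-odds); it is not any vendor's actual updating mechanism:

```python
import math

def recalibrate_intercept(y_true, lin_pred, iters=50):
    """Fit an additive intercept correction a by maximum likelihood so that
    predictions sigmoid(a + lin_pred) match the observed event rate.
    One-dimensional Newton-Raphson on the logistic log-likelihood."""
    a = 0.0
    for _ in range(iters):
        p = [1.0 / (1.0 + math.exp(-(a + lp))) for lp in lin_pred]
        grad = sum(y - pi for y, pi in zip(y_true, p))  # score
        hess = sum(pi * (1.0 - pi) for pi in p)         # observed information
        a += grad / hess
    return a
```

For example, if the original model predicts a 50% risk for everyone (linear predictor 0) but only 25% of validation patients are readmitted, the fitted correction is log(0.25/0.75) ≈ -1.10, pulling every prediction down to the observed rate.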
Supporting information
S1 Appendix. Anatomical Therapeutic Chemical Classification System (ATC)—subgroups.
https://doi.org/10.1371/journal.pone.0258338.s001
(DOCX)
S2 Appendix. Laboratory components and corresponding reference ranges.
https://doi.org/10.1371/journal.pone.0258338.s002
(DOCX)
S3 Appendix. Detailed description of prediction model variables.
https://doi.org/10.1371/journal.pone.0258338.s003
(DOCX)
S5 Appendix. Calibration plots by risk group thresholds, Cohort A.
https://doi.org/10.1371/journal.pone.0258338.s005
(DOCX)
S7 Appendix. Reclassification tables–Cohort A.
https://doi.org/10.1371/journal.pone.0258338.s007
(DOCX)
Acknowledgments
The authors would like to thank Johannes Rogger, clinical pharmacist and EPIC Willow Analyst, for the development of Swiss specific therapeutic subgroups of the Anatomical Therapeutic Chemical Classification System, based on the original US specifications and with regard to local regulations and classifications. Also, the authors would like to thank Dr. med. Pius Estermann (clinical coder) who was responsible for the mapping of ICD-10-CM codes onto ICD-10 codes of the German Modification (GM) version, Sebastian Zimmermann (laboratory analyst) for his critical review of laboratory input parameters and Madlene Michel (Head of Discharge Management) for her collaboration and valuable expertise. For statistical consultancy, and methodological advice, the authors thank Dr. Dirk Lehnick, Head Biostatistics & Methodology CTU-CS, and Prof. Dr. Konstantin Beck, former director of the CSS Institute for empirical Health Economics. Finally, the authors are very grateful to Roger Wicki for providing precious positivity and considerable technical support in regard to data curation, model deployment and maintenance.
References
- 1. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee-for-service program. New England Journal of Medicine. 2009;360(14):1418–28. pmid:19339721
- 2. Donzé J, Lipsitz S, Bates DW, Schnipper JL. Causes and patterns of readmissions in patients with common comorbidities: retrospective cohort study. BMJ. 2013;347:f7171. pmid:24342737
- 3. Bosco JA, Karkenny AJ, Hutzler LH, Slover JD, Iorio R. Cost Burden of 30-Day Readmissions Following Medicare Total Hip and Knee Arthroplasty. The Journal of Arthroplasty. 2014;29(5):903–5. pmid:24332969
- 4. Smedira NG, Hoercher KJ, Lima B, Mountis MM, Starling RC, Thuita L, et al. Unplanned hospital readmissions after HeartMate II implantation: frequency, risk factors, and impact on resource use and survival. JACC: Heart Failure. 2013;1(1):31–9. pmid:24621797
- 5. Wong EL, Cheung AW, Leung MC, Yam CH, Chan FW, Wong FY, et al. Unplanned readmission rates, length of hospital stay, mortality, and medical costs of ten common medical conditions: a retrospective analysis of Hong Kong hospital data. BMC Health Services Research. 2011;11(1):149. pmid:21679471
- 6. Halfon P, Eggli Y, Pêtre-Rohrbach I, Meylan D, Marazzi A, Burnand B. Validation of the Potentially Avoidable Hospital Readmission Rate as a Routine Indicator of the Quality of Hospital Care. Medical Care. 2006;44(11):972–81. pmid:17063128
- 7. Wish JB. The role of 30-day readmission as a measure of quality. Am Soc Nephrol; 2014. pmid:24509293
- 8. Eggli Y, Wetz S. Nationaler Vergleichsbericht (BFS-Daten 2018) des SQLape® Indikators der potentiell vermeidbaren Rehospitalisationen, Akutsomatik. Nationaler Verein für Qualitätsentwicklung in Spitälern und Kliniken (ANQ); 2020.
- 9. Adler O, Christen A. Healthcare System: Growth Market under Cost Pressure. Credit Suisse; 2017.
- 10. Kristensen SR, Bech M, Quentin W. A roadmap for comparing readmission policies with application to Denmark, England, Germany and the United States. Health Policy. 2015;119(3):264–73. pmid:25547401
- 11. Halfon P, Eggli Y, van Melle G, Chevalier J, Wasserfallen J-B, Burnand B. Measuring potentially avoidable hospital readmissions. Journal of Clinical Epidemiology. 2002;55(6):573–87. pmid:12063099
- 12. Leppin AL, Gionfriddo MR, Kessler M, Brito JP, Mair FS, Gallacher K, et al. Preventing 30-Day Hospital Readmissions: A Systematic Review and Meta-analysis of Randomized Trials. JAMA Internal Medicine. 2014;174(7):1095–107. pmid:24820131
- 13. Van Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. Canadian Medical Association Journal. 2011;183(7):E391–E402. pmid:21444623
- 14. Zhou H, Della PR, Roberts P, Goh L, Dhaliwal SS. Utility of models to predict 28-day or 30-day unplanned hospital readmissions: an updated systematic review. BMJ open. 2016;6(6):e011060. Epub 2016/06/30. pmid:27354072; PubMed Central PMCID: PMC4932323.
- 15. Kansagara D, Englander H, Salanitro A, Kagen D, Theobald C, Freeman M, et al. Risk Prediction Models for Hospital Readmission: A Systematic Review. Jama. 2011;306(15):1688–98. pmid:22009101
- 16. Moons KG, Kengne AP, Grobbee DE, Royston P, Vergouwe Y, Altman DG, et al. Risk prediction models: II. External validation, model updating, and impact assessment. Heart. 2012;98(9):691–8. pmid:22397946
- 17. SQLAPE—a Swiss national quality of care indicator [Internet]. Swiss National Association for Quality Development in Hospitals and Clinics (ANQ); 2019 [cited 2020 April 12]. Available from: https://www.anq.ch/de/fachbereiche/akutsomatik/messinformation-akutsomatik/rehospitalisationen/.
- 18. Struja T, Baechli C, Koch D, Haubitz S, Eckart A, Kutz A, et al. What Are They Worth? Six 30-Day Readmission Risk Scores for Medical Inpatients Externally Validated in a Swiss Cohort. Journal of General Internal Medicine. 2020. pmid:31965531
- 19. Donzé JD, Williams MV, Robinson EJ, Zimlichman E, Aujesky D, Vasilevskis EE, et al. International Validity of the HOSPITAL Score to Predict 30-Day Potentially Avoidable Hospital Readmissions. JAMA Internal Medicine. 2016;176(4):496–502. pmid:26954698
- 20. Blanc A-L, Fumeaux T, Stirnemann J, Dupuis Lozeron E, Ourhamoune A, Desmeules J, et al. Development of a predictive score for potentially avoidable hospital readmissions for general internal medicine patients. PLOS ONE. 2019;14(7):e0219348. pmid:31306461
- 21. Aubert CE, Folly A, Mancinetti M, Hayoz D, Donzé J. Prospective validation and adaptation of the HOSPITAL score to predict high risk of unplanned readmission of medical patients. Swiss medical weekly. 2016;146:w14335. pmid:27497141
- 22. Uhlmann M, Lécureux E, Griesser A-C, Duong HD, Lamy O. Prediction of potentially avoidable readmission risk in a division of general internal medicine. Swiss medical weekly. 2017;147.
- 23. Variables of the Medical Statistics—Specifications valid from 01.01.2019 [Internet]. Swiss Federal Statistical Office (BFS); 2019 [cited 2020 April 11]. Available from: https://www.bfs.admin.ch/bfs/de/home/statistiken/kataloge-datenbanken/publikationen.assetdetail.7066232.html.
- 24. 2015 Measure Information About the 30-Day All-Cause Hospital Readmission Measure, Calculated for the Value-Based Payment Modifier Program [Internet]. Centers For Medicare & Medicaid Services (CMS); 2015 [cited 2020 April 10]. Available from: https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/PhysicianFeedbackProgram/Downloads/2015-ACR-MIF.pdf.
- 25. Tibshirani R. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological). 1996;58(1):267–88.
- 26. EPIC. Cognitive Computing Model Brief: Risk of Unplanned Readmission. 2016 [cited 2020 Sept. 13]. Available from: https://www.epic.com/contact.
- 27. van Walraven C, Wong J, Forster AJ. LACE+ index: extension of a validated index to predict early death or urgent readmission after hospital discharge using administrative data. Open medicine: a peer-reviewed, independent, open-access journal. 2012;6(3):e80–90. Epub 2012/01/01. pmid:23696773; PubMed Central PMCID: PMC3659212.
- 28. van Walraven C, Dhalla IA, Bell C, Etchells E, Stiell IG, Zarnke K, et al. Derivation and validation of an index to predict early death or unplanned readmission after discharge from hospital to the community. CMAJ: Canadian Medical Association journal = journal de l’Association medicale canadienne. 2010;182(6):551–7. Epub 2010/03/03. pmid:20194559; PubMed Central PMCID: PMC2845681.
- 29. Hosmer DW Jr, Lemeshow S, Sturdivant RX. Applied logistic regression. John Wiley & Sons; 2013.
- 30. Quan H, Sundararajan V, Halfon P, Fong A, Burnand B, Luthi J-C, et al. Coding algorithms for defining comorbidities in ICD-9-CM and ICD-10 administrative data. Medical care. 2005:1130–9. pmid:16224307
- 31. Tarif Medical (TARMED) [Internet]. Federal Office of Public Health (FOPH); 2019 [cited 2020 April 16]. Available from: https://www.bag.admin.ch/bag/de/home/versicherungen/krankenversicherung/krankenversicherung-leistungen-tarife/Aerztliche-Leistungen-in-der-Krankenversicherung/Tarifsystem-Tarmed.html.
- 32. Eidgenössisches Departement des Inneren (EDI), Bundesamt für Statistik (BFS). Variablen der Medizinischen Statistik, Spezifikationen gültig ab 1.1.2019; 2019. Available from: https://www.bfs.admin.ch/bfs/de/home/statistiken/gesundheit/erhebungen/ms.assetdetail.7066232.html.
- 33. SQLape.com [Internet]. SQLape; 2019 [cited 2020 April 9]. Available from: https://www.sqlape.com/implementation/.
- 34. Winter A, Takabayashi K, Jahn F, Kimura E, Engelbrecht R, Haux R, et al. Quality Requirements for Electronic Health Record Systems. A Japanese-German Information Management Perspective. Methods Inf Med. 2017;56(7):e92–e104. pmid:28925415.
- 35. Little RJ, D’Agostino R, Cohen ML, Dickersin K, Emerson SS, Farrar JT, et al. The Prevention and Treatment of Missing Data in Clinical Trials. New England Journal of Medicine. 2012;367(14):1355–60. pmid:23034025.
- 36. Pepe MS, Janes H, Longton G, Leisenring W, Newcomb P. Limitations of the Odds Ratio in Gauging the Performance of a Diagnostic, Prognostic, or Screening Marker. American Journal of Epidemiology. 2004;159(9):882–90. pmid:15105181
- 37. Steyerberg EW, Vickers AJ, Cook NR, Gerds T, Gonen M, Obuchowski N, et al. Assessing the performance of prediction models: a framework for traditional and novel measures. Epidemiology. 2010;21(1):128–38. pmid:20010215.
- 38. Leening MJ, Vedder MM, Witteman JC, Pencina MJ, Steyerberg EW. Net reclassification improvement: computation, interpretation, and controversies: a literature review and clinician’s guide. Annals of internal medicine. 2014;160(2):122–31. pmid:24592497
- 39. van Walraven C, Jennings A, Forster AJ. A meta-analysis of hospital 30-day avoidable readmission rates. Journal of Evaluation in Clinical Practice. 2012;18(6):1211–8. pmid:22070191
- 40. Vasilevskis EE, Ouslander JG, Mixon AS, Bell SP, Jacobsen JML, Saraf AA, et al. Potentially Avoidable Readmissions of Patients Discharged to Post-Acute Care: Perspectives of Hospital and Skilled Nursing Facility Staff. Journal of the American Geriatrics Society. 2017;65(2):269–76. pmid:27981557
- 41. Donzé J, Lipsitz S, Bates DW, Schnipper JL. Causes and patterns of readmissions in patients with common comorbidities: retrospective cohort study. BMJ: British Medical Journal. 2013;347:f7171. pmid:24342737
- 42. Candidates for one-day surgery [Internet]. SQLape.com; 2019 [cited 2020 April 19]. Available from: https://www.sqlape.com/day-surgery/.
- 43. Gilliard N, Eggli Y, Halfon P. A methodology to estimate the potential to move inpatient to one day surgery. BMC health services research. 2006;6(1):78. pmid:16784523
- 44. Emergency Department Visits [Internet]. Centers for Disease Control and Prevention; 2020 [cited 2020 May 23]. Available from: https://www.cdc.gov/nchs/fastats/emergency-department.htm.
- 45. Vilpert S. Konsultationen in Schweizer Notfallstationen. Obsan Bulletin [Internet]. 2013 [cited 2020 May 23]; 3. Available from: https://www.obsan.admin.ch/sites/default/files/publications/2015/obsan_bulletin_2013-03_d.pdf.
- 46. Hospital discharge rates [Internet]. Organisation for Economic Cooperation and Development (OECD); 2018 [cited 2020 May 24]. Available from: https://www.oecd-ilibrary.org/social-issues-migration-health/hospital-discharge-rates/indicator/english_5880c955-en.
- 47. Computed tomography and magnetic resonance imaging exams [Internet]. Organisation for Economic Cooperation and Development (OECD); 2018 [cited 2020 May 24]. Available from: https://data.oecd.org/healthcare/magnetic-resonance-imaging-mri-exams.htm#indicator-chart.
- 48. Glauser G, Winter E, Caplan IF, Goodrich S, McClintock SD, Guzzo TJ, et al. The LACE+ Index as a Predictor of 30-Day Patient Outcomes in a Urologic Surgery Population: A Coarsened Exact Match Study. Urology. 2019;134:109–15. pmid:31487509
- 49. Garrison GM, Robelia PM, Pecina JL, Dawson NL. Comparing performance of 30-day readmission risk classifiers among hospitalized primary care patients. Journal of Evaluation in Clinical Practice. 2017;23(3):524–9. pmid:27696638
- 50. Ibrahim AM, Koester C, Al-Akchar M, Tandan N, Regmi M, Bhattarai M, et al. HOSPITAL Score, LACE Index and LACE+ Index as predictors of 30-day readmission in patients with heart failure. BMJ Evidence-Based Medicine. 2019:bmjebm-2019-111271. pmid:31771947
- 51. Caplan IF, Sullivan PZ, Kung D, O’Rourke DM, Choudhri O, Glauser G, et al. LACE+ Index as Predictor of 30-Day Readmission in Brain Tumor Population. World Neurosurgery. 2019;127:e443–e8. pmid:30926557
- 52. Caplan IF, Zadnik Sullivan P, Glauser G, Choudhri O, Kung D, O’Rourke DM, et al. The LACE+ index fails to predict 30–90 day readmission for supratentorial craniotomy patients: A retrospective series of 238 surgical procedures. Clinical Neurology and Neurosurgery. 2019;182:79–83. pmid:31102908
- 53. Kerr KF, Wang Z, Janes H, McClelland RL, Psaty BM, Pepe MS. Net reclassification indices for evaluating risk-prediction instruments: a critical review. Epidemiology. 2014;25(1):114. pmid:24240655
- 54. Pepe MS, Fan J, Feng Z, Gerds T, Hilden J. The net reclassification index (NRI): a misleading measure of prediction improvement even with independent test data sets. Statistics in biosciences. 2015;7(2):282–95. pmid:26504496
- 55. Cook NR, Paynter NP. Performance of reclassification statistics in comparing risk prediction models. Biometrical Journal. 2011;53(2):237–58. pmid:21294152
- 56. Pavon JM, Zhao Y, McConnell E, Hastings SN. Identifying risk of readmission in hospitalized elderly adults through inpatient medication exposure. Journal of the American Geriatrics Society. 2014;62(6):1116–21. pmid:24802165
- 57. Van Calster B, McLernon DJ, van Smeden M, Wynants L, Steyerberg EW, Bossuyt P, et al. Calibration: the Achilles heel of predictive analytics. BMC Medicine. 2019;17(1):230. pmid:31842878
- 58. Tao H, Ellenbecker CH, Chen J, Zhan L, Dalton J. The influence of social environmental factors on rehospitalization among patients receiving home health care services. Advances in nursing science. 2012;35(4):346–58. pmid:23107991
- 59. Shih SL, Zafonte R, Bates DW, Gerrard P, Goldstein R, Mix J, et al. Functional Status Outperforms Comorbidities as a Predictor of 30-Day Acute Care Readmissions in the Inpatient Rehabilitation Population. Journal of the American Medical Directors Association. 2016;17(10):921–6. pmid:27424092
- 60. Tonkikh O, Shadmi E, Flaks-Manov N, Hoshen M, Balicer RD, Zisberg A. Functional status before and during acute hospitalization and readmission risk identification. Journal of Hospital Medicine. 2016;11(9):636–41. pmid:27130176
- 61. Arbaje AI, Wolff JL, Yu Q, Powe NR, Anderson GF, Boult C. Postdischarge Environmental and Socioeconomic Factors and the Likelihood of Early Hospital Readmission Among Community-Dwelling Medicare Beneficiaries. The Gerontologist. 2008;48(4):495–504. pmid:18728299
- 62. Morgan DJ, Bame B, Zimand P, Dooley P, Thom KA, Harris AD, et al. Assessment of Machine Learning vs Standard Prediction Rules for Predicting Hospital Readmissions. JAMA Network Open. 2019;2(3):e190348–e. pmid:30848808
- 63. Mišić VV, Gabel E, Hofer I, Rajaram K, Mahajan A. Machine Learning Prediction of Postoperative Emergency Department Hospital Readmission. Anesthesiology: The Journal of the American Society of Anesthesiologists. 2020;132(5):968–80. pmid:32011336
- 64. Jamei M, Nisnevich A, Wetchler E, Sudat S, Liu E. Predicting all-cause risk of 30-day hospital readmission using artificial neural networks. PLOS ONE. 2017;12(7):e0181173. pmid:28708848
- 65. Hao S, Wang Y, Jin B, Shin AY, Zhu C, Huang M, et al. Development, Validation and Deployment of a Real Time 30 Day Hospital Readmission Risk Assessment Tool in the Maine Healthcare Information Exchange. PLOS ONE. 2015;10(10):e0140271. pmid:26448562