
Validation of algorithms to identify elective percutaneous coronary interventions in administrative databases

  • Catherine G. Derington,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Validation, Writing – original draft, Writing – review & editing

    Current address: Department of Population Health Sciences, University of Utah School of Medicine, Salt Lake City, UT, United States of America

    Affiliations Pharmacy Department, Kaiser Permanente Colorado, Aurora, CO, United States of America, Department of Clinical Pharmacy, University of Colorado Skaggs School of Pharmacy and Pharmaceutical Sciences, Aurora, CO, United States of America

  • Lauren J. Heath,

    Roles Conceptualization, Formal analysis, Funding acquisition, Methodology, Writing – review & editing

    Affiliation Department of Pharmacotherapy, University of Utah College of Pharmacy, Salt Lake City, UT, United States of America

  • David P. Kao,

    Roles Conceptualization, Funding acquisition, Methodology, Visualization, Writing – review & editing

    Affiliations Cardiac and Vascular Center, University of Colorado Health, Aurora, CO, United States of America, Department of Cardiology, University of Colorado School of Medicine, Aurora, CO, United States of America

  • Thomas Delate

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – review & editing

    Affiliations Pharmacy Department, Kaiser Permanente Colorado, Aurora, CO, United States of America, Department of Clinical Pharmacy, University of Colorado Skaggs School of Pharmacy and Pharmaceutical Sciences, Aurora, CO, United States of America



Abstract

Background

Elective percutaneous coronary interventions (PCI) are difficult to discriminate from non-elective PCI in administrative data due to non-specific encounter codes, limiting the ability to track outcomes, ensure appropriate medical management, and/or perform research on patients who undergo elective PCI. The objective of this study was to assess the abilities of several algorithms to identify elective PCI procedures using administrative data containing diagnostic, utilization, and/or procedural codes.

Methods and results

For this retrospective study, administrative databases in an integrated healthcare delivery system were queried between 1/1/2015 and 6/30/2016 to identify patients who had an encounter for a PCI. Using clinical criteria, each encounter was classified via chart review as a valid PCI, then as elective or non-elective. Cases were tested against nine pre-determined algorithms. Performance statistics (sensitivity, specificity, positive predictive value, and negative predictive value) and associated 95% confidence intervals (CI) were calculated. Of 521 PCI encounters reviewed, 497 were valid PCI, 93 of which were elective. An algorithm that excluded emergency room visit events had the highest sensitivity (97.9%, 95% CI 92.5%–99.7%) while an algorithm that included events occurring within 90 days of a cardiologist visit and coronary angiogram or stress test had the highest positive predictive value (62.2%, 95% CI 50.8%–72.7%).


Conclusions

Without an encounter code specific for elective PCI, an algorithm excluding procedures associated with an emergency room visit had the highest sensitivity to identify elective PCI. This offers a reasonable approach to identify elective PCI from administrative data.


Introduction

In 2014, approximately 480,000 percutaneous coronary interventions (PCI) and over 1 million inpatient diagnostic cardiac catheterizations were performed in the United States [1]. PCI remains a pillar of treatment for patients who require revascularization in the setting of acute coronary syndromes [2–4]. Additionally, PCI may be used electively to reduce anginal symptoms or the risk of a cardiovascular event in patients with high-risk lesions who are optimized on medical therapy [5]. Studies on elective PCI have largely been conducted within prospective registries or clinical trials, but outcomes of PCI derived from real-world data (i.e., electronic health record or administrative data) are lacking. This is possibly due to a lack of diagnostic and procedural codes that distinguish between elective and non-elective PCI.

Elective PCI remains a controversial treatment for patients diagnosed with chronic stable angina. Results of both the COURAGE and ORBITA trials demonstrated no benefit for mortality, myocardial infarction, or exercise time in patients randomized to elective PCI compared to medical therapy alone and a sham procedure, respectively [6,7]. In addition, elective PCI is an attractive target for national payers to simultaneously reduce cost, optimize quality, and augment patient satisfaction by implementing a “same-day discharge” strategy [8–10]. With the emphasis on outcomes, quality of care, patient satisfaction, and costs, an elective PCI cohort developed in a database that reflects real-world use is necessary to enable comparative effectiveness analyses of management strategies for this high-risk patient population. The ability to identify and track patients who undergo elective PCI in real-world data would enable individual institutions to track and research health outcomes associated with elective PCI, particularly institutions that do not contribute to national registries like the American College of Cardiology’s National Cardiovascular Data Registry (NCDR). Additionally, the NCDR links only to Medicare fee-for-service data; thus, the ability to evaluate outcomes in non-Medicare populations with this registry is limited.

To our knowledge, no literature exists regarding the development of a validated elective PCI cohort using administrative data. The objective of this study was to develop, validate, and contrast algorithms that use diagnostic, utilization, and/or procedural codes to discriminate elective PCI procedures from non-elective PCI procedures using administrative data.


Methods

Study design and setting

This was a retrospective study of adult patients ≥ 18 years of age who underwent a PCI procedure between 1/1/2015 and 6/30/2015 or 1/1/2016 and 6/30/2016. The study was conducted at Kaiser Permanente Colorado (KPCO), an integrated healthcare delivery system providing care to over 650,000 members in urban and rural areas of Colorado in the United States. A random sample of patients who had an encounter with a Current Procedural Terminology (CPT) code for a PCI performed in an ambulatory surgery center (ASC), emergency department (ED), or hospital during the study periods was obtained from an administrative encounter database. Procedures were then assessed via manual chart review by trained abstractors for 1) confirmation that a PCI was actually performed, and 2) whether the PCI was elective or non-elective. Algorithms based on diagnostic, utilization, and/or procedural codes were then applied to assess their abilities to discriminate between elective and non-elective PCI.

KPCO operates 29 medical offices with embedded pharmacies and utilizes an electronic health record (EHR) where information on diagnoses and surgical procedures is documented. In addition, KPCO operates two ASCs where patients receive same-day surgical care, including diagnostic and preventive procedures. Furthermore, KPCO contracts with local facilities to provide ED and inpatient services. Coded and free-text medical, pharmacy, laboratory, ED, hospitalization, and membership information from within the delivery system, as well as from contracted and non-contracted facilities, is captured in KPCO’s electronic administrative and claims databases. At the time of the study, there were no KPCO guidelines to direct the appropriateness of performing an elective PCI. All aspects of this study, including non-anonymized medical record review, were reviewed and approved by the KPCO Institutional Review Board with a waiver of informed consent. Because the data used for this study contain business-sensitive information, the data and study materials will not be made available to other researchers.

Patient population

All adult patients who had an encounter with at least one CPT code (92920–92924, 92928, 92929, 92933, 92934, 92937, 92938, 92941, 92943, and 92944) for a PCI procedure between 1/1/2015 and 6/30/2015 or 1/1/2016 and 6/30/2016 were identified from an administrative database. Time frames around the October 2015 transition from the International Classification of Diseases, Ninth Revision (ICD-9) to the International Classification of Diseases, Tenth Revision (ICD-10) were used to increase generalizability to broad clinical and research settings [11,12]. The index date was the date of the included PCI procedure. Patients undergoing multiple PCI within the study time periods were included if the PCI were at least one day apart; therefore, a patient could be included multiple times in the study. Each PCI was chart reviewed to confirm it was a true PCI. Procedures that could not be confirmed as a true PCI or classified as elective or non-elective were excluded from analysis.
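The cohort-identification step above can be sketched in Python. The record layout and field names (`patient_id`, `cpt`, `date`) are assumptions for illustration; the article specifies only the CPT codes, the study windows, and the one-day separation rule for repeat PCI.

```python
from datetime import date

# CPT codes for PCI listed in the Methods
PCI_CPT = {"92920", "92921", "92922", "92923", "92924", "92928", "92929",
           "92933", "92934", "92937", "92938", "92941", "92943", "92944"}

# The two study windows straddling the ICD-9 to ICD-10 transition
STUDY_WINDOWS = [(date(2015, 1, 1), date(2015, 6, 30)),
                 (date(2016, 1, 1), date(2016, 6, 30))]

def eligible_pci_encounters(encounters):
    """Return PCI encounters in the study windows, keeping repeat PCI for
    the same patient only if they occur at least one day apart."""
    hits = sorted(
        (e for e in encounters
         if e["cpt"] in PCI_CPT
         and any(lo <= e["date"] <= hi for lo, hi in STUDY_WINDOWS)),
        key=lambda e: (e["patient_id"], e["date"]),
    )
    kept, last_date = [], {}
    for e in hits:
        prev = last_date.get(e["patient_id"])
        if prev is None or (e["date"] - prev).days >= 1:
            kept.append(e)
            last_date[e["patient_id"]] = e["date"]
    return kept
```

Same-day duplicate codes for one patient collapse to a single index event, while a second PCI a month later counts as a separate inclusion, matching the rule described above.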

Classification of PCI procedures

A sample of 500 PCI procedures was randomly selected for chart review. There is no accepted standard for the number of events to review in a validation study such as this; 500 was chosen because that number of medical charts was feasible to review with the resources available at the time of the study. The randomization procedure was not enriched for codes that might increase or decrease the number of elective PCI cases (i.e., all codes were given equal weight). Two clinicians (C.G.D. and L.J.H.) were trained to abstract information from the EHR and categorize each procedure as a true PCI and then as “elective” or “non-elective.” Definitions were determined a priori with input from the study cardiologist (D.P.K.) and adjusted throughout the course of chart reviews, as needed, with discussion and consensus between the investigators.

Hospitalization, ED, procedure, clinical, and telephone practitioner notes documented in the EHR were used to validate each procedure. Procedures were considered elective if the terms “elective,” “non-primary,” “non-emergent,” or “staged” were used in any of the documentation. Procedures were considered non-elective if the terms “emergent,” “primary,” “urgent,” “immediate,” “ASA Status 5E,” “cardiac alert,” or “cardiac arrest” were used. If none of these terms were used, a set of clinical criteria informed by current guidelines [2–4,13], appropriate use criteria [5], and input from the study cardiologist was used to determine if a procedure was non-elective. These clinical criteria included: ruptured plaque, stent thrombosis, hemodynamic instability (i.e., tachycardia, bradycardia, hypotension, shock), any myocardial infarction, unstable angina, troponin values greater than 0.5 ng/mL, or chest pain (at rest or with accelerating tempo and/or severity). If no consensus could be reached, a third abstractor (T.D.) acted as a tiebreaker.
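As a rough illustration, the documentation-term screen can be expressed as a first-pass rule. This is only a sketch: the actual classification was manual chart review, and the tokenization shown is an assumption about how the terms would be matched in free text.

```python
import re

ELECTIVE_TERMS = {"elective", "non-primary", "non-emergent", "staged"}
NON_ELECTIVE_TERMS = {"emergent", "primary", "urgent", "immediate"}
NON_ELECTIVE_PHRASES = ("asa status 5e", "cardiac alert", "cardiac arrest")

def screen_note_text(text):
    """First-pass screen of practitioner notes: returns 'elective',
    'non-elective', or None when the clinical criteria must decide."""
    lowered = text.lower()
    # Tokenize keeping hyphenated words whole, so "non-emergent" is not
    # misread as containing the non-elective term "emergent".
    tokens = set(re.findall(r"[a-z0-9]+(?:-[a-z0-9]+)*", lowered))
    if tokens & ELECTIVE_TERMS:
        return "elective"
    if tokens & NON_ELECTIVE_TERMS or any(p in lowered
                                          for p in NON_ELECTIVE_PHRASES):
        return "non-elective"
    return None  # fall back to the guideline-based clinical criteria
```

Notes containing none of the listed terms fall through to `None`, mirroring the paper's fallback to clinical criteria (myocardial infarction, troponin > 0.5 ng/mL, etc.).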

To assess inter-rater reliability, an initial set of 20 PCI was reviewed independently by both abstractors. The kappa statistic indicated strong agreement (κ = 0.85, 95% CI 0.59–1.00). For the study, each reviewer assessed 250 procedures; however, because the EHR contained limited information for 21 procedures, an additional 21 procedures were selected for review.
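Cohen's kappa on a pair of label sequences can be computed directly; a minimal sketch:

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    cats = set(labels_a) | set(labels_b)
    # Observed agreement: fraction of items on which the raters agree
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independent marginal label frequencies
    p_exp = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)
```

For example, two raters agreeing on 3 of 4 elective/non-elective labels with marginals of 2:2 and 1:3 yield κ = 0.5, while perfect agreement yields κ = 1.0.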


Algorithms

After each procedure was categorized as elective or non-elective, nine algorithms determined a priori were tested against the sample. These algorithms were developed with the purpose of identifying an elective PCI using administrative data. They varied in terms of time from index date with combinations of diagnosis codes, healthcare utilizations (i.e., office visits and hospitalizations), procedure setting (i.e., ambulatory or inpatient), and specific procedures (Table 1 and Fig 1). The algorithms fall broadly into four groups. Algorithms 1 and 2 assessed for ED visits or hospitalizations for acute myocardial infarction or unstable angina relative to the index date using ICD-9 (410.x, 411.1, 413.0, 411.81, 411.89) or ICD-10 diagnosis codes (I21.x–I23.x, I20.0, and I24.x). Algorithms 3 and 4 assessed for documentation of an ICD-9 or ICD-10 acute myocardial infarction code with any encounter within the 30 and 90 days prior to the index date, respectively (same codes as above). Algorithms 5 through 7 assessed for an ambulatory cardiology visit, a diagnostic procedure (i.e., stress test or coronary angiography), or the combination of the two within the 90 days prior to the index date. CPT codes were used for diagnostic procedures (92978, 92979, 93454, 93463, 93464, 93571, and 93572). Algorithms 8 and 9 assessed the acuity of the setting where the associated PCI procedure occurred: algorithm 8 assessed whether the encounter place of service was an “ambulatory” visit (as would occur for routine, low-risk outpatient procedures such as an elective PCI), while algorithm 9 included procedures where the place of service was not an emergency visit (as would occur for emergent PCI cases) but could be any other setting (e.g., nursing home, skilled nursing facility, hospice, or telephone encounter). The places of service are mutually exclusive of one another in the data but not inclusive of other settings; therefore, separate algorithms were needed.
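To make the inclusion/exclusion distinction concrete, algorithms 8 and 9 might be expressed as simple predicates over an encounter's place of service. The field name and labels here are assumptions for illustration; the article defines only the logic.

```python
def algorithm_8(encounter):
    """Inclusion rule: keep PCI whose place of service is ambulatory,
    as would occur for routine, low-risk outpatient procedures."""
    return encounter["place_of_service"] == "ambulatory"

def algorithm_9(encounter):
    """Exclusion rule: keep PCI whose place of service is anything
    other than an emergency visit (where emergent PCI would occur)."""
    return encounter["place_of_service"] != "emergency"
```

Note that a non-ED, non-ambulatory encounter (e.g., inpatient) passes algorithm 9 but fails algorithm 8, which is why the two rules are not equivalent and both were needed given the mutually exclusive place-of-service coding.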

Fig 1. Description of algorithms relative to the index PCI event.

Each algorithm was defined by diagnosis codes, utilization codes, procedure codes, or place of service codes alone or in combination relative to the date of the PCI (Day Zero on timeline; see Methods). The algorithms were designed to either exclude emergent events (white boxes) or include elective events (grey boxes). ASC = ambulatory surgery center; AMI = acute myocardial infarction; ED = emergency department; IP = inpatient; PCI = percutaneous coronary intervention; UA = unstable angina.

Data analysis

Age was calculated as of the index date. The Chronic Disease Score, a measure of a patient’s chronic illness burden, was calculated from ambulatory medication dispensings recorded during the 180 days prior to the index date [14]. A Charlson Comorbidity Index was calculated from diagnoses recorded during the 180 days prior to the index date [15]. Patient characteristics were reported as means ± standard deviations (SD) for normally-distributed data and medians [interquartile ranges, IQR] for non-normally-distributed data, with appropriate tests used to assess for significant differences between the groups (e.g., chi-square test of association, t-test, Wilcoxon rank-sum test).

The results of each algorithm (yes/no the criteria were met) were compared to the chart-review validation gold standard of elective PCI to calculate the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for each algorithm. No formal threshold exists for any of these performance metrics to define adequate algorithm performance, as these measures are sensitive to the population in which they are evaluated [16]. Previous validation studies have evaluated algorithms with >80% performance as “adequate” for a variety of codes and clinical conditions (e.g., stroke, heart failure, rheumatoid arthritis) [17–20]. We also performed a sensitivity analysis for the algorithms that used diagnosis codes (algorithms 1, 3, and 4), stratifying the analysis by date (before and after October 1, 2015) to assess accuracy of the algorithms based on ICD-9 versus ICD-10 codes.
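The four performance statistics follow directly from the 2×2 table of algorithm output versus the chart-review gold standard:

```python
def performance(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from the confusion matrix
    of algorithm output vs. the chart-review gold standard."""
    return {
        "sensitivity": tp / (tp + fn),  # elective PCI correctly flagged
        "specificity": tn / (tn + fp),  # non-elective PCI correctly not flagged
        "ppv": tp / (tp + fp),          # flagged PCI that are truly elective
        "npv": tn / (tn + fn),          # unflagged PCI that are truly non-elective
    }
```

With hypothetical counts of 9 true positives, 1 false positive, 1 false negative, and 89 true negatives, for instance, this gives 90% sensitivity and 90% PPV; the counts are illustrative only, not taken from the study's tables.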

The 95% confidence interval (CI) for each performance statistic was calculated using the Wilson score method [21,22]. Given that PPV is highly influenced by prevalence and the pragmatic need was to identify true elective PCI cases, sensitivity was the primary outcome. An alpha level of 0.05 was used for significance. All analyses were conducted using SAS version 9.4 (SAS Institute, Cary, NC, USA) [23].
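The Wilson score interval can be sketched as follows. As a rough check, a hypothetical 91 of 93 elective cases detected (consistent with the reported 97.9% sensitivity, though the exact counts are an assumption) gives an interval close to the reported 92.5%–99.5%:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion
    (z = 1.96 for a 95% interval)."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_ci(91, 93)  # ≈ (0.925, 0.994)
```

Unlike the naive Wald interval, the Wilson interval stays within [0, 1] and behaves well for proportions near 0 or 1, which matters here since several algorithms had sensitivities or specificities close to those extremes.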


Results

There were 911 potential PCI identified during the study periods; of the 521 randomly sampled procedures reviewed, 497 were confirmed as valid PCI and categorized as elective or non-elective (Fig 2). Ninety-three (18.7%) were determined to be elective and 404 (81.3%) non-elective. Patients with an elective PCI were more likely to be male, have comorbid hypertension or diabetes, and have a higher mean Chronic Disease Score (Table 2, p < 0.05 for all). Patients with a non-elective PCI were more likely to have had a previous myocardial infarction (p = 0.007).

Fig 2. Development of cohort for analysis of algorithm performance.

In the study period, 911 PCI events occurred. Of 521 events reviewed, 497 could be categorized as elective or non-elective and analyzed for algorithm performance. PCI = percutaneous coronary intervention.

Table 2. Patient characteristics by elective and non-elective PCI status.

The nine algorithms varied in their performance measures (Table 3). Sensitivity ranged from 5.3% (algorithm 1) to 97.9% (algorithm 9), PPV from 1.5% (algorithm 1) to 62.2% (algorithm 7), specificity from 5.7% (algorithm 9) to 92.3% (algorithm 7), and NPV from 57.8% (algorithm 1) to 97.0% (algorithm 3). Algorithm 9 had the highest sensitivity (97.9%, 95% CI 92.5%–99.5%), although its PPV and specificity were low (19.5% and 5.7%, respectively). Algorithm 7 had the highest PPV (62.2%, 95% CI 50.8%–72.7%) and the highest specificity (92.3%, 95% CI 89.3%–94.7%). Algorithm 3 had the highest NPV (97.0%, 95% CI 94.5%–98.5%) along with high sensitivity (89.4%) and specificity (79.2%), although its PPV was low (50.0%). Algorithm 4 had performance statistics similar to algorithm 3. Algorithms 1, 2, and 9 had the highest counts of false positives (Table 4); algorithms 1, 2, and 7 had the highest counts of false negatives.

Table 4. Counts of true positives, false positives, false negatives, and true negatives for each algorithm.

Table 5 shows the results of the sensitivity analyses for algorithms that used diagnosis codes. The performance statistics were similar between ICD-9 and ICD-10 codes for algorithm 1. For algorithms 3 and 4, specificity was higher with ICD-10 codes compared to ICD-9 codes.


Discussion

This retrospective analysis of patients from an integrated healthcare delivery system who underwent a PCI procedure identified several algorithms, based on diagnostic, utilization, and/or procedural codes, that reasonably distinguish between elective and non-elective PCI. While the algorithms tested varied substantially in their ability to identify an elective PCI, an algorithm that excluded PCI performed as part of an emergency department visit had the highest sensitivity (97.9%) to identify an elective PCI. An algorithm that assessed for an outpatient cardiology provider visit and either a stress test or angiography CPT code in the prior 90 days had the highest PPV (62.2%). To our knowledge, our study is the first to validate methods to identify patients who have had an elective PCI using administrative data. Our findings are important because providing a sensitive means to identify an elective PCI cohort represents a substantial step forward in methodology to measure the frequency of elective PCI, describe patient outcomes from elective PCI, and develop models predictive of patients requiring elective PCI. In addition, our findings may enable comparative analyses to assess specifically the outcomes of elective PCI in real-world settings.

Other investigations have used algorithms that combine encounter codes to identify a cardiovascular procedure. Davis and colleagues reported that a combination of ICD-9 and CPT codes resulted in a 53% sensitivity and 100% PPV to identify any PCI in veterans with rheumatoid arthritis even though PCI procedures were rare (prevalence = 2%) [24]. Another study designed to identify a revascularization procedure in patients receiving statin medications reported that a combination of CPT codes for coronary artery bypass grafting, angioplasty, and stenting had a PPV of 90–97% [25]. Interestingly, the majority of the CPT codes used in these studies are no longer available for use in medical billing (i.e., 92980–92982, 92984, and 92995–92996). To our knowledge, only one other study utilized place of service to identify an elective PCI population in a national claims database, using discharge status and admission status, which can be classified as “outpatient” or “inpatient,” and “elective” or “nonelective,” respectively [26]; the primary limitation of this unvalidated approach is the potential for misclassification bias. Further, none of our algorithms had a PPV greater than 62%, suggesting that current CPT codes were not developed with elective PCI in mind. Development of CPT code(s) specific to elective PCI would be the most effective way to document these procedures and allow accurate identification of patients who received an elective PCI. Until such code(s) are available, our methodology provides a pragmatic approach to identifying elective PCI in administrative data.

The differences in performance statistics between algorithms observed in our study may reflect the complexity of the components included in each algorithm. CPT codes were the underlying mechanism for each algorithm, and algorithms 1–9 added levels of complexity through the addition of timing parameters, other healthcare utilizations (e.g., hospitalizations, inpatient admissions, diagnostic procedures), and diagnosis codes. Increasing the level of complexity narrows a target population, thereby reducing the potential to detect true positives, which may be particularly hard to detect for conditions with a low prevalence. Algorithm 9 was one of the simplest algorithms that we tested, perhaps explaining why it had the highest sensitivity; however, given its simplicity, algorithm 9 also had a high rate of false positives, reducing PPV and specificity. The selection of an algorithm should be based on the desire to capture true positives (i.e., optimize sensitivity or PPV) or capture true negatives (i.e., optimize specificity or NPV). Future investigators may improve upon our algorithms by attempting to minimize false positives and false negatives; the algorithms with the highest counts of these (i.e., algorithms 1, 2, and 9) may therefore be difficult to refine further. Algorithms 4, 5, and 7 had relatively high global performance (i.e., across all performance statistics) compared to the others and may serve as meaningful candidates for future refinement.

Analyses of clinical registries have estimated that elective PCI constitute between 29% and 80% of PCI performed [10,27–32]. In our study, approximately 19% of PCI procedures were elective. Our lower rate may be because our study assessed PCI procedures that occurred within a defined time period, while clinical registry data may contain procedures across numerous years. In addition, we used a contemporary cohort of patients who received a PCI. The results of the COURAGE trial in 2007, which demonstrated equivalent rates of death and myocardial infarction in patients who underwent optimal medical treatment alone or optimal medical treatment plus PCI [6], and the development of appropriate use criteria in 2009 [5] likely reduced the frequency of elective PCI. Furthermore, the large range of reported elective PCI rates could reflect significant practice variation in PCI application across geographic regions of the United States [33]. Further research is needed to describe trends in elective PCI use, broadly or geographically, across time and patient populations.

Our study of 497 validated PCI tested an expansive set of algorithms to discriminate elective PCI using administrative data. While we identified an algorithm with high sensitivity, our study had limitations. Our findings depend on the accurate recording of CPT codes by medical billing specialists; if specialists do not bill appropriately (e.g., record an incorrect place of service), our algorithms will likely have lower sensitivity. Because we conducted chart review to validate the type and place of service of each PCI, as well as other exposures around it, we are confident that our algorithms are accurate. Since performance statistics are affected by prevalence, a larger sample of validated elective PCI may alter the sensitivity, specificity, PPV, and NPV values we report; future studies with larger sample sizes should be undertaken to validate our findings. Finally, we did not assess the accuracy of the individual PCI CPT codes or algorithms within subgroups of patients due to our low rate of elective PCI. This may be of interest for future research.

Conclusion

This retrospective analysis using administrative data found that an algorithm which excluded PCI occurring in an emergency department setting was sensitive to elective PCI. Other algorithms had higher specificity, PPV, and NPV. Investigators who wish to use our methodology should choose an algorithm based on their desired outcome. Without a CPT code specific to elective PCI, our algorithms offer a reasonable approach to identify elective PCI in administrative data.


References

1. Benjamin EJ, Muntner P, Alonso A, Bittencourt MS, Callaway CW, Carson AP, et al. Heart Disease and Stroke Statistics—2019 Update: A Report from the American Heart Association. Circulation. 2019;139: 00–00. pmid:30700139
2. O’Gara PT, Kushner FG, Ascheim DD, Casey DD Jr, Chung MK, de Lemos JA, et al. 2013 ACCF/AHA Guideline for the Management of ST-Elevation Myocardial Infarction: A report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines. Circulation. 2013;127: e362–e425. pmid:23247304
3. Levine GN, Bates ER, Blankenship JC, Bailey SR, Bittl JA, Cercek B, et al. 2015 ACC/AHA/SCAI Focused Update on Primary Percutaneous Coronary Intervention for Patients With ST-Elevation Myocardial Infarction: An Update. Circulation. 2016;133: 1135–1147. pmid:26490017
4. Amsterdam EA, Wenger NK, Brindis RG, Casey DE Jr, Ganiats TG, Holmes DR Jr, et al. 2014 AHA/ACC Guideline for the Management of Patients With Non–ST-Elevation Acute Coronary Syndromes: A Report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation. 2014;130: e344–e426. pmid:25249585
5. Patel MR, Dehmer GJ, Hirshfeld JW, Smith PK, Spertus JA, Masoudi FA, et al. ACCF/SCAI/STS/AATS/AHA/ASNC 2009 Appropriateness Criteria for Coronary Revascularization: A Report of the American College of Cardiology Foundation Appropriateness Criteria Task Force, Society for Cardiovascular Angiography and Interventions, Society of T. Circulation. 2009;119: 1330–1352. pmid:19131581
6. Boden WE, O’Rourke RA, Teo KK, Hartigan PM, Maron DJ, Kostuk WJ, et al. Optimal Medical Therapy with or without PCI for Stable Coronary Disease. N Engl J Med. 2007;356: 1503–1516. pmid:17387127
7. Al-Lamee R, Thompson D, Dehbi H-M, Sen S, Tang K, Davies J, et al. Percutaneous coronary intervention in stable angina (ORBITA): a double-blind, randomised controlled trial. Lancet. 2018;391: 31–40. pmid:29103656
8. Heyde GS, Koch KT, De Winter RJ, Dijkgraaf MGW, Klees MI, Dijksman LM, et al. Randomized trial comparing same-day discharge with overnight hospital stay after percutaneous coronary intervention: Results of the Elective PCI in Outpatient Study (EPOS). Circulation. 2007;115: 2299–2306. pmid:17420341
9. Hodkinson EC, Ramsewak A, Murphy JC, Shand JA, McClelland AJ, Menown IBA, et al. An audit of outcomes after same-day discharge post-PCI in acute coronary syndrome and elective patients. J Interv Cardiol. 2013;26: 570–577. pmid:24112741
10. Amin AP, Patterson M, House JA, Giersiefen H, Spertus JA, Baklanov DV, et al. Costs Associated With Access Site and Same-Day Discharge Among Medicare Beneficiaries Undergoing Percutaneous Coronary Intervention: An Evaluation of the Current Percutaneous Coronary Intervention Care Pathways in the United States. JACC Cardiovasc Interv. 2017;10: 342–351. pmid:28231901
11. National Center for Health Statistics. International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM). 7 Sep 2011 [cited 7 Mar 2020]. Available:
12. National Center for Health Statistics. International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM). [cited 7 Mar 2020]. Available:
13. Levine GN, Bates ER, Blankenship JC, Bailey SR, Bittl JA, Cercek B, et al. 2011 ACCF/AHA/SCAI Guideline for Percutaneous Coronary Intervention: A Report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines and the Society for Cardiovascular Angiography and Interventions. Catheter Cardiovasc Interv. 2011;82: E266–E355. pmid:22065485
14. Von Korff M, Wagner EH, Saunders K. A chronic disease score from automated pharmacy data. J Clin Epidemiol. 1992;45: 197–203. pmid:1573438
15. Quan H, Li B, Couris CM, Fushimi K, Graham P, Hider P, et al. Updating and validating the Charlson Comorbidity Index and Score for risk adjustment in hospital discharge abstracts using data from 6 countries. Am J Epidemiol. 2011;173: 676–682. pmid:21330339
16. Šimundić A-M. Measures of Diagnostic Accuracy: Basic Definitions. EJIFCC. 2009;19: 203–211. pmid:27683318
17. Schultz SE, Rothwell DM, Chen Z, Tu K. Identifying cases of congestive heart failure from administrative data: a validation study using primary care patient records. Chronic Dis Inj Can. 2013;33: 160–166. pmid:23735455
18. Olson KL, Wood MD, Delate T, Lash L, Rasmussen J, Denham AM, et al. Positive predictive values of ICD-9 codes to identify patients with stroke or TIA. Am J Manag Care. 2014;20: e27–e34. pmid:24738552
19. Kumamaru H, Judd SE, Curtis JR, Ramachandran R, Hardy NC, Rhodes JD, et al. Validity of claims-based stroke algorithms in contemporary Medicare data: Reasons for Geographic and Racial Differences in Stroke (REGARDS) study linked with Medicare claims. Circ Cardiovasc Qual Outcomes. 2014;7: 611–619. pmid:24963021
20. Widdifield J, Bombardier C, Bernatsky S, Paterson JM, Green D, Young J, et al. An administrative data validation study of the accuracy of algorithms for identifying rheumatoid arthritis: the influence of the reference standard on algorithm performance. BMC Musculoskelet Disord. 2014;15: 216. pmid:24956925
21. Wilson EB. Probable Inference, the Law of Succession, and Statistical Inference. J Am Stat Assoc. 1927;22: 209–212.
22. Newcombe RG. Two-sided confidence intervals for the single proportion: comparison of seven methods. Stat Med. 1998;17: 857–872. pmid:9595616
23. SAS Institute Inc. SAS Analytics Software & Solutions. 2020 [cited 7 Mar 2020]. Available:
24. Davis LA, Mann A, Cannon GW, Mikuls TR, Reimold AM, Caplan L. Validation of Diagnostic and Procedural Codes for Identification of Acute Cardiovascular Events in US Veterans with Rheumatoid Arthritis. EGEMS (Washington, DC). 2013;1: 1023. pmid:25848582
25. Wei W-Q, Feng Q, Weeke P, Bush W, Waitara MS, Iwuchukwu OF, et al. Creation and Validation of an EMR-based Algorithm for Identifying Major Adverse Cardiac Events while on Statins [abstract]. AMIA Jt Summits Transl Sci Proc. 2014;2014: 112–119. pmid:25717410
26. Amin AP, Pinto D, House JA, Rao SV, Spertus JA, Cohen MG, et al. Association of Same-Day Discharge after Elective Percutaneous Coronary Intervention in the United States with Costs and Outcomes. JAMA Cardiol. 2018;3: 1041–1049. pmid:30267035
27. Chieffo A, Meliga E, Latib A, Park S-J, Onuma Y, Capranzano P, et al. Drug-Eluting Stent for Left Main Coronary Artery Disease: The DELTA Registry: A Multicenter Registry Evaluating Percutaneous Coronary Intervention Versus Coronary Artery Bypass Grafting for Left Main Treatment. J Am Coll Cardiol Cardiovasc Interv. 2012;5: 718–727. pmid:22814776
28. Kiviniemi TO, Pietila A, Gunn JM, Aittokallio JM, Mahonen MS, Salomaa VV, et al. Trends in rates, patient selection and prognosis of coronary revascularizations in Finland between 1994 and 2013: the CVDR. EuroIntervention. 2016;12: 1–9. pmid:27753597
29. Grodzinsky A, Kosiborod M, Tang F, Jones PG, McGuire DK, Spertus JA, et al. Residual angina after elective percutaneous coronary intervention in patients with diabetes mellitus. Circ Cardiovasc Qual Outcomes. 2017;10: e003553. pmid:28904076
30. Matthew Brennan J, Peterson ED, Messenger JC, Rumsfeld JS, Weintraub WS, Anstrom KJ, et al. Linking the National Cardiovascular Data Registry CathPCI Registry With Medicare Claims Data: Validation of a Longitudinal Cohort of Elderly Patients Undergoing Cardiac Catheterization. Circ Cardiovasc Qual Outcomes. 2012;5: 134–140. pmid:22253370
31. Chan PS, Klein LW, Krone RJ, Dehmer GJ, Kennedy K, Nallamothu BK, et al. Appropriateness of Percutaneous Coronary Intervention. JAMA. 2011;306: 53–61. pmid:21730241
32. Cavender MA, Joynt KE, Parzynski CS, Resnic FS, Rumsfeld JS, Moscucci M, et al. State Mandated Public Reporting and Outcomes of Percutaneous Coronary Intervention in the United States. Am J Cardiol. 2015;115: 1494–1501. pmid:25891991
33. Schneider PM, Bradley SM. Finding the optimum in the use of elective percutaneous coronary intervention. JCOM. 2014;21: 281–288.