Health systems routinely implement changes to the design of electronic health records (EHRs). Physician behavior may vary in response, and methods to identify this variation could help inform future interventions. The objective of this study was to phenotype primary care physician practice patterns and evaluate associations with response to an EHR nudge for influenza vaccination.
Methods and findings
During the 2016–2017 influenza season, 3 primary care practices at Penn Medicine implemented an active choice intervention in the EHR that prompted medical assistants to template influenza vaccination orders for physicians to review during the visit. We used latent class analysis to identify physician phenotypes based on 9 demographic, training, and practice pattern variables, which were obtained from the EHR and publicly available sources. A quasi-experimental approach was used to evaluate response to the intervention relative to control practices over time in each of the physician phenotype groups. For each physician latent class, a generalized linear model with logit link was fit to the binary outcome of influenza vaccination at the patient visit level. The sample comprised 45,410 patients with a mean (SD) age of 58.7 (16.3) years; 67.1% were white and 22.1% were black. The sample also comprised 56 physicians with a mean (SD) of 24.6 (10.2) years of experience; 53.6% were male. The model segmented physicians into groups with higher (n = 41) and lower (n = 15) clinical workloads. Physicians in the higher clinical workload group had a mean (SD) of 818.8 (429.1) patient encounters, 11.6 (4.7) patient appointments per day, and 4.0 (1.1) days per week in clinic. Physicians in the lower clinical workload group had a mean (SD) of 343.7 (129.0) patient encounters, 8.0 (2.8) patient appointments per day, and 3.1 (1.2) days per week in clinic. Among the higher clinical workload group, the EHR nudge was associated with a significant increase in influenza vaccination (adjusted difference-in-difference in percentage points, 7.9; 95% CI, 0.4 to 9.0; P = .01). Among the lower clinical workload group, the EHR nudge was not associated with a significant difference in influenza vaccination rates (adjusted difference-in-difference in percentage points, -1.0; 95% CI, -5.3 to 5.8; P = .90).
Citation: Changolkar S, Rewley J, Balachandran M, Rareshide CAL, Snider CK, Day SC, et al. (2020) Phenotyping physician practice patterns and associations with response to a nudge in the electronic health record for influenza vaccination: A quasi-experimental study. PLoS ONE 15(5): e0232895. https://doi.org/10.1371/journal.pone.0232895
Editor: Holly Seale, University of New South Wales, AUSTRALIA
Received: February 18, 2020; Accepted: April 23, 2020; Published: May 20, 2020
Copyright: © 2020 Changolkar et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The electronic health record data contains PHI. Data availability requests can be sent to the University of Pennsylvania Institutional Review Board (firstname.lastname@example.org) and Dr. Mitesh Patel (email@example.com).
Funding: This study was supported by the University of Pennsylvania Health System through the Penn Medicine Nudge Unit. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: Dr. Patel is supported by a career development award from the Department of Veterans Affairs HSR&D. Dr. Patel is founder of Catalyst Health, a technology and behavior change consulting firm. Dr. Patel also has received research funding from Deloitte, which is not related to the work described in this manuscript. This does not alter our adherence to PLOS ONE policies on sharing data and materials. No other disclosures were reported.
Nearly 90% of primary care physicians (PCPs) in the United States use an electronic health record (EHR) to facilitate medical decision-making [1,2]. Health systems are increasingly implementing changes to the design of EHRs to influence physician behavior. These interventions are typically deployed broadly to all physicians within a clinical specialty or throughout the entire health system. In some cases, these changes may benefit the overall group. However, there may be some physicians for whom these interventions are not effective. Moreover, for some physicians, these design changes could have a negative impact, either directly on the targeted behavior or indirectly on other behaviors. There is a lack of evidence on methods to identify groups of physicians with differential responses to these types of interventions.
Existing data from EHRs could be used to identify physicians with different behavioral phenotypes. For example, physician practice patterns may vary in the volume or types of patients they care for in clinic. Model-based approaches that segment physicians into different phenotype groups may allow behavioral interventions to be tailored to improve patient care. For example, latent class analysis has been used to classify phenotypes using clinical [5,6], behavioral [7,8], and activity data [9,10].
Nudges are subtle changes to the design of choice architecture that can have a significant impact on behavior. In prior work by members of our group, we found that an active choice intervention in the EHR to prompt medical assistants to template influenza vaccination orders for physicians during primary care visits led to a 9.5-percentage-point increase in influenza vaccination in intervention practices relative to control practices over time. However, physicians may have varied in their response to this intervention. In this study, our objective was to phenotype physicians using EHR data on their practice patterns and then evaluate associations with responses to an active choice nudge in the EHR for influenza vaccination.
The University of Pennsylvania Institutional Review Board approved this study and waived informed consent because it was infeasible given the study design, and the study posed minimal risk.
Setting and participants
Similar to prior work, the sample comprised primary care physicians (PCPs) from 10 primary care practices (3 intervention, 7 control) at Penn Medicine and patients who visited those PCPs during two influenza seasons (September 1st to March 31st) between 2015 and 2017. We excluded PCPs who did not see patients during the entire study period or who had at least one month during the 2015–16 influenza season (pre-intervention period) without any patient visits (n = 7). We evaluated each patient's first new or return visit with their PCP during the study period. Acute, sick, or other visit types were excluded because influenza vaccination may not be appropriate at those times. Patients were excluded if EHR documentation indicated they were already vaccinated prior to the visit.
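The physician exclusion rule can be made concrete with a short sketch. This is an illustrative re-implementation, not the study's code; the record layout and function name are assumptions.

```python
from datetime import date

# The seven months of the 2015-16 pre-intervention influenza season.
SEASON_MONTHS = [(2015, 9), (2015, 10), (2015, 11), (2015, 12),
                 (2016, 1), (2016, 2), (2016, 3)]

def eligible_physicians(visits):
    """Return PCP ids with at least one visit in every season month.

    visits: list of (physician_id, visit_date) tuples. A PCP with any
    month of the pre-intervention season lacking visits is excluded.
    """
    seen = {}  # physician_id -> set of (year, month) with visits
    for pcp, d in visits:
        seen.setdefault(pcp, set()).add((d.year, d.month))
    return {pcp for pcp, months in seen.items()
            if all(m in months for m in SEASON_MONTHS)}

# Toy example: "A" has visits in all 7 months; "B" in only one.
visits = [("A", date(2015, 9 + i, 1)) for i in range(4)] + \
         [("A", date(2016, m, 15)) for m in (1, 2, 3)] + \
         [("B", date(2015, 10, 5))]
print(eligible_physicians(visits))  # {'A'}
```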
Prior to the intervention, PCPs had to remember to manually check whether a patient was due for influenza vaccination, discuss it with the patient, and then place an order for it in the EHR. During the 2016 to 2017 influenza season, three Penn Medicine primary care practices implemented an active choice intervention in the EHR using a best practice alert in Epic, directed to medical assistants. Prior to meeting with the physician, patients met with a medical assistant to have their vitals checked. At that time, the EHR assessed patient eligibility for the influenza vaccine and prompted medical assistants to accept or cancel an order for the vaccine. If accepted, the order was templated for the physician to review and sign during the patient visit. The control group comprised seven primary care clinics that did not implement the intervention.
Clarity, an Epic reporting database, was used to obtain data on physicians (demographics, training, and practice patterns), patients (demographics, insurance, comorbidities, prior influenza vaccination status, and PCP), and clinic visits (date, appointment time, practice site, visit type, and presence of an order for influenza vaccination or not). Publicly available data sources were used for information on physician years of experience and sex. U.S. Census data was used to find the median household income by zip code, when available.
Physician practice attributes were defined as follows. Physician years of experience was calculated as the number of years between earning a medical degree and the beginning of the study period (2015). The number of encounters was the total number of patient visits for a physician at Penn Medicine during the 2015–2016 influenza season. For each physician, we estimated the mean Charlson Comorbidity Index (CCI) among their patients. Each physician's average number of appointments per hour was estimated by first identifying the individual hours of the day they worked during the influenza season. Any hour of the day that accounted for fewer than 5% of total hours (typically early hours such as 6:00 am) was dropped. Finally, the physician's total number of completed appointments was divided by the distinct hours in which they saw patients. We estimated the mean number of days per week a physician saw patients in clinic. Delay was estimated as the mean number of minutes between the scheduled appointment time and the physician opening the patient chart; we excluded outliers attributed to miscoding of data in the EHR. The percent of new patient encounters for each physician was estimated using visit type data. To capture differences in vaccination rates by appointment time [11,14], the percent of visits in the morning (after 8am and before 1pm) was calculated out of the total number of visits between 8am and 6pm for each physician.
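As a sketch, the appointments-per-hour calculation described above might look like the following. This is a hypothetical re-implementation; the input format and the exact handling of the 5% threshold are assumptions.

```python
from collections import Counter

def appointments_per_hour(appointments, min_share=0.05):
    """Completed appointments per distinct clinic hour.

    appointments: list of (date, hour_of_day) tuples, one per completed
    appointment. Hours of the day contributing less than min_share of
    all appointments (e.g. a stray 6:00 am slot) are dropped first,
    mirroring the rule described in the text.
    """
    by_hour = Counter(h for _, h in appointments)
    total = sum(by_hour.values())
    kept_hours = {h for h, n in by_hour.items() if n / total >= min_share}
    kept = [(d, h) for d, h in appointments if h in kept_hours]
    # Divide appointments by distinct (date, hour) slots actually worked.
    return len(kept) / len(set(kept))

# 20 regular appointments plus one rare 6:00 am slot (1/21 < 5%, dropped).
appts = [(f"day{i}", h) for i in range(5) for h in (9, 10, 11, 12)]
appts.append(("day0", 6))
print(appointments_per_hour(appts))  # 1.0
```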
The primary outcome measure was the presence of an order for influenza vaccination. In prior work, over 99.9% of vaccine orders also had an insurance claim, indicating that the vast majority of these orders resulted in actual vaccination. Insurance information was not available for this study.
To phenotype physician practice patterns, we used latent class analysis (LCA), a model-based approach that uses observable variables to classify individuals into previously unmeasured subgroups. Variables are specified as latent class indicators with the goal of distinguishing between classes and categorizing physicians into their most likely classes given the observed data. The following nine physician variables were used in the LCA: years of experience, sex, number of patient encounters, mean CCI of patient encounters, mean number of patient appointments per weekday, mean number of weekdays in clinic per week, mean minutes of appointment delay, percent of new patient visits, and percent of visits per weekday in the morning before 1pm. Variable distributions were assessed to inform balanced categorization of continuous variables.
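Because LCA indicators are categorical, the continuous measures above must be discretized. A quantile-based split into tertiles is one plausible way to produce balanced categories; this is an illustrative assumption, as the paper does not specify its exact cut points.

```python
def tertile_bins(values):
    """Assign each value to a roughly balanced tertile (0, 1, or 2).

    Ranks the values and splits the ranking into thirds, so each bin
    holds about the same number of physicians. Ties are broken by
    input order. An illustrative sketch, not the authors' procedure.
    """
    ranked = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    for rank, i in enumerate(ranked):
        bins[i] = rank * 3 // len(values)
    return bins

# Six hypothetical per-physician encounter delays, in minutes.
print(tertile_bins([5, 1, 9, 3, 7, 2]))  # [1, 0, 2, 1, 2, 0]
```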
To identify the optimal number of latent classes, we used several measures to assess model fit. The Bayesian information criterion (BIC) was used to evaluate goodness of fit. The parametric bootstrapped likelihood ratio test (LRT) was used to assess whether a given model with k classes is significantly more informative than one with k-1 classes. Entropy was used to evaluate the distinctness of the classes. We also required that each class contain at least 5% of physicians to prevent underrepresentation of certain characteristics. LCA modeling was conducted using MPlus (Version 8.2).
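Entropy here refers to the standard relative (normalized) entropy of the posterior class membership probabilities, where values near 1 indicate crisp separation between classes. A minimal sketch of that definition (not the MPlus implementation) follows.

```python
import math

def relative_entropy(posteriors):
    """Normalized entropy of LCA posterior class probabilities.

    posteriors: list of per-subject probability vectors over K classes.
    Returns 1 - sum(-p * ln p) / (N * ln K). A value of 1.0 means every
    subject is assigned to one class with certainty (as reported for the
    2-class model in this study); 0.0 means assignment is uninformative.
    """
    n, k = len(posteriors), len(posteriors[0])
    h = sum(-p * math.log(p) for row in posteriors for p in row if p > 0)
    return 1 - h / (n * math.log(k))

# Perfectly separated classes -> entropy 1.0
print(relative_entropy([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]]))  # 1.0
```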
To evaluate the association of physician phenotype classification with response to the intervention, we used a difference-in-differences approach, as in prior work [11,15,23]. Changes in influenza vaccination by group (intervention versus control practices) and time (post-intervention year versus pre-intervention year) were compared for each latent class. A generalized linear model with logit link was fit to the binary outcome of influenza vaccination at the patient visit level for each class of physicians. These models were adjusted for patient demographics (age, sex, race/ethnicity), CCI, and insurance type. The models also included practice site and month fixed effects and an interaction term for year and group, and were clustered by physician. The adjusted difference-in-difference in percentage points with 95% confidence intervals was generated using a bootstrapping procedure [24,25], resampling patients 1000 times. Resampling of patients was conducted by physician to maintain clustering at the physician level. Two-sided hypothesis tests used a significance level of 0.05. Regression analyses were conducted in R (Version 3.5.1; R Foundation for Statistical Computing).
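The study's estimates come from adjusted logit models fit in R; as a much simpler illustration of the underlying idea, the following sketch computes an unadjusted difference-in-differences on toy data and a physician-level (cluster) bootstrap CI. All names, the toy data, and the stratified resampling scheme are illustrative assumptions, not the authors' procedure.

```python
import random

def did(records):
    """Unadjusted difference-in-differences, in percentage points.

    records: (physician_id, group, period, vaccinated) tuples, with
    group in {"intervention", "control"}, period in {"pre", "post"}.
    """
    def rate(group, period):
        hits = [v for _, g, p, v in records if g == group and p == period]
        return 100 * sum(hits) / len(hits)
    return ((rate("intervention", "post") - rate("intervention", "pre"))
            - (rate("control", "post") - rate("control", "pre")))

def cluster_bootstrap_ci(records, reps=1000, seed=7):
    """95% percentile CI, resampling physicians (clusters) within each
    study arm to maintain clustering at the physician level."""
    rng = random.Random(seed)
    clusters = {}
    for rec in records:
        clusters.setdefault((rec[0], rec[1]), []).append(rec)
    arms = {"intervention": [], "control": []}
    for key in clusters:
        arms[key[1]].append(key)
    estimates = []
    for _ in range(reps):
        sample = []
        for keys in arms.values():
            for key in rng.choices(keys, k=len(keys)):
                sample.extend(clusters[key])
        estimates.append(did(sample))
    estimates.sort()
    return estimates[int(0.025 * reps)], estimates[int(0.975 * reps)]

# Toy data: intervention rises 40% -> 60%; control stays at 40%.
records = [(f"I{i}", "intervention", per, int(j < k))
           for i in range(4) for per, k in (("pre", 4), ("post", 6))
           for j in range(10)]
records += [(f"C{i}", "control", per, int(j < 4))
            for i in range(4) for per in ("pre", "post") for j in range(10)]
print(did(records))                   # 20.0
print(cluster_bootstrap_ci(records))  # (20.0, 20.0): clusters identical
```

Because every toy physician within an arm is identical, each resample reproduces the same rates and the interval is degenerate; with heterogeneous physicians the interval widens, which is the clustering effect the paper's bootstrap is designed to capture.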
The sample comprised 56 physicians with a mean (SD) of 24.6 (10.2) years of experience; 53.6% were male (Table 1). These physicians had a mean (SD) of 819 (429) patient encounters, 11.6 (4.7) appointments per day, and 4.0 (1.1) days per week in clinic. The sample also comprised 45,410 patients with a mean (SD) age of 58.7 (16.3) years; 67.1% were white and 22.1% were black (Table 2).
The two-class model had good fit with a BIC of 906.1, an entropy of 1.0, and a significant parametric bootstrapped likelihood ratio test (P<.001) when compared to 3- and 4-class models (S1 Table). Class 1 comprised 15 physicians with a mean (SD) of 343.7 (129.0) patient encounters, 8.0 (2.8) patient appointments per day, and 3.1 (1.2) days per week in clinic. Class 2 comprised 41 physicians with a mean (SD) of 818.8 (429.1) patient encounters, 11.6 (4.7) patient appointments per day, and 4.0 (1.1) days per week in clinic. These classes differed in their level of workload and were therefore labeled lower clinical workload (Class 1) and higher clinical workload (Class 2) (Table 1). Among the 15 physicians in the lower clinical workload group, 3 were in intervention practices and 12 were in control practices. Among the 41 physicians in the higher clinical workload group, 23 were in intervention practices and 18 were in control practices.
Physician phenotypes and changes in influenza vaccination
Influenza vaccination rates for the lower clinical workload group at control practices were 47.8% in 2015–16 and 49.2% in 2016–17, and at intervention practices were 47.1% in 2015–16 and 51.5% in 2016–17. For the higher clinical workload group, influenza vaccination rates at control sites were 40.9% in 2015–16 and 41.8% in 2016–17, and at intervention practices were 42.0% in 2015–16 and 51.4% in 2016–17. The unadjusted difference for intervention versus control sites in the intervention period relative to the pre-intervention period was 8.6 percentage points for the higher clinical workload group and 1.9 percentage points for the lower clinical workload group (Fig 1).
The unadjusted percentage of patients that received influenza vaccination among physicians in the lower clinical workload group (A) and physicians in the higher clinical workload group (B). The active choice intervention was implemented at the intervention practices during the 2016–2017 year.
Among the higher clinical workload group, the EHR nudge was associated with a significant increase in influenza vaccination (adjusted difference-in-difference in percentage points, 7.9; 95% CI, 0.4 to 9.0; P = .01) (Table 3). Among the lower clinical workload group, the EHR nudge was not associated with a significant difference in influenza vaccination rates (adjusted difference-in-difference in percentage points, -1.0; 95% CI, -5.3 to 5.8; P = .90). Regression tables for both difference-in-difference models are available (S2 and S3 Tables).
In this study of 10 primary care practices, we found that a model-based approach categorized physician practice patterns into higher and lower clinical workload groups. While a prior study among these practices found that an EHR-based nudge increased influenza vaccination rates relative to control practices over time, we found differential responses based on the identified physician subgroups. The intervention was associated with a significant increase in influenza vaccination among physicians in the higher clinical workload group, but not among those in the lower workload group. To our knowledge, this is one of the first studies to use this type of approach to phenotype physician practice patterns and compare responses to a behavioral intervention.
These findings have several important implications. First, behavioral phenotyping has been described previously, but mostly in the context of identifying patients with differential responses to interventions [26,27]. In this study, we used available EHR data to identify physicians with different practice pattern phenotypes. Since more than 90% of health systems use EHRs, this is a scalable approach that could be applied to other areas of health care.
Second, the design of the nudge intervention may offer insights into mechanisms for the differential responses between the physician groups. Most patients presenting to primary care visits during influenza season are eligible for vaccination if they have not already received it. The EHR intervention was delivered to medical assistants, who could template vaccination orders for physicians to review and discuss with patients. Among physicians with higher clinical workloads, this may have alleviated the effort needed to address vaccination consistently for patients throughout the day. Physicians with higher patient volumes may be more likely than those with lower clinical workloads to face decision fatigue, which is the depletion of self-control and active initiative that results from the cumulative burden of making decisions. They may also be more likely to fall behind schedule as the day progresses. Our prior work has shown that these two factors can lead to lower vaccination rates as well as worse performance in other aspects of care, such as cancer screening [11,14].
Third, we found that about 27% of physicians in the overall sample did not benefit from the intervention. While the difference between intervention and control was not significant for the lower clinical workload group, these physicians started at a higher baseline vaccination rate, one that the intervention helped the higher clinical workload group reach but not exceed. EHR-based interventions are known to create alert fatigue, and this could be an opportunity to reduce that burden among these physicians and their staff [29–31]. It also suggests that another form of intervention may be better suited to nudge physicians with the lower clinical workload phenotype to improve vaccination rates. Additionally, influenza vaccination rates might be further enhanced if the intervention were coupled with a patient-facing nudge.
This study has limitations. First, any observational study is susceptible to unmeasured confounders. However, we used a difference-in-differences approach, which reduces potential bias from unmeasured confounders by comparing changes in vaccination over time between intervention and control practices. Second, this study was conducted within a single health system, which may limit generalizability. However, we included 10 practice sites from 2 different states. Third, we evaluated influenza vaccination order status at the time of the first visit during influenza season, so patients who subsequently received a timely influenza vaccination were not captured in this study. Fourth, while we were able to identify physician subgroups with differential response to the intervention, our study design did not evaluate the specific mechanisms that led to these responses.
A model-based approach categorized physician practice patterns into higher and lower clinical workload groups. The EHR nudge was associated with a significant increase in influenza vaccination orders among physicians in the higher clinical workload group, but not among those in the lower workload group. This approach could be used in other areas of health care to identify variation in response and better design the targeting of future interventions.
S2 Table. Regression results for the lower clinical workload group.
S3 Table. Regression results for the higher clinical workload group.
- 1. Hsiao CJ, Hing E. Use and characteristics of electronic health record systems among office-based physician practices: United States, 2001–2012. NCHS Data Brief. 2012(111):1–8.
- 2. Bhounsule P, Peterson AM. Characteristics of Hospitals Associated with Complete and Partial Implementation of Electronic Health Records. Perspect Health Inf Manag. 2016;13:1c.
- 3. Patel MS, Volpp KG, Asch DA. Nudge Units to Improve the Delivery of Health Care. N Engl J Med. 2018;378(3):214–216. pmid:29342387
- 4. Horwitz LI, Kuznetsova M, Jones SA. Creating a Learning Health System through Rapid-Cycle, Randomized Testing. N Engl J Med. 2019;381(12):1175–1179. pmid:31532967
- 5. Demissei BG, Finkelman BS, Hubbard RA, et al. Cardiovascular Function Phenotypes in Response to Cardiotoxic Breast Cancer Therapy. J Am Coll Cardiol. 2019;73(2):248–249. pmid:30654897
- 6. Kao DP, Wagner BD, Robertson AD, Bristow MR, Lowes BD. A personalized BEST: characterization of latent clinical classes of nonischemic heart failure that predict outcomes and response to bucindolol. PLoS One. 2012;7(11):e48184. pmid:23144856
- 7. Cornelius T, Voils CI, Birk JL, Romero EK, Edmondson DE, Kronish IM. Identifying targets for cardiovascular medication adherence interventions through latent class analysis. Health Psychol. 2018;37(11):1006–1014. pmid:30198738
- 8. Mann K, Roos CR, Hoffmann S, et al. Precision Medicine in Alcohol Dependence: A Controlled Trial Testing Pharmacotherapy Response Among Reward and Relief Drinking Phenotypes. Neuropsychopharmacology. 2018;43(4):891–899. pmid:29154368
- 9. Full KM, Moran K, Carlson J, et al. Latent profile analysis of accelerometer-measured sleep, physical activity, and sedentary time and differences in health characteristics in adult women. PLoS One. 2019;14(6):e0218595. pmid:31247051
- 10. Silverwood RJ, Nitsch D, Pierce M, Kuh D, Mishra GD. Characterizing longitudinal patterns of physical activity in mid-adulthood using latent class analysis: results from a prospective cohort study. Am J Epidemiol. 2011;174(12):1406–1415. pmid:22074812
- 11. Kim RH, Day SC, Small DS, Snider CK, Rareshide CL, Patel MS. Variations in influenza vaccination by clinic appointment time and an active choice intervention in the electronic health record to increase influenza vaccination. JAMA Network Open. 2018;1(5):e181770. pmid:30646151
- 12. Charlson ME, Pompei P, Ales KL, Mackenzie CR. A New Method of Classifying Prognostic Co-Morbidity in Longitudinal-Studies—Development and Validation. J Chron Dis. 1987;40(5):373–383. pmid:3558716
- 13. Ghasemi A, Zahediasl S. Normality tests for statistical analysis: a guide for non-statisticians. Int J Endocrinol Metab. 2012;10(2):486–489. pmid:23843808
- 14. Hsiang EY, Mehta SJ, Small DS, et al. Association of Primary Care Clinic Appointment Time With Clinician Ordering and Patient Completion of Breast and Colorectal Cancer Screening. JAMA Network Open. 2019;2(5):e193403–e193403. pmid:31074811
- 15. Patel MS, Volpp KG, Small DS, et al. Using Active Choice Within the Electronic Health Record to Increase Influenza Vaccination Rates. J Gen Intern Med. 2017;32(7):790–795. pmid:28337690
- 16. Hagenaars JA, McCutcheon AL. Applied latent class analysis. Cambridge; New York: Cambridge University Press; 2002.
- 17. Nylund KL, Asparouhov T, Muthén BO. Deciding on the Number of Classes in Latent Class Analysis and Growth Mixture Modeling: A Monte Carlo Simulation Study. Structural Equation Modeling: A Multidisciplinary Journal. 2007;14(4):535–569.
- 18. Vrieze SI. Model selection and psychological theory: a discussion of the differences between the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Psychol Methods. 2012;17(2):228–243. pmid:22309957
- 19. Schwarz G. Estimating the dimension of a model. The Annals of Statistics. 1978;6(2):461–464.
- 20. Lo Y, Mendell N, Rubin D. Testing the number of components in a normal mixture. Biometrika. 2001;88(3):767–778.
- 21. Mplus User’s Guide (Sixth Edition) [computer program]. Los Angeles, CA: Muthén & Muthén; 2007.
- 22. Dimick JB, Ryan AM. Methods for evaluating changes in health care policy: the difference-in-differences approach. JAMA. 2014;312(22):2401–2402. pmid:25490331
- 23. Patel MS, Volpp KG, Small DS, et al. Using active choice within the electronic health record to increase physician ordering and patient completion of high-value cancer screening tests. Healthc (Amst). 2016;4(4):340–345.
- 24. Efron B, Tibshirani R. An introduction to the bootstrap. New York: Chapman & Hall; 1993.
- 25. Davison AC, Hinkley DV. Bootstrap methods and their application. Cambridge; New York, NY, USA: Cambridge University Press; 1997.
- 26. Kangovi S, Asch DA. Behavioral Phenotyping in Health Promotion: Embracing or Avoiding Failure. JAMA. 2018.
- 27. Volpp KG, Krumholz HM, Asch DA. Mass Customization for Population Health. JAMA Cardiol. 2018.
- 28. Vohs KD, Baumeister RF, Schmeichel BJ, Twenge JM, Nelson NM, Tice DM. Making choices impairs subsequent self-control: a limited-resource account of decision making, self-regulation, and active initiative. J Pers Soc Psychol. 2008;94(5):883–898. pmid:18444745
- 29. Black AD, Car J, Pagliari C, et al. The impact of eHealth on the quality and safety of health care: a systematic overview. PLoS Med. 2011;8(1):e1000387. pmid:21267058
- 30. van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc. 2006;13(2):138–147. pmid:16357358
- 31. Avery AJ, Savelyich BS, Sheikh A, et al. Identifying and establishing consensus on the most important safety features of GP computer systems: e-Delphi study. Inform Prim Care. 2005;13(1):3–12. pmid:15949170