Evidence-based medicine employs expert opinion and clinical data to inform clinical decision making. The objective of this study is to determine whether it is possible to complement these sources of evidence with information about physician “group intelligence” that exists in electronic health records. Specifically, we measured laboratory test “repeat intervals”, defined as the amount of time it takes for a physician to repeat a test that was previously ordered for the same patient. Our assumption is that while the result of a test is a direct measure of one marker of a patient's health, the physician's decision to order the test is based on multiple factors including past experience, available treatment options, and information about the patient that might not be coded in the electronic health record. By examining repeat intervals in aggregate over large numbers of patients, we show that it is possible to 1) determine what laboratory test results physicians consider “normal”, 2) identify subpopulations of patients that deviate from the norm, and 3) identify situations where laboratory tests are over-ordered. We used laboratory tests as just one example of how physician group intelligence can be used to support evidence based medicine in a way that is automated and continually updated.
Citation: Weber GM, Kohane IS (2013) Extracting Physician Group Intelligence from Electronic Health Records to Support Evidence Based Medicine. PLoS ONE 8(5): e64933. https://doi.org/10.1371/journal.pone.0064933
Editor: Indra Neil Sarkar, University of Vermont, United States of America
Received: October 31, 2011; Accepted: April 22, 2013; Published: May 29, 2013
Copyright: © 2013 Weber and Kohane. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This study was supported by Informatics for Integrating Biology and the Bedside, a National Institutes of Health (NIH) funded National Center for Biomedical Computing (U54LM008748). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: This study was conducted at Partners HealthCare System, a non-profit academic healthcare center in Boston, Massachusetts. Author Weber is paid as a software consultant by Partners HealthCare System; author Kohane is not paid by Partners HealthCare System. This does not alter the authors' adherence to all the PLOS ONE policies on sharing data and materials. The data used for this study is not proprietary; however, it contains identifiable patient information. As a result, access to this data would require approval of the Partners Human Research Committee (PHRC), which is the Institutional Review Board (IRB) of Partners Research Management at Partners HealthCare.
In evidence-based medicine (EBM), clinical practice guidelines are driven by expert consensus, which is typically based on review of the literature, clinical experience, and outcomes analyses. A major challenge of EBM is the effort and cost needed to keep the knowledge of clinical practice up to date across an ever-widening array of diagnostic and therapeutic options. One way to approach this problem is through analysis of the large amounts of data collected in electronic health records (EHR). Usually the variable being examined in these datasets is a patient outcome, such as survival. However, in this study we will demonstrate that EHRs not only contain information about patient outcomes, but also provide insight into providers' knowledge of their patients' state of health, which can likewise be used in generating EBM guidelines. We will do this in the context of laboratory tests. Instead of looking at the results of the tests, we will examine when physicians ordered the tests. Whereas the result of a test is a direct measure of one marker of a patient's health, a physician's decision to order a test is based on multiple factors including past experience, available treatment options, and information about the patient that might not be coded in the EHR.
Specifically, we will measure laboratory test “repeat interval”, defined as the amount of time it takes for a physician to repeat a test that was previously ordered for the same patient. For example, if a white blood cell count (WBC) test is ordered for a patient, and the next time that patient has a WBC test is seven days later, then the repeat interval is seven days. The physician ordering the repeat test is not necessarily the same person who ordered the previous test, but could presumably access the result of the previous test through the EHR. By examining these repeat intervals in aggregate over large numbers of patients, we can quantify physician behavior and observe how it varies under different conditions. To demonstrate how this can be used for EBM, we will use the laboratory test repeat intervals from the EHRs of two large and independent hospitals in the Boston area to answer three questions: Firstly, can collective physician laboratory test-ordering behavior, which we call physician “group intelligence”, be used to define what it means for a laboratory test result to be “normal”? Secondly, can subpopulations of patients be identified when their physicians' behavior differs from the norm? Finally, can physician group intelligence be used to identify situations where laboratory tests are over-ordered?
The data used for this study were laboratory test results contained within the Partners Research Patient Data Repository (RPDR), a large clinical database that combines data from Brigham and Women's Hospital (BWH) and Massachusetts General Hospital (MGH). From an initial dataset, which included 3,534,666 patients with 465,313,629 laboratory test results between 1/1/1986 and 6/30/2004, we extracted two datasets: (1) a random sample of 100,000 repeat intervals for each of the 97 laboratory tests listed in Table 1 (9.7 million repeat intervals in total). Other laboratory tests were excluded either because they had fewer than 100,000 occurrences or because of known problems with how their data are recorded. Although there are 4,926 tests in the RPDR, these 97 represent 71% of all test results because they are the ones most frequently ordered. (2) A random sample of 1,000,000 repeat intervals for white blood cells (WBC), which indicated the patient age in days at the time of the tests and whether the tests were performed in inpatient or outpatient settings. The laboratory test dates in the RPDR are typically the dates when the results were ready, rather than when the specimens were obtained or when the results were read. The datasets may be requested by registering and submitting a Data Use Agreement at http://www.i2b2.org/Publication_Data/.
Reference ranges of laboratory test values are defined by sampling a healthy population and recording the upper and lower nth percentiles. There are numerous challenges in determining these ranges and in using them for clinical decision-making. Many factors such as age, sex, and sampling bias can influence these values; it can be difficult to identify healthy individuals; and there is disagreement over which statistical techniques and percentiles to use. Furthermore, it is unclear how useful reference ranges are in clinical decision-making, since there is a distinction between a reference limit and the value that will actually change a physician's clinical decision. The latter is based not on healthy population percentiles, but rather on the types of clinical actions that are available to the physician and his or her clinical knowledge, prior experience, and intuition. Can we quantify this to define a new, robust measure of laboratory test value normality that reflects clinical expertise?
We defined repeat interval as the amount of time it took for physicians to repeat the same test in the same patient. A repeat interval consists of two tests—an initial test and a repeat test. In this study, we looked at the relationship between the result of the initial test and when the repeat test is ordered. To study this relationship, for each of the 97 laboratory tests we partitioned the 100,000 repeat intervals into 20 equal-size bins based on the result of the initial test. For example, the first bin contains the 5,000 repeat intervals with the smallest initial test result values, and the 20th bin contains the 5,000 repeat intervals with the highest initial values. For each bin, we calculated the median repeat interval duration and the 25th and 75th percentiles. We did not use the result of the repeat test in this study—we only measured the amount of time that had elapsed since the initial test. Note how this differs from traditional EBM studies, in which physicians perform interventions, and then the patient outcomes are measured. In this study, we start with data about the patients (their initial laboratory test results), and then measure the interventions chosen by their physicians (the time until the test was repeated). In other words, we are examining the physicians as a way of indirectly learning more about the patients.
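As a concrete illustration of this binning procedure, the sketch below is our own code, not the study's (names such as `repeat_interval_profile` are assumptions). It partitions simulated repeat intervals into 20 equal-size bins by initial test result and summarizes each bin with its 25th percentile, median, and 75th percentile:

```python
from statistics import median
import random

def repeat_interval_profile(pairs, n_bins=20):
    """pairs: (initial_test_result, repeat_interval_in_days) tuples.
    Sort by initial result, split into equal-size bins, and report the
    25th percentile, median, and 75th percentile of each bin's intervals."""
    ordered = sorted(pairs)                       # order by initial test result
    size = len(ordered) // n_bins                 # equal-size value bins
    profile = []
    for i in range(n_bins):
        chunk = sorted(iv for _, iv in ordered[i * size:(i + 1) * size])
        q25 = chunk[len(chunk) // 4]              # simple percentile estimates
        q75 = chunk[(3 * len(chunk)) // 4]
        profile.append((q25, median(chunk), q75))
    return profile

# Toy data: intervals are longest near a mid-range "normal" result of 6,
# mimicking the WBC pattern described in the Results section.
random.seed(0)
data = []
for _ in range(20_000):
    value = random.uniform(2, 20)                 # simulated initial result
    interval = 14.0 / (1.0 + abs(value - 6)) + random.expovariate(1.0)
    data.append((value, interval))

profile = repeat_interval_profile(data)           # 20 (q25, median, q75) tuples
```

In the simulated output, the bins near the "normal" value have the longest median intervals, and the extremes have the shortest, reproducing the qualitative shape of Figure 2a.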
In the first part of this study, we used repeat intervals to examine normality in laboratory tests. Whereas laboratory test reference ranges suggest there are only two states of patient health, normal and abnormal, we hypothesized that repeat intervals would reveal more subtle patterns that demonstrate the variability among patients and the different clinical contexts in which they are seen.
To determine if we can automatically identify the various factors that can influence physician behavior, such as patient demographics and clinical settings, we calculated the median repeat intervals for white blood cells (WBC) for different pediatric age groups and for inpatient vs outpatient visits. If these subpopulations indeed represent distinct patient states that have different clinical meaning, then differences in normative behavior might be detectable.
The initial test result may or may not influence when the repeat test is ordered. We used entropy as a measure of how much the median repeat interval varies across the 20 bins for each test. If all 20 median repeat intervals are equal, then the initial test result provides no information towards predicting when the repeat test will be ordered, and the entropy is therefore zero. Because physician behavior is not being affected by the result of the test, we hypothesize that some tests with low entropy are being over-ordered. In contrast, tests whose initial result has a greater influence over physician behavior will have higher entropy, suggesting that those tests are more informative.
In order to calculate entropy, we first discretized the median repeat interval for each laboratory test's 20 value bins by mapping it to one of 20 frequently observed time periods (Table 2). These time periods were determined by combining the repeat intervals for all 97 laboratory tests and noting from its frequency distribution that there are approximately 20 peaks (Figure 1a). The points between the peaks with the fewest repeat intervals were chosen as the boundaries of the time periods. This ensured that most repeat intervals would be near the center of a time period rather than at the boundary, thus making the results less sensitive to the precise location of the time period boundaries. Entropies were then calculated using the equation -Sum[p(x)*log2(p(x))] where p(x) is the fraction of a laboratory test's 20 value bins whose median repeat intervals fall within time period x. For example, if a laboratory test has 10 value bins whose median repeat intervals fall within time period 6 (2 days), 5 value bins that fall within time period 4 (12 hours), and 5 value bins that fall within time period 7 (3 days), then the entropy is −[0.5*log2(0.5)+0.25*log2(0.25)+0.25*log2(0.25)] = 1.5.
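The entropy calculation and the worked example above can be reproduced in a few lines of Python (a sketch under our own naming; the study's actual code is not published here):

```python
from collections import Counter
from math import log2

def median_interval_entropy(period_ids):
    """Shannon entropy (in bits) of the time periods assigned to a
    laboratory test's value bins. period_ids holds one discretized
    time-period id per value bin (20 bins in the study)."""
    n = len(period_ids)
    return -sum((count / n) * log2(count / n)
                for count in Counter(period_ids).values())

# Worked example from the text: 10 bins fall in time period 6 (2 days),
# 5 bins in period 4 (12 hours), and 5 bins in period 7 (3 days).
bins = [6] * 10 + [4] * 5 + [7] * 5
print(median_interval_entropy(bins))  # 1.5
```

If all 20 bins map to the same time period, every count/n is 1 and the entropy is zero, matching the over-ordering interpretation in the text.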
(a) Frequency distribution of repeat intervals for all labs. Vertical bars indicate the boundaries used in the entropy calculations to convert repeat intervals to one of 20 discrete categories. (b) Median repeat interval for each of 97 tests. Vertical bars indicate the 25th and 75th percentiles.
Table 2 and Figure 1a show that the frequency distribution of the 9.7 million repeat intervals across the 97 tests has approximately 20 peaks, with 24 hours being the most common, followed by 2 days, 1 year, 7 days, and 6 months. When looking at individual laboratory tests, Table 1 and Figure 1b show that the median repeat interval can range from as small as 3 hours for blood gases to as large as a year for cholesterol and prostate-specific antigen (PSA), with a large variance for most tests. However, the repeat intervals can be highly dependent on the initial value of the test as well as the patient population and clinical setting. The next three sections describe this relationship by testing three hypotheses.
Can physician group intelligence derive knowledge that all physicians already know, but can be difficult to quantify?
The reference ranges for white blood cell count (WBC) in adult patients at BWH and MGH are 4.0–10.0 and 4.5–11.0, respectively. In Figure 2a, which illustrates the repeat intervals for WBC, we can see a complex relationship between the initial WBC value and when physicians order a second WBC test. In general, the repeat interval for WBC is larger within the hospital reference ranges (indicated by markers on the horizontal axis) than outside. However, it is not a binary response. Rather, there is a continuum, with a maximum median repeat interval of almost two weeks at an initial WBC value of 6, and gradually decreasing at larger or smaller values. As seen in Figure 2b and Figure 2c, a similar pattern exists for other tests, such as high-density lipoprotein (HDLc) and hemoglobin A1c (HbA1c), where the largest repeat intervals occur when the initial test results are within the hospital reference ranges, and the intervals decrease the further outside those ranges.
Error bars represent the 25th and 75th percentiles. Triangles indicate reference values for BWH (black) and MGH (gray).
The vertical bars in Figure 2 represent the 25th and 75th percentiles of repeat intervals. The initial test result not only affects the median repeat interval, but it also greatly affects the variance. If we think about an initial test result being followed by a large median repeat interval as a “good” test result, and an initial test result being followed by a small median repeat interval as a “bad” test result, then the amount of variance corresponds to the degree of consensus among physicians on whether a particular test result is “good” or “bad”. For example, on average, a WBC of 6 is “good”, but the large variance means that other information is needed to determine the patient's state of health. At the upper value of the reference range (10.0–11.0), the repeat interval is smaller, but there is still large variability. However, once the WBC is greater than 16, then there is agreement among physicians that the result is “bad”.
Laboratory tests can be classified according to how their repeat intervals vary with different initial values. Although WBC is “good” in mid-range values and “bad” at the low and high extremes (“bad-good-bad”, or “BGB”), the repeat intervals for HDLc are largest at high values (“BG”), and the repeat intervals for HbA1c are largest at low values (“GB”). Table 1 shows that most laboratory tests fall into one of these three categories, with 44 BGB tests (e.g., sodium and glucose), 19 BG tests (e.g., hematocrit and vancomycin), and 24 GB tests (e.g., bilirubin and erythrocyte sedimentation rate (ESR)). An exception is human chorionic gonadotropin (hCG), which has not one, but two “good” states (“GBG”) depending on whether the patient is pregnant (Figure 2d).
Although we are not arguing that this method should replace the standard way of determining laboratory test reference ranges, we want to highlight how remarkable it is that repeat intervals alone, without any additional information about the patients' health, can be used to derive physician consensus around what it means for a test result to be “normal”. In other words, we can use physician group intelligence to quantify the significance of different test results and determine the values that require immediate action.
Can group intelligence capture the knowledge of subsets of physicians that treat specific patient populations?
Normality as defined by physician behavior can vary greatly across subpopulations. In neonates, for example, the typical WBC is higher than in adult populations. Figure 3a shows that physicians adjust their ordering behavior accordingly: for patients less than 1 month old, the time to repeat peaks at a WBC of 16.3 (58,121 repeat intervals). As pediatric patients age, the “ideal” WBC value decreases and the maximum repeat interval increases. For patients 1–5 months the preferred value is 12.6 (16,237 repeat intervals), and for patients 6–23 months the preferred value is 8.9 (32,556 repeat intervals). The median time to repeat of WBC reaches a maximum of 153 days when patients are between 2–5 years old (33,666 repeat intervals). Beyond this age, physician behavior mimics that seen throughout adulthood (38,051 repeat intervals). However, while the preferred WBC remains consistent until old age, the repeat intervals decrease for all values in elderly populations.
Can group intelligence identify inconsistencies in clinical behavior and situations where the frequency of ordering laboratory tests can be reduced?
Figure 3b shows that physician ordering behavior for WBC also changes when patients are in an inpatient setting compared to when they are relatively healthy in an outpatient setting. In both cases, the maximum repeat interval is at a WBC value of about 6. However, that interval is 22.9 hours for inpatients (365,769 repeat intervals) and 59.1 days for outpatients (481,591 repeat intervals). Thus, the same laboratory test result can have a dramatically different effect on clinical decisions depending on the physician's perceived state of the patient. This might also suggest that hospital guidelines in an inpatient setting influence ordering behavior in ways that are counterintuitive to physicians' true estimate of risk.
The extent to which the initial value of a laboratory test affects the repeat interval can indicate how informative that test is. For nearly all 97 laboratory tests studied, the initial value does indeed influence the repeat interval greatly (Table 1). For example, the ratio between the repeat interval of WBC's best bin (15.4 days) and that of its worst bin (0.77 days) is 20-fold. There was at least a 2-fold difference in 87 tests, a 10-fold difference in 35 tests, a 50-fold difference in 13 tests, and a more than 100-fold difference in three tests (serum protein, albumin, and cholesterol). However, this does not tell the full story. A test whose repeat interval is the same in nearly all cases except for the most extreme values might provide less information to a physician, on average, than a test whose repeat intervals vary across the full range of values for that test. This can be quantified using entropy.
Of the 97 tests, albumin and neutrophil fraction had the highest observed entropies (3.141), meaning that their values, more than those of any other test, had the greatest influence on physician behavior (Table 1). There are several explanations for why the entropy can be low for certain laboratory tests: a) they can be routinely ordered as part of a hospital protocol (e.g. Troponin T has zero entropy), b) they are ordered automatically as part of a panel but are not generally the reason for which the panel was ordered (e.g. mean corpuscular volume (MCV) in a complete blood count (CBC) has an entropy of 1.076), or c) they are part of a screening protocol in which the vast majority of the test results are normal (e.g. prostate-specific antigen (PSA) has an entropy of 1.076 because 75% of its values are less than 3.6 and are not repeated for one year).
We introduced this study by enumerating three questions that we sought to answer, at least preliminarily, in a study of two large academic hospitals. First, we have shown that collective physician behavior can be used to identify normal ranges that correspond to the published normal ranges used in these institutions, but without the threshold effect of strict limits; instead, it provides a smooth function relating test values to normality and disease acuity. Secondly, we have shown that these normative ranges are specific to the subpopulations being treated, from adulthood through childhood to the neonatal period, where the personalized interpretation of these laboratory studies is markedly different. Thirdly, we have shown that clinical setting, the grouping of tests into panels, and screening guidelines can potentially lead to overuse of laboratory tests. This automated form of EBM does not depend on an ongoing knowledge extraction process from experts; it is driven directly by aggregate physician behavior as seen in EHRs. If styles of practice change, if the meaning of particular clinical variables and their values comes to be understood differently over time, or if additional phenotypes such as genomic data are introduced, then the normative practice for the patient's state induced from physician behavior will automatically change. This study represents only a beginning in developing an automated application of physician group intelligence, similar to what has been done with “crowdsourcing” for scientific discovery in other fields.
There are far more sources of data that are accessible beyond laboratory data, that are driven by physician behavior and their integrated understanding of the patient's state. For example, one could examine which medications are prescribed and the number of refills included on the initial prescriptions, which procedures are ordered and the time intervals between them, how often follow-up visits are scheduled, and the number of different physicians that treat a patient. These are processes, not outcome measures, but in aggregate represent a consensus estimate.
As in other applications of group intelligence, using physician behavior rather than measured outcomes to drive the personalization of medical practice carries some obvious risks built upon several underlying assumptions. The most important of these is that physicians in aggregate are well informed of the current state of the art. A further assumption is that, over large populations of patients, enough decisions can be measured across the varying states of patients to robustly characterize patient subpopulations. These assumptions can be tested empirically in the future by comparing physician behavior at different institutions and determining, for example, how rapidly physician behavior changes to account for the emergence of innovative and expert-approved clinical practices.
The intent of this study was not to draw conclusions about specific laboratory tests. A more detailed analysis of which tests are grouped into panels, how policies vary across different clinics, and what changes have been seen over time would be needed for that. Rather, our goal was to demonstrate that a wealth of often overlooked information about physician behavior exists in EHRs, which could provide an important source of data for future EBM research.
We thank Shawn Murphy, MD, PhD, for helpful discussions and assistance with obtaining data and computational resources.
Conceived and designed the experiments: GMW ISK. Performed the experiments: GMW. Analyzed the data: GMW ISK. Wrote the paper: GMW ISK.
- 1. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS (1996) Evidence based medicine: what it is and what it isn't. BMJ 312 (7023) 71–2.
- 2. Guyatt GH, Sackett DL, Sinclair JC, Hayward R, Cook DJ, et al. (1995) Users' guides to the medical literature. IX. A method for grading health care recommendations. Evidence-Based Medicine Working Group. JAMA 274 (22) 1800–4.
- 3. Timmermans S, Mauck A (2005) The promises and pitfalls of evidence-based medicine. Health Aff (Millwood) 24 (1) 18–28.
- 4. Allison J, Kiefe CI, Weissman N (1999) Can data-driven benchmarks be used to set the goals of healthy people 2010? Am J Public Health 89 (1) 61–5.
- 5. Szolovits P, Pauker SG (1978) Categorical and probabilistic reasoning in medical diagnosis. Artificial Intelligence 11 (1–2) 115–44.
- 6. Brigham and Women's Hospital (2008) Clinical Laboratory Manual, 2007–2008. Boston, MA.
- 7. Massachusetts General Hospital (2008) Pathology Service Laboratory Handbook. Boston, MA.
- 8. Nalichowski R, Keogh D, Chueh HC, Murphy SN (2006) Calculating the benefits of a Research Patient Data Repository. AMIA Annu Symp Proc 1044.
- 9. Clinical and Laboratory Standards Institute (2000) How to define and determine reference values and reference intervals for quantitative clinical laboratory tests, document C28-A2. Wayne, PA.
- 10. International Federation of Clinical Chemistry (1987) Expert Panel on Theory of Reference Values. Approved recommendation (1986) on the theory of reference values. Part 1. The concept of reference values. J Clin Chem Clin Biochem 25: 337–42.
- 11. International Federation of Clinical Chemistry (1987) Expert Panel on Theory of Reference Values. Approved recommendation (1987) on the theory of reference values. Part 2. Selection of individuals for the production of reference values. J Clin Chem Clin Biochem 25: 639–44.
- 12. Horn P, Pesce A (2003) Reference intervals: an update. Clin Chim Acta 334 (1–2) 5–23.
- 13. Kouri T, Kairisto V, Virtanen A, Uusipaikka E, Rajamaki A, et al. (1994) Reference intervals developed from data for hospitalized patients: computerized method based on combination of laboratory and diagnostic data. Clin Chem 40: 2209–15.
- 14. Nakayama T (1992) Factors that influence reference values. Rinsho Byori 40 (8) 828–36.
- 15. Shine B (2008) Use of routine clinical laboratory data to define reference intervals. Ann Clin Biochem 45 (5) 467–75.
- 16. Henny J, Petitclerc C, Fuentes-Arderiu X, Petersen P, Queralto J, et al. (2000) Need for revisiting the concept of reference values. Clin Chem Lab Med 38 (7) 589–95.
- 17. Petitclerc C (2004) Normality: the unreachable star? Clin Chem Lab Med 42 (7) 698–701.
- 18. Solberg HE (1994) Using a hospitalized population to establish reference intervals: pros and cons. Clin Chem 40 (12) 2209–15.
- 19. Stavroudis TA, Hemachandra AH, Lehmann CU (2007) Who cares to know: Defining critical action laboratory values. San Francisco: American Academy of Pediatrics.
- 20. Ekins S, Williams AJ (2010) Reaching out to collaborators: crowdsourcing for pharmaceutical research. Pharm Res 27 (3) 393–5.
- 21. Johnston SC, Hauser SL (2009) Crowdsourcing scientific innovation. Ann Neurol 65 (6) A7–8.
- 22. Oprea TI, Bologa CG, Boyer S, Curpan RF, Glen RC, et al. (2009) A crowdsourcing evaluation of the NIH chemical probes. Nat Chem Biol 5 (7) 441–7.
- 23. Bradley JC, Lancashire RJ, Lang AS, Williams AJ (2009) The Spectral Game: leveraging Open Data and crowdsourcing for education. J Cheminform 1 (1) 9.