Diagnostic data routinely collected for hospital admitted patients, and used for case-mix adjustment in care provider comparisons and reimbursement, are prone to biases. We aimed to measure discrepancies, variations and associated factors in recorded chronic morbidities for hospital admitted patients in New South Wales (NSW), Australia. Of all admissions between July 2010 and June 2014 in all NSW public and private acute hospitals, admissions with a stay of over 24 hours and one or more of the chronic conditions of diabetes, smoking, hepatitis, HIV and hypertension were included. A non-recorded chronic condition in an admission occurring after the first admission with a recorded chronic condition (index admission) was counted as a discrepancy. Poisson models were employed to (i) derive adjusted discrepancy incidence rates (IR) and rate ratios (IRR) accounting for patient, admission, comorbidity and hospital characteristics and (ii) quantify variation in rates among hospitals. The discrepancy incidence rate was highest for hypertension (51% of 262,664 admissions), followed by hepatitis (37% of 12,107), smoking (33% of 548,965), HIV (27% of 1,500) and diabetes (19% of 228,687). Adjusted rates for all conditions declined over the four-year period, with the sharpest drop of over 80% for diabetes (47.7% in 2010 vs. 7.3% in 2014) and drops of 20% to 55% for the other conditions. Discrepancies were more common in private hospitals and smaller public hospitals. Inter-hospital differences were responsible for 1% (HIV) to 9.4% (smoking) of the variation in adjusted discrepancy incidence, with an increasing trend for diabetes and HIV. Chronic conditions are recorded inconsistently in hospital administrative datasets, and hospitals contribute to the discrepancies.
Adjusting for these discrepancy patterns, or stratifying by them, in risk adjustment, together with longitudinal accumulation of clinical data at the patient level, refinement of clinical coding systems and standardisation of comorbidity recording across hospitals, would enhance the accuracy of datasets and the validity of case-mix adjustment.
Citation: Assareh H, Achat HM, Stubbs JM, Guevarra VM, Hill K (2016) Incidence and Variation of Discrepancies in Recording Chronic Conditions in Australian Hospital Administrative Data. PLoS ONE 11(1): e0147087. https://doi.org/10.1371/journal.pone.0147087
Editor: Chiara Lazzeri, Azienda Ospedaliero-Universitaria Careggi, ITALY
Received: May 10, 2015; Accepted: December 26, 2015; Published: January 25, 2016
Copyright: © 2016 Assareh et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: Data cannot be made publicly available due to restrictions imposed by data custodians. Data are available upon request from the Secure Analytics for Population Health Research and Intelligence (SAPHaRI) system made available by the Centre for Epidemiology and Evidence (http://www.health.nsw.gov.au/epidemiology/Pages/Population-health-data-warehouse.aspx) and from the NSW Ministry of Health Centre For Health Record Linkage (http://www.cherel.org.au/).
Funding: The authors have no support or funding to report.
Competing interests: The authors have declared that no competing interests exist.
Routinely collected data for hospital admitted patients are increasingly used for clinical and epidemiological research, health resource distribution, funding strategies and quality improvement purposes. Demographic and diagnostic information captured in administrative hospital data collections is employed for case-mix or risk adjustment in order to account for differences in patient characteristics and provide fair comparisons and reimbursements [1–4]. This information is recorded by clinical coders, according to coding rules and data standards, from patients’ medical information documented during admission. Despite advancements in diagnosis classifications, coder training and standardisation of clinical documentation and coding practices that have improved the accuracy and reliability of comorbidity information [3, 6], discrepancies in recorded comorbidities at the coder, hospital [7–9] and regional levels [10, 11] have been reported in Australia and elsewhere. Relating case mix to funding strategies has introduced a systematic bias towards reporting more comorbidities, known as “upcoding”, for greater gains in several national health systems. Such biases can change the relationship between patient profile and outcome across hospitals and would potentially lead to inaccurate or unfair provider comparisons and allocation of incentives [2, 4, 13–16].
Studies have used different sources of information to verify consistency in hospital datasets, with varying levels of agreement. Higher agreement was reported where hospital data were compared against clinical charts rather than self-reported data [7, 17–19]. A recent study reported that almost a fifth of the variation in discrepancies in coding common comorbidities in Australian hospitals was attributable to hospital characteristics. Individual hospitals contributed to the observed differences, alongside hospital structural characteristics such as size and location [7, 8, 20].
Despite the important findings from previously conducted Australian studies, no study has examined the internal consistency of hospital datasets through longitudinal investigation of patient-specific morbidity information. Such a design allows a population-based investigation and reflects discrepancies within a homogeneous setting governed by a single set of documentation and clinical coding standards. Furthermore, investigation of the temporal behaviour of discrepancies and their variations can provide additional insight into the consequences of systematic changes in clinical coding practices, such as changes in documentation, coding rules and standards, infrastructure and staffing [8, 21, 22].
This study aimed to measure non-recorded morbidity incidents in administrative hospital datasets and the contribution of patient, admission, morbidity and hospital related factors, as well as to examine inter-hospital variation in the observed incidents. We used record linked data for all admitted patients between July 2010 and June 2014 in all acute hospitals across New South Wales (NSW), Australia. Discrepancies in the four chronic conditions of diabetes, hepatitis, HIV and hypertension, as well as smoking status, were investigated. These five are among the conditions most frequently captured in risk-adjustment models [23–25]. Their effect on care and treatment makes their recording required or more likely.
2.1 Data source and study population
NSW, the largest health jurisdiction in Australia, has over seven million residents and approximately 500 healthcare facilities with up to three million admissions per annum. We used records from the record linked Admitted Patient Data Collection (APDC) database for the 2010–2013 financial years (2010–2013 FY), comprising all NSW hospital separations from 1st July 2010 to 30th June 2014. Each separation (episode of care) record includes information on patient demographics, morbidities and procedures, hospital characteristics, and separations (discharges, transfers and deaths) from all public and private healthcare facilities in NSW. The record linked APDC includes a unique patient identifier that enables the identification and linkage of patient-specific admissions. Each record is assigned up to 55 codes for morbidities (principal diagnosis and comorbidities) based on the International Statistical Classification of Diseases and Related Health Problems, Tenth Revision, Australian Modification (ICD-10-AM), Seventh Edition. Linked APDC records were obtained from the NSW Admitted Patient, Emergency Department and Deaths Register, which was established under the public health and diseases registers provisions of the NSW Public Health Act 2010 and is maintained by the NSW Ministry of Health. Record linkage was carried out by the Centre for Health Record Linkage (CHeReL). The data were accessed remotely through the Secure Analytics for Population Health Research and Intelligence (SAPHaRI) system made available by the Centre for Epidemiology and Evidence, NSW Ministry of Health. De-identified patient records were provided and accessed via SAPHaRI and used for analysis. The study was approved by the Western Sydney Local Health District (WSLHD) ethics committee and the Centre for Epidemiology and Evidence, NSW Ministry of Health, as the data provider.
Of all admissions at all NSW healthcare facilities within our study period (11,278,591 admissions for 3,761,932 patients), we included admissions of patients who had at least two admissions with a hospital length of stay of at least 24 hours in any NSW acute public or private hospital and at least one recorded chronic condition. This study examined 1,545,294 (13.7%) admissions for 385,268 (10.2%) patients. Admissions at community facilities, multipurpose, non-acute or sub-acute centres, psychiatric and rehabilitation facilities, nursing homes and hospices, and children’s hospitals were excluded.
2.2 Discrepancy identification and covariates
Based on ICD-10-AM, five conditions, diabetes (E10–E14), chronic hepatitis (hepatitis: B18.0–B18.2, B94.2 and Z86.18), chronic HIV (B20–B22, B23.8 and B24), hypertension (I10–I15), and smoking (F17.1, F17.2, Z86.43 and Z72.0), were identified within the recorded morbidities for each admission. For each patient and each chronic condition, the earliest and the latest admissions with the condition recorded (the first and last index admissions respectively) were identified. A discrepancy incident in clinical coding was defined as any admission with a non-recorded chronic condition occurring: a) between the first and the last index admissions; or b) within three months of the last index admission (follow-up period) but not after 31st March 2014 (buffer period). For patients with only one admission with a recorded condition, only one index admission existed and therefore only the second criterion was applied. All admissions occurring after the first index admission, including the last index admission and those meeting the follow-up and buffer period criteria, were included in the denominator.
This restricted prospective approach to the identification of a non-recorded chronic condition was employed to avoid any overestimation caused by counting admissions prior to diagnosis or after possible cure. The limited follow-up period of three months allowed inclusion of possibly true discrepancy incidents occurring after the last index admission while minimising inclusion of any admission following a false positive admission (where the patient had no chronic condition but the condition was recorded). Furthermore, a buffer of three months at the end of the study period diminished the effect of censoring among follow-up admissions. An extensive sensitivity analysis using different follow-up periods in the absence or presence of a buffer was conducted and the results are outlined in S1 Table.
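The identification rule described above can be sketched in code. This is an illustrative Python sketch of the logic only, not the study's actual (SAS-based) data preparation; the input representation (a list of `(admit_date, has_condition)` pairs per patient) is an assumption for illustration.

```python
from datetime import date, timedelta

FOLLOW_UP = timedelta(days=90)   # three-month follow-up window after the last index
BUFFER_END = date(2014, 3, 31)   # end of the buffer period

def count_discrepancies(admissions):
    """admissions: list of (admit_date, has_condition) pairs for one patient
    and a single chronic condition. Returns (discrepancies, denominator)."""
    admissions = sorted(admissions)
    recorded = [d for d, has in admissions if has]
    if not recorded:
        return 0, 0                        # condition never coded for this patient
    first_index, last_index = recorded[0], recorded[-1]
    discrepancies = denominator = 0
    for d, has in admissions:
        if d <= first_index:
            continue                       # only admissions after the first index count
        in_core = d <= last_index          # between first and last index admissions
        in_follow_up = (last_index < d <= last_index + FOLLOW_UP
                        and d <= BUFFER_END)
        if in_core or in_follow_up:
            denominator += 1
            if not has:
                discrepancies += 1         # condition not coded: a discrepancy incident
    return discrepancies, denominator
```

For example, a patient coded with a condition in January 2011 and January 2012, with uncoded admissions in June 2011 and February 2012, contributes two discrepancies out of three at-risk admissions; an uncoded admission in August 2012 falls outside the follow-up window and is excluded.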
For all admissions, four sets of covariates (patient, admission, morbidity, and hospital related) were considered. Patient demographic variables included age, gender and socio-economic status. We utilised the Statistical Local Area level disadvantage index of the Socio-Economic Indexes for Areas (SEIFA), with lower values indicating more disadvantage. SEIFA scores were categorised into four classes (1st quartile: most disadvantaged, to 4th quartile: least disadvantaged areas). Admission covariates included admission type (surgical, medical and other), admission source (emergency, planned and other), and length of stay (1–2, 3–5, 5–10, and over 10 days). Morbidity related factors were the number of recorded morbidities categorised by quartiles, presence of any other chronic condition (yes, no), and discrepancy in recording the other four chronic conditions (yes, no). Hospital characteristics included hospital type (public vs. private), location (metropolitan vs. rural), and peer group for public hospitals. Public hospital peer groups comprised “A1”: principal referral group, usually teaching hospitals; “B”: major metropolitan and non-metropolitan; “C1”: district group 1; and “C2”: district group 2. Hospital peer groups contained similar sized hospitals, ranging from those treating more than 25,000 acute case-mix weighted separations per annum in the principal referral group through to those treating between 2,000 and 5,000 acute case-mix weighted separations in the district groups.
2.3 Statistical analysis
We employed Poisson log-linear models to estimate adjusted discrepancy incidence rates (IR) and rate ratios (IRR) for the five chronic conditions separately, after including patient, admission, morbidity and hospital-related characteristics. Separate models for public hospital admissions were constructed to derive estimates for the public hospital peer group effect. Morbidity characteristics were entered into the models one at a time because of multicollinearity. To investigate the temporal behaviour of the discrepancy incidents, financial years were also entered into the models, for all admissions as well as in separate models for public and private hospital admissions, as indicator variables with 2010 as the reference year. Adjusted trends were estimated by multiplying the incidence rate ratios obtained from the Poisson model by the crude rate in the reference year. The difference between public and private hospital trends was assessed using an interaction term between the hospital type and year variables in the full model.
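The quantities these models estimate (an IR for each group and an IRR between groups) can be illustrated with a minimal crude calculation. This Python sketch uses hypothetical counts, not the study's data, and the standard Wald approximation for a Poisson rate ratio's confidence interval; the adjusted models additionally condition on covariates.

```python
import math

def rate_ratio(events_a, at_risk_a, events_b, at_risk_b):
    """Crude incidence rate ratio (IRR) of group A vs. reference group B,
    with a Wald 95% CI on the log scale."""
    ir_a = events_a / at_risk_a              # incidence rate, group A
    ir_b = events_b / at_risk_b              # incidence rate, reference group
    irr = ir_a / ir_b
    se = math.sqrt(1 / events_a + 1 / events_b)   # SE of log(IRR) for Poisson counts
    lo, hi = (math.exp(math.log(irr) + z * se) for z in (-1.96, 1.96))
    return irr, lo, hi

# e.g. private vs. public hospitals (hypothetical counts)
irr, lo, hi = rate_ratio(600, 1000, 500, 1000)
print(f"IRR = {irr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With these hypothetical counts the crude IRR is 1.20, i.e. a 20% excess discrepancy rate in group A; the study's reported excesses (e.g. 15% to 22% for private hospitals) are of this form but covariate-adjusted.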
Inter-hospital variation among public hospitals was evaluated within a multilevel framework, using a Poisson mixed model with a random intercept at the hospital level for each condition. A series of models was constructed to assess the contribution of hospital-related factors to the observed variation in discrepancy incidents, ranging from a null model to the most comprehensive model with all covariates. To express the inter-hospital variation, we employed the variance partition coefficient (VPC) for the Poisson multilevel modelling scheme, using the exact formulae developed by Stryhn et al. The VPC at the hospital level indicates the influence of the hospitals on discrepancy incidents that cannot be explained by the model parameters. Because the VPC in Poisson modelling is conditional on covariate values, the median and inter-quartile range of the VPC calculated over all existing covariate values were reported. Furthermore, the proportional change in the inter-hospital variance estimate (σ²) across the different models, (σ²_null − σ²_adjusted)/σ²_null, was calculated. This indicates the proportion of total inter-hospital variation that is explained by case-mix factors. To translate inter-hospital variation into risk differences, we used the median incidence rate ratio (MIRR) statistic, which is the median of the rate ratios of pair-wise comparisons of admissions with identical characteristics taken from randomly chosen hospitals, calculated as MIRR = exp(√(2σ²) × Φ⁻¹(0.75)), where Φ⁻¹(0.75) is the 75th percentile of the standard normal distribution, an extension of the measure developed by Merlo et al [32, 33]. To assess the effect of hospital size, random intercept estimates were stratified by hospital peer group and associated statistics were derived. To quantify the trend of inter-hospital variation over the study period, the Poisson mixed models were extended by the inclusion of the year variable as a categorical random slope.
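As a sketch, the MIRR and the proportional change in variance can be computed directly from an estimated random-intercept variance σ², following Merlo et al.'s median-ratio construction. The variance values below are hypothetical, not estimates from the study.

```python
import math
from statistics import NormalDist

PHI_75 = NormalDist().inv_cdf(0.75)   # standard normal 75th percentile, approx. 0.6745

def mirr(sigma2):
    """Median incidence rate ratio from the inter-hospital random-intercept
    variance sigma2 (on the log-rate scale)."""
    return math.exp(math.sqrt(2 * sigma2) * PHI_75)

def pcv(var_null, var_adjusted):
    """Proportional change in inter-hospital variance: the share of
    between-hospital variation explained by the added covariates."""
    return (var_null - var_adjusted) / var_null

# Hypothetical variances for illustration
print(round(mirr(0.09), 2))        # a variance of 0.09 gives MIRR of about 1.33
print(round(pcv(0.20, 0.09), 2))   # covariates explain 55% of the variance
```

Note that a zero variance gives MIRR = 1 (no inter-hospital difference), and the statistic grows with σ², which is how a variance estimate translates into the "likelihood gap" percentages reported in the results.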
We also used pair-wise Pearson correlations to assess the association of hospital recording performance across the five chronic conditions, based on the hospital-specific random intercepts. Data preparation was conducted in SAS Enterprise Guide V.6.1 through SAPHaRI, and analyses were performed in R V.3.1.2.
3.1 Discrepancy incidence rate and risk factors
Of 228,687 inspected admissions, following 76,666 patients with a diabetes related first index admission, 43,008 subsequent admissions had no recorded diabetes code, giving a discrepancy incidence rate of 18.8%. Discrepancy incidence was higher for the four other chronic conditions: 26.7% (in 1,500 admissions) for HIV, 33.2% (in 182,735 admissions) for smoking, 36.6% (in 12,107 admissions) for hepatitis, and the highest rate of 51% (in 262,664 admissions) for hypertension (Table 1).
Discrepancy incidents were less common among females for most of the chronic conditions, with the largest gender difference observed for hypertension (21%). For older patients, hepatitis was more accurately recorded, while diabetes and smoking were less likely to be documented than for younger patients. Patients’ socio-economic status had either no effect or an inconsistent effect on coding completeness. Patients who underwent surgery during hospitalisation were up to 36% more likely to have their chronic conditions recorded compared with medical patients. The effect of admission source was inconsistent across chronic conditions; emergency admitted patients had a lower discrepancy incidence rate for diabetes, but a higher rate for smoking, compared with planned admissions. A similarly inconsistent pattern was evident for the effect of length of stay on completeness of recording chronic conditions (Table 1).
The incidence of non-recorded chronic conditions was significantly higher in private hospitals across all conditions. For the most common conditions of diabetes, smoking and hypertension, the excess likelihood of discrepancy in recording morbidities within private hospitals ranged between 15% and 22%; significantly higher excesses of 60% and 201% were found for patients with hepatitis and HIV respectively. Rural hospitals tended to have up to an 8% lower discrepancy incidence rate in recording the three most common chronic conditions; no difference was observed between metropolitan and rural hospitals in recording HIV and hepatitis. Among public hospitals, smaller hospitals had higher discrepancy incidence rates, mainly for the three most common conditions, compared with large principal referral hospitals. The largest gaps of at least 90% and the smallest gaps of at most 20% were found in recording diabetes and hypertension respectively (Table 2).
A higher number of recorded comorbidities at hospital admission decreased the likelihood of a non-recorded chronic condition by at least 40%. Each chronic condition (except HIV) was more likely (by at least 13%) to be recorded if the patient had any other recorded chronic condition. The likelihood of not recording a chronic condition increased by at least 49% when recording of any of the other four conditions was also omitted (Table 2).
3.2 Trend analysis
As depicted in Fig 1, discrepancy incidents for all examined chronic conditions declined over the four-year period (2010–2013 FY). The sharpest drop, of close to 85%, was observed for diabetes (adjusted rates of 47.7% in 2010 vs. 7.3% in 2013). For hepatitis, the adjusted rate increased by 6% in 2012, reaching 58%, then dropped markedly by over 60% in 2013, for a total drop of 56% over the study period. Incidence rates for smoking and hypertension decreased notably, by 35% and 20% respectively, but rates for HIV were statistically unchanged, with a non-significant drop of 18%. The discrepancy incidence rate in public hospitals remained lower than that in private hospitals over the study period for all the chronic conditions. Up to 4% larger drops were observed for diabetes and hypertension rates in public versus private hospitals, whereas the drop in the discrepancy rate for smoking was 10% larger in private compared with public hospitals. The observed differences in trends for hepatitis and HIV discrepancy rates were not significant.
3.3 Inter-hospital variation
Adjustment for patient and admission characteristics explained much of the observed inter-hospital variation in discrepancy incidents, as seen in the large drops in the VPC from the model with no adjustment to models with patient and admission factors across all chronic conditions (Table 3). However, a noticeable proportion, between 0.9% (for HIV) and 9.4% (for smoking), of all variation was still attributable to hospitals and associated factors. Hospital characteristics (rurality and peer group) partly explained the inter-hospital variation, leaving up to 7% of unexplained variation associated with individual hospital characteristics (unseen factors).
Overall, the presence of smoking or hepatitis in a patient admitted to a hospital with high discrepancy rates was up to 33% (MIRR = 1.33, adjusted for patient and admission characteristics) more likely not to be recorded than had the admission been to a hospital with lower discrepancy rates. A smaller gap of close to 20% was observed for diabetes and hypertension, followed by 14% for HIV, the condition most robust to hospital characteristics (Table 3). According to the proportional variance reductions, case-mix factors explained between 22% (the lowest, for hypertension) and 61% (the highest, for HIV) of inter-hospital variation. Most of this was explained by hospital and admission factors, as opposed to patient demographics, as their inclusion largely decreased the estimated variances across all chronic conditions. In particular, of all inter-hospital variation, between 8% (for hypertension) and 21% (for HIV) was further explained by admission factors. An additional 16% (for hypertension) to 25% (for hepatitis) of variation was explained following the inclusion of hospital factors (Table 3).
The extent to which inter-hospital variations and likelihood gaps in recording chronic conditions were influenced by hospital size varied across chronic conditions (Fig 2). Large principal referral hospitals (A1) tended to have smaller variations in recording diabetes, but higher variations for HIV compared to the smaller district hospitals (C1 and C2). The observed variations in recording diabetes translated to a 9% gap among principal referral hospitals compared to a 17% gap in the group with the smallest district hospitals (C2); the relevant numbers for HIV were 24% and 9% respectively. No consistent pattern or considerable difference was observed among hospital groups in recording other chronic conditions.
The inter-hospital variations in discrepancy incidence rates varied over time for all chronic conditions. In particular, there were greater differences in the coding of diabetes in the second half of the study period compared to the first half (Fig 3). The gap of at most 25% for diabetes in the first period increased to over 65%. An increase from 10% to 34% was also evident for the coding of hepatitis. The trends in the variation of the other three conditions noticeably decreased in 2011 and subsequently either remained stable or began to increase over the next three years.
Hospitals with lower discrepancy rates in recording diabetes tended also to have lower discrepancy rates in recording smoking: a significant correlation between deviations from the average (estimated hospital-specific intercepts) for the two conditions was observed across 80 hospitals. A similar pattern among hospitals was observed for the recording of hepatitis and smoking, as well as for hypertension and HIV; see S2 Table.
This large population-based study, using NSW Ministry of Health linked hospital admission datasets over a four-year period, identified non-recorded incidence rates for five chronic conditions varying between 19% (for diabetes) and 51% (for hypertension). Except for HIV, the adjusted discrepancy incidence rates for all examined chronic conditions declined considerably, by 20% for hypertension to 80% for diabetes, over the four-year period to June 2014. Admission records from private hospitals and smaller public hospitals had higher discrepancy incidence compared with their counterparts. Variability among public hospitals was responsible for 1% to 9% of the variation in adjusted discrepancy incidence rates for the five chronic conditions, translating to discrepancy rate differences of between 14% and 33%. Seven per cent of the variation remained unexplained after adjusting for hospital characteristics. The inter-hospital variation changed over time, with the increase most noticeable for diabetes. Hospital size had an inconsistent effect on inter-hospital discrepancy differences across the conditions.
Discrepancy incidence rates and trends
Completeness in recording chronic morbidities and agreement among different sources of morbidity data have been investigated in Australia and elsewhere. The 19% incompleteness identified in the coding of diabetes in the NSW hospital administrative data was in the range of previously reported rates of at most 13% [17–19, 36] and 26% when clinical charts and self-reported information, respectively, were used as the reference. The large drop in discrepancy incidence rates for diabetes, from over 47% in the first half of our study period to 10% or less in the last two years, coincided with the change in rules governing the coding of diabetes as a comorbidity in hospital data. In general, according to the Australian ICD standard for documenting additional diagnoses in clinical charts, only those conditions affecting the patient’s care management or treatment within that admission are required to be coded in hospital administrative datasets. Therefore, diagnoses that relate to an earlier admission, and which have no effect on the current admission, are not required to be coded. The cause and effect relationship required for coding purposes between diabetes and the patient’s care, which applied during the 2010 to 2012 period, was lifted in July 2012. Such changes reportedly influenced diabetes prevalence estimates based on administrative data [22, 37] and the occurrence of discrepancies, as demonstrated in this study. Our findings reflect the influence of the change in standards that reduced the subjectivity associated with coding at the coder level. In particular, they reveal the potential improvement in the recording of documented diabetes by coders versus the lack of documentation of diabetes in clinical charts by clinicians.
A lower discrepancy rate of 19% in coding smoking status was observed in UK administrative datasets, compared with the 33% identified in this study and the 41% reported recently from NSW APDC datasets. Inclusion of tobacco related service use in the UK study could have contributed to its lower inconsistency, while identification of ex-smokers and tobacco related injuries in our study, compared with the recent Australian research, may explain the better completeness rate identified here.
The observed 51% discrepancy rate for hypertension was almost double that seen when clinical charts were the reference [17, 19], but lower than the rate of 69% obtained using patients’ self-report. Compared with other reports, we applied the narrowest set of ICD codes in case identification, disregarding cases with renal, brain or pregnancy complications caused by hypertension, which could have resulted in different completeness rates.
For the rarer conditions of hepatitis and HIV, our study benefited from a large state-wide cohort, providing more reliable discrepancy rates (37% and 27%) than other reports (zero to 33%) limited by small sample sizes [17, 18, 38, 39]. We found noticeably high inconsistencies in coding morbidities that are either life-threatening or can cause severe complications, as is the case for HIV, which is listed among the most important risk factors for mortality in risk adjustment methods [23, 24].
In addition to the changes in coding standards and varying case identification methods noted above, systematic changes that affect coding practices, as well as the method of verification used to identify non-recorded comorbidities in hospital data, may also have contributed to the differences in reported discrepancies. The observed decreasing trends for all conditions, particularly within public hospitals, can be associated with the introduction of activity based funding in Australia in 2011 [21, 40], as such funding previously resulted in increases in the recording of secondary diagnoses and procedures in Europe [12, 41]. Responses to shortfalls in staffing and training of clinical coders prior to our study period could also have contributed to the temporal reduction in discrepancies, as observed elsewhere [8, 39].
The current findings indicate higher discrepancy rates than studies conducted using clinical chart review, regarded as the gold standard [17, 19, 38], to ascertain the presence of chronic conditions, but lower rates than studies using primary care provider or patient survey information [7, 18]. The higher rates compared with clinical chart-based studies could be due to the inclusion of non-recorded conditions as true non-documented conditions in our rates. Despite the potential to report false positive rates (falsely recorded conditions), studies measuring agreement between hospital data and clinical charts focus on discrepancies in the coding of conditions that were documented. Using other references, such as survey based information, may still overlook non-recorded conditions, and does not resolve the problems of high subjectivity and variation arising from the lack of a single governing standard for documentation. The internal references developed and applied in this study enabled the capture of all non-recorded conditions, regardless of whether they were documented, within one environment governed by a single set of rules. Although the effect of temporal data accumulation was not determined, the prospective design enabled us to directly estimate the amount of discrepancy that can be eliminated through data accumulation over time. The demonstrated increase in accuracy of hospital data through temporal data accumulation [7, 36] also supports the utilisation of internal references within this setting. The very low false positive rates of less than 2% in administrative datasets [7, 18] for most of the conditions investigated give further credence to the reliability of our internal references (the index admission being a true positive), made possible with data linkage, and lend credibility to our sole focus on non-recorded comorbidities.
We echo other research findings of higher discrepancies in private hospitals compared with public hospitals [7, 8, 17, 18]. The role of clinical coding in funding public hospitals could result in improved accuracy in public hospital datasets [12, 41]. Our finding that rural hospitals tended to have more accurately recorded conditions was consistent with US results [8, 20] but contradicted previous Australian findings. However, the significantly higher discrepancy rates in smaller versus larger public hospitals were consistent with previous Australian findings [7, 17]. A tendency to record more comorbidity at larger hospitals, reflecting the presentation of severely ill patients with multiple conditions, was positively associated with better accuracy in administrative datasets [7, 13].
Variation in performance [43, 44], quality and safety [45, 46] and service usage indicators among acute care providers in NSW and elsewhere has been identified. Taking into account patient and admission differences, notable inter-hospital variability in discrepancy incidence rates was evident among the 80 NSW public hospitals. A third of our adjusted inter-hospital variation (0.9% to 9.4%) was explained by hospital size and location. Our results were comparable to those of Lujic et al., who reported a slightly higher variation (2% to 13%) among similar hospitals. Differences in the modelling scheme, measurement and adjustments would have contributed to the differing results.
The larger variation in recording hypertension and smoking than diabetes was consistent with previous findings. The contribution of case-mix adjustment in explaining inter-hospital variation differed across chronic conditions, being highest for HIV and hepatitis and lowest for hypertension. No comparative data exist for examining the variability in recording hepatitis and HIV. These findings highlight the potential biases, caused by discrepancies in coding, for care provider comparisons and funding based on risk adjustment methods, in particular those using hypertension, as has been addressed and evaluated elsewhere.
Discrepancy rates as well as inter-hospital variation varied over time and were affected by hospital size. Despite the observed drop in the discrepancy rate for diabetes, a significant increase in the related inter-hospital variation over the second half of the study period was evident, perhaps reflecting differences in the method and speed of adoption of modified coding rules for diabetes. The timing and level of adaptation to new standards among hospitals can introduce larger variation, at least in the short term. The introduction of activity-based funding might also have contributed to the overall increasing trend of variation observed from 2011 for the other conditions.
At the patient level, discrepancy rates for each condition were inversely associated with the number of recorded conditions, particularly when the recorded comorbidities included one of our five chronic conditions. At the hospital level, hospitals with good coding practice for one condition tended to do well with others. These findings re-emphasise the importance of individual hospital responsibilities and characteristics. Engagement of coders in diverse roles, higher staffing and lower throughput, training and professional development, and interaction with clinicians are among the organisational factors effective in enhancing clinical data quality [8, 9]. Enhancement and standardisation of training, and rotation of coders between hospitals, have also reduced variation at the coder level.
Our study raises several important policy implications. Firstly, despite advances in adjustment methods to ensure fair comparison and funding strategies, the significant non-random inconsistencies in administrative datasets are likely to lead to biased conclusions. Minimising discrepancies, or at least controlling for hospital-level factors through modelling or stratification, will facilitate optimal decision making. Secondly, in the absence of routine clinical chart review, the use of temporal accumulation of morbidity information within administrative datasets to measure discrepancy and construct informed risk adjustment is feasible, as demonstrated by this study. Thirdly, defining quality characteristics for administrative data and routinely monitoring the quality indicators over time would allow better understanding of the effectiveness of system changes, such as documentation and recording standards, and highlight areas for improvement and subsequent action [48, 49]. Lastly, systematic knowledge enhancement and engagement among hospital administrators, clinicians, coders and researchers within the health service domain, for recording-quality improvement and reimbursement purposes, should be formalised.
Strengths and limitations
This study benefited from its design, using a large population-based dataset covering all admissions in all acute hospitals within the most populated health jurisdiction in Australia to explore, for the first time, trends in coding discrepancy rates. It also benefited from data linkage at the patient level and a prospective longitudinal design. The design enabled the exclusion of any pre-diagnostic admissions, eliminating the risk of overestimating discrepancy rates, and, combined with a restricted set of criteria and follow-up period, minimised false positives due to error or post-treatment. The proposed design developed and employed internal references based on routinely collected data that could readily be used for real-time monitoring of clinical coding practices, and their improvement, through longitudinal data accumulation and dynamic indexing.
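The internal-reference logic described above, in which the first admission recording a condition becomes the index admission and every subsequent admission is checked against it, can be sketched in a few lines. This is an illustrative reconstruction with hypothetical data structures, not the study's actual pipeline:

```python
from collections import defaultdict

def count_discrepancies(admissions, conditions):
    """Count recording discrepancies across one patient's admissions.

    admissions: chronologically ordered list of sets, each holding the
    chronic conditions coded for one admission.
    conditions: the chronic conditions of interest.

    A discrepancy is an admission, occurring after the first admission
    in which a condition was recorded (the index admission), in which
    that condition is not recorded.
    """
    index_seen = set()            # conditions with an index admission so far
    discrepancies = defaultdict(int)
    eligible = defaultdict(int)   # admissions at risk of a discrepancy
    for coded in admissions:
        for cond in index_seen:
            eligible[cond] += 1
            if cond not in coded:
                discrepancies[cond] += 1
        # Conditions recorded here become indexed for later admissions.
        index_seen |= coded & conditions
    return dict(discrepancies), dict(eligible)

# Hypothetical patient: diabetes coded at admission 1, missed at
# admission 2, coded again later; hypertension first coded at
# admission 3 and then missed at admission 4.
disc, elig = count_discrepancies(
    [{"diabetes"}, set(), {"diabetes", "hypertension"}, {"diabetes"}],
    {"diabetes", "hypertension"},
)
print(disc)  # {'diabetes': 1, 'hypertension': 1}
print(elig)  # {'diabetes': 3, 'hypertension': 1}
```

Dividing each condition's discrepancy count by its eligible-admission count, aggregated over patients, yields the incidence rates that the Poisson models then adjust for patient, admission, comorbidity and hospital characteristics.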
We may have under-reported total discrepancies in the absence of an external reference for measuring false positive rates. However, clinical chart review of randomly sampled cases, although useful, is limited to the extent that it relies on comprehensive documentation of all comorbidities. Variation analysis was limited to public hospitals, as determined by data availability; analyses of hospital-specific admissions from private hospitals could provide additional insight. Despite the essential role of time in our design, the effect size of data accumulation, as well as the time between multiple admissions, was not quantified and is therefore an area for further research. Models incorporating coder-related characteristics (staffing, rotation, experience and training), coding parameters (such as rules governing the documentation and coding of a condition) and unobserved admission factors (including principal diagnoses) may better explain differences. Distinguishing the contribution to discrepancy of incomplete documentation of morbidity in the clinical chart from that of incomplete coding of documented conditions in hospital administrative datasets would be very informative for targeted action. Conducting a controlled trial, or comparing patients' records across transfers, could provide valuable insight into the respective contributions of documentation and coding to discrepancies. Including changes in the rules governing recording practices in the modelling might also provide more evidence on the effect of system-wide changes and further highlight potential areas for improvement.
Chronic conditions are recorded inconsistently in hospital administrative datasets, and hospitals, individually as well as grouped by characteristics, contribute to the observed incidence and variation in discrepancies. Consequently, case-mix adjustments for provider comparison and funding purposes could be biased because of coding incompleteness and associated discrepancy patterns across hospitals. While examination of non-recording patterns associated with hospital characteristics through modelling or stratification for risk-adjustment purposes could potentially minimise bias, longitudinal accumulation of clinical information at patient level through data linkage combined with refinement of clinical coding systems and standardisation of documentation across hospitals would enhance accuracy of routinely collected datasets and the related validity of case-mix adjustment.
S1 Table. Pair-wise correlation of hospital performance in recording chronic conditions.
The authors thank Ms. Susan Claessen (The National Centre for Classification in Health, University of Sydney) and Ms. Natasha Smith (Clinical Coding Manager, Westmead Hospital) for their advice on clinical coding standards. They would also like to thank A/Prof Sarah Thackway and Dr. Lee Taylor (Epidemiology and Evidence, NSW Health) for their support in the study implementation and data acquisition.
Conceived and designed the experiments: HA HMA KH. Performed the experiments: HA HMA. Analyzed the data: HA. Contributed reagents/materials/analysis tools: HA HMA JMS VMG KH. Wrote the paper: HA HMA JMS VMG KH.
- 1. Simpson J, Evans N, Gibberd R, Heuchan A, Henderson-Smart D. Analysing differences in clinical outcomes between hospitals. Quality and Safety in Health Care. 2003;12(4):257–62. pmid:12897358
- 2. Paddison C, Elliott M, Parker R, Staetsky L, Lyratzopoulos G, Campbell JL, et al. Should measures of patient experience in primary care be adjusted for case mix? Evidence from the English General Practice Patient Survey. BMJ Quality & Safety. 2012;21(8):634–40.
- 3. Burns EM, Rigby E, Mamidanna R, Bottle A, Aylin P, Ziprin P, et al. Systematic review of discharge coding accuracy. Journal of Public Health. 2012;34(1):138–48. pmid:21795302
- 4. Cheng P, Gilchrist A, Robinson KM, Paul L. The risk and consequences of clinical miscoding due to inadequate medical documentation: a case study of the impact on health services funding. Health Information Management Journal. 2009;38(1):35–46. pmid:19293434
- 5. National Centre for Classification in Health N. The International Statistical Classification of Diseases and Related Health Problems, Tenth Revision, Australian Modification (ICD-10-AM). Sydney: NCCH, Faculty of Health Sciences, The University of Sydney; 2004.
- 6. Henderson T, Shepheard J, Sundararajan V. Quality of diagnosis and procedure coding in ICD-10 administrative data. Med Care. 2006;44(11):1011–9. pmid:17063133
- 7. Lujic S, Watson DE, Randall DA, Simpson JM, Jorm LR. Variation in the recording of common health conditions in routine hospital data: study using linked survey and administrative data in New South Wales, Australia. BMJ Open. 2014;4(9):e005768. PMCID: PMC4158198.
- 8. Rangachari P. Coding for quality measurement: the relationship between hospital structural characteristics and coding accuracy from the perspective of quality measurement. Perspectives in Health Information Management. 2007;4:3. PMCID: PMC2047295.
- 9. Santos S, Murphy G, Baxter K, Robinson KM. Organisational factors affecting the quality of hospital clinical coding. Health Information Management Journal. 2008;37(1):25–37. pmid:18245862
- 10. Coory M, Cornes S. Interstate comparisons of public hospital outputs using DRGs: Are they fair? Aust N Z J Public Health. 2005;29(2):143–8. pmid:15915618
- 11. Welch HG, Sharp SM, Gottlieb DJ, Skinner JS, Wennberg JE. Geographic variation in diagnosis frequency and risk of death among Medicare beneficiaries. JAMA. 2011;305(11):1113–8. pmid:21406648
- 12. Steinbusch PJ, Oostenbrink JB, Zuurbier JJ, Schaepkens FJ. The risk of upcoding in casemix systems: a comparative study. Health Policy. 2007;81(2):289–99.
- 13. Mohammed MA, Deeks JJ, Girling A, Rudge G, Carmalt M, Stevens AJ, et al. Evidence of methodological bias in hospital standardised mortality ratios: retrospective database study of English hospitals. BMJ. 2009;338:b780. pmid:19297447
- 14. Kronick R, Welch WP. Measuring Coding Intensity in the Medicare Advantage Program. Medicare & Medicaid Research Review. 2014;4(2):E1–E19.
- 15. Bottle A, Jarman B, Aylin P. Hospital Standardized Mortality Ratios: Sensitivity Analyses on the Impact of Coding. Health Serv Res. 2011;46(6pt1):1741–61.
- 16. Nicholl J. Case-mix adjustment in non-randomised observational evaluations: the constant risk fallacy. J Epidemiol Community Health. 2007;61(11):1010–3. pmid:17933961
- 17. Powell H, Lim LL, Heller RF. Accuracy of administrative data to assess comorbidity in patients with heart disease: an Australian perspective. J Clin Epidemiol. 2001;54(7):687–93. pmid:11438409
- 18. Preen DB, Holman CDAJ, Lawrence DM, Baynham NJ, Semmens JB. Hospital chart review provided more accurate comorbidity information than data from a general practitioner survey or an administrative database. J Clin Epidemiol. 2004;57(12):1295–304. pmid:15617956
- 19. Soo M, Robertson LM, Ali T, Clark LE, Fluck N, Johnston M, et al. Approaches to ascertaining comorbidity information: validation of routine hospital episode data with clinician-based case note review. BMC Res Notes. 2014;7(1):253.
- 20. Lorence DP, Ibrahim IA. Benchmarking variation in coding accuracy across the United States. J Health Care Finance. 2002;29(4):29–42.
- 21. Eagar K. ABF Information Series No. 1: what is activity-based funding? University of Wollongong: Australian Health Services Research Institute, Centre for Health Service Development, 2011.
- 22. Knight L, Halech R, Martin C, Mortimer L. Impact of changes in diabetes coding on Queensland hospital principal diagnosis morbidity data. Brisbane: Queensland Health, 2011.
- 23. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373–83. pmid:3558716
- 24. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8–27. pmid:9431328
- 25. Quan H, Sundararajan V, Halfon P, Fong A, Burnand B, Luthi J-C, et al. Coding algorithms for defining comorbidities in ICD-9-CM and ICD-10 administrative data. Med Care. 2005;43(11):1130–9. pmid:16224307
- 26. NSW Admitted Patient Data Collection (APDC) [Internet]. NSW Ministry of Health. [cited 23/02/2015]. Available: http://www.cherel.org.au/data-dictionaries.
- 27. National Centre for Classification in Health N. Australian Coding Standards for ICD-10-AM and ACHI, Seventh Edition. Sydney: NCCH, Faculty of Health Sciences, The University of Sydney; 2010.
- 28. Secure analytics for population health research and intelligence (SAPHaRI) [Internet]. NSW Ministry of Health. [cited 23/02/2015]. Available: http://www.health.nsw.gov.au/epidemiology/Pages/Population-health-data-warehouse.aspx.
- 29. Australian Bureau of Statistics. Census of population and housing: socio-economic indexes for areas (SEIFA), Australia. Canberra: Australian Bureau of Statistics, 2011.
- 30. Demand and Performance Evaluation. NSW health services comparison data book 2008/2009 Sydney: NSW Ministry of Health, 2010.
- 31. Stryhn H, Sanchez J, Morley P, Booker C, Dohoo I, editors. Interpretation of variance parameters in multilevel Poisson regression models. Proceedings of the 11th International Symposium on Veterinary Epidemiology and Economics; 2006.
- 32. Hedin K, Petersson C, Cars H, Beckman A, Håkansson A. Infection prevention at day-care centres: feasibility and possible effects of intervention. Scand J Prim Health Care. 2006;24(1):44–9. pmid:16464814
- 33. Merlo J, Chaix B, Ohlsson H, Beckman A, Johnell K, Hjerpe P, et al. A brief conceptual tutorial of multilevel analysis in social epidemiology: using measures of clustering in multilevel logistic regression to investigate contextual phenomena. J Epidemiol Community Health. 2006;60(4):290–7. pmid:16537344
- 34. SAS Institute. SAS Enterprise Guide. 6.1 ed. Cary, North Carolina; 2013.
- 35. R Core Team. R: A Language and Environment for Statistical Computing. 3.1.2 ed. Vienna, Austria: R Foundation for Statistical Computing; 2013.
- 36. Nedkoff L, Knuiman M, Hung J, Sanfilippo FM, Katzenellenbogen JM, Briffa TG. Concordance between administrative health data and medical records for diabetes status in coronary heart disease patients: a retrospective linked data study. BMC Med Res Methodol. 2013;13:121. pmid:24079345
- 37. Health Statistics New South Wales [Internet]. NSW Ministry of Health. [cited 23/02/2015]. Available: www.healthstats.nsw.gov.au.
- 38. Stavrou E, Pesa N, Pearson S-A. Hospital discharge diagnostic and procedure codes for upper gastro-intestinal cancer: how accurate are they? BMC Health Serv Res. 2012;12:331. pmid:22995224
- 39. Hennessy DA, Quan H, Faris PD, Beck CA. Do coder characteristics influence validity of ICD-10 hospital discharge data? BMC Health Serv Res. 2010;10:99. PMCID: PMC2868845.
- 40. Council of Australian Governments. National Health Reform Agreement 2011 [24/02/2015]. Available: http://www.federalfinancialrelations.gov.au/content/npa/health_reform/national-agreement.pdf.
- 41. O'Reilly J, Busse R, Häkkinen U, Or Z, Street A, Wiley M. Paying for hospital care: the experience with implementing activity-based funding in five European countries. Health Economics, Policy and Law. 2012;7(1):73–101.
- 42. Australian Institute of Health and Welfare. The coding workforce shortfall. Canberra: 2010.
- 43. Fung V, Schmittdiel JA, Fireman B, Meer A, Thomas S, Smider N, et al. Meaningful variation in performance: a systematic literature review. Med Care. 2010;48(2):140–8. pmid:20057334
- 44. Selby JV, Schmittdiel JA, Lee J, Fung V, Thomas S, Smider N, et al. Meaningful Variation in Performance: What Does Variation in Quality Tell Us About Improving Quality? Med Care. 2010;48(2):133–9.
- 45. Assareh H, Chen J, Ou L, Hollis SJ, Hillman K, Flabouris A. Rate of venous thromboembolism among surgical patients in Australian hospitals: a multicentre retrospective cohort study. BMJ Open. 2014;4(10):e005502. pmid:25280806
- 46. Ou L, Chen J, Assareh H, Hollis SJ, Hillman K, Flabouris A. Trends and Variations in the Rates of Hospital Complications, Failure-to-Rescue and 30-Day Mortality in Surgical Patients in New South Wales, Australia, 2002–2009. PLoS One. 2014;9(5):e96164. pmid:24788787
- 47. Seymour CW, Iwashyna TJ, Ehlenbach WJ, Wunsch H, Cooke CR. Hospital-Level Variation in the Use of Intensive Care. Health Serv Res. 2012;47(5):2060–80. pmid:22985033
- 48. Assareh H, Waterhouse MA, Moser C, Brighouse RD, Foster KA, Smith IR, et al. Data Quality Improvement in Clinical Databases Using Statistical Quality Control Review and Case Study. Therapeutic Innovation & Regulatory Science. 2013;47(1):70–81.
- 49. Rostami R, Nahm M, Pieper CF. What can we learn from a decade of database audits? The Duke Clinical Research Institute experience, 1997–2006. Clinical Trials. 2009;6(2):141–50. pmid:19342467