
The impact of removing financial incentives and/or audit and feedback on chlamydia testing in general practice: A cluster randomised controlled trial (ACCEPt-able)


  • Jane S. Hocking, 
  • Anna Wood, 
  • Meredith Temple-Smith, 
  • Sabine Braat, 
  • Matthew Law, 
  • Liliana Bulfone, 
  • Callum Jones, 
  • Mieke van Driel, 
  • Christopher K. Fairley, 
  • Basil Donovan

Abstract

Background

Financial incentives and audit/feedback are widely used in primary care to influence clinician behaviour and increase quality of care. While observational data suggest a decline in quality when these interventions are stopped, their removal has not been evaluated in a randomised controlled trial (RCT), to our knowledge. This trial aimed to determine whether chlamydia testing in general practice is sustained when financial incentives and/or audit/feedback are removed.

Methods and findings

We undertook a 2 × 2 factorial cluster RCT in 60 general practices in 4 Australian states targeting 49,525 patients aged 16–29 years for annual chlamydia testing. Clinics were recruited between July 2014 and September 2015 and were followed for up to 2 years or until 31 December 2016. Clinics were eligible if they were in the intervention group of a previous cluster RCT where general practitioners (GPs) received financial incentives (AU$5–AU$8) for each chlamydia test and quarterly audit/feedback reports of their chlamydia testing rates. Clinics were randomised into 1 of 4 groups: incentives removed but audit/feedback retained (group A), audit/feedback removed but incentives retained (group B), both removed (group C), or both retained (group D). The primary outcome was the annual chlamydia testing rate among 16- to 29-year-old patients, where the numerator was the number who had at least 1 chlamydia test within 12 months and the denominator was the number who had at least 1 consultation during the same 12 months. We undertook a factorial analysis in which we investigated the effects of removal versus retention of incentives (groups A + C versus groups B + D) and the effects of removal versus retention of audit/feedback (group B + C versus groups A + D) separately. Of 60 clinics, 59 were randomised and 55 (91.7%) provided data (group A: 15 clinics, 11,196 patients; group B: 14, 11,944; group C: 13, 11,566; group D: 13, 14,819). Annual testing decreased from 20.2% to 11.7% (difference −8.8%; 95% CI −10.5% to −7.0%) in clinics with incentives removed and decreased from 20.6% to 14.3% (difference −7.1%; 95% CI −9.6% to −4.7%) where incentives were retained. The adjusted absolute difference in treatment effect was −0.9% (95% CI −3.5% to 1.7%; p = 0.2267). 
Annual testing decreased from 21.0% to 11.6% (difference −9.5%; 95% CI −11.7% to −7.4%) in clinics where audit/feedback was removed and decreased from 19.9% to 14.5% (difference −6.4%; 95% CI −8.6% to −4.2%) where audit/feedback was retained. The adjusted absolute difference in treatment effect was −2.6% (95% CI −5.4% to −0.1%; p = 0.0336). Study limitations included an unexpected reduction in testing across all groups impacting statistical power, loss of 4 clinics after randomisation, and inclusion of rural clinics only.

Conclusions

Audit/feedback is more effective than financial incentives of AU$5–AU$8 per chlamydia test at sustaining GP chlamydia testing practices over time in Australian general practice.

Trial registration

Australian New Zealand Clinical Trials Registry ACTRN12614000595617

Author summary

Why was this study done?

  • Financial incentives and audit/feedback are widely used in primary care to influence clinician behaviour and increase quality of care. As healthcare costs continue to increase, governments and funding agencies are reassessing funding models for primary care, with widespread cuts to financial incentives.
  • While observational data suggest a decline in quality when these interventions are stopped, their removal has not been evaluated in a randomised controlled trial (RCT).

What did the researchers do and find?

  • We conducted a 2 × 2 factorial cluster RCT in Australian general practices that aimed to determine the impact on chlamydia testing in general practice when incentive payments per activity and/or audit/feedback on activity performance were removed.
  • Clinics were randomised into 1 of 4 groups: incentives removed but audit/feedback retained, audit/feedback removed but incentives retained, both removed, and both retained.
  • The primary outcome was the annual chlamydia testing rate among 16- to 29-year-old patients.
  • We found that removal of incentive payments had little impact on general practice chlamydia testing, but the removal of audit and feedback reduced testing.

What do these results mean?

  • Our payments were consistent with other incentives general practitioners (GPs) received at the time, suggesting that in the Australian general practice setting, incentive payments of this amount do not substantially influence GP preventive healthcare activities such as chlamydia testing.
  • The removal of quarterly audit and feedback for GPs had a greater impact on testing rates, reflecting the importance of this strategy in influencing GP preventive healthcare activities. The provision of audit and feedback was costlier than the provision of financial incentives. However, using online video conferencing and fully automating the audit and feedback reports would reduce costs.
  • Our results suggest that, in Australia at least, audit and feedback is more effective than incentive payments of AU$5 to AU$8 per activity at influencing GP behaviour.

Introduction

Primary care plays a fundamental role in preventive healthcare, and strategies to improve its quality include financial incentives and audit/feedback [1]. Financial incentives aimed at modifying provider behaviour to improve quality and/or increase efficiency in primary care [2] have been used by the Australian Government since 1998, when the Practice Incentives Program was introduced for activities such as diabetes care [3]. The program provides less than 10% of the funding for general practitioners (GPs) [4]. In the UK, the Quality and Outcomes Framework was introduced into the contract of GPs by the government in 2004, accounting for about 25% of primary care clinics’ income [5]. Both schemes have been subject to debate about effectiveness [6–9] and have undergone modification, including withdrawal of some incentives and raising of the payment threshold targets on others [5,10,11]. While some observational data suggest a decline in provider activities and quality of care when incentives are removed [5,12], other data have shown little impact [13,14]. There is little information about the effect of incentive removal on provider activities and quality of care in the Australian general practice setting. Further, the impact of the removal of incentives has not, to our knowledge, been assessed in a randomised controlled trial (RCT).

Audit/feedback is widely used in primary care [15–17]. In audit/feedback, GPs’ professional practice is measured and compared with guidelines, targets, and/or peers, and results are fed back to the GPs. Ideally, this prompts them to modify their practice if the feedback finds this is needed. While there is substantial RCT evidence that audit/feedback improves practice [18], observational data suggest that removing audit/feedback may reverse improvements. However, there is little evidence about the impact of removing audit/feedback on GP activities and quality of care in Australia, and to our knowledge no RCT evidence.

We had the unique opportunity to evaluate the impact of removing incentives and audit/feedback on the preventive activities of GPs in Australia by building on an existing trial—the Australian Chlamydia Control Effectiveness Pilot (ACCEPt) [19]. ACCEPt evaluated an intervention to increase chlamydia screening, a key preventive activity for young adults (<30 years) in Australian general practice [20]. The intervention included incentive payments for testing and audit/feedback on GPs’ testing performance. At the end of ACCEPt, we re-randomised intervention clinics in a 2 × 2 factorial cluster RCT to determine whether preventive activities such as chlamydia testing in general practice are sustained when incentives and/or audit/feedback are removed. Given that the intention of financial incentives and/or audit/feedback is to modify provider behaviour in order to improve quality and/or increase efficiency, our hypothesis was that chlamydia testing would decrease if these strategies were removed. We present the results of this new trial, ACCEPt-able, here.

Methods

ACCEPt-able was a 2 × 2 factorial cluster RCT and followed a published protocol [21]. We report the findings according to the CONSORT extension for cluster RCTs [22] (S1 CONSORT Checklist). There were no changes to trial recruitment, implementation, management, or follow-up methods, but in a change to the published protocol, we had to exclude clinics that were unable to provide outcome data at the end of the trial from the primary analysis (further detail provided below).

Study design and participants

The parent trial, ACCEPt, was a cluster RCT that evaluated the effectiveness of a chlamydia screening intervention on chlamydia prevalence, finishing in December 2015. ACCEPt was conducted across 4 Australian states (New South Wales, Victoria, South Australia, and Queensland). Full details are published elsewhere [19,23]. At the time of ACCEPt, opportunistic chlamydia testing was recommended annually for sexually active 16- to 29-year-olds in general practice [20]. How chlamydia testing was conducted varied between clinics: some clinics used GPs to initiate testing and others used practice nurses; some used clinician-collected specimens for testing, others allowed patients to self-collect specimens (e.g., urine specimens or high vaginal swabs) and leave them at the clinic for testing, and others required the patient to attend an external pathology collection centre for testing. Intervention clinics received financial incentives paid to individual GPs for each chlamydia test, ranging from AU$5 per test when up to 20% of 16- to 29-year-olds were tested each year to AU$8 per test when coverage exceeded 40%. These payments were electronically transferred to the clinic each quarter. This amount was consistent with the AU$6 payment GPs received at the time for completing immunisation schedules and corresponds to an annual payment of about AU$800, assuming an annual chlamydia testing rate of 20% and an average of 800 patients aged 16 to 29 years attending each clinic per year. This total amount, the payment frequency, and electronic transfer methods were consistent with those of other government-funded general-practice-based incentives at the time [4,24]. Intervention clinics also received audit/feedback, where individual GPs were provided with a 1-page report that summarised their chlamydia testing rates for the previous quarter, including the number of patients aged 16 to 29 years who had consulted them, the number they tested, and the number who tested positive. 
The report also included a statement of the total amount of incentive payments they would receive for that quarter’s testing. The report was given to individual GPs during a quarterly face-to-face visit with a research officer who explained the results and worked with the GP to identify strategies to help increase their testing rates. The intervention also included chlamydia education (hard-copy and online resources about chlamydia and its management that were given to all GPs and nurses in a face-to-face meeting with a research officer after randomisation) and computer alerts prompting testing. Not all clinics used the computer alerts. Guided by normalisation process theory, a member of the research team worked with each clinic to tailor the intervention to the resources of the clinic and to identify strategies to facilitate testing and embed it into routine practice [25]. Annual testing of 16- to 29-year-olds in intervention clinics increased from 8.2% to 20.1%, with a treatment effect odds ratio (OR) of 1.7 (95% CI 1.4 to 2.1) [19].
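The AU$800 figure above can be reproduced with a quick back-of-envelope check; the inputs are the assumptions stated in the text (20% annual testing, 800 attending patients, the lowest AU$5 tier):

```python
# Back-of-envelope check of the ~AU$800 annual incentive per clinic
patients_per_year = 800       # 16- to 29-year-olds attending the clinic
testing_rate = 0.20           # assumed annual testing coverage
payment_per_test = 5.00       # AU$5 per test at the lowest coverage tier
annual_payment = patients_per_year * testing_rate * payment_per_test
print(annual_payment)  # 800.0
```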

At the conclusion of ACCEPt, a research officer met with GPs in each intervention clinic, informed them about ACCEPt-able, invited them to participate, and obtained informed consent [21]. The intervention was allocated at the cluster level (clinic) because patients attending each clinic could consult with different GPs. Clinics were eligible if they were in the ACCEPt intervention arm. Patients aged 16–29 years were eligible for 1 chlamydia test per year unless they reported risk factors (e.g., new sex partner) or genital symptoms requiring further testing.

This trial was approved by the Royal Australian College of General Practitioners National Research and Evaluation Ethics Committee (NREEC 14–004; 16 May 2014), and written consent was obtained from all GPs. During ACCEPt-able, we recruited and consented new GPs, who were also provided with the chlamydia education package. Clinics were recruited into ACCEPt-able immediately after completing ACCEPt, between July 2014 and September 2015. Clinics were followed up for 2 years or until 31 December 2016, whichever came first.

Randomisation and masking

Clinics were randomised using a computer-generated minimisation algorithm to maximise the balance across 2 variables—annual chlamydia testing rate among 16- to 29-year-olds in the clinic for 12 months prior to ACCEPt-able (<19% versus ≥19%, based on median testing rate) and number of 16- to 29-year-olds attending the clinic each year (<1,000 versus ≥1,000, based on the 67th percentile of the number of patients at each clinic, to ensure that groups were evenly distributed among relatively smaller and larger clinics because of the potential association of clinic size with patient quality of care [26]). The trial statistician was blinded to allocation. Blinding of clinics and GPs was not possible. Randomisation took place after clinics were recruited into ACCEPt-able and consented to participate. A research officer informed clinics and each GP of their allocation.
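The minimisation step can be sketched as a generic Pocock–Simon-style range method; the factor names below are hypothetical stand-ins for the trial's two balancing variables, and the trial's actual algorithm may have weighted factors or incorporated randomness differently:

```python
import random

def minimise(clinic, groups, allocations, factors):
    # Pocock-Simon-style range method: for each candidate group, compute the
    # imbalance in each minimisation factor that would result from allocating
    # this clinic there, then pick the least-imbalanced group (ties at random).
    def imbalance_if_assigned(group):
        score = 0
        for factor in factors:
            level = clinic[factor]
            counts = {g: sum(1 for c, alloc in allocations
                             if alloc == g and c[factor] == level)
                      for g in groups}
            counts[group] += 1
            score += max(counts.values()) - min(counts.values())
        return score

    scores = {g: imbalance_if_assigned(g) for g in groups}
    best = min(scores.values())
    return random.choice([g for g in groups if scores[g] == best])

# One high-testing-rate clinic already in group A; the next such clinic
# should balance the factor by going to group B.
allocations = [({"high_testing_rate": True}, "A")]
next_group = minimise({"high_testing_rate": True}, ["A", "B"],
                      allocations, ["high_testing_rate"])
```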

Interventions

Clinics in ACCEPt-able were randomised into 1 of 4 arms: incentives removed but audit/feedback and visit retained (group A), audit/feedback and visit removed but incentives retained (group B), incentives and audit/feedback and visit removed (group C), or incentives and audit/feedback and visit retained (group D). All GPs within each clinic received the same intervention. The groups receiving audit/feedback received the same quarterly 1-page report as for the ACCEPt trial that summarised GPs’ chlamydia testing rate for the previous quarter and included a statement of the total amount of incentive payments they would receive for that quarter’s testing. The report was given during a quarterly face-to-face visit with a research officer who explained the results and worked with GPs to identify strategies to help increase their testing rates.

Outcomes

The primary outcome was annual chlamydia testing rate among 16- to 29-year-olds attending the clinic. The numerator was the number of patients aged 16–29 years who had at least 1 chlamydia test within 12 months; the denominator was the number of patients aged 16–29 years who had at least 1 consultation during the same 12 months.
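The numerator/denominator logic above might be computed from de-identified consultation records roughly as follows; the record layout and field names are hypothetical, not the actual extract schema:

```python
# Hypothetical extract: one row per consultation, with a flag for whether a
# chlamydia test occurred; patient_id is a de-identified code of the kind
# produced by a tool like GRHANITE.
consults = [
    {"patient_id": "a1", "age": 22, "tested": True},
    {"patient_id": "a1", "age": 22, "tested": False},
    {"patient_id": "b2", "age": 18, "tested": False},
    {"patient_id": "c3", "age": 27, "tested": True},
]

def annual_testing_rate(consults, age_min=16, age_max=29):
    # Denominator: patients with >= 1 consultation in the 12-month window;
    # numerator: those with >= 1 chlamydia test in the same window.
    attended = {r["patient_id"] for r in consults if age_min <= r["age"] <= age_max}
    tested = {r["patient_id"] for r in consults
              if r["tested"] and age_min <= r["age"] <= age_max}
    return len(tested) / len(attended) if attended else 0.0

print(annual_testing_rate(consults))  # 2 of 3 attending patients tested
```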

Testing data were extracted from each clinic’s electronic medical records using GRHANITE [27,28], a data extraction tool. The tool extracts consultation data including a unique non-identifying patient code, the age and sex of the patient, and chlamydia test results. Data were extracted for the 12 months prior to commencement in ACCEPt-able and during the intervention period.

Sample size

The sample size was determined by ACCEPt, which included 60 intervention clinics. We had 94% power to detect a 5% absolute decrease in annual chlamydia testing from 20% to 15% between any 2 groups. A 5% reduction represents a clinically relevant result—about 200,000 fewer 16- to 29-year-olds screened each year in Australia. Our calculations assumed an intra-cluster correlation coefficient (ICC) of 0.02 for testing rate [19], an average cluster size of 700 patients aged 16–29 years per clinic per year, and an alpha of 0.05.
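The stated 94% power is consistent with a standard two-proportion normal approximation in which the sample size is deflated by the cluster design effect; this is a back-of-envelope sketch, not necessarily the software or formula the investigators used:

```python
from statistics import NormalDist

def cluster_rct_power(p1, p2, clusters_per_arm, cluster_size, icc, alpha=0.05):
    # Inflate the variance by the design effect for clustered sampling,
    # then apply the usual two-proportion normal approximation.
    design_effect = 1 + (cluster_size - 1) * icc
    n_eff = clusters_per_arm * cluster_size / design_effect  # effective n per arm
    se = ((p1 * (1 - p1) + p2 * (1 - p2)) / n_eff) ** 0.5
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(abs(p1 - p2) / se - z_alpha)

# 60 clinics split into 2 comparison arms of 30, ~700 patients per clinic,
# ICC 0.02, detecting a drop from 20% to 15%
power = cluster_rct_power(0.20, 0.15, clusters_per_arm=30, cluster_size=700, icc=0.02)
print(round(power, 2))  # ≈ 0.94, matching the reported 94%
```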

Statistical analysis

We conducted a factorial analysis as our primary analysis. This investigated the effects of removal versus retention of incentives (groups A + C versus groups B + D) and audit/feedback (groups B + C versus groups A + D) separately on annual chlamydia testing over 2 years. We aimed to compare the groups according to intention-to-treat, but in a change to the published protocol [21], we had to exclude clinics that were unable to provide outcome data at the end of the trial from the primary analysis. For each intervention, we fitted generalised linear models, using generalised estimating equations to account for clustering at the clinic level, and assessed the impact of the intervention on chlamydia testing in year 2 compared with baseline. A logistic model generated ORs, and absolute differences were obtained from a model with an identity link function with binomial error distribution. These models also provided 95% confidence intervals and p-values and adjusted for minimisation variables only (annual chlamydia testing rate among 16- to 29-year-olds in the clinic and number of 16- to 29-year-olds attending the clinic each year), as is recommended [29]. We also obtained the results of an adjusted model post hoc that, in addition to the minimisation factors, also included the variables that were adjusted for in the ACCEPt trial (patient sex and age group and socio-economic status quintile of the clinic—‘fully adjusted model’) [19,30].

We undertook several post-hoc analyses: (i) we calculated absolute differences in addition to the planned ORs; (ii) we tested the assumption that there was no interaction effect between the 2 interventions and conducted an analysis by randomised group whereby the group that retained audit/feedback and incentives was the control (‘intervention group analysis’), as is recommended for reporting factorial trials [31]; (iii) we calculated the ICC for chlamydia testing using the primary analysis model with trial arm in the model; and (iv) we conducted factorial subgroup analyses by sex and age group (16–19, 20–24, and 25–29 years). The output was generated using SAS software, version 9.4, for Windows.
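For the ICC, a textbook one-way ANOVA estimator on per-cluster binary outcomes looks like the sketch below; the trial itself derived its ICC from the primary analysis model with trial arm included, so this is illustrative only:

```python
def anova_icc(clusters):
    # One-way ANOVA estimator: clusters is a list of per-clinic lists of
    # binary outcomes (1 = patient tested, 0 = not tested).
    k = len(clusters)
    n = sum(len(c) for c in clusters)
    grand_mean = sum(sum(c) for c in clusters) / n
    # Adjusted average cluster size, allowing for unequal cluster sizes
    m0 = (n - sum(len(c) ** 2 for c in clusters) / n) / (k - 1)
    msb = sum(len(c) * (sum(c) / len(c) - grand_mean) ** 2
              for c in clusters) / (k - 1)
    msw = sum(sum((x - sum(c) / len(c)) ** 2 for x in c)
              for c in clusters) / (n - k)
    return (msb - msw) / (msb + (m0 - 1) * msw)

# Perfectly homogeneous clusters give the maximum ICC of 1.0
print(anova_icc([[1, 1, 1], [0, 0, 0]]))  # 1.0
```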

Cost–consequence analysis

A cost–consequence analysis comparing costs and consequences for each combination of removing/retaining incentives and audit/feedback activities was conducted [32]. Costs (incentives, travel, staff time, and data extraction) and consequences (proportion of the target population tested) for the scenarios of removing versus retaining each intervention were obtained from trial data. The average saving per patient aged 16–29 years was calculated for removal of each intervention. The incremental cost of retaining each intervention per additional patient in the target population tested was calculated. As the trial was based in rural clinics, we conducted a sensitivity analysis to examine the potential costs and consequences for removing or retaining the interventions in metropolitan clinics, where travel costs and staff time for travel are likely to be reduced considerably.

Results

Of 60 clinics, 59 agreed to participate in ACCEPt-able. No clinics withdrew, but 4 clinics had technical problems with data extraction and their data were unavailable, leaving 55 (91.7%) clinics in the analysis (Fig 1). The intervention period ranged from 0.2 years to 2 years, with a mean duration of 1.5 years (SD 0.4). Three clinics participated for less than 1 year (2 clinics closed and 1 clinic was a solo GP who became unwell and ceased seeing patients), 23 clinics between 1 and 1.5 years, and 29 clinics between 1.5 and 2 years. The average duration of the intervention period was similar between groups (1.5 years for groups A and C; 1.6 years for groups B and D).

Baseline characteristics at the patient and cluster level were similar between pairs of intervention groups (for the factorial analysis) (Table 1). There were some minor differences between the 4 trial groups, with clinics in group C (incentives and audit/feedback removed) and group D (incentives and audit/feedback retained) more likely to be in disadvantaged areas; given the loss of 4 clinics, we therefore report only the results from the fully adjusted models in the text. The results from the model adjusted for minimisation variables only and the results from the fully adjusted model (adjusted for minimisation variables and patient age and sex and socio-economic status of the clinic) were similar (Table 2). For analyses reporting on each intervention group (‘intervention group analysis’), we report the fully adjusted analyses.

Table 1. Baseline characteristics of clinics and patients.

https://doi.org/10.1371/journal.pmed.1003858.t001

Table 2. Primary outcome chlamydia testing—factorial analysis.

https://doi.org/10.1371/journal.pmed.1003858.t002

Chlamydia testing rates decreased from baseline in all groups (Figs 2–4), and for groups A, B, and C, testing rates fell to levels similar to those observed in the first 12 months of ACCEPt, the parent trial (S1 Fig).

Fig 2. Proportion of patients tested for chlamydia per year by time since randomisation: Factorial analysis—removal of financial incentives versus retention of financial incentives.

Error bars correspond to 95% confidence intervals. FI, financial incentives.

https://doi.org/10.1371/journal.pmed.1003858.g002

Fig 3. Proportion of patients tested for chlamydia per year by time since randomisation: Factorial analysis—removal of audit/feedback versus retention of audit/feedback.

Error bars correspond to 95% confidence intervals. AF, audit/feedback.

https://doi.org/10.1371/journal.pmed.1003858.g003

Fig 4. Proportion of patients tested for chlamydia per year by time since randomisation: Intervention group analysis.

Error bars correspond to 95% confidence intervals. AF, audit/feedback; FI, financial incentives.

https://doi.org/10.1371/journal.pmed.1003858.g004

There was no statistical evidence of an interaction for treatment effect between removal of incentives and removal of audit/feedback on our primary outcome of chlamydia testing (interaction effect = 3.2%; 95% CI −2.4% to 8.8%; p = 0.2642). The ICC for testing was 0.015. In our factorial analysis, the annual chlamydia testing rate decreased from 20.2% to 11.7% over the 2 years (difference −8.8%; 95% CI −10.5% to −7.0%) where incentives were removed and decreased from 20.6% to 14.3% (difference −7.1%; 95% CI −9.6% to −4.7%) where incentives were retained. The adjusted absolute difference in treatment effect between groups was −0.9% (95% CI −3.5% to 1.7%; p = 0.2267), and the adjusted OR was 0.8 (95% CI 0.6 to 1.1; p = 0.2267) (Table 2). In subgroup analyses, the differences in treatment effect between clinics where incentives were removed and clinics where incentives were retained when stratified by sex or age of patient were not statistically significant (S1 Table). Annual testing decreased from 21.0% to 11.6% over the 2 years (difference −9.5%; 95% CI −11.7% to −7.4%) where audit/feedback was removed and decreased from 19.9% to 14.5% (difference −6.4%; 95% CI −8.6% to −4.2%) where audit/feedback was retained. The adjusted absolute difference in treatment effect was greater for removal than retention of audit/feedback (difference −2.6%; 95% CI −5.4% to −0.2%; p = 0.0336), and the adjusted OR was 0.7 (95% CI 0.5 to 1.0; p = 0.0336) (Table 2). In subgroup analyses, evidence of a difference was observed when stratified by sex and age group of patients except for those aged 25 to 29 years (S1 Table). The absolute difference in treatment effect did not vary between age groups.

Our intervention group analysis showed that testing decreased in all 4 groups, but the decrease was smallest in the group that retained both incentives and audit/feedback. The adjusted absolute treatment effects were −1.8% (95% CI −4.9% to 1.3%; p = 0.0660) for removal of incentives only, −3.4% (95% CI −7.8% to 1.0%; p = 0.0247) for removal of audit/feedback only, and −3.4% (95% CI −6.5% to −0.2%; p = 0.0356) for removal of both incentives and audit/feedback (S2 Table).

Cost and consequences

There was an estimated cost saving of AU$2.31 per 16- to 29-year-old patient per year associated with removing incentives. As removal of incentives had no significant impact on testing, discontinuing incentives dominates over a strategy of their retention (Table 3). There was an estimated cost-saving of AU$5.88 per 16- to 29-year-old patient per year associated with removing audit/feedback. The incremental cost of continuing audit/feedback activities was an estimated AU$189.64 (range: AU$94.82 to AU$5,117.49) per additional patient in the target population tested (Table 3). Most costs for audit/feedback were travel-related (79%). Sensitivity analysis showed that if travel costs were reduced to reflect the costs for research officers to visit metropolitan clinics, the costs of audit/feedback would decrease to an average of AU$3.02 per patient (Table 3).
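The incremental cost figure can be approximately reconstructed by dividing the per-patient cost of audit/feedback by the between-group difference in the proportion tested; the 3.1-percentage-point input below is an assumption taken from the unadjusted differences reported earlier (9.5% − 6.4%), and the trial's actual Table 3 calculation may differ:

```python
def cost_per_extra_patient_tested(cost_per_patient_aud, testing_rate_difference):
    # Incremental cost of retaining the intervention per additional
    # 16- to 29-year-old tested, per year.
    return cost_per_patient_aud / testing_rate_difference

# Assumed inputs: AU$5.88 per patient per year for audit/feedback, and a
# ~3.1-percentage-point between-group difference in annual testing.
cost_per_extra_test = cost_per_extra_patient_tested(5.88, 0.031)
print(round(cost_per_extra_test, 2))  # ≈ 189.68, close to the reported AU$189.64
```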

Discussion

In a 2 × 2 factorial cluster RCT set in Australian general practice, the removal of financial incentives of AU$5 to AU$8 paid to GPs for each chlamydia test conducted had little additional impact on reducing testing rates among 16- to 29-year-olds attending the clinic. Our payments were consistent with other incentives at the time [24], suggesting that in the Australian general practice setting, incentives at this level do not have an important impact on preventive activities like chlamydia testing. We found that the removal of audit/feedback reduced testing, with a relative reduction of 30% (absolute difference = −2.6%) that could translate to about 160,000 fewer 16- to 29-year-olds tested each year in Australia [33]. The provision of audit/feedback was costlier, but most costs were for the face-to-face visit, which could be substantially reduced with online conferencing, for example. Fully automating the audit and feedback reports using digital platforms would further reduce costs. We also found that chlamydia testing rates declined in all groups, regardless of whether incentives and/or audit and feedback were removed, emphasising the challenge of sustaining preventive healthcare activities in general practice over time.

There are several explanations for why we did not see an impact of removal of incentives. Incentives may not have been critical in driving test uptake in ACCEPt, such that their removal in ACCEPt-able did not substantially impact testing. At the beginning of ACCEPt-able, clinics received an average total payment of AU$822 per year for chlamydia testing, which, at the time, was consistent with the total of approximately AU$2,400 that clinics received across 3 activities (asthma and diabetes cycles of care and cervical screening in under-screened women) as part of the Practice Incentives Program [34]. The introduction of these incentives in 2001 did not significantly increase uptake of these activities, suggesting incentivisation like this is unlikely to translate into substantial changes in Australian general practice [4]. This is supported by qualitative research, in which Australian GPs report that incentives do not fundamentally influence patient management [4,35]. This may be because Australian general practices are largely funded by a fee-for-service reimbursement model; the few incentives available represent less than 10% of their funding [4]. Chance cannot be excluded: we did not expect a reduction in testing in clinics that retained incentives, which reduced our effective sample size, and our observed treatment effect of 0.9% was considerably smaller than our hypothesised 5%.

Our audit/feedback intervention included a written report and visit by a research officer. Unfortunately, we could not determine whether removing the report or the visit alone would have had the same effect. However, a previous systematic review compared an educational visit plus audit/feedback with audit/feedback alone, finding that the 2-pronged approach was more effective than audit/feedback only [36].

Unexpectedly, we observed that testing also decreased in the group that retained incentives and audit/feedback. This suggests that chlamydia testing had not become normalised in work practices, with clinics returning to their pre-intervention ways of working despite the intervention’s remaining in place [25]. Alternatively, it is possible that staff turnover led to loss of ‘corporate memory’ [37] about chlamydia, contributing to reduced testing. We provided clinics with the same level of support during ACCEPt-able as during ACCEPt, but we did not monitor whether there were changes in the clinics’ use of other strategies to facilitate testing such as using computer alerts, and while new GPs received our chlamydia educational package, we did not provide any further educational support to already-participating GPs. The lack of ongoing ‘calibration’ of the intervention and its support may have contributed to declining testing rates across all groups [38]. In addition, our intervention targeted GPs, with negligible patient involvement, which is necessary for sustaining change over time [39]. Nonetheless, ACCEPt-able highlights the challenges of sustaining GP behaviour change; further research is needed on how to sustain such change.

Several studies have reported on the removal of incentives in primary care, but all present observational data only, with conflicting results. Two studies examined incentive removal from the UK Quality and Outcomes Framework [5,14]. Similar to our findings, Kontopantelis et al. found that incentive removal had minimal effect on activities related to treatment and monitoring (e.g., cholesterol) [14]. In contrast, Minchin et al. found immediate reductions following incentive removal [5]. However, reductions were greatest where the GP was required to record advice provided to the patient (e.g., contraception advice) and smaller for activities related to measurement (e.g., cholesterol) [5,14]. Similar findings were observed in another study of 35 Kaiser Permanente facilities in the US, where small decreases in screening for diabetic retinopathy and cervical screening were observed when incentives were removed [12]. A cluster RCT of an intervention that included incentives to reduce high-risk prescribing in 34 primary care clinics in Scotland [13] found no change in high-risk prescribing during a 4-year observational post-intervention study when incentives were removed.

We are unaware of any RCT evidence about the impact of removing audit/feedback on provider activity. Observational data collected at the end of RCTs of audit/feedback interventions show similar results. An RCT of an intervention that included an educational session and audit/feedback found a 50% reduction in inappropriate antibiotic prescribing in 18 community-based paediatric clinics in the US, but once the intervention was terminated at trial end, there was an immediate increase in inappropriate prescribing, which returned to pre-trial levels within 18 months [40]. Similar findings were reported at the conclusion of another US trial of audit/feedback to reduce inappropriate prescribing [41].

Our trial has several limitations. First, our sample size assumed an absolute reduction in testing of 5% when incentives and/or audit/feedback were removed and no change where they were retained. We did not anticipate a decrease in all groups. However, the factorial design and smaller ICC than estimated (0.015 versus 0.02) maximised our statistical power. Second, when designing the trial, we assumed no interaction between removal of incentives and removal of audit/feedback and were not powered to detect an interaction. However, our post hoc analysis of each intervention group separately showed similar results to our primary analysis, confirming the factorial analysis findings. Third, 4 clinics did not provide testing data and were excluded from the analysis after randomisation. However, their removal had little impact on the distribution of minimisation and socio-economic variables across the intervention groups, and these variables were adjusted for in our analysis, minimising any bias (S3 Table). Fourth, ACCEPt-able was undertaken in rural areas, so the results might not be generalisable to urban areas. However, our analysis accounted for cluster-level socio-economic factors, which had little impact on results. Fifth, we assessed the impact of the intervention on chlamydia testing in year 2 compared with baseline, and not all clinics remained in the trial until the end of year 2. However, it was reassuring that the average duration of the intervention period was similar between groups. Sixth, we evaluated the impact of the removal of incentives and audit/feedback on chlamydia testing, so our results may not be generalisable to other preventive health activities in general practice. Finally, this trial was set in Australia, where general practice is mainly remunerated on a fee-for-service basis; our results may be less transferable to settings where incentives represent a larger proportion of income.

Conclusions

In this cluster RCT, we found that the financial incentives offered had little impact on chlamydia testing in Australian general practice. The total financial incentive payment received per year in our trial was comparable to other incentive payments GPs received at the same time in Australia. The removal of financial incentives might have a greater impact where incentive payments make up a larger proportion of GP income, such as in the UK; RCT evidence is needed to investigate this question. The removal of audit and feedback with a face-to-face visit resulted in a relative reduction in testing activity of 30% overall. A reduction of this size could have a considerable public health impact at the population level, with fewer chlamydia tests conducted and more infections going undetected. Our results suggest that, in Australia at least, audit and feedback is an important intervention for influencing GP behaviour in preventive health activities such as chlamydia testing. Digital platforms that include automated reports and online communication could reduce the costs associated with audit and feedback. Our finding that chlamydia testing also decreased in clinics that retained incentives and audit and feedback highlights that simply retaining these interventions over time is not enough; further studies should investigate how to sustain clinician behaviour change.

Supporting information

S1 Fig. Annual chlamydia testing rates for ACCEPt and ACCEPt-able.

https://doi.org/10.1371/journal.pmed.1003858.s002

(PDF)

S1 Table. The primary outcome, chlamydia testing, by sex and age group: Factorial analysis.

https://doi.org/10.1371/journal.pmed.1003858.s003

(DOCX)

S2 Table. The primary outcome, chlamydia testing: Intervention group analysis.

https://doi.org/10.1371/journal.pmed.1003858.s004

(DOCX)

S3 Table. Distribution of minimisation and socio-economic status variables across clinics by intervention group.

https://doi.org/10.1371/journal.pmed.1003858.s005

(DOCX)

Acknowledgments

The authors would like to thank the ACCEPt-able research officers for their efforts in implementing and supporting clinics during the trial; participating clinics, GPs, and nurses; Associate Professor Douglas Boyle and the Health Informatics Unit, Department of General Practice, University of Melbourne, for ongoing technical support with regards to data extraction from medical records software; Professor Jane Tomnay from the Centre for Excellence in Rural Sexual Health, University of Melbourne, for ongoing guidance around conducting research in rural areas; and pathology providers for their contribution to data collection.

The views expressed are those of the authors alone and do not necessarily reflect those of the funding body.

References

  1. Hulscher ME, Wensing M, van Der Weijden T, Grol R. Interventions to implement prevention in primary care. Cochrane Database Syst Rev. 2001;2001(1):CD000362. pmid:11279688
  2. Eijkenaar F. Pay for performance in health care: an international overview of initiatives. Med Care Res Rev. 2012;69(3):251–76. pmid:22311954
  3. Department of Health. Redesigning the Practice Incentives Program—consultation paper. Canberra: Australian Government; 2016.
  4. Greene J. An examination of pay-for-performance in general practice in Australia. Health Serv Res. 2013;48(4):1415–32. pmid:23350933
  5. Minchin M, Roland M, Richardson J, Rowark S, Guthrie B. Quality of care in the United Kingdom after removal of financial incentives. N Engl J Med. 2018;379(10):948–57.
  6. Gillam S, Steel N. The Quality and Outcomes Framework—where next? BMJ. 2013;346:f659. pmid:23393112
  7. Glasziou PP, Buchan H, Del Mar C, Doust J, Harris M, Knight R, et al. When financial incentives do more good than harm: a checklist. BMJ. 2012;345:e5047. pmid:22893568
  8. Spence D. Kill the QOF. BMJ. 2013;346:f1498. pmid:23468302
  9. Campbell SM, Scott A, Parker RM, Naccarella L, Furler JS, Young D, et al. Implementing pay-for-performance in Australian primary care: lessons from the United Kingdom and the United States. Med J Aust. 2010;193(7):408–11. pmid:20919973
  10. Services Australia. Practice Incentives Program. Forrest: Services Australia; 2021 [cited 2021 Nov 19]. Available from: https://www.servicesaustralia.gov.au/organisations/health-professionals/services/medicare/practice-incentives-program.
  11. Caley M, Burn S, Marshall T, Rouse A. Increasing the QOF upper payment threshold in general practices in England: impact of implementing government proposals. Br J Gen Pract. 2014;64(618):e54–9. pmid:24567583
  12. Lester H, Schmittdiel J, Selby J, Fireman B, Campbell S, Lee J, et al. The impact of removing financial incentives from clinical quality indicators: longitudinal analysis of four Kaiser Permanente indicators. BMJ. 2010;340:c1898. pmid:20460330
  13. Dreischulte T, Donnan P, Grant A, Hapca A, McCowan C, Guthrie B. Safer prescribing—a trial of education, informatics, and financial incentives. N Engl J Med. 2016;374(11):1053–64.
  14. Kontopantelis E, Springate D, Reeves D, Ashcroft DM, Valderas JM, Doran T. Withdrawing performance indicators: retrospective analysis of general practice performance under UK Quality and Outcomes Framework. BMJ. 2014;348:g330. pmid:24468469
  15. Smith M, Fereday S. Developing a clinical audit programme. London: Healthcare Quality Improvement Partnership; 2016.
  16. Royal Australian College of General Practitioners. QI&CPD program: 2017–19 triennium handbook for general practitioners. East Melbourne: Royal Australian College of General Practitioners; 2016.
  17. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;2012(6):CD000259. pmid:22696318
  18. Ivers NM, Grimshaw JM, Jamtvedt G, Flottorp S, O’Brien MA, French SD, et al. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Intern Med. 2014;29(11):1534–41. pmid:24965281
  19. Hocking JS, Temple-Smith M, Guy R, Donovan B, Braat S, Law M, et al. Population effectiveness of opportunistic chlamydia testing in primary care in Australia: a cluster-randomised controlled trial. Lancet. 2018;392(10156):1413–22. pmid:30343857
  20. Royal Australian College of General Practitioners. Guidelines for preventive activities in general practice (The Red Book). 8th edition. East Melbourne: Royal Australian College of General Practitioners; 2012.
  21. Hocking JS, Temple-Smith M, van Driel M, Law M, Guy R, Bulfone L, et al. Can preventive care activities in general practice be sustained when financial incentives and external audit plus feedback are removed? ACCEPt-able: a cluster randomised controlled trial protocol. Implement Sci. 2015;11(1):122.
  22. Campbell MK, Piaggio G, Elbourne DR, Altman DG. CONSORT 2010 statement: extension to cluster randomised trials. BMJ. 2012;345:e5661. pmid:22951546
  23. Hocking JS, Low N, Guy R, Law M, Donovan B, Kaldor J, et al. Protocol 12PRT/9010: Australian Chlamydia Control Effectiveness Pilot (ACCEPt): a cluster randomised controlled trial of chlamydia testing in general practice (ACTRN1260000297022). Lancet Accepted Protocol Summaries. 2012 [cited 2021 Nov 19]. Available from: https://www.thelancet.com/protocol-reviews/12PRT-9010.
  24. Health Services Division. Productivity commission study on compliance costs in general practice. Canberra: Department of Health and Ageing; 2002.
  25. Murray E, Treweek S, Pope C, MacFarlane A, Ballini L, Dowrick C, et al. Normalisation process theory: a framework for developing, evaluating and implementing complex interventions. BMC Med. 2010;8:63. pmid:20961442
  26. de Moel-Mandel C, Sundararajan V. The impact of practice size and ownership on general practice care in Australia. Med J Aust. 2021;214(9):408–10.e1. pmid:33966270
  27. Boyle D, Kong F. A systematic mechanism for the collection and interpretation of display format pathology test results from Australian primary care records. Electron J Health Inform. 2011;6(2):e18.
  28. Canaway R, Boyle DI, Manski-Nankervis JE, Bell J, Hocking JS, Clarke K, et al. Gathering data for decisions: best practice use of primary care electronic records for research. Med J Aust. 2019;210(Suppl 6):S12–6. pmid:30927466
  29. Scott NW, McPherson GC, Ramsay CR, Campbell MK. The method of minimization for allocation to clinical trials: a review. Control Clin Trials. 2002;23(6):662–74. pmid:12505244
  30. Australian Bureau of Statistics. Census of population and housing: socio-economic indexes for areas (SEIFA), Australia, 2011. Canberra: Australian Bureau of Statistics; 2013.
  31. McAlister F, Straus S, Sackett D, Altman D. Analysis and reporting of factorial trials: a systematic review. JAMA. 2003;289:2545–53. pmid:12759326
  32. Drummond MF, Sculpher M, Claxton K, Stoddart GL, Torrance GW. Methods for the economic evaluation of healthcare programmes. 4th edition. Oxford: Oxford University Press; 2015.
  33. Australian Bureau of Statistics. Australian demographic statistics, Jun 2016. Canberra: Australian Bureau of Statistics; 2016.
  34. Australian National Audit Office. Practice Incentives Program 2010–2011. Canberra: Department of Health and Ageing; 2011.
  35. Yeung A, Hocking J, Guy R, Fairley CK, Smith K, Vaisey A, et al. ‘It opened my eyes’—examining the impact of a multifaceted chlamydia testing intervention on general practitioners using Normalization Process Theory. Fam Pract. 2018;35(5):626–32. pmid:29608672
  36. O’Brien MA, Rogers S, Jamtvedt G, Oxman AD, Odgaard-Jensen J, Kristoffersen DT, et al. Educational outreach visits: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2007;2007(4):CD000409. pmid:17943742
  37. Lahaie D. The impact of corporate memory loss: what happens when a senior executive leaves? Int J Health Care Qual Assur Inc Leadersh Health Serv. 2005;18(4–5):xxxv–xlvii. pmid:16167654
  38. Fox CR, Doctor JN, Goldstein NJ, Meeker D, Persell SD, Linder JA. Details matter: predicting when nudging clinicians will succeed or fail. BMJ. 2020;370:m3256. pmid:32933926
  39. Kiran T, Ramji N, Derocher MB, Girdhari R, Davie S, Lam-Antoniades M. Ten tips for advancing a culture of improvement in primary care. BMJ Qual Saf. 2019;28(7):582–7. pmid:30381328
  40. Gerber JS, Prasad PA, Fiks AG, Localio AR, Bell LM, Keren R, et al. Durability of benefits of an outpatient antimicrobial stewardship intervention after discontinuation of audit and feedback. JAMA. 2014;312(23):2569–70. pmid:25317759
  41. Linder JA, Meeker D, Fox CR, Friedberg MW, Persell SD, Goldstein NJ, et al. Effects of behavioral interventions on inappropriate antibiotic prescribing in primary care 12 months after stopping interventions. JAMA. 2017;318(14):1391–2. pmid:29049577