
Cash incentives versus defaults for HIV testing: A randomized clinical trial

  • Juan Carlos C. Montoy,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Software, Writing – original draft, Writing – review & editing

    juancarlos.montoy@ucsf.edu

    Affiliation Department of Emergency Medicine, University of California, San Francisco, San Francisco, California, United States of America

  • William H. Dow,

    Roles Conceptualization, Formal analysis, Funding acquisition, Supervision, Writing – review & editing

    Affiliation Division of Health Policy and Management, School of Public Health, University of California, Berkeley, Berkeley, California, United States of America

  • Beth C. Kaplan

    Roles Conceptualization, Funding acquisition, Project administration, Resources

    Affiliation Department of Emergency Medicine, University of California, San Francisco, San Francisco, California, United States of America

Abstract

Background

Tools from behavioral economics have been shown to improve health-related behaviors, but the relative efficacy and additive effects of different types of interventions are not well established. We tested the influence of small cash incentives, defaults, and both in combination on increasing patient HIV test acceptance.

Methods and findings

We conducted a randomized clinical trial among patients aged 13–64 receiving care in an urban emergency department. Patients were cross-randomized to $0, $1, $5, and $10 incentives, and to opt-in, active-choice, and opt-out test defaults. The primary outcome was the proportion of patients who accepted an HIV test. In total, 4,831 of 8,715 patients (55.4%) accepted an HIV test. Those offered no monetary incentive accepted 51.6% of test offers. The $1 treatment did not increase test acceptance (increase of 1.0 percentage point; 95% confidence interval [CI] -2.0 to 3.9); the $5 and $10 treatments increased test acceptance rates by 10.5 and 15 percentage points, respectively (95% CI 7.5 to 13.4 and 11.8 to 18.1). Compared to opt-in testing, active-choice testing increased test acceptance by 11.5 percentage points (95% CI 9.0 to 14.0), and opt-out testing increased acceptance by 23.9 percentage points (95% CI 21.4 to 26.4).

Conclusions

Small incentives and defaults can both increase patient HIV test acceptance, though when used in combination their effects were less than additive. These tools from behavioral economics should be considered by clinicians and policymakers. How patient groups respond to monetary incentives and/or defaults deserves further investigation for this and other health behaviors.

Registration

ClinicalTrials.gov NCT01377857.

Introduction

Behavioral economics approaches such as defaults and incentives for changing patient behavior have been implemented across a wide range of clinical settings. Monetary incentives have been employed to modify health-related behaviors in substance abuse treatment [1], smoking cessation [2], weight loss [3], risky sexual behavior [4, 5], and some one-time or infrequent behaviors such as immunization [6] and HIV screening [7]. Defaults have likewise been shown to influence behaviors such as the prescribing of generics over brand-name medications [8, 9], end-of-life decisions in advance directives [10], and participation in diabetes care [11]. Because both incentives and defaults have proven effective, further research is needed to determine which types of interventions are more effective at changing specific types of behaviors–a question best answered through at-scale head-to-head experimentation [12].

This paper analyzes a head-to-head randomized trial of approaches to increase HIV testing among emergency department patients. Identifying HIV infections remains a top priority in addressing the ongoing HIV epidemic [13–15], but despite widespread agreement that universal opt-out screening should be adopted [16–19], failure to screen is the norm across all hospital types [20, 21]. A previous publication using a subset of data from this trial (the arms with no monetary incentives) found that changing defaults for HIV testing yielded clinically significant differences in HIV testing [22]. Here we estimate the extent to which various cash incentives increase HIV test acceptance, compare this effect head-to-head with the effect of defaults, and analyze whether incentives and defaults can be used together to optimize test acceptance.

Methods

We conducted a randomized clinical trial in the emergency department of an urban teaching hospital and regional trauma center. Between June 18, 2011, and June 30, 2013, non-clinical staff approached patients in the emergency department: once to offer a rapid HIV test and once for a questionnaire. Patients were identified and approached by study staff during times not interfering with their clinical care. Accepted tests were completed as part of their care in the department. The ten-minute self-administered questionnaires were described generically as improving emergency department care. After both the test and questionnaire responses were recorded, patients were fully debriefed and written consent was obtained. Per state and federal law, and with the approval of the institutional review board (IRB), minors were able to consent to the study. The study received IRB approval from the University of California, San Francisco, was conducted and reported in accordance with CONSORT guidelines, and was registered as ClinicalTrials.gov study NCT01377857. The protocol has been described previously [22] and is presented in greater detail in S1 Text.

Monetary incentives were assigned at the zone-day level: all patients in each of the four ED zones on a given day received the same treatment assignment. Incentives were assigned to each zone using a random-number generator, independent from the other zone assignments.

A random number generator was used to create default wording (opt-in, active-choice, and opt-out) treatment assignments, randomized at the patient level, each with equal probability. Patients were also randomly assigned to be offered the questionnaire either before or after the HIV test offer. No incentive was offered for questionnaire completion. The incentive, default, and questionnaire timing treatment assignments were cross-randomized in a factorial design.
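The cross-randomization scheme described above can be sketched in a few lines. The following is an illustrative Python sketch only (the trial itself used Stata's random-number generator, and all names here are hypothetical, not the study's code):

```python
import random

# Hypothetical sketch of the trial's factorial cross-randomization.
INCENTIVES = [0, 1, 5, 10]                          # dollars
DEFAULTS = ["opt-in", "active-choice", "opt-out"]   # test-offer wording

def zone_day_incentives(rng, n_zones=4):
    """One incentive per ED zone per day, drawn independently of the
    other zones' assignments."""
    return [rng.choice(INCENTIVES) for _ in range(n_zones)]

def patient_assignment(rng):
    """Patient-level draws: default wording (equal probability) and
    whether the questionnaire precedes the HIV test offer."""
    return {"default": rng.choice(DEFAULTS),
            "questionnaire_first": rng.random() < 0.5}
```

Because the incentive varies at the zone-day level while the default and questionnaire timing vary at the patient level, every incentive-default combination arises in the data, which is what permits the interaction analyses reported below.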

Study staff began each shift in one of four emergency department zones and approached all eligible patients in that zone prior to moving to the next zone. The starting zone was determined at the day level using a random-number generator, in which each zone had a 25% chance of being the starting zone any given day. Staff were not blinded to treatment assignments.

Participants

Study inclusion criteria were: age 13–64, able to consent to HIV testing and study inclusion, and English- or Spanish-speaking. Patients were excluded if they were known to be HIV-positive, had tested for HIV in the past three months, were pregnant, were in police custody, or had participated in this study in the previous three months.

Protocol

Using a standardized script, study staff informed patients that the emergency department was offering rapid screening HIV tests. Patients were told that the testing was non-targeted and routine, and that it used a rapid assay with results available during their ED visit, approximately 1–2 hours. The test offer followed one of three scripts: opt-in, "You can let me, your nurse, or your doctor know if you'd like a test today"; active-choice, "Would you like a test today?"; or opt-out, "You will be tested unless you decline." Finally, if the patient was assigned to a positive monetary incentive, they were informed, "To encourage testing today we are offering a $1 cash incentive" (substituting $5 or $10 as relevant). No mention of monetary incentives was made to patients assigned to the $0 treatment.

Study staff notified clinicians of patients accepting HIV tests. No pre-test counseling was performed. Patients were informed of negative test results by their nurse or clinician. Positive test results were disclosed by the patient’s clinician in accordance with the protocol established by the hospital's HIV Rapid Testing and Referral Program.

Statistics

The primary outcome was test acceptance percentage. Treatment effects were estimated with univariate and multivariable ordinary least squares regression. Tables report raw linear regression coefficients, which are directly interpretable as the difference in the proportion of subjects who accept an HIV test; interaction effects are similarly straightforward to interpret [23].
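The interpretability of these coefficients can be illustrated with simulated data: in a linear probability model, the OLS slope on a binary treatment indicator equals the raw difference in acceptance proportions between groups. This is an illustrative Python sketch with fabricated data, not the trial's data or code:

```python
import numpy as np

# Fabricated illustrative data only (not the trial's data).
rng = np.random.default_rng(0)
treat = rng.integers(0, 2, 500)                  # 0/1 treatment indicator
accept = (rng.random(500) < np.where(treat == 1, 0.65, 0.50)).astype(float)

# OLS of acceptance on treatment: the slope equals the difference in
# acceptance proportions between the treated and untreated groups.
X = np.column_stack([np.ones_like(treat), treat]).astype(float)
beta, *_ = np.linalg.lstsq(X, accept, rcond=None)
diff_in_means = accept[treat == 1].mean() - accept[treat == 0].mean()
# beta[1] and diff_in_means agree to numerical precision
```

This equivalence is why the tables' raw linear regression coefficients can be read directly as percentage-point differences in test acceptance.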

We also examined effects across HIV risk subgroups, per an approximated Denver HIV Risk Score (S1 Table) [24, 25]. Scores depend on demographics (age, gender, race/ethnicity), risk behaviors (sex with a male, vaginal intercourse, receptive anal intercourse, IV drug use), and past HIV testing. We classified patients as low risk (score under 20), intermediate risk (scores 20–39), and high risk (scores 40 or higher). For patients who did not complete the questionnaire, the risk score was estimated using available data only. While analysis by risk level was planned, the Denver HIV Risk Score was published and validated during our data collection, so these risk definitions were not pre-specified. Because patient responses within the same zone and on the same day could be correlated, we clustered standard errors by day and emergency department zone (zone-day level). Sensitivity analyses, including different model specifications using ordinary least squares and multivariable logistic regression, are presented in the Supporting Information. Randomization and all analyses were performed using Stata 13.1.
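The risk bands above amount to a simple threshold rule; a minimal sketch (the function name is illustrative, not from the study's code):

```python
def risk_category(score):
    """Denver HIV Risk Score bands used in this study:
    under 20 low, 20-39 intermediate, 40 or higher high."""
    if score < 20:
        return "low"
    if score < 40:
        return "intermediate"
    return "high"
```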

The planned sample size was sufficient to detect a 5 percentage point difference in test acceptance, with 80% power at a 5% significance level, between the no-incentive assignment and one of the positively-valued incentive assignments within one of the default assignments. This 5 percentage point effect size was the minimum difference we deemed clinically important. Assuming a baseline test acceptance of 50%, this yielded a planned sample of 2,349 patients for the no-incentive group and 1,175 for each incentive group (the no-incentive arm was designed to be larger than each positively-valued incentive arm), for a total of 5,874 patients within each default group and 17,622 patients in the study. Our actual enrolled sample was smaller than originally planned due to enrollment difficulties.
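The stated sample sizes are consistent with a standard two-proportion power calculation under 2:1 (control:incentive) allocation. The sketch below uses a pooled-variance normal approximation, one of several conventions, so it reproduces the paper's figures only approximately:

```python
import math
from statistics import NormalDist

def two_prop_n(p1, p2, alpha=0.05, power=0.80, ratio=2.0):
    """Sample sizes to detect p1 vs p2 at the given power; ratio = n1/n2.
    Pooled-variance normal approximation (one common convention)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    pbar = (ratio * p1 + p2) / (ratio + 1)
    n2 = (1 + 1 / ratio) * z**2 * pbar * (1 - pbar) / (p1 - p2) ** 2
    return math.ceil(ratio * n2), math.ceil(n2)

# Baseline 50% vs 55% acceptance, 80% power, 5% significance, 2:1 allocation
n_control, n_incentive = two_prop_n(0.50, 0.55)   # roughly 2,350 and 1,180
```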

Results

Participation and randomization

Research assistants approached 10,463 patients to offer HIV tests and questionnaires; 8,715 patients (83.3%) consented to inclusion in the study. Randomization yielded no significant differences in demographic groups across monetary incentive treatment assignments (Table 1); demographics according to default assignment are presented in S2 Table. The distributions of demographics and chief complaints did not vary by assignment to monetary incentive. Fig 1 shows the flow of patients through treatment assignments, with consent rates for each incentive-default combination.

Fig 1. Flow diagram.

Of 10,463 patients approached for inclusion in the study, 8,715 consented. Because patients were retrospectively consented, no patients were excluded after being consented for inclusion.

https://doi.org/10.1371/journal.pone.0199833.g001

Treatment effects

HIV tests were accepted by 4,831 patients (55.4%). Those offered no monetary incentive accepted 51.6% of test offers; those offered $1, $5, and $10 accepted 52.6%, 62.1%, and 66.6% of tests, respectively. These unadjusted differences, shown in Table 2, Column 1 and Fig 2, reflect an absolute difference between the $1 treatment and the no-incentive treatment of 1.0 percentage point (95% confidence interval -2.0 to 3.9); the $5 and $10 treatments increased test acceptance rates by 10.5 and 15 percentage points, respectively (95% CI 7.5 to 13.4 and 11.8 to 18.1). Patients in the opt-in scheme accepted 43.8% of test offers, unadjusted for incentives. Patients in the active-choice scheme were 11.5 percentage points more likely to accept test offers (95% CI 9.0 to 14.0); those in the opt-out scheme were 23.9 percentage points more likely to accept testing (95% CI 21.4 to 26.4).

Fig 2. HIV consent by treatment assignment.

Proportion of patients accepting an HIV test according to treatment assignment: (2a) monetary incentives, (2b) defaults, and (2c) incentive × default combinations.

https://doi.org/10.1371/journal.pone.0199833.g002

Incentives and defaults are considered jointly in a model without interaction terms and a model with them (Table 2, Columns 3 and 4, respectively). The estimated effects of monetary incentives and of defaults in the multivariable model without interactions (Table 2, Column 3) are similar to the estimates from each of the univariate models. When the effects of incentives are measured separately for each default (Table 2, Column 4), each cash incentive has its largest effect within the opt-in group. The $1 incentive was associated with a 6.2 percentage point increase in test acceptance (95% CI 1.4 to 11.0); it did not increase test acceptance in the active-choice or opt-out groups. The effects of the $5 and $10 incentives were attenuated in the opt-out group.

Risk of infection

The sample of patients enrolled in the study comprised 40.3% low-risk, 50.4% intermediate-risk, and 9.3% high-risk patients. Univariate analysis shows that intermediate-risk patients were 7.1 percentage points, and high-risk patients 9.1 percentage points, more likely to test than low-risk patients (95% CI 5.0 to 9.3 and 5.3 to 12.8, respectively).

When the effect of incentives is calculated separately for each group, the estimates show a similar pattern to the results from the univariate model: the $1 incentive has no effect on testing, and the $5 and $10 incentives each increase test acceptance. None of the interaction terms is significantly different from 0, suggesting that the monetary incentives affected behavior equally across risk groups. Sensitivity analyses are presented in the supplementary material: risk-specific interaction terms (S3 Table), estimation with a logistic regression (S4 Table), and back-of-the-envelope calculations to account for differential study participation rates (S5 and S6 Tables).

Fig 3 presents results from a model that estimates the effects of incentives on test uptake separately for each default within patients from each risk category: coefficients were estimated for incentives, defaults, and risk level, and each two-way and three-way interaction between them.

Fig 3. HIV consent by incentive-default treatment assignment and risk of infection.

Proportion of patients accepting an HIV test according to incentive-default treatment assignment, stratified by risk group. Risk of infection was estimated by the Denver HIV Risk Score: < 20 low risk, 20–39 intermediate risk, ≥40 high risk.

https://doi.org/10.1371/journal.pone.0199833.g003

Discussion

This study tested two types of behavioral economics interventions–monetary incentives and defaults–and found evidence that each can be effective in increasing HIV test uptake. This is to our knowledge the first study to directly compare two types of behavioral economics interventions in any health behavior context. Recent research has evaluated how to target a single type of intervention, but has not yet compared different types of interventions [26]. In large part this literature has explored repeated behaviors or behavior over time, such as medication adherence and weight loss [27, 28].

The interventions were tested both separately and in combination with each other with a rigorous design that included random assignment to small monetary incentives and patient-level randomization to a one-sentence variation in test offer, with all else held constant. The effects were persistent across all model specifications and levels of patient risk of infection, though the effects were somewhat attenuated when defaults and incentives were used together: the $1 incentive increased test acceptance in the opt-in but not the other default settings, and the $5 and $10 incentives were less effective under the opt-out default than the other default settings. In general, higher-risk patients tested at higher rates than lower-risk patients and had smaller responses to treatments. Among all treatment assignments, opt-out had the largest effect, followed by the $10 incentive.

Compared to previously published work from this study, which demonstrated that defaults significantly affect patient behavior, this study places two classes of behavioral economics nudges in direct comparison with nearly double the sample size. We again confirmed that active-choice is a category distinct from opt-in, providing policymakers with clearer guidance on how to implement policies and bringing this field into closer alignment with the existing literature in psychology and economics [29, 30]. Despite being universally present in health care, defaults have been understudied in medicine, and this topic deserves further attention.

The proportion of patients accepting testing may vary in other settings and populations as compared to this single-center study. However, the study included patients with a wide range of demographics, chief complaints, and risk factors for HIV. Although the particular percentages may be quite different, similar patterned responses to small monetary incentives and to opt-in, active-choice, and opt-out test offers may be expected for HIV testing in other settings, as well as for decisions about other medical tests.

By blinding patients to the study itself and also to its components, the retrospective informed consent design has the advantage of minimizing or even eliminating many potential sources of bias but introduces the risk of bias from post-randomization withdrawal. We see evidence of this: the proportion of approached patients who participated in the study increases monotonically with monetary incentives. However, the difference in participation rates is small and did not drive the results here; sensitivity analysis did not change the primary results.

The three monetary incentive values used in this study are somewhat arbitrary but are on a scale that might reasonably be chosen by a hospital or health system. The $1 arm serves to test whether, as previously found [7], the mere offer of a monetary incentive matters more than its value–a finding we did not replicate. We chose immediate cash incentives in order to maximize the response, under the prediction from behavioral economics that equivalent payments such as a check given immediately or cash given later would likely yield smaller increases in test acceptance rates, as would a deduction of the same dollar amount from one's hospital bill.

Our ED population had few barriers to testing: there was no travel time, scheduling, written consent, or, in most cases, additional blood draws. But, even under the $10, opt-out treatment assignment, test uptake did not approach 100%. This result is cause for pessimism about the potential for small incentives, defaults, or both to achieve the target of universal screening. This suggests that some patients truly believe the test is not worthwhile, and for others the psychological costs of learning one’s HIV status are too high. This poses a challenging question of how to achieve universal testing and identify all existing cases of HIV infection. Nevertheless, among high-risk patients the combination of incentives and defaults raised test acceptance from 48% in the $0 opt-in arm to 80% in the $10 opt-out arm.

This study directly compares two behavioral economics interventions and adds to the existing evidence that small interventions can have significant effects in directing patients toward more optimal health-related behaviors. Our results have the potential to help inform how to structure HIV test offers in other emergency departments as well as other health care settings. The finding that, on average, moving from opt-in to opt-out testing influenced behavior more than even the largest incentive reinforces the notion that medicine is not just a transaction and that what we say to patients matters. This field is still relatively new, and much remains to be learned about how and in what settings to use behavioral economics approaches to improve health-related behavior. How patients respond to monetary incentives and defaults deserves further investigation for this and other health problems.

References

  1. Prendergast M, Podus D, Finney J, Greenwell L, Roll J. Contingency management for treatment of substance use disorders: A meta-analysis. Addiction. 2006;101:1546–60. pmid:17034434
  2. Volpp KG, Troxel AB, Pauly MV, Glick HA, Puig A, Asch DA, et al. A randomized, controlled trial of financial incentives for smoking cessation. New Engl J Med. 2009;360(7):699–709. pmid:19213683
  3. Volpp KG, John LK, Troxel AB, Norton L, Fassbender J, Loewenstein G. Financial incentive–based approaches for weight loss: a randomized trial. JAMA. 2008;300(22):2631–7. pmid:19066383
  4. De Walque D, Dow WH, Nathan R, Abdul R, Abilahi F, Gong E, et al. Incentivising safe sex: a randomised trial of conditional cash transfers for HIV and sexually transmitted infection prevention in rural Tanzania. BMJ Open. 2012;2(1):e000747.
  5. Kohler HP, Thornton RL. Conditional cash transfers and HIV/AIDS prevention: unconditionally promising? The World Bank Economic Review. 2011:lhr041.
  6. Banerjee AV, Duflo E, Glennerster R, Kothari D. Improving immunisation coverage in rural India: clustered randomised controlled evaluation of immunisation campaigns with and without incentives. BMJ. 2010;340:c2220. pmid:20478960
  7. Thornton R. The demand for, and impact of, learning HIV status. American Economic Review. 2008;98(5):1829–63. pmid:21687831
  8. Patel MS, Day S, Small DS, Howell JT, Lautenbach GL, Nierman EH, et al. Using default options within the electronic health record to increase the prescribing of generic-equivalent medications: a quasi-experimental study. Ann Int Med. 2014;161(10):S44–52.
  9. Patel MS, Day SC, Halpern SD. Generic medication prescription rates after health system-wide redesign of default options within the electronic health record. JAMA Int Med. 2015;176(6):847–8.
  10. Aysola J, Tahirovic E, Troxel AB, Asch DA, Gangemi K, Hodlofski AT, et al. A randomized controlled trial of opt-in versus opt-out enrollment into a diabetes behavioral intervention. Am J Health Promot. 2016:0890117116671673.
  11. Halpern SD, Ubel PA, Asch DA. Harnessing the power of default options to improve health care. New Engl J Med. 2007;357(13):1340. pmid:17898105
  12. Loewenstein G, Brennan T, Volpp KG. Asymmetric paternalism to improve health behaviors. JAMA. 2007;298(20):2415–7. pmid:18042920
  13. McNulty M, Cifu AS, Pitrak D. HIV screening. JAMA. 2016;316(2):213–4. pmid:27404189
  14. Skarbinski J, Rosenberg E, Paz-Bailey G, Hall HI, Rose CE, Viall AH, et al. Human immunodeficiency virus transmission at each step of the care continuum in the United States. JAMA Int Med. 2015;175(4):588–96.
  15. Marks G, Crepaz N, Janssen RS. Estimating sexual transmission of HIV from persons aware and unaware that they are infected with the virus in the USA. AIDS. 2006;20(10):1447–50. pmid:16791020
  16. Branson B, Handsfield H, Lampe M, Janssen RS, Taylor AW, Lyss SB, et al.; Centers for Disease Control and Prevention. Revised recommendations for HIV testing of adults, adolescents, and pregnant women in health-care settings. MMWR. 2006;55(RR14):1–17.
  17. Qaseem A, Snow V, Shekelle P, Hopkins R Jr, Owens DK; Clinical Efficacy Assessment Subcommittee, American College of Physicians. Screening for HIV in health care settings: a guidance statement from the American College of Physicians and HIV Medicine Association. Ann Intern Med. 2009;150:125–31. pmid:19047022
  18. American College of Obstetricians and Gynecologists. ACOG Committee Opinion. Routine human immunodeficiency virus screening. Obstetrics and Gynecology. 2008;112(2 Pt 1):401.
  19. Moyer VA. Screening for HIV: US Preventive Services Task Force recommendation statement. Ann Intern Med. 2013;159(1):51–60. pmid:23698354
  20. Berg LJ, Delgado MK, Ginde AA, Montoy JC, Bendavid E, Camargo CA Jr. Characteristics of U.S. emergency departments that offer routine human immunodeficiency virus screening. Acad Emerg Med. 2012;19:894–900. pmid:22849642
  21. Hoover JB, Tao G, Heffelfinger JD. Monitoring HIV testing at visits to emergency departments in the United States: very-low rate of HIV testing. J Acquir Immune Defic Syndr. 2013;62:90–4. pmid:23018376
  22. Montoy JC, Dow WH, Kaplan BC. Patient choice in opt-in, active choice, and opt-out HIV screening: randomized clinical trial. BMJ. 2016;352:h6895.
  23. Ai C, Norton EC. Interaction terms in logit and probit models. Econ Lett. 2003;80:123–9.
  24. Haukoos JS, Lyons MS, Lindsell CJ, Hopkins E, Bender B, Rothman RE, et al. Derivation and validation of the Denver Human Immunodeficiency Virus (HIV) risk score for targeted HIV screening. Am J Epidemiol. 2012;175:838–46. pmid:22431561
  25. Haukoos JS, Hopkins E, Bucossi MM, Lyons MS, Rothman RE, White DA, et al.; Denver Emergency Department HIV Research Consortium. Brief report: Validation of a quantitative HIV risk prediction tool using a national HIV testing cohort. J Acquir Immune Defic Syndr. 2015;68:599–603. pmid:25585300
  26. Asch DA, Troxel AB, Stewart WF, et al. Effect of financial incentives to physicians, patients, or both on lipid levels. JAMA. 2015;314(18):1926–35. pmid:26547464
  27. Volpp KG, Loewenstein G, Troxel AB, Doshi J, Price M, Laskin M, et al. A test of financial incentives to improve warfarin adherence. BMC Health Services Research. 2008;8(1):272.
  28. John LK, Loewenstein G, Troxel AB, Norton L, Fassbender JE, Volpp KG. Financial incentives for extended weight loss: a randomized, controlled trial. J Gen Intern Med. 2011;26(6):621–6. pmid:21249462
  29. Carroll GD, Laibson D, Madrian BC, Metrick A. Optimal defaults and active decisions. Quarterly Journal of Economics. 2009;124(4):1639–74. pmid:20041043
  30. Keller PA, Harlam B, Loewenstein G, Volpp KG. Enhanced active choice: A new method to motivate behavior change. Journal of Consumer Psychology. 2011;21(4):376–83.