
Strategies to increase downloads of COVID–19 exposure notification apps: A discrete choice experiment

  • Jemima A. Frimpong,

    Roles Conceptualization, Formal analysis, Methodology, Resources, Writing – original draft, Writing – review & editing

    Affiliations Division of Social Science, Program in Social Research and Public Policy, New York University–Abu Dhabi (UAE), Abu Dhabi, United Arab Emirates, Carey Business School, Johns Hopkins University, Baltimore, MD, United States of America

  • Stéphane Helleringer

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Writing – original draft

    Affiliation Division of Social Science, Program in Social Research and Public Policy, New York University–Abu Dhabi (UAE), Abu Dhabi, United Arab Emirates

Abstract
Exposure notification apps have been developed to assist in notifying individuals of recent exposures to SARS-CoV-2. However, in several countries, such apps have had limited uptake. We assessed whether strategies to increase downloads of exposure notification apps should emphasize improving the accuracy of the apps in recording contacts and exposures, strengthening privacy protections, and/or offering financial incentives to potential users. In a discrete choice experiment with potential app users in the US, financial incentives were more than twice as important as privacy protections and app accuracy in decision-making about app downloads. The probability that a potential user would download an exposure notification app increased by 40% when a $100 reward for downloading was offered (relative to a reference scenario in which the app was free). Financial incentives might help exposure notification apps reach uptake levels that improve the effectiveness of contact tracing programs and ultimately enhance efforts to control SARS-CoV-2. Rapid, pragmatic trials of financial incentives for app downloads in real-life settings are warranted.

Introduction
Contact tracing is an important intervention to control the COVID–19 pandemic. It entails interviewing newly diagnosed persons to obtain lists of the individuals they have recently interacted with. A trained health worker then attempts to notify these “contacts” of their possible exposure, for example by phone or during a home visit. This provides an opportunity to encourage at-risk contacts to isolate and undergo testing. This traditional approach to contact tracing is, however, time- and labor-intensive [1]. It might reach a limited number of contacts. It might also reach them after they have further transmitted SARS-CoV-2 [2].

Newly developed digital tools might remedy some of these limitations of traditional contact tracing programs [3]. Exposure notification apps (“EN apps” hereafter) might alert additional contacts faster about their potential exposure. EN apps use Bluetooth technology to keep logs of contacts between app users in real time [4]. Once app users are diagnosed with (or show symptoms suggestive of) COVID–19, they can voluntarily enter this information on the app. Other users recorded in their contact logs are then instantaneously notified and encouraged to self-isolate. Several technology firms have developed infrastructure to facilitate the deployment of EN apps, and different versions of EN apps are being used around the world. Several evaluations suggest that the roll-out of EN apps might have prevented infections and deaths in various settings [5, 6].

The impact of EN apps on contact tracing outcomes depends on adoption of the app among potential users. Even though EN apps might help reduce the spread of SARS-CoV-2 at low levels of app uptake [7], a critical mass of mobile subscribers is required to effectively interrupt transmission chains. According to some models, as many as 80% of mobile subscribers were required to use the app, in order to achieve epidemic control prior to the roll-out of vaccines [2]. However, such levels of uptake have not been reached [8].

Most approaches to improving app uptake focus on providing information about how EN apps function [9], addressing concerns about data privacy [10], and/or improving the accuracy of exposure notifications generated by an app [11–14]. In opinion surveys, the intentions of potential app users to download an EN app were particularly influenced by the app’s true positive rate (i.e., its “sensitivity”) in detecting exposures to SARS-CoV-2 [11]. Other aspects of EN apps that might influence decisions to download include technical issues such as concerns about data consumption and the availability of storage space on users’ phones [15], as well as trust in government and in the institutions implementing the EN app [16–18].

The adoption of health-related innovations such as EN apps can also be accelerated by offering financial incentives to potential users [19]. Incentives are payments that are conditional on specific actions or behaviors. Patients and at-risk populations are often responsive to financial rewards that encourage adopting healthy behaviors [20] or undergoing specific diagnostics and procedures [21]. For example, financial incentives helped increase engagement in smoking cessation programs in multiple settings and population groups [22–24]. They have been effective in increasing the uptake of HIV testing and other HIV prevention tools in at-risk populations [25–29]. They have also helped promote participation in data collection [30, 31].

The potential effects of financial incentives on the download rates of EN apps have, however, seldom been investigated since the beginning of the COVID-19 pandemic. In a recent study in Germany, financial incentives of up to 5 euros (approximately 5.60 US dollars) increased app downloads among an online panel of participants [9]. However, such small incentives might lead to “peanuts effects” [32] in other settings, where they might be perceived as trivial by potential users.

In May 2020, we conducted a discrete choice experiment (DCE) to assess multiple approaches to increasing the uptake of EN apps: improving app accuracy, strengthening privacy protections, and/or offering financial incentives to potential users. A discrete choice experiment is a survey methodology in which respondents choose between hypothetical versions of a good or service, characterized by a small number of randomly selected attributes [33]. In this DCE, we explored a broad range of potential financial incentives to download an EN app, including one-time payments of up to 100 US dollars.

Data and methods

Data source

The study was approved by the institutional review board of the Johns Hopkins Bloomberg School of Public Health. It was conducted on May 28–29, 2020. We recruited a sample of internet users who resided in the United States via the Qualtrics online panel. Qualtrics is a commercial firm that specializes in providing services to businesses and organizations, including tools for survey research. We entered into a service provider agreement with Qualtrics for the conduct of this discrete choice experiment (based on a set price per completed survey). Specifically, we used the Qualtrics online panel to recruit participants. This is a frequently used option for constituting convenience online samples, including in social science and health research [34]. The panel consists of potential respondents who have signed up to take online surveys in exchange for various incentives and gifts. We indicated our target sample size and eligibility criteria to Qualtrics, and the firm then contacted selected participants in the online panel to alert them to our study. Qualtrics also handled the compensation of participants who completed the discrete choice experiment. We do not know the exact value of the incentive/gift study participants received from Qualtrics. However, the total cost of the survey amounted to $7 per interview.

To be eligible, potential participants had to a) be aged 18–69 years, b) own a smartphone, c) read and understand English, and d) report never having tested positive for SARS-CoV-2. We excluded people aged 70 years and older because mathematical models of the potential effects of an EN app in the early phases of the COVID-19 pandemic [2] assumed that individuals in that age group would rely on other preventive measures (e.g., social distancing) to limit exposure to SARS-CoV-2. We excluded individuals with positive test results because few cases of reinfection among recovered COVID-19 patients had been documented by May 2020 [35]. We thus assumed that most internet users with positive test results would have developed (temporary) immunity to the virus that was circulating at the time of the study. Since EN apps aim to detect contacts during which SARS-CoV-2 can be transmitted, we assumed that this “recovered” group would not benefit from notifications generated by the app. Informed consent for study participation was obtained online: potential participants read the text of the consent form and clicked a checkbox noting that they understood it and agreed to participate in the study.

Experimental design

We asked respondents to make 10 choices between two hypothetical EN apps, each defined by 6 attributes (Table 1). We selected these attributes and their levels based on a review of the literature on contact tracing for COVID–19 [1], expert consultations, and descriptions of Bluetooth-based EN apps [4, 36].

Table 1. Strategies, attributes and levels included in the discrete choice experiment.

Three of these attributes were related to privacy features: (1) whether the EN app asks users to provide their phone number or email at download, (2) whether the EN app collects location data (e.g., through GPS tracking), and (3) whether users share their anonymized data with other app users or the health department. Two of the attributes concerned the accuracy of the EN app: (4) the rate at which the app makes errors in notifying users of SARS-CoV-2 exposure (“false positive rate”), and (5) the proportion of exposures to other app users who carry SARS-CoV-2 that are notified to users (“true positive rate” or sensitivity). The final attribute was (6) the price or incentive that a user might pay or receive for downloading the EN app.

We selected the different EN apps presented to respondents at random from among 810 possible app configurations resulting from combinations of app attributes (Table 1). We used a fractional factorial design to generate the packages of 10 choice sets shown to each respondent [37]. Specifically, we used a balanced overlap design, which presents respondents with different packages of choice sets [37]. In total, we created 260 versions of choice packages, which were then randomly allocated to respondents. This approach ensures that the different levels in Table 1 are extensively represented in the experiment. It does not place a disproportionate weight on some potential levels or attributes of an EN app [37].
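The construction of choice sets from a full factorial of attribute levels can be sketched as follows. This is an illustrative Python sketch, not the study's actual procedure: the attribute names and levels below are placeholders, chosen only so that the full factorial yields 810 configurations as reported above (the actual levels are listed in Table 1), and the purely random pairing only approximates a balanced overlap design.

```python
from itertools import product
import random

# Placeholder attributes and levels (illustrative only; see Table 1 for the
# study's actual attributes). 3 * 2 * 3 * 3 * 3 * 5 = 810 configurations.
attributes = {
    "contact_info": ["none", "email", "phone number"],
    "location_data": ["not collected", "collected"],
    "data_sharing": ["no one", "other users", "health department"],
    "false_positive": ["1 in 100", "5 in 100", "10 in 100"],
    "sensitivity": ["80%", "90%", "99%"],
    "price_incentive": ["-$4.99", "$0", "+$10", "+$50", "+$100"],
}

# Full factorial: every possible app configuration
configs = [dict(zip(attributes, levels)) for levels in product(*attributes.values())]

def make_choice_sets(configs, n_sets=10, seed=0):
    """Draw pairs of distinct configurations to form one respondent's package."""
    rng = random.Random(seed)
    return [rng.sample(configs, 2) for _ in range(n_sets)]

package = make_choice_sets(configs)  # one package of 10 paired choice sets
```

A balanced overlap design additionally balances how often levels appear and co-occur across choice sets; the random pairing above achieves that only in expectation.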

In each choice set, the two EN apps were labeled as “app 1” and “app 2”. Respondents also had the option not to download any of the proposed apps. This ‘opt-out’ option increases the external validity of discrete choice data [38]. Based on standard sample size calculations, we required 250 respondents to detect the main effects of attribute levels in Table 1 [39].

Before starting the experiment, respondents were provided with explanations about contact tracing and EN apps. In doing so, we set explicit parameters for characteristics of an EN app that were not represented in our experimental design: for example, we instructed respondents to assume that using EN apps would not count towards their monthly data usage.

We pre-tested the online survey with a few collaborators (n = 6). This allowed us to address technical issues in our survey form and to elicit initial feedback on the wording of questions and instructions. Based on feedback obtained during this pre-test, we produced a revised version of our online survey. Then, we conducted an online pilot of this revised version with 50 respondents recruited through the Qualtrics online panel. In that pilot, we encouraged participants to use open-ended fields to express their feedback on study questions and instructions. Both investigators reviewed the pilot data. We then revised the online survey to address reported issues. Respondents who completed the online pilot are not included in our analyses.

Data quality

We included an attention check [40] prior to the discrete choice experiment. Respondents who failed that attention check were excluded from completing the survey. We evaluated respondents’ understanding of the explanations about contact tracing and EN apps using six “quiz” questions, after which we provided them with feedback about the right answers.

Statistical analyses

We used data on age reported during screening to describe the selectivity of the consent process and attention checks among eligible internet users. Unfortunately, other characteristics (e.g., gender, race/ethnicity) were not assessed at the screening stage. We thus could not assess other dimensions of sample selectivity. We then described the demographic characteristics of participants who completed the discrete choice experiment, as well as their perceptions of the health threat posed by COVID–19 at the time of the study. We analyzed the discrete choice data using random parameter logit models, which allow preferences for app attributes to vary across respondents [41]. Using estimates from these models, we measured the relative importance of app attributes in explaining decisions to download an EN app [42]. We then calculated predicted probabilities to assess the potential impact of financial incentives on download rates [43]. All analyses were conducted in Stata 15.1 using the mixlogit, mixlbeta and mixlpred commands [44].
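To illustrate how part-worth estimates translate into choice probabilities and relative importance scores, here is a minimal fixed-coefficient sketch in Python. The part-worth values are hypothetical, invented for illustration; the study estimated random parameter logit models in Stata, in which these coefficients vary across respondents.

```python
import numpy as np

# Hypothetical part-worth utilities (illustration only; the study's estimates
# come from random parameter logit models, not these fixed values)
partworths = {
    "price_incentive": {"-$4.99": -0.5, "$0": 0.0, "+$50": 0.4, "+$100": 0.7},
    "sensitivity": {"80%": 0.0, "99%": 0.3},
    "location_data": {"not collected": 0.0, "collected": -0.2},
}
OPT_OUT_UTILITY = 0.0  # baseline utility of downloading neither app

def utility(app):
    """Sum of part-worths for one hypothetical app profile."""
    return sum(partworths[attr][level] for attr, level in app.items())

def choice_probs(app1, app2):
    """Logit (softmax) probabilities over {app 1, app 2, opt out}."""
    u = np.array([utility(app1), utility(app2), OPT_OUT_UTILITY])
    e = np.exp(u - u.max())  # subtract the max for numerical stability
    return e / e.sum()

def relative_importance(partworths):
    """Each attribute's share of the total utility range, a common DCE summary."""
    ranges = {a: max(lv.values()) - min(lv.values()) for a, lv in partworths.items()}
    total = sum(ranges.values())
    return {a: r / total for a, r in ranges.items()}
```

With these invented values, the price/incentive attribute spans the widest utility range and therefore receives the largest importance share, mirroring the kind of summary reported in Fig 3.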

Results
We contacted 726 internet users (Fig 1). One hundred and eight internet users immediately stated that they were not interested in learning more about the study and were not screened for eligibility. Among the 618 internet users who were screened for eligibility, 11 did not meet age requirements, 14 reported not understanding English, 38 reported that they did not own a smartphone and 45 reported recently testing positive for SARS-CoV-2. They were thus excluded from the study. In total, 510 internet users were eligible for study participation. Seventy-seven potential participants refused to provide consent (77/510, 15.1%). An additional 39 participants failed the attention check inserted in the survey and were excluded from the study at that time (39/510, 7.6%). A total of 394 participants met the eligibility criteria, consented, and passed the attention check.
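The participant flow above can be tallied as a quick consistency check; this sketch simply re-derives the counts reported in the text.

```python
# Screening flow, using the counts reported in the text
contacted = 726
not_interested = 108                      # declined before screening
screened = contacted - not_interested     # internet users screened: 618

excluded = {"age": 11, "no_english": 14, "no_smartphone": 38, "positive_test": 45}
eligible = screened - sum(excluded.values())  # eligible for participation: 510

refused_consent = 77
failed_attention = 39
completed = eligible - refused_consent - failed_attention  # final sample: 394
```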

Among eligible internet users, participation outcomes varied by age (Fig 2). Non-consent was highest among 18–24-year-olds (23/122, 18.9%) and 55–69-year-olds (19/80, 23.8%). It was lowest among 25–34-year-olds (11/131, 8.4%). Younger respondents were also more likely to be excluded from the study because they failed the attention check. For example, 16 of 122 eligible 18–24-year-olds failed the attention check (13.1%), whereas only one of the 67 eligible 45–54-year-olds (1.5%) and none of the 55–69-year-olds failed this check.

Fig 2. Age-related selectivity of study sample (n = 510).

Notes: The width of each bar represents the proportion of the study sample included in each age group.

The demographic characteristics and risk perceptions of the 394 participants are presented in Table 2. The sample was predominantly women (n = 268, 68.0%). One in 5 respondents belonged to the 18–24 age group, whereas only 3.8% were 65–69 years old. Approximately half of the respondents were non-Hispanic white (210/394, 53.3%), but other groups were also represented, including non-Hispanic Black (54/394, 13.7%) and Hispanic respondents (90/394, 22.8%). One in 3 respondents had completed college or higher, whereas 1 in 4 had a high school degree or had not completed high school. Only 22 of 394 respondents reported being “not at all worried” about COVID–19 (5.6%).

Three respondents (0.8%) did not answer any of the quiz questions about EN apps correctly, whereas 81 respondents (20.6%) answered all questions correctly (S1 Fig). Out of the 3,940 choices between hypothetical EN apps made during the experiment, study respondents selected the opt-out “no download” option 850 times (21.6%). Approximately half of respondents never selected the opt-out option (S2 Fig), whereas close to 10% selected it in every choice set.

Based on results from random parameter logit models (S3 Fig), prices/incentives were twice as important as app accuracy and privacy protections in respondents’ decision-making about app downloads (Fig 3). Variations in price/incentive accounted for more than 50% of respondents’ decision-making, whereas attributes related to the accuracy of the EN app in detecting exposures to SARS-CoV-2 accounted for approximately 25%. Attributes related to privacy features of the app accounted for less than 20% of respondents’ decisions.

Fig 3. Relative importance of attributes in explaining decision-making about app downloads (n = 394).

Notes: Attributes are grouped by strategy as in Table 1. The box plots represent the distributions of relative importance scores, across all individuals who completed the discrete choice experiment.

The predicted probability that potential users would download an EN app increased from 0.34 when the app cost $4.99 to download to 0.46 when free. The probability of downloads increased to 0.56 and 0.64 when potential users were given $50 or $100 incentives, respectively (Fig 4).

Fig 4. Predicted effects of price/incentives on decisions to download EN apps (n = 394).

Notes: We used preference estimates from random parameter logit models (S3 Fig) to predict the probability that a participant would select a given app at different levels of price/incentives. The box plots represent the distributions of these predicted probabilities across respondents.
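As a back-of-envelope check, the incentive effect can be reproduced with a binary-logit simplification calibrated to two of the predicted probabilities above. This collapses the three-alternative choice (app 1, app 2, opt out) into download vs. not, so it only approximates the fuller random parameter logit model.

```python
import math

def logit(p):
    """Log-odds of a probability."""
    return math.log(p / (1 - p))

def download_prob(incentive, alpha, beta):
    """P(download) under a binary logit with a linear incentive effect."""
    return 1 / (1 + math.exp(-(alpha + beta * incentive)))

# Calibrate to the model's predictions: 0.46 when free, 0.64 at a $100 incentive
alpha = logit(0.46)                       # baseline log-odds at $0
beta = (logit(0.64) - logit(0.46)) / 100  # log-odds change per incentive dollar

# Interpolated prediction at a $50 incentive (~0.55, close to the 0.56
# predicted by the full three-alternative model)
p50 = download_prob(50, alpha, beta)
```

That the simple interpolation lands near the full model's $50 prediction suggests the incentive effect is roughly log-linear over this range.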

Discussion
We compared the relative importance of three key strategies to accelerate uptake of exposure notification apps for COVID–19. Among these strategies, financial incentives had the largest potential effects. In the context of this survey-based discrete choice experiment, offering a $100 incentive to download an EN app resulted in a 40% increase in the probability of downloading, compared to a situation in which the app was free (from 0.46 to 0.64). Other attributes of EN apps, such as their privacy protections and accuracy in detecting exposure to SARS-CoV-2, also influenced decisions of potential users, but to a lesser extent.

Our study has several limitations. Our estimates of the importance of price/incentives in decisions to download EN apps pertain to the context of our experiment. In another experimental set-up in which no levels require potential users to pay for the EN app, the importance of prices/incentives in the decision-making process may be attenuated. We conducted the discrete choice experiment with a convenience online sample, which is not representative of the population of potential EN app users in the US. Furthermore, participation in the experiment was rewarded by an incentive or gift provided by Qualtrics. As a result, our convenience sample might have included participants who are more responsive to financial incentives than other internet users or population members. In addition, some respondents might have found the explanations about contact tracing and EN apps provided prior to the experiment difficult or very difficult to understand. However, in real-world settings, significant numbers of potential users might make decisions about downloading EN apps based on incomplete and potentially misunderstood information. In robustness tests, we repeated study analyses after excluding these groups and obtained similar estimates of attribute importance.

We provided information about certain levels and characteristics of EN apps in ways that might have been difficult for respondents to grasp. For example, we expressed the sensitivity of the EN app as percentages, rather than counts (as we used for false negatives; see Table 1). Our experiment did not present respondents with app configurations in which they did not have to share their personal data to receive exposure notifications. Such options might be appealing to potential users with strong privacy concerns. They might, however, lead to free-riding behaviors and limit the overall effectiveness of EN apps and their expected societal benefits [45, 46].

Importantly, we did not investigate interactions between attributes. For example, respondents may express stronger preferences for financial incentives when the proposed EN app is less sensitive or yields additional false notifications. Some population groups may also be more responsive to financial incentives than others. We did not investigate such interactions due to our limited sample size, and the selectivity of our study sample.

Data from discrete choice experiments might also be affected by hypothetical bias: what people indicate they would do during an experiment might differ from the choices they will make in real-life conditions [47]. In an online experiment in Germany, for example, financial incentives generated an increase in the uptake of EN apps that was smaller than the increase in willingness to download such apps reported by participants in hypothetical survey questions [9].

Finally, our study focused solely on the one-time decision to download an EN app, even though the effects of EN apps on epidemic dynamics depend on longer-term commitments to use these apps and their functionalities [2, 6]. Financial incentives and other rewards might need to be offered repeatedly to EN app users to sustain use of an EN app over time and/or to encourage repeated interactions with the app (e.g., reporting symptoms or test results). In other areas of health (e.g., HIV treatment, behaviors related to non-communicable diseases), however, financial incentives have had mixed effects in sustaining healthy behaviors over long periods of time [4850].

Additional research is also needed, as the optimal ways to deliver financial incentives for EN apps remain unclear. This includes, for example, refining the amount of payments to offer to potential users. Incentive schemes should avoid “peanuts effects” [32], which emerge when amounts are set too low to motivate potential users. Given the expected societal benefits of COVID–19 control (e.g., economic recovery), high financial incentives (e.g., $100 or more) might remain cost-effective and should be considered. However, incentive amounts should not be set so high that potential users with financial needs effectively cannot decline to download the app [51]. If designed within a health equity framework [3], incentive schemes could also help alleviate (some of) the major disparities in the burden of COVID–19 documented in the US and elsewhere [52]. The modalities through which incentives are paid to potential users (e.g., gift cards, tax credits) might also modify the effects of such incentives on the uptake of EN apps. In an experiment in Italy, potential blood donors were reluctant to accept cash payments for a donation, but frequently accepted vouchers of the same amount [53].

There is also a risk that providing financial incentives to download EN apps might “crowd out” more altruistic reasons for using EN apps, attract malicious users, and ultimately limit the adoption of EN apps and their effectiveness. Such negative effects of financial incentives have been observed for several prosocial behaviors [54], i.e., behaviors that primarily benefit others or society as a whole, rather than the person adopting the behavior, who might even incur a cost [55]. In the context of blood donations, for instance, financial incentives did not seem to crowd out more altruistic motives for giving blood; they have thus been recommended as an important tool to avoid shortages in the blood supply [56]. In the context of charitable donations, on the other hand, thank-you gifts provided to donors reduced donation rates, probably because they made altruistic motives to donate less salient in the decision-making process of potential donors [54]. The potential for such unintended negative effects to affect the provision of financial incentives for downloading EN apps should be investigated.

Finally, our discrete choice experiment, and other trials of financial incentives for downloading EN apps [9], were conducted prior to the large-scale roll-out of vaccines against SARS-CoV-2/COVID-19, and the emergence of more transmissible forms of the virus. Many high-income countries have now also launched digital tools to monitor vaccination uptake and/or have adopted vaccination requirements to access various settings and places. Such new tools and risks might have modified attitudes and preferences towards EN apps. Future investigations of incentives to use EN apps for COVID-19 control should thus be carried out in populations with diverse vaccination status.

Conclusion
The effectiveness of EN apps depends on reaching a critical mass of adopters in a reasonable time frame. It is unlikely, however, that heightened privacy protections and improved accuracy of EN apps will be sufficient to achieve the levels of app uptake required to affect epidemic dynamics. Our work indicates that financial incentives to download might have large effects on the rate at which EN apps are adopted in populations affected by COVID–19. Rapid, pragmatic trials investigating the complex effects of financial incentives in real-life settings are now needed. If effective, such incentives might help EN apps reach uptake levels that improve the effectiveness of contact tracing programs and ultimately help increase the likelihood of controlling SARS-CoV-2.

Supporting information

S1 Fig. Distribution of answers to quiz questions to measure understanding of EN apps and instructions (n = 394).

Notes: respondents were asked 6 questions to elicit their understanding. These questions were asked before the beginning of the discrete choice experiment.


S2 Fig. Selection of the opt-out “No download” option during the discrete choice experiment (n = 394).


S3 Fig. Distributions of individual preferences for app attributes (n = 394).

Notes: The plots represent distributions of individual estimates of preferences for attributes obtained from a random parameter logit model. They were computed using the mixlogit and mixlbeta commands in Stata, with 500 Halton draws. Estimates above the red dotted line indicate positive effects of attributes on respondents’ utility levels. Acronyms and abbreviations: DOH = Department of Health; Status refers to the COVID–19 status of the user, as determined by test results and/or reported symptoms.


References
  1. 1. Watson C, Cicero A, James B, Michael F. A National Plan to Enable Comprehensive COVID-19 Case Finding and Contact Tracing in the US. 2020. Available:
  2. 2. Ferretti L, Wymant C, Kendall M, Zhao L, Nurtay A, Abeler-Dörner L, et al. Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing. Science. 2020 [cited 17 Apr 2020]. pmid:32234805
  3. 3. Ivers LC, Weitzner DJ. Can digital contact tracing make up for lost time? The Lancet Public Health. 2020;0. pmid:32682488
  4. 4. Troncoso C, Payer M, Hubaux J-P, Salathé M, Larus J, Bugnion E, et al. Decentralized Privacy-Preserving Proximity Tracing. arXiv:200512273 [cs]. 2020 [cited 31 May 2020]. Available:
  5. 5. Abueg M, Hinch R, Wu N, Liu L, Probert W, Wu A, et al. Modeling the effect of exposure notification and non-pharmaceutical interventions on COVID-19 transmission in Washington state. npj Digit Med. 2021;4: 1–10. pmid:33398041
  6. 6. Colizza V, Grill E, Mikolajczyk R, Cattuto C, Kucharski A, Riley S, et al. Time to evaluate COVID-19 contact-tracing apps. Nat Med. 2021;27: 361–362. pmid:33589822
  7. 7. Kretzschmar ME, Rozhnova G, Bootsma MCJ, Boven M van, Wijgert JHHM van de, Bonten MJM. Impact of delays on effectiveness of contact tracing strategies for COVID-19: a modelling study. The Lancet Public Health. 2020;0. pmid:32682487
  8. 8. Currie D, Peng C, Lyle D, Jameson B, Frommer M. Stemming the flow: how much can the Australian smartphone app help to control COVID-19? Public Health Res Pract. 2020;30. pmid:32601652
  9. 9. Munzert S, Selb P, Gohdes A, Stoetzer LF, Lowe W. Tracking and promoting the usage of a COVID-19 contact tracing app. Nat Hum Behav. 2021;5: 247–255. pmid:33479505
  10. 10. Bengio Y, Janda R, Yu YW, Ippolito D, Jarvie M, Pilat D, et al. The need for privacy with public digital contact tracing during the COVID-19 pandemic. The Lancet Digital Health. 2020;0. pmid:32835192
  11. 11. Kaptchuk G, Goldstein DG, Hargittai E, Hofman J, Redmiles EM. How good is good enough for COVID19 apps? The influence of benefits, accuracy, and privacy on willingness to adopt. arXiv:200504343 [cs]. 2020 [cited 30 May 2020]. Available:
  12. 12. Hargittai E, Redmiles E. Will Americans Be Willing to Install COVID-19 Tracking Apps? In: Scientific American Blog Network [Internet]. 2020 [cited 25 Jul 2020]. Available:
  13. 13. Redmiles EM. User Concerns & Tradeoffs in Technology-Facilitated Contact Tracing. arXiv:200413219 [cs]. 2020 [cited 30 May 2020]. Available:
  14. 14. Williams SN, Armitage CJ, Tampe T, Dienes K. Public attitudes towards COVID-19 contact tracing apps: A UK-based focus group study. medRxiv. 2020; 2020.05.14.20102269.
  15. 15. Zhang B, Kreps S, McMurry N. Americans’ perceptions of privacy and surveillance in the COVID-19 Pandemic. 2020 [cited 31 May 2020].
  16. 16. Wyl V von, Höglinger M, Sieber C, Kaufmann M, Moser A, Serra-Burriel M, et al. Drivers of Acceptance of COVID-19 Proximity Tracing Apps in Switzerland: Panel Survey Analysis. JMIR Public Health and Surveillance. 2021;7: e25701. pmid:33326411
17. Blom AG, Wenz A, Cornesse C, Rettig T, Fikel M, Friedel S, et al. Barriers to the Large-Scale Adoption of a COVID-19 Contact Tracing App in Germany: Survey Study. J Med Internet Res. 2021;23: e23362. pmid:33577466
18. Touzani R, Schultz E, Holmes SM, Vandentorren S, Arwidson P, Guillemin F, et al. Early Acceptability of a Mobile App for Contact Tracing During the COVID-19 Pandemic in France: National Web-Based Survey. JMIR mHealth and uHealth. 2021;9: e27768. pmid:34086589
19. Vlaev I, King D, Darzi A, Dolan P. Changing health behaviors using financial incentives: a review from behavioral economics. BMC Public Health. 2019;19: 1059. pmid:31391010
20. Cahill K, Hartmann-Boyce J, Perera R. Incentives for smoking cessation. Cochrane Database of Systematic Reviews. 2015 [cited 21 Jun 2020]. pmid:25983287
21. Yotebieng M, Thirumurthy H, Moracco KE, Kawende B, Chalachala JL, Wenzi LK, et al. Conditional cash transfers and uptake of and retention in prevention of mother-to-child HIV transmission care: a randomised controlled trial. The Lancet HIV. 2016;3: e85–e93. pmid:26847230
22. Tappin D, Bauld L, Purves D, Boyd K, Sinclair L, MacAskill S, et al. Financial incentives for smoking cessation in pregnancy: randomised controlled trial. BMJ. 2015;350: h134. pmid:25627664
23. Volpp KG, Troxel AB, Pauly MV, Glick HA, Puig A, Asch DA, et al. A Randomized, Controlled Trial of Financial Incentives for Smoking Cessation. New England Journal of Medicine. 2009;360: 699–709. pmid:19213683
24. Businelle MS, Kendzor DE, Kesh A, Cuate EL, Poonawalla IB, Reitzel LR, et al. Small financial incentives increase smoking cessation in homeless smokers: A pilot study. Addictive Behaviors. 2014;39: 717–720. pmid:24321696
25. Lee R, Cui RR, Muessig KE, Thirumurthy H, Tucker JD. Incentivizing HIV/STI Testing: A Systematic Review of the Literature. AIDS Behav. 2014;18: 905–912. pmid:24068389
26. McCoy SI, Shiu K, Martz TE, Smith CD, Mattox L, Gluth DR, et al. Improving the Efficiency of HIV Testing With Peer Recruitment, Financial Incentives, and the Involvement of Persons Living With HIV Infection. JAIDS Journal of Acquired Immune Deficiency Syndromes. 2013;63: e56. pmid:23403860
27. Njuguna IN, Wagner AD, Neary J, Omondi VO, Otieno VA, Orimba A, et al. Financial incentives to increase pediatric HIV testing: a randomized trial. AIDS. 2021;35: 125–130. pmid:33048877
28. Thirumurthy H, Masters SH, Rao S, Bronson MA, Lanham M, Omanga E, et al. Effect of Providing Conditional Economic Compensation on Uptake of Voluntary Medical Male Circumcision in Kenya: A Randomized Clinical Trial. JAMA. 2014;312: 703–711. pmid:25042290
29. Choko AT, Corbett EL, Stallard N, Maheswaran H, Lepine A, Johnson CC, et al. HIV self-testing alone or with additional interventions, including financial incentives, and linkage to care or prevention among male partners of antenatal care clinic attendees in Malawi: An adaptive multi-arm, multi-stage cluster randomised trial. PLOS Medicine. 2019;16: e1002719. pmid:30601823
30. Gibson DG, Wosu AC, Pariyo GW, Ahmed S, Ali J, Labrique AB, et al. Effect of airtime incentives on response and cooperation rates in non-communicable disease interactive voice response surveys: randomised controlled trials in Bangladesh and Uganda. BMJ Global Health. 2019;4: e001604. pmid:31565406
31. Ulrich CM, Danis M, Koziol D, Garrett-Mayer E, Hubbard R, Grady C. Does It Pay to Pay? A Randomized Trial of Prepaid Financial Incentives and Lottery Incentives in Surveys of Nonphysician Healthcare Professionals. Nursing Research. 2005;54: 178–183. pmid:15897793
32. Thirumurthy H, Asch DA, Volpp KG. The Uncertain Effect of Financial Incentives to Improve Health Behaviors. JAMA. 2019;321: 1451–1452. pmid:30907936
33. Ryan M. Discrete choice experiments in health care. BMJ. 2004;328: 360–361. pmid:14962852
34. Boas TC, Christenson DP, Glick DM. Recruiting large online samples in the United States and India: Facebook, Mechanical Turk, and Qualtrics. Political Science Research and Methods. 2020;8: 232–250.
35. SeyedAlinaghi S, Oliaei S, Kianzad S, Afsahi AM, MohsseniPour M, Barzegary A, et al. Reinfection risk of novel coronavirus (COVID-19): A systematic review of current evidence. World J Virol. 2020;9: 79–90. pmid:33363000
36. Yasaka TM, Lehrich BM, Sahyouni R. Peer-to-Peer Contact Tracing: Development of a Privacy-Preserving Smartphone App. JMIR Mhealth Uhealth. 2020;8: e18936. pmid:32240973
37. Reed Johnson F, Lancsar E, Marshall D, Kilambi V, Mühlbacher A, Regier DA, et al. Constructing Experimental Designs for Discrete-Choice Experiments: Report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force. Value in Health. 2013;16: 3–13. pmid:23337210
38. Campbell D, Erdem S. Including Opt-Out Options in Discrete Choice Experiments: Issues to Consider. Patient. 2019;12: 1–14. pmid:30073482
39. de Bekker-Grob EW, Donkers B, Jonker MF, Stolk EA. Sample Size Requirements for Discrete-Choice Experiments in Healthcare: a Practical Guide. Patient. 2015;8: 373–384. pmid:25726010
40. Shamon H, Berning CC. Attention Check Items and Instructions in Online Surveys with Incentivized and Non-Incentivized Samples: Boon or Bane for Data Quality? Survey Research Methods. 2020;14: 55–77.
41. Revelt D, Train K. Mixed Logit with Repeated Choices: Households’ Choices of Appliance Efficiency Level. The Review of Economics and Statistics. 1998;80: 647–657.
42. Hauber AB, González JM, Groothuis-Oudshoorn CGM, Prior T, Marshall DA, Cunningham C, et al. Statistical Methods for the Analysis of Discrete Choice Experiments: A Report of the ISPOR Conjoint Analysis Good Research Practices Task Force. Value Health. 2016;19: 300–315. pmid:27325321
43. Lancsar E, Fiebig DG, Hole AR. Discrete Choice Experiments: A Guide to Model Specification, Estimation and Software. PharmacoEconomics. 2017;35: 697–716. pmid:28374325
44. Hole AR. Fitting Mixed Logit Models by Using Maximum Simulated Likelihood. The Stata Journal. 2007;7: 388–401.
45. Sweeney JW. An experimental investigation of the free-rider problem. Social Science Research. 1973;2: 277–292.
46. Kim O, Walker M. The free rider problem: Experimental evidence. Public Choice. 1984;43: 3–24.
47. Buckell J, Hess S. Stubbing out hypothetical bias: improving tobacco market predictions by combining stated and revealed preference data. Journal of Health Economics. 2019;65: 93–102. pmid:30986747
48. Cawley J, Price JA. A case study of a workplace wellness program that offers financial incentives for weight loss. Journal of Health Economics. 2013;32: 794–803. pmid:23787373
49. John LK, Loewenstein G, Troxel AB, Norton L, Fassbender JE, Volpp KG. Financial Incentives for Extended Weight Loss: A Randomized, Controlled Trial. J GEN INTERN MED. 2011;26: 621–626. pmid:21249462
50. Patel MS, Asch DA, Troxel AB, Fletcher M, Osman-Koss R, Brady J, et al. Premium-Based Financial Incentives Did Not Promote Workplace Weight Loss In A 2013–15 Study. Health Aff (Millwood). 2016;35: 71–79. pmid:26733703
51. Kahn J. Digital Contact Tracing for Pandemic Response: Ethics and Governance Guidance. Johns Hopkins University Press; 2020.
52. Hooper MW, Nápoles AM, Pérez-Stable EJ. COVID-19 and Racial/Ethnic Disparities. JAMA. 2020;323: 2466–2467. pmid:32391864
53. Lacetera N, Macis M. Do all material incentives for pro-social activities backfire? The response to cash and non-cash incentives for blood donations. Journal of Economic Psychology. 2010;31: 738–748.
54. Chao M. Demotivating incentives and motivation crowding out in charitable giving. PNAS. 2017;114: 7301–7306. pmid:28655844
55. Bénabou R, Tirole J. Incentives and Prosocial Behavior. American Economic Review. 2006;96: 1652–1678.
56. Lacetera N, Macis M, Slonim R. Economic Rewards to Motivate Blood Donations. Science. 2013;340: 927–928. pmid:23704557