Abstract
Introduction
Automated mobile phone surveys (MPS) can be used to collect public health data of various types to inform health policy and programs globally. One challenge in administering MPS is identification of an appropriate and effective participant consent process. This study investigated the impact of different survey consent approaches on participant disposition (response characteristics and understanding of the purpose of the survey) within the context of an MPS that measured noncommunicable disease (NCD) risk factors across Colombia and Uganda.
Methods
Participants were randomized to one of five consent approaches, with consent modules varying by the consent disclosure and mode of authorization. The control arm consisted of a standard consent disclosure and a combined opt-in/opt-out mode of authorization. The other four arms consisted of a modified consent disclosure and one of four different forms of authorization (i.e., opt-in, opt-out, combined opt-in/opt-out, or implied). Data related to respondent disposition and respondent understanding of the survey purpose were analyzed.
Results
Among 1889 completed surveys in Colombia, differences in contact, response, refusal, and cooperation rates by study arms were found. About 68% of respondents correctly identified the survey purpose, with no significant difference by study arm. Participants reporting higher levels of education and urban residency were more likely to identify the purpose correctly. Participants were also more likely to accurately identify the survey purpose after completing several survey modules, compared to immediately following the consent disclosure (78.8% vs 54.2% correct, p<0.001). In Uganda, 1890 completed surveys were collected. Though there were differences in contact, refusal, and cooperation rates by study arm, response rates were similar across arms. About 37% of respondents identified the survey purpose correctly, with no difference by arm. Those with higher levels of education and who completed the survey in English were able to more accurately identify the survey purpose. Again, participants were more likely to accurately identify the purpose of the survey after completing several NCD modules, compared to immediately following the consent module (42.0% vs 32.2% correct, p = 0.013).
Conclusion
This study contributes to the limited available evidence regarding consent procedures for automated MPS. Future studies should develop and trial additional interventions to enhance consent for automated public health surveys, and measure other dimensions of participant engagement and understanding.
Citation: Ali J, Nagarajan M, Mwaka ES, Rutebemberwa E, Vecino-Ortiz AI, Quintero AT, et al. (2022) Remote consent approaches for mobile phone surveys of non-communicable disease risk factors in Colombia and Uganda: A randomized study. PLoS ONE 17(12): e0279236. https://doi.org/10.1371/journal.pone.0279236
Editor: Tarik A. Rashid, University of Kurdistan Hewler, IRAQ
Received: February 3, 2022; Accepted: December 3, 2022; Published: December 21, 2022
Copyright: © 2022 Ali et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All data files are available from the ICPSR database (https://deposit.icpsr.umich.edu/deposit/workspace?goToPath=/ddf/167023&goToLevel=project). Researchers can access the data set from the ICPSR link provided after creating an account.
Funding: This study was made possible with the support of Bloomberg Philanthropies (https://www.bloomberg.org) and the people of Australia through the Department of Foreign Affairs and Trade (https://www.dfat.gov.au/) through award number 119668. The contents are the responsibility of the authors and do not necessarily reflect the views of Bloomberg Philanthropies or the Government of Australia. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
As has been well documented, low- and middle-income countries (LMICs) are experiencing major shifts in disease burdens, with significant increases in the prevalence of noncommunicable diseases (NCDs) [1]. The paucity and cost of acquiring timely data on the underlying prevalence of NCDs in many countries makes the impact of any public health intervention difficult to evaluate, especially where target populations are relatively hard to reach [2, 3]. Evidence-based policy development to advance NCD control is a key goal, and effective approaches need to be advanced if countries are to meet local and global targets [4].
Given the widespread access to mobile phones globally and rapidly expanding capabilities of digital devices, mobile technology is increasingly finding a place in monitoring communicable and noncommunicable diseases, and their associated risk factors [5–7]. Simple mobile phone surveys (MPS) deployed through live operator administered interviews, pre-recorded automated interactive voice response (IVR) surveys, or automated text messaging have recently been trialed to facilitate large-scale population monitoring of NCD risk factors in Colombia, Uganda, and other countries [8–12]. What makes these efforts particularly attractive to policymakers, public health practitioners, and researchers is their relatively low cost, minimal technological requirements, and potential to cut data collection and processing times down from year(s) to weeks [5, 7].
While mobile phone-based NCD risk factor monitoring presents opportunities for the advancement of global public health surveillance, emerging practices ought to align with and inform the evolution of existing norms and requirements for ethics and governance of public health surveillance and research [9, 12, 13]. Unfortunately, little empirical evidence exists to advance thoughtful application of these requirements to the novel terrain of MPS. Studies are needed to advance understanding of a variety of data-related considerations–e.g., data privacy, security, ownership, access and use–and also to support careful development and implementation of consent processes that promote adequate disclosure, sufficient depth of comprehension, and authentic remote authorization during MPS [12, 14–16]. Systematic assessment of these and other ethics and regulatory considerations is critical to the stewardship of a foundation of trust upon which much of global digital health rests.
Recent qualitative and conceptual research related to consent for MPS suggests a challenge with identifying and prioritizing key information during very concise consent disclosures, a potential for inadequate respondent understanding, and a risk of uninformed “click-through” authorization with remote consent and automated disclosure processes [9, 12, 17]. Remote consent is defined as a consent process with no in-person components [18], while automated disclosure refers to use of only pre-recorded informational messages during the process of consent. Experimental research focused on optimization of consent processes for large-scale MPS to monitor disease risk factors is needed to advance remote and automated forms of consent in digital health. Such empirical efforts can build on and complement the relatively large existing body of interventional consent research, which has been conducted primarily in non-digital health contexts [14–16].
In order to advance empirical understanding of consent for remote and automated health surveys in LMICs, we conducted a randomized study comparing different approaches to the consent process within a large automated IVR survey of NCD risk factors in two countries, Colombia and Uganda. The present study was implemented in both countries because of previous and ongoing collaborative MPS efforts in each targeting NCD risk factors, and to explore our research questions in two sociodemographically, culturally, and geographically distinct contexts. We comparatively tested the impacts of modified consent approaches on respondent disposition (e.g., response and completion rates) and participants’ understanding of the purpose of the mobile phone survey, a simple but important indicator of basic survey understanding.
Methods
Study design
We conducted a randomized controlled multifactor trial that tested variations to the consent process for an MPS designed to collect self-reported NCD risk factor data. The risk factors studied related to tobacco use, alcohol intake, diet, physical activity, and accessing screening for selected NCDs. We investigated the influence of i) the disclosure language used to convey survey consent information, and ii) the mode of authorization (i.e., opt-in, opt-out, combined opt-in/opt-out, or implied) on respondents, primarily by measuring response and cooperation rates as well as participant understanding of the purpose of the survey.
Participants were randomized to one of five study arms. In each country, the control arm consisted of a standard consent disclosure and a combined opt-in/opt-out mode of authorization, as was developed for a parent study [8]. The other four arms consisted of a modified consent disclosure and one of the four different modes of authorization described above (Fig 1, also described in the Interventions section).
The study was registered with ClinicalTrials.gov [19] and received ethics clearance from the Johns Hopkins University Bloomberg School of Public Health, USA; Public Health Institute of Pontificia Universidad Javeriana, Colombia; Makerere University School of Public Health, Uganda; and the Uganda National Council for Science and Technology, Uganda.
Interventions
The first type of intervention was a modification to the language used to disclose information about the survey to potential respondents. In order to develop the modified disclosures, standard consent disclosure language used in the parent study [8] was augmented following a formative, qualitative phase to elicit consent preferences and expectations of mHealth researchers, ethicists, and members of the public in Colombia and Uganda [12, 20, 21]. Wording of standard and modified disclosures is provided in S1 Table.
The second intervention randomly varied the way in which respondents were requested to signal their interest or refusal to complete a survey using their mobile phone keypad. The modes of authorization included both active and passive modalities: (i) opting in (“press 1 if you would like to complete the survey”); (ii) opting out (“press 3 if you do not want to complete the survey”); (iii) a combination of opting in and opting out (“press 1 if you would like to complete the survey or 3 if you do not want to complete it”); and (iv) implied authorization (“by completing this survey you agree to participate”), where participants did not have to press a keypad button. The control arm paired the standard introduction with the combined opt-in/opt-out mode, while the other four arms paired the modified introduction with one of the four above-mentioned modes of authorization (S1 Table). Potential participants who were age-eligible were provided an audio consent disclosure statement and were requested to authorize their participation as specified for each study arm. We also tested the timing of the survey purpose understanding question, with some respondents randomized to receive the item early in the survey and others near the end (S2 Table). In all instances, a potential respondent could refuse a survey by not picking up the phone or by hanging up.
Participants
Study participants were sampled through the use of random digit dialing (RDD) [8]. A pseudo-random number generator was used to produce random numbers of seven-digit length, prefixed with (randomly generated) mobile network operator-specific three-digit phone codes and country codes for Colombia (57) and Uganda (256). The RDD was executed through the IVR platform. In Colombia, 297,989 RDD calls were made from 14 November 2018 until 25 January 2019, resulting in 1,889 completed surveys across all five arms (Fig 2A). Between 31 August 2018 and 1 October 2018, a total of 129,187 calls were made in Uganda, with 1,890 completed surveys across all arms (Fig 2B).
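As an illustration, the RDD sampling step described above can be sketched as follows. This is a minimal reconstruction, not the platform's actual implementation, and the operator prefixes shown are placeholders rather than real Colombian network codes:

```python
import random

def generate_rdd_number(country_code, operator_prefixes, rng=random):
    """Build one random-digit-dial number: country code, a randomly
    chosen operator-specific prefix, and a pseudo-random 7-digit
    subscriber part, mirroring the sampling scheme described above."""
    prefix = rng.choice(operator_prefixes)
    subscriber = "".join(str(rng.randrange(10)) for _ in range(7))
    return f"+{country_code}{prefix}{subscriber}"

# Placeholder prefixes for illustration only (not real operator codes).
rng = random.Random(42)
number = generate_rdd_number("57", ["300", "310", "320"], rng)
```

Each call to `generate_rdd_number` yields a dialable candidate number; in practice the IVR platform generated and dialed these in bulk.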
a. CONSORT diagram of interview outcomes among participants in Colombia. b. CONSORT diagram of interview outcomes among participants in Uganda.
Individuals who received a phone call from the IVR system were screened for age-based eligibility (i.e., ≥18 years-old). Age-eligibility was determined based on self-report; participants were instructed to use the keypad to indicate their age (i.e., “Are you 18 years or older? If YES, Press 1. If No, Press 3”). Age-eligible participants were provided with information about the survey and invited to participate. Eligibility- and consent-screened participants were then presented with the remainder of the IVR survey.
Randomization and masking
Randomization was done in the IVR platform in a 1:1:1:1:1 allocation ratio across the five study arms. Participants were automatically randomized to different study arms after choosing a preferred survey language and were unaware of their study group allocation. Data cleaning was done by researchers blinded to the allocation of study participants; analyses were conducted unblinded.
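A minimal sketch of the 1:1:1:1:1 allocation, assuming simple (unstratified) random assignment as described; the actual allocation logic inside the IVR platform is not published:

```python
import random

ARMS = (1, 2, 3, 4, 5)  # control arm plus four intervention arms

def allocate_arm(rng=random):
    """Assign one respondent to a study arm with equal probability,
    mirroring the automatic allocation performed after language choice."""
    return rng.choice(ARMS)

# Over many respondents, arm sizes should be approximately equal.
rng = random.Random(0)
counts = {arm: 0 for arm in ARMS}
for _ in range(10000):
    counts[allocate_arm(rng)] += 1
```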
Study procedures and instruments
The study procedures for Colombia and Uganda were comparable but were operationalized using different available IVR survey delivery platforms, which introduced minor variations in delivery. Specific differences in procedure are discussed where applicable.
Participants who picked up a phone call were asked to indicate their language preference through a numeric response on the keypad. In Colombia, the survey was administered only in Spanish; in Uganda, the languages available were Luganda, Luo, Runyakitara, and English. In both countries, the languages offered were commonly understood across a large portion of the population. Survey questions were administered through delivery of several modules: i) language selection (as applicable), ii) survey introduction, iii) screening and authorization, iv) demographics, v) NCD risk factors (five modules), and vi) survey understanding measure. The order of administration of the different NCD modules was randomized to minimize bias due to mid-survey drop-off, while keeping the question order within modules constant and preserving skip patterns. The survey understanding measure was also randomly assigned to one of six positions within the survey in Uganda, and to either the beginning or end of the NCD module in Colombia. This variation in assignment across countries reflected the different capabilities of the survey delivery platforms.
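The module-order randomization can be sketched as below. The module names are shorthand for the five NCD risk-factor modules listed in the Methods, not identifiers from the actual survey platforms:

```python
import random

NCD_MODULES = ["tobacco", "alcohol", "diet", "physical_activity", "screening"]

def module_order(rng=random):
    """Return a per-respondent delivery order for the NCD modules.
    Shuffling module order spreads mid-survey drop-off evenly across
    modules; question order within each module is left unchanged."""
    order = list(NCD_MODULES)
    rng.shuffle(order)
    return order

order = module_order(random.Random(7))
```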
The items in each NCD module were based on standardized questions from widely used surveys, chosen after expert review and adapted to the country context through multiple rounds of testing and feedback. The survey understanding item was adapted from previous consent understanding studies [22–24], with some country-specific variation to accommodate local linguistic (semantic and syntactic) requirements (S2 Table). Translated text was recorded, back-translated, and audio files of the questionnaire were tested at country-level prior to survey deployment to ensure that the translations and recordings were comprehensible.
The IVR surveys were deployed between 08:00 AM and 08:00 PM local time, and a single contact attempt was made with each randomly generated number. Survey participants could choose to repeat questions through key presses on their mobile phones as they moved through the survey. Participants did not incur charges for the airtime taken to complete the survey and were informed of this; those who completed the IVR survey received a small airtime incentive delivered through their cellular provider.
Outcomes
The study had three primary outcomes: response rate, cooperation rate, and understanding of the purpose of the survey. Secondary outcomes were contact rate and refusal rate. Disposition-related survey outcomes were categorized using standard definitions from the American Association for Public Opinion Research (AAPOR). The calculation methods (i.e., numerators, denominators, and equations) for the contact, response, refusal, and cooperation rates are presented in S3 Table [25]. To capture respondent understanding of the survey purpose, participants had to select one of the following four categories indicating what they perceived as the purpose: to improve understanding of hospital services, to improve understanding of community health, to develop a new medicine, or don’t know. We considered ‘to improve understanding of community health’ as the correct response.
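As a simplified sketch of two of the disposition rates: the study's exact AAPOR numerators and denominators are given in S3 Table, and the formulas below are common simplified variants rather than necessarily the ones used:

```python
def cooperation_rate(completes, partials, refusals):
    """Completed interviews over all units actually contacted
    (a simplified AAPOR-style cooperation rate)."""
    return completes / (completes + partials + refusals)

def refusal_rate(refusals, eligible):
    """Refusals over all (estimated) eligible numbers dialed
    (a simplified AAPOR-style refusal rate)."""
    return refusals / eligible

# Hypothetical disposition counts for illustration only.
rate = cooperation_rate(completes=380, partials=20, refusals=100)
```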
Statistical analysis
We used identical assumptions to calculate the required sample size in both countries. To ensure adequate power, conservative assumptions (i.e., those producing the largest required sample size) were used: a cooperation rate of 30%, a type I error (i.e., alpha) of 0.05, and 80% power. To detect a minimum absolute difference of 10%, a total of 376 participants were required in each arm in each country. We did not inflate sample sizes for multiple comparisons, as recommended in Rothman [26].
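The stated requirement of 376 per arm is consistent with a two-sided two-proportion comparison with Fleiss's continuity correction; since the exact formula is not named in the text, the sketch below is a plausible reconstruction under those assumptions:

```python
from math import sqrt, ceil

def n_per_arm(p1, p2):
    """Per-arm sample size to detect p1 vs p2 at two-sided alpha = 0.05
    and 80% power, with Fleiss's continuity correction."""
    z_a, z_b = 1.959964, 0.841621  # normal quantiles for alpha/2 and power
    pbar = (p1 + p2) / 2
    d = abs(p1 - p2)
    # Uncorrected two-proportion sample size
    n = (z_a * sqrt(2 * pbar * (1 - pbar))
         + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / d ** 2
    # Fleiss continuity correction
    n_cc = n / 4 * (1 + sqrt(1 + 4 / (n * d))) ** 2
    return ceil(n_cc)

# 30% cooperation rate vs a 10-percentage-point absolute difference
required = n_per_arm(0.30, 0.40)  # → 376
```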
The survey introduction, mode of authorization, and timing of measurement were the independent variables. Participation rates (i.e., contact, response, refusal, and cooperation rates) and understanding of the survey purpose were the dependent variables. First, demographic characteristics of completed interviews were described using numbers and percentages (%) for both countries. Then, log-binomial regression was used to calculate risk ratios (RR) and corresponding 95% confidence intervals (CI) for contact, response, refusal, and cooperation rates by study arm. Proportions of survey understanding were compared across demographic groups and study arms using chi-squared tests. The effect of measurement timing on understanding of the survey’s purpose (measurement immediately following information disclosure vs later in the survey) was also explored. Lastly, unadjusted and adjusted log-binomial regression analyses were used to report the RR (with 95% CI) for the association of a correct response with study arm and sociodemographic characteristics. Adjustment was made for the following sociodemographic variables: age, gender, education, and rural-urban location of residence, as these may confound the association between the interventions and the outcomes. Significance in the log-binomial regressions was assessed using Z-tests. Analyses were done with Stata/SE (version 14.1; Stata Corp, College Station, TX, USA) [27]. An alpha of 0.05 was assumed for all tests of statistical significance.
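The adjusted analyses used log-binomial regression in Stata; as a minimal unadjusted illustration of the reported effect measure, a risk ratio with a Wald-type CI on the log scale can be computed as follows (the counts are hypothetical, not figures from the study):

```python
from math import exp, log, sqrt

def risk_ratio(events_a, n_a, events_ref, n_ref, z=1.96):
    """Unadjusted risk ratio of group A vs the reference group,
    with a Wald-type 95% CI computed on the log scale."""
    rr = (events_a / n_a) / (events_ref / n_ref)
    se = sqrt(1 / events_a - 1 / n_a + 1 / events_ref - 1 / n_ref)
    lo, hi = exp(log(rr) - z * se), exp(log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts for illustration: 30/100 events vs 60/100 events.
rr, lo, hi = risk_ratio(30, 100, 60, 100)
```

Note that this reproduces only the unadjusted comparison; the covariate-adjusted RRs in the paper come from the fitted regression model.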
Inclusivity in global research
Additional information regarding the ethical, cultural, and scientific considerations specific to inclusivity in global research is included in the (S1 Checklist).
Results
Survey and respondent characteristics
In Colombia, demographic and socioeconomic characteristics of the completed surveys were similarly distributed across study arms (Table 1). There was a roughly even distribution between male and female participants, and about half of participants were between 18–29 years old. One-third of respondents had completed secondary education, one-fourth had completed a technical education, and one-third had completed tertiary education or higher. Three-fourths of the sample consisted of urban respondents, and the survey was conducted exclusively in Spanish.
In Uganda, demographic and socioeconomic characteristics of the completed surveys were likewise similarly distributed across the study arms (Table 1). Over three-fourths of the sample was female, and about two-thirds were between 18–29 years old. One-fourth had completed only primary education, and about 40% had completed secondary education. There was almost equal representation of rural and urban dwellers. Over half completed the survey in Luganda and about one-fourth in Runyakitara, with the remainder in English and Luo.
Survey disposition according to study arm
In Colombia, the cooperation rates were 52.08% in the control arm (Arm 1), 51.55% in Arm 2, 51.87% in Arm 3, 39.14% in Arm 4, and 48.85% in Arm 5 (Fig 1, Table 2). The cooperation rate was significantly lower only in Arm 4 (RR 0.75, 95% CI: 0.68–0.83, p<0.001) compared to the control arm. The response rates were 1.09% in the control arm, 0.92% in Arm 2, 0.98% in Arm 3, 0.68% in Arm 4, and 0.93% in Arm 5. Response rates were significantly lower in Arm 2 (RR 0.85, 95% CI: 0.75–0.96, p = 0.008), Arm 4 (RR 0.62, 95% CI: 0.55–0.70, p<0.001), and Arm 5 (RR 0.86, 95% CI: 0.76–0.97, p = 0.02) when compared to the control arm. For secondary outcomes, the refusal rate was significantly lower in Arm 3 (0.42%; RR 0.81, 95% CI: 0.68–0.97, p = 0.019) and significantly higher in Arm 4 (0.71%; RR 1.37, 95% CI: 1.17–1.59, p<0.001), in comparison to the control arm. Arms 2, 3, 4, and 5 had significantly lower contact rates than the control arm.
In Uganda, the cooperation rates were 61.34% in the control arm, 63.79% in Arm 2, 70.52% in Arm 3, 68.84% in Arm 4, and 73.31% in Arm 5 (Fig 1, Table 2). Cooperation rates were significantly higher in Arm 3 (RR 1.15, 95% CI: 1.06–1.25, p = 0.001), Arm 4 (RR 1.12, 95% CI: 1.03–1.22, p = 0.007), and Arm 5 (RR 1.20, 95% CI: 1.10–1.30, p<0.001) as compared to the control arm. There was no significant difference in response rates across the different study arms. The refusal rates were significantly lower in Arm 3 (0.36%, RR 0.61, 95% CI: 0.47–0.8, p<0.001), Arm 4 (0.38%, RR 0.64, 95% CI: 0.49–0.83, p = 0.001), and Arm 5 (0.23%, RR 0.39, 95% CI: 0.29–0.54, p<0.001) in comparison to the control arm. Arms 3 and 5 had significantly lower contact rates than the control arm.
We also reported differences in outcomes using Arm 2 as the reference category (Table 3). In Colombia, the response rate (RR: 0.73, 95% CI: 0.65–0.83, p<0.001) and cooperation rate (RR: 0.76, 95% CI: 0.68–0.84, p<0.001) were lower only in Arm 4; however, this arm had a higher refusal rate (RR: 1.51, 95% CI: 1.30–1.76, p<0.001).
In Uganda, the contact rate was significantly lower only in Arm 5 (RR: 0.88, 95% CI: 0.78–0.99, p = 0.032) (Table 3). The response rate did not differ by arm. The refusal rate was significantly lower in Arm 3 (RR: 0.74, 95% CI: 0.56–0.98, p = 0.038) and Arm 5 (RR: 0.48, 95% CI: 0.56–0.98, p<0.001). These two arms also had significantly higher cooperation rates.
Disposition codes by intervention arms for both countries are also reported in full in S4 Table.
Respondent understanding of survey purpose
In Colombia, overall, 67.82% of survey respondents correctly identified the purpose of the survey (95% CI: 65.64–69.93), with no significant difference by study arm, age, or sex (Table 4, Fig 3A). The proportion that correctly identified the survey purpose increased with educational attainment (50.00% none, 57.93% primary, 65.88% secondary, 70.02% technical, 73.76% tertiary, p<0.001) (Table 4, Fig 4A). A significantly higher proportion of urban respondents (p = 0.003) correctly identified the survey purpose (69.24%), compared to rural respondents (63.90%). A significantly higher proportion (p<0.001) of survey respondents correctly identified the survey purpose after having completed a majority of the survey (78.84%, 95% CI: 76.20–81.26) as compared to respondents who were asked about the survey purpose shortly after consent (54.24%, 95% CI: 50.79–57.64) (Fig 3C).
In Uganda, overall, 37.30% of respondents correctly identified the survey purpose (95% CI: 35.14–39.51%), with no significant difference by study arm or age; however, a greater proportion of men than women correctly identified the survey purpose (38.49% vs 32.43%, p = 0.004) (Table 4, Fig 3B). Understanding also differed by urban vs rural location (p<0.001). The proportion that correctly identified the survey purpose increased with educational attainment (17.23% none, 27.05% primary, 43.41% secondary, and 53.59% tertiary, p<0.001) (Table 4, Fig 4B). There was also a significant difference (p<0.001) in correct responses by the language chosen: Luganda (n = 292, 28.85%), Luo (n = 60, 40.00%), Runyakitara (n = 181, 42.00%), or English (n = 169, 58.48%) (Table 4). A higher proportion of respondents (p = 0.014) gave the correct response after having completed a majority of the survey (41.98%, 95% CI: 36.43–47.73) than respondents who were asked about the survey purpose shortly after consent (32.22%, 95% CI: 27.17–37.73) (Fig 3D). Survey understanding by study arm is also reported in S5 Table.
We report the unadjusted and adjusted RR for the association of survey purpose understanding with study arm and sociodemographic characteristics by country in Table 5. In Colombia, although rural residence was negatively associated with understanding in the unadjusted analysis (unadjusted RR: 0.92, 95% CI: 0.85–0.99, p = 0.039), none of the factors, including study arm, remained associated in the adjusted analysis (p>0.05). In Uganda, in the adjusted analysis, participants with primary (adjusted RR: 1.55, 95% CI: 1.14–2.08, p = 0.004), secondary (adjusted RR: 2.49, 95% CI: 1.89–3.29, p<0.001), and tertiary (adjusted RR: 3.16, 95% CI: 2.39–4.18, p<0.001) education were more likely to understand the survey purpose than those without any formal education; however, study arm was not significantly associated.
Discussion
This study comparatively evaluated modifications to consent disclosure and authorization modalities for an IVR survey deployed in Colombia and Uganda to explore relationships between the consent approach and survey participation and understanding outcomes. We also sought to identify whether the timing of measuring participant understanding of the purpose of the survey (i.e., measurement immediately after consent vs. near the end of the IVR NCD survey) was associated with higher or lower levels of understanding. Forms of information disclosure (i.e., standard and modified) and modes of authorization (i.e., opt-in, opt-out, combined opt-in/opt-out, or implied) had significant effects on the primary outcomes of cooperation and response rates, with no effect on participants’ understanding of the purpose of the survey. In both countries, participant understanding was substantially higher when measured later in the survey, as compared to immediately following disclosure of information at consent. This study contributes to the limited available evidence on consent procedures when mobile devices are used to facilitate disease research and surveillance in LMICs.
Our study found associations between aspects of the consent approach and both survey disposition and respondent understanding of the purpose of the survey, with differences by country context. In Colombia, there was a clear indication against the opt-out form of authorization, as evidenced by the significantly higher refusal rate and the significantly lower response and cooperation rates. Recently introduced Colombian data protection regulations are stringent in requiring affirmative forms of authorization for data access and use [28]. These regulations likely reflect pre-existing preferences and have the added effect of raising awareness of and shaping attitudes against default forms of authorization.
In Uganda, compound modes of authorization (i.e., those that included both opt-in and opt-out modes) were associated with higher refusal rates and lower cooperation rates, raising the possibility that these modes of authorization may pose difficulties for respondents. It is unclear whether higher levels of performance across singular opt-in, opt-out, and implied forms of authorization align with stated preferences. It has been suggested elsewhere that an opt-in form of authorization for MPS might be preferable to the Ugandan population [9]. Discrepancies between behaviors and stated preferences have been noted in related areas, including in relation to digital privacy and what has been described as the “privacy paradox” [29]. Additional research is needed to further understand the observed differences in both countries in respondent disposition, as it relates to the mode of authorization.
While some information can be more difficult to convey than the purpose of a survey–for instance the risks and benefits, or the idea of random assignment–communicating the purpose of a survey is still known to be challenging [30]. Compared to Colombia, where nearly two-thirds of respondents correctly identified the survey to be related to community or population health, in Uganda, only slightly more than one-third of respondents did so. In both countries, a higher level of respondent education was associated with correct identification of the survey purpose, which is consistent with prior consent understanding studies outside the MPS context. However, Colombia generally has higher levels of educational attainment, which was reflected in our study data; in Colombia, approximately 14% reported completing no more than primary education, compared to 40% in Uganda. In addition, we observed a significant difference in the correct response based on the language chosen. The association between level of education and language, especially for English, may account for the effects of education on consent understanding.
It is particularly noteworthy that despite the consent modules in both countries explicitly mentioning the purpose of the survey, those who were asked about its purpose immediately after the consent process were less likely to correctly identify that the survey sought to advance community or population health as compared to those who were asked about the survey’s purpose at the end of the questionnaire. Several studies have similarly shown that participants may not fully understand the purpose of a research activity immediately following consent [30]. Further, studies involving consent for research have found that the timing of measurement of understanding is relevant to interpretation of findings, with an emphasis on the potential for recall bias in measurement [31]. While there is a generally accepted notion of consent being a "process" (and not a one-time event), there is limited evidence demonstrating an evolution of consent understanding throughout the research process, as we have shown here.
Shortcomings in understanding can be attributable to different factors, and it can be difficult to infer which of these might be most salient for a given individual or context. The overall difference in understanding between countries in our study could be influenced by the level of education of the study population as well as the language of survey administration. The sole survey language used in Colombia was Spanish, with near-universal coverage across the population, while in Uganda the survey was administered in four of the approximately 43 languages spoken [32]. This may have led some respondents to choose familiar but non-native languages, affecting their understanding of the survey’s purpose or its questions.
Consent is not a static or singular construct: norms and behaviors related to consent may shift with changes in how societies, governments, and health system actors collect and utilize information. Many acknowledge that consent expectations vary depending on the nature of the activity being consented to [33]. For a simple, low-risk automated MPS that presents opportunities for meaningful public health benefit, the goal is a consent process that is both effective and efficient. Future research that measures other dimensions of consent understanding for MPS, such as survey procedures, risks, benefits, and perceptions of voluntariness in participation, would be valuable. Further research to explore whether certain respondents experience technical challenges when effectuating their participation preferences using different modes of authorization (i.e., opt-in vs opt-out) would also be valuable. Finally, it is important to consider opportunities to develop more interactive consent processes for automated surveys, such as by introducing the option for conversation with a live operator who can answer questions.
This study has several limitations. Though we used a randomized trial design and had a large sample size, the sample was not nationally representative and included a higher proportion of people who were younger, more highly educated, or urban residents. Cross-country comparisons were also somewhat difficult to make, both for linguistic reasons (despite best efforts, some concepts may not translate identically across languages) and because different software platforms, chosen based on local availability, were used to administer the MPS in each country. While there was no meaningful difference in the survey experience from a respondent perspective, the two systems handled randomization of survey modules differently: in Uganda, module randomization was automated, whereas in Colombia we manually allocated randomly generated blocks of phone numbers to different module order assignments. Finally, because of constraints on survey length (respondents are known to drop off interactive voice response (IVR) surveys that run too long), we were able to include only a one-item measure of understanding. A more robust study of consent for MPS might examine additional elements of understanding to establish a more comprehensive understanding score; given the novelty of this study, we elected to take an incremental approach to measuring consent understanding.
Conclusion
Mobile phones demonstrate increasing promise as a means of improving public health data collection, including through MPS, and additional attention is needed to optimize consent processes for such technology uses. This study contributes to the limited available evidence regarding remote consent procedures for automated MPS, identifying how small variations in the mode of authorization can affect respondent disposition. Future studies should develop and trial additional interventions to enhance consent for automated public health surveys, and measure other dimensions of participant engagement and understanding. Preferences for different modes of authorization can also be further examined across countries and data contexts. Research of this nature can not only improve survey methodology but also enhance the ethical quality of automated MPS.
Supporting information
S1 Checklist. Inclusivity in global research.
https://doi.org/10.1371/journal.pone.0279236.s001
(DOCX)
S1 Table. Standard and modified survey introductions (English language versions).
https://doi.org/10.1371/journal.pone.0279236.s002
(DOCX)
S2 Table. Back-translated measures of understanding of the purpose of the survey.
https://doi.org/10.1371/journal.pone.0279236.s003
(DOCX)
S4 Table. Disposition codes by study arm in Colombia and Uganda.
https://doi.org/10.1371/journal.pone.0279236.s005
(DOCX)
S5 Table. Survey understanding by study arm and timing of measurement in Colombia and Uganda.
https://doi.org/10.1371/journal.pone.0279236.s006
(DOCX)
Acknowledgments
We are grateful to those in Colombia and Uganda who took the time to participate in the mobile phone survey.
References
- 1. Forouzanfar MH, Alexander L, Anderson HR, Bachman VF, Biryukov S, Brauer M, et al. Global, regional, and national comparative risk assessment of 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks in 188 countries, 1990–2013: a systematic analysis for the Global Burden of Disease Study 2013. The Lancet. 2015 Dec;386(10010):2287–323. pmid:26364544
- 2. Ibrahim H, Liu X, Zariffa N, Morris AD, Denniston AK. Health data poverty: an assailable barrier to equitable digital health care. Lancet Digit Health. 2021;3:e260–5.
- 3. Raban MZ, Dandona R, Dandona L. Availability of data for monitoring noncommunicable disease risk factors in India. Bull World Health Organ. 2012;90:20–9. pmid:22271961
- 4. United Nations General Assembly. Transforming our world: The 2030 agenda for sustainable development. 2015.
- 5. L’Engle K, Sefa E, Adimazoya EA, Yartey E, Lenzi R, Tarpo C, et al. Survey research with a random digit dial national mobile phone sample in Ghana: Methods and sample quality. Ivers LC, editor. PLOS ONE. 2018 Jan 19;13(1):e0190902. pmid:29351349
- 6. Vicente P, Reis E, Santos M. Using Mobile Phones for Survey Research: A Comparison with Fixed Phones. Int J Mark Res. 2009 Jan;51(5):1–16.
- 7. Greenleaf A, Vogel L. Interactive Voice Response for Data Collection in Low and Middle-Income Countries [Internet]. 2018. Available from: https://www.viamo.io/wp-content/uploads/2018/07/Viamo-Brief-Data-Collection1595.pdf
- 8. Gibson DG, Pariyo GW, Wosu AC, Greenleaf AR, Ali J, Ahmed S, et al. Evaluation of Mechanisms to Improve Performance of Mobile Phone Surveys in Low- and Middle-Income Countries: Research Protocol. JMIR Res Protoc. 2017;6(5):e81. pmid:28476729
- 9. Mwaka E, Nakigudde J, Ali J, Ochieng J, Hallez K, Tweheyo R, et al. Consent for mobile phone surveys of non-communicable disease risk factors in low-resource settings: an exploratory qualitative study in Uganda. mHealth. 2019;5(1):26–26. pmid:31559271
- 10. Torres-Quintero A, Vega A, Gibson DG, Rodriguez-Patarroyo M, Puerto S, Pariyo GW, et al. Adaptation of a mobile phone health survey for risk factors for noncommunicable diseases in Colombia: a qualitative study. Glob Health Action. 2020;13(1). pmid:32856572
- 11. Song Y, Phadnis R, Favaloro J, Lee J, Lau CQ, Moreira M, et al. Using Mobile Phone Data Collection Tool, Surveda, for Noncommunicable Disease Surveillance in Five Low- and Middle-income Countries. Online J Public Health Inform [Internet]. 2020 Dec 8 [cited 2021 Nov 24];12(2). Available from: https://journals.uic.edu/ojs/index.php/ojphi/article/view/10574 pmid:33381279
- 12. Rodriguez-Patarroyo M, Torres-Quintero A, Vecino-Ortiz AI, Hallez K, Franco-Rodriguez AN, Rueda Barrera EA, et al. Informed Consent for Mobile Phone Health Surveys in Colombia: A Qualitative Study. J Empir Res Hum Res Ethics. 2021 Feb;16(1–2):24–34. pmid:32975157
- 13. World Health Organization. WHO guidelines on issues in public health surveillance. Geneva: World Health Organization; 2017.
- 14. Flory J, Emanuel E. Interventions to improve research participants' understanding in informed consent for research: a systematic review. JAMA. 2004;292(13):1593–601.
- 15. Mandava A, Pace C, Campbell B, Emanuel E, Grady C. The quality of informed consent: Mapping the landscape. A review of empirical data from developing and developed countries. J Med Ethics. 2012;38:356–65. pmid:22313664
- 16. Nishimura A, Carey J, Erwin PJ, Tilburt JC, Murad MH, McCormick JB. Improving understanding in the research informed consent process: A systematic review of 54 interventions tested in randomized control trials. BMC Med Ethics. 2013 Jul 23;14(1):1–15. pmid:23879694
- 17. Ali J, Labrique AB, Gionfriddo K, Pariyo G, Gibson DG, Pratt B, et al. Ethics considerations in global mobile phone-based surveys of noncommunicable diseases: A conceptual exploration. J Med Internet Res. 2017;19(5). pmid:28476723
- 18. Moore S, Tassé A-M, Thorogood A, Winship I, Zawati M, Doerr M. Consent Processes for Mobile App Mediated Research: Systematic Review. JMIR MHealth UHealth. 2017 Aug 30;5(8):e126. pmid:28855147
- 19. Use of Consent Language and Mode to Improve Interactive Voice Response Survey in Colombia and Uganda—Full Text View—ClinicalTrials.gov [Internet]. [cited 2020 Aug 12]. Available from: https://clinicaltrials.gov/ct2/show/NCT04394520?id=NCT04394520&draw=2&rank=1
- 20. Gibson DG, Farrenkopf BA, Pereira A, Labrique AB, Pariyo GW. The Development of an Interactive Voice Response Survey for Noncommunicable Disease Risk Factor Estimation: Technical Assessment and Cognitive Testing. J Med Internet Res. 2017 May 5;19(5):e112. pmid:28476724
- 21. Rutebemberwa E, Namutundu J, Gibson DG, Labrique AB, Ali J, Pariyo GW, et al. Perceptions on using interactive voice response surveys for non-communicable disease risk factors in Uganda: a qualitative exploration. mHealth. 2019;5(September):32–32. pmid:31620459
- 22. Taylor HA, Washington D, Wang NY, Patel H, Ford D, Kass NE, et al. Randomized comparison of two interventions to enhance understanding during the informed consent process for research. Clin Trials. 2021; pmid:33892597
- 23. Joffe S, Cook EF, Cleary PD, Clark JW, Weeks JC. Quality of Informed Consent: a New Measure of Understanding Among Research Subjects. JNCI J Natl Cancer Inst. 2001 Jan 17;93(2):139–47. pmid:11208884
- 24. Miller CK, O’Donnell DC, Searight HR, Barbarash RA. The Deaconess Informed Consent Comprehension Test: an assessment tool for clinical research subjects. Pharmacotherapy. 1996 Oct;16(5):872–8. pmid:8888082
- 25. The American Association for Public Opinion Research. Standard Definitions: Final dispositions of case codes and outcome rates for surveys. 9th ed. 2016.
- 26. Rothman KJ. No adjustments are needed for multiple comparisons. Epidemiol Camb Mass. 1990 Jan;1(1):43–6. pmid:2081237
- 27. StataCorp. Stata Statistical Software [Internet]. College Station, TX: StataCorp; 2017 [cited 2017 May 8]. Available from: https://www.stata.com/support/faqs/resources/citing-software-documentation-faqs/
- 28. CMS Law. Data Protection and Cyber Security Laws in Colombia [Internet]. 2021. Available from: https://cms.law/en/int/expert-guides/cms-expert-guide-to-data-protection-and-cyber-security-laws/colombia
- 29. Barth S, de Jong MDT. The privacy paradox–Investigating discrepancies between expressed privacy concerns and actual online behavior–A systematic literature review. Telemat Inform. 2017;34:1038–58.
- 30. Tam NT, Huy NT, Thoa LTB, Long NP, Trang NTH, Hirayama K, et al. Participants’ understanding of informed consent in clinical trials over three decades: systematic review and meta-analysis. Bull World Health Organ. 2015 Mar 1;93(3):186–198H. pmid:25883410
- 31. Dunn L. Enhancing Informed Consent for Research and Treatment. Neuropsychopharmacology. 2001 Jun;24(6):595–607. pmid:11331139
- 32. World Atlas. What Languages Are Spoken In Uganda? [Internet]. 2021 [cited 2021 Dec 6]. Available from: https://www.worldatlas.com/articles/what-languages-are-spoken-in-uganda.html
- 33. Grady C. Enduring and Emerging Challenges of Informed Consent. Longo DL, editor. N Engl J Med. 2015 Feb 26;372(9):855–62.