
A randomised controlled trial of email versus mailed invitation letter in a national longitudinal survey of physicians

  • Benjamin Harrap,

    Roles Formal analysis, Writing – review & editing

    Affiliation Centre for Health Policy, Melbourne School of Population and Global Health, The University of Melbourne, Carlton, Victoria, Australia

  • Tamara Taylor,

    Roles Conceptualization, Data curation, Methodology, Project administration, Writing – review & editing

    Affiliation Government and Social Research Division, Big Village, Melbourne, Australia

  • Grant Russell,

    Roles Conceptualization, Supervision, Writing – review & editing

    Affiliation Department of General Practice, Monash University, Melbourne, Victoria, Australia

  • Anthony Scott

    Roles Conceptualization, Formal analysis, Funding acquisition, Methodology, Project administration, Supervision, Writing – original draft

    anthony.scott@monash.edu

    Affiliation Centre for Health Economics, Monash University, Caulfield East, Victoria, Australia

Abstract

Despite their low cost, email invitations to distribute surveys to medical practitioners have been associated with lower response rates. This research compares response rates between an email approach plus online completion and a mailed invitation letter plus a choice of online or paper completion. A parallel randomised controlled trial was conducted during the 11th annual wave of the nationally representative Medicine in Australia: Balancing Employment and Life (MABEL) longitudinal survey of doctors. The control group was invited using a mailed paper letter (including a paper survey plus instructions to complete online) and three mailed paper reminders. The intervention group was approached in the same way except for the second reminder, which was delivered by email only. The primary outcome was the response rate and the statistical analysis was blinded. 18,247 doctors were randomly allocated to the control (9,125) or intervention group (9,122), with 9,108 and 9,107 included in the analysis. Using intention-to-treat analysis, the response rate in the intervention group was 35.92% compared to 37.59% in the control group, a difference of -1.66 percentage points (95% CI: -3.06 to -0.26). The difference was larger for General Practitioners (-2.76 percentage points, 95% CI: -4.65 to -0.87) than for other specialists (-0.47 percentage points, 95% CI: -2.53 to 1.60). For those who supplied an email address, the average treatment effect on the treated was larger in magnitude at -2.63 percentage points (95% CI: -4.50 to -0.75) for all physicians, -3.17 percentage points (95% CI: -5.83 to -0.53) for General Practitioners, and -2.10 percentage points (95% CI: -4.75 to 0.56) for other specialists. For qualified physicians, using email to invite participants to complete a survey leads to lower response rates compared with a mailed letter. Lower response rates need to be traded off against the lower costs of using email rather than mailed letters.

Background

Web surveys have consistently lower response rates than all other survey modes [1]. Surveys of medical practitioners remain a key source of information about clinical practice, health service delivery, and clinical attitudes and experience. A key issue with survey data is that they can have low external validity because the sample may be less representative due to response bias caused by recruitment methods and non-random selection of the physicians who complete the survey. Although a low response rate does not necessarily mean low external validity [2], the focus on response rates remains a key feature of the survey methods literature for physicians [3–5].

Systematic reviews and meta-analyses have examined different methods of increasing response rates in surveys of medical practitioner populations [3,6–8], such as changing features of survey design and delivering incentives. Email contact and online survey completion are popular because costs are lower, but research has shown that response rates also tend to be lower, with a mailed approach more effective and therefore recommended [3]. For example, in a meta-analysis of 48 studies of health professionals, three studies found that mailed surveys were associated with higher response rates than online/web modes, with no difference in response rates between online modes and mixed modes [3]. Pit et al. [6] conducted a systematic review of methods used to increase response rates among GPs, and found that postal surveys were more effective than phone or email surveys (as a single method of distribution), and that a sequential mixed mode of reminders was more effective than using online only, or online and paper surveys concurrently. Beebe et al. [9] found that a sequential mixed-mode (web followed by mail) survey of health professionals had a higher response rate than mail only, but found no statistically significant differences between mail only and web only, though the sample sizes were small. No differences were found between web only, mail only, and mixed modes in a more recent randomised study of physicians [10]. Other key studies have examined mixed modes that compare combinations of mail and online approaches, but do not directly compare mail and online [11–13].

Most of these studies used data that are now more than 10 years old. As the use of email and the internet becomes more universal, including the more widespread use of electronic medical records, it is important to re-examine this issue. Nevertheless, for physician cohorts who are less familiar with the internet, mainly older physicians, there is uncertainty as to whether response rates would be different, and a risk that response rates will be lower with an email approach or online completion. Older people are less likely to respond to email than younger people [14,15], and if a mailed approach is used and physicians are given the choice between paper or online completion, the latter is less likely for older physicians [16].

The aim of this research is to compare response rates between an email approach and a mailed approach within a national longitudinal survey of physicians. More specifically, we introduce an email approach in the second of three reminders sent to non-responding physicians. In the first ten annual waves of the survey, the main mailout and all three reminders were delivered by mail only. Our null hypothesis was that there would be no difference in the response rate when delivering the reminder by email or mail.

Methods

Reporting and design of the randomised trial are based on the Consolidated Standards of Reporting Trials (CONSORT) guidelines [17]. The study was approved by The University of Melbourne Faculty of Business and Economics Human Ethics Advisory Group (Ref. 0709559) and the Monash University Standing Committee on Ethics in Research Involving Humans (Ref: 195535 CF07/1102–2007000291). Participant consent was obtained through voluntarily completing the survey.

Participants

The research was conducted within the context of the Medicine in Australia: Balancing Employment and Life (MABEL) survey. This was a longitudinal panel survey of all medical practitioners in Australia, collecting 11 annual waves of data from around 9,000 to 10,000 physicians per wave [18]. The original responders in Wave 1 (2008) were followed up annually, with the addition of a cohort of new doctors entering the sample frame from Wave 2 and each subsequent wave [19]. Each wave therefore had a mixture of doctors from different cohorts. Responses for each wave were gathered using a sequential mixed mode design based on an earlier RCT [11]. The MABEL survey is sent to all types of medical practitioners in Australia.

The sample frame for MABEL is the Medical Directory of Australia, a national database of doctors held by the Australasian Medical Publishing Company (AMPCo). We use participants from Wave 11, administered between August 2018 and April 2019. Doctors were excluded if they had previously requested to withdraw from the MABEL survey, or were known to be deceased. Junior doctors were excluded since in 2016 we conducted a small experiment that supported the use of an email approach for junior doctors, and this was adopted in subsequent waves for this group only [18].

The invitation included a mailed letter that contained unique log-in details for online completion to enable longitudinal tracking, as well as a paper copy of the survey with a unique username printed on the cover. Respondents could choose the mode of completion. The first reminder used a mailed paper letter containing instructions for online completion but no paper survey. The second reminder used a mailed letter with instructions for online completion and included a paper copy of the survey. The third reminder included only a mailed paper letter with instructions for online completion.

Intervention

The mailout for the intervention group included an email approach for the second reminder. Both the intervention and control group were approached four times: the initial invitation plus three reminders. In the control group all four approaches used a mailed paper letter sent to each participant’s work address. All survey materials are available at www.mabel.org.au.

The intervention group only differed at the second reminder, where they were approached by email and could only complete online, receiving no paper letter or paper copy of the survey. The comparison between the intervention and control group therefore includes a different method of approach and a different method of completion: email approach plus online completion versus mailed approach plus a choice of online or paper completion. The email included the same text as the paper letter plus a link and instructions for online completion. Emails were sent to email addresses from the AMPCo database or from email addresses provided by participants in earlier waves of the survey.

Outcomes

The primary outcome for this study is the response rate at the end of recruitment. A medical practitioner was considered to have responded if they returned their survey (by mail or by completing it online) with Section A completed, which included questions on whether they were currently participating in clinical practice, and with at least one question answered from Section B. Surveys returned blank were counted as refusals to participate. The response rate was calculated as the total number of responses divided by the total number of surveys distributed (minus surveys that could not be sent by the mailing house because the doctor was deceased or had no valid mailing address).
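As an illustration, the outcome definition above can be expressed as two small functions (a Python sketch, not the authors' actual processing code; the counts in the usage example are from the trial's control arm, with the number of undeliverable surveys back-calculated as the difference between those allocated and those analysed):

```python
def is_response(section_a_complete: bool, n_section_b_answered: int) -> bool:
    """A returned survey counts as a response only if Section A is
    complete and at least one Section B question is answered; surveys
    returned blank count as refusals, not responses."""
    return section_a_complete and n_section_b_answered >= 1

def response_rate(n_responses: int, n_distributed: int, n_undeliverable: int) -> float:
    """Responses divided by surveys distributed, excluding surveys the
    mailing house could not send (deceased or no valid mailing address)."""
    return n_responses / (n_distributed - n_undeliverable)

# Control arm: 9,125 allocated, 9,108 analysed, so 17 could not be sent;
# 3,424 responses is back-calculated from the reported 37.59% rate.
rate = response_rate(3424, 9125, 17)
```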

Sample size

The sample for the trial included 18,247 GPs and non-GP specialists eligible to be invited to complete a survey in Wave 11. This included: i) 13,382 doctors who had previously completed at least one MABEL survey since 2008 (defined as continuing doctors); ii) a cohort of 1,862 doctors new to the sample frame in 2018; and iii) 3,003 doctors from a ‘boost’ sample comprising a 10% random sample of those who had never responded to an invitation to participate in MABEL. The total sample size of 18,247 doctors is sufficient to detect a two-sided difference of at least two percentage points in the response rate (alpha 0.05 and power 0.8). This assumes a response rate of 42.4% in the control group (an estimate from Wave 10 of MABEL).
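This calculation can be checked against the standard normal-approximation sample-size formula for comparing two proportions (a generic Python sketch, not the authors' actual power calculation, which may have used a different correction):

```python
import math
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Sample size per arm to detect a two-sided difference between
    proportions p1 and p2, using the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # approx. 1.96
    z_beta = NormalDist().inv_cdf(power)           # approx. 0.84
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Assumed control-group rate of 42.4% (the Wave 10 estimate) and a
# two-percentage-point drop in the intervention group:
n = n_per_group(0.424, 0.404)  # roughly 9,500 doctors per arm
```

Under this approximation roughly 9,500 doctors per arm are needed, broadly in line with the roughly 9,100 per arm available in the trial; the exact threshold depends on the continuity correction used.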

Randomisation

A parallel-arm design with 1:1 allocation was used, with 18,247 doctors randomly allocated to either the control or intervention group. Allocation was stratified by doctor type (GP, specialist), continuing or new, and boost sample to ensure the proportions of these groups of doctors in the intervention and control groups were the same. This is important because new doctors and boost-sample doctors are likely to have lower response rates, and specialists had higher response rates than GPs in previous waves. We tested a two-sided hypothesis as it was unclear, a priori, whether the intervention group would have a higher or lower response rate. Randomisation was performed using the sample command in Stata 15.1 statistical software [22].
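The stratified 1:1 allocation can be sketched as follows (an illustrative Python analogue; the actual randomisation used Stata's sample command, and the stratum labels here are hypothetical):

```python
import random
from collections import defaultdict

def stratified_allocation(doctors, seed=2018):
    """1:1 allocation to intervention/control within each stratum.

    `doctors` is a list of (doctor_id, stratum) pairs, where a stratum
    is e.g. (doctor type, continuing/new, boost). Shuffling and splitting
    within each stratum keeps the group proportions equal, mirroring the
    stratification described above.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible allocation
    strata = defaultdict(list)
    for doctor_id, stratum in doctors:
        strata[stratum].append(doctor_id)
    allocation = {}
    for ids in strata.values():
        rng.shuffle(ids)
        half = len(ids) // 2
        for doctor_id in ids[:half]:
            allocation[doctor_id] = "intervention"
        for doctor_id in ids[half:]:
            allocation[doctor_id] = "control"
    return allocation
```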

Randomisation took place (by TT) before the first invitation for Wave 11 was mailed out in August 2018. Group allocation was kept separate (in a separate electronic data file) from the main mailout and the first reminder so researchers handling the responses and reminders were blinded to group allocation during this process. The second reminder was prepared in late November 2018 by TT. The list of those eligible for a second reminder was then merged with the file containing the intervention and control group identifiers to indicate who should receive an email. A separate file indicating whether doctors had an email address was also merged onto this file. AMPCo was sent a list of doctor identifiers indicating if they should be approached using a mailed letter or an email.

Statistical methods

The analysis was conducted by BH who was blinded to group allocation until after the analysis was complete and checked, and who was not involved in the randomisation or any data collection. Baseline characteristics of the intervention and control groups are compared with each other, and with the population of medical practitioners in Australia. The main analysis was based on intention to treat, which estimates the average treatment effect (ATE).

The proportions responding in each group were compared using a 2x2 table and Pearson chi-squared test. An adjusted analysis was also conducted using multivariable logistic regression to examine the probability of response in the two groups after adjusting for covariates. Sub-group analysis was also conducted using separate logistic regressions for GPs and other specialists. Covariates included age, gender, whether qualified overseas, quartiles of the socio-economic status of patients in each respondent’s postcode (measured using the Socio-Economic Indexes For Areas (SEIFA) Index of Relative Socio-Economic Disadvantage [20]), and the proportions of the population in the postcode over 65 years old and under 5 years old. Finally, the rurality of work location was measured using the Modified Monash Model (MMM) classification [21]: major cities (MMM1); areas within 20km of a town of 50,000 population (MMM2); areas within 15km of a town of 15,000 to 50,000 population (MMM3); and areas within 10km of a town of 5,000 to 15,000 population (MMM4), with all other remote and rural areas (MMM5-7) grouped with MMM4 for the analysis. Statistical analysis was conducted using Stata [22].
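The unadjusted comparison can be reproduced approximately from the reported figures (a Python sketch, not the authors' Stata code; response counts are back-calculated from the published rates, so the confidence interval matches the paper only to rounding):

```python
import math

def compare_rates(r1, n1, r0, n0, z=1.959964):
    """Difference in response proportions (intervention minus control)
    with a normal-approximation 95% CI, plus the Pearson chi-squared
    statistic for the 2x2 responded/did-not-respond table."""
    p1, p0 = r1 / n1, r0 / n0
    diff = p1 - p0
    se = math.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
    # Pearson chi-squared: sum of (observed - expected)^2 / expected
    # over the four cells, using the pooled proportion for expectations.
    p_pool = (r1 + r0) / (n1 + n0)
    chi2 = 0.0
    for obs, n in ((r1, n1), (r0, n0)):
        for o, e in ((obs, n * p_pool), (n - obs, n * (1 - p_pool))):
            chi2 += (o - e) ** 2 / e
    return diff, (diff - z * se, diff + z * se), chi2

# Counts back-calculated from 35.92% of 9,107 and 37.59% of 9,108:
diff, ci, chi2 = compare_rates(3271, 9107, 3424, 9108)
```

The resulting difference of about -1.7 percentage points, with a 95% CI close to the reported (-3.06, -0.26), and a chi-squared statistic above the 3.84 critical value, is consistent with the reported result.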

A proportion of doctors allocated to the intervention and control groups did not supply a valid email address to AMPCo. Randomisation ensures these doctors are distributed approximately equally across the two groups. Those in the intervention group who did not supply an email address could not adhere to their allocated group and were instead approached by mailed paper letter rather than email. The intention-to-treat analysis retains these doctors in the intervention group. This is appropriate because, in practice, not all doctors are willing to provide email addresses, so the main results apply to the population of doctors whether or not they are willing to supply an email address.

However, this will lead to an underestimate of the effect of the intervention for those who actually received an email compared to those in the control group who also had an email address. In addition to the intention-to-treat analysis, we therefore calculate the average treatment effect on the treated (ATET), which compares those who had a valid email address in the intervention and control groups.

Results

A comparison of the sample used in the trial and the population of GPs and specialists in clinical practice in 2018 shows that the trial sample was more likely to be female, slightly younger, less likely to be from New South Wales, and more likely to be from a non-metropolitan area. The proportion who are specialists, the socio-economic status of the population, and the proportions of the population aged under 5 and over 65 years old are similar (Table 1). Descriptive statistics comparing the characteristics of the intervention and control groups are shown in Tables 2 and 3. The flow diagram in Fig 1 shows each step of the study and how the final sample was determined. Comparisons of response rates are shown in Table 4, overall and for the subgroups of GPs and non-GP specialists. The response rate in the intervention group was 35.92% compared to 37.59% in the control group, a difference of -1.66 percentage points (95% CI: -3.06 to -0.26). After adjustment for covariates, this difference increases in magnitude to -1.93 percentage points (95% CI: -3.36 to -0.50). The difference was larger for GPs (-2.76 percentage points, 95% CI: -4.65 to -0.87) than for non-GP specialists (-0.47 percentage points, 95% CI: -2.53 to 1.60).

Table 1. Comparison of trial participants with population of GPs and specialists in 2018.

https://doi.org/10.1371/journal.pone.0289628.t001

Table 2. Characteristics of participants in intervention and control groups.

https://doi.org/10.1371/journal.pone.0289628.t002

Table 3. Characteristics of control and intervention group: Participants who supplied an email address.

https://doi.org/10.1371/journal.pone.0289628.t003

The estimates of the ATET are shown in the bottom half of Table 3. This analysis compares only those who were approached by email in the intervention group to those in the control group who had supplied an email address but were approached by mailed letter. Of those who were sent a second reminder in the intervention group, 43.0% (2,840/6,605) did not have an email address, compared to 43.8% (2,866/6,540) in the control group. The overall ATET is larger than the ITT effect (-2.63 percentage points, 95% CI: -4.50 to -0.75), as it is for GPs (-3.17 percentage points, 95% CI: -5.83 to -0.53) and specialists (-2.10 percentage points, 95% CI: -4.75 to 0.56). After adjustment for covariates, the overall difference falls to -2.37 percentage points.

Discussion

Using email to approach potential survey subjects is often preferred because of its low cost. It is likely to have gained popularity during the COVID-19 pandemic, given the difficulties of collecting survey data using face-to-face interviews. Several studies have shown that in surveys of physicians an emailed approach can lead to lower response rates, potentially increasing response bias and reducing external validity. This is also the case in other populations [1]. However, these studies are of specific samples and most are over 10 years old. One might assume that since then physicians have become more familiar with email, the internet, and online survey completion.

Our results confirm that, in a nationally representative population of qualified GPs and non-GP specialists, response rates are lower using an emailed approach. The finding of a 1.93 percentage point fall in the response rate is from the ITT analysis, and so is relevant to physician populations approached initially by mail, or where it is unknown a priori whether they have an email address. The ATET of a 2.37 percentage point fall in the response rate is relevant to physician populations where only an email address is available to researchers, and so to physicians willing to supply an email address. The fall in response rate is higher for GPs than for non-GP specialists. The effect size seems quite small, possibly because the control group, though approached by mail, still had the option of completing the survey online given the mixed mode of survey completion available to them. Our estimates are therefore conservative compared to using mail by itself, and are more relevant to surveys using mixed modes of delivery.

Weaver et al. [10] randomised around 1,200 physicians in Minnesota and found a web-only response rate of 15.2% compared to 18.9% in the mail-only mode (3.7 percentage points), slightly higher than in our study. They also compared mixed modes (web-mail and mail-web), so the sample sizes in the web-only and mail-only groups were relatively small and the difference was not statistically significant. Beebe et al. [9] randomised 686 physicians, nurses and physician assistants and found a response rate of 38.2% for web only compared to 32.1% for mail only after two reminders, a slightly larger effect size (4.1 percentage points) in the opposite direction to our results. However, again this difference was not statistically significant.

The results should also be interpreted in the context of our longitudinal study, in which the first 10 annual waves combined a mixed mode of completion with a single method of approach: a mailed paper letter for the main mailout and all three reminders. The intervention changed the method of approach to email for the second reminder. We used the second reminder, rather than the first mailout, to avoid a potentially large fall in the total number of responses if email proved worse than mail; intervening at the second reminder meant fewer respondents in total were included in the trial. We do not think the percentage-point difference in response rates would differ if the intervention had been applied at the first mailout or at the first or third reminder. At the second reminder, the intervention group were approached by email and could only complete the survey online, whilst the control group were approached by a mailed letter that included a paper copy of the survey and so had a choice of online or paper completion. A limitation is that we are therefore not comparing only differences in the method of approach. Though the control group were approached by mail, they could choose online or paper completion, and so their response rate might have been lower if they could complete only online, or higher if they could complete only on paper. For those who responded, there is evidence that those in the intervention group were more likely to complete the survey online, presumably because this group were approached by email at the second reminder. In the ITT analysis, 46.1% of the control group responded online compared to 52.0% in the intervention group (Table 1). In the ATET analysis of those who supplied an email address, 48.1% of the control group completed the survey online compared to 71.3% of the intervention group.

The study was conducted exclusively within the Australian context, which may limit the generalisability of the findings to other countries with different healthcare systems, professional cultures, and survey response patterns. It would be beneficial to conduct similar studies in other countries to better understand how the intervention performs in different settings. In addition, we have not examined differences in non-response bias, which could arise if those who are more likely to respond to email differ in unobserved ways, changing the composition of the doctors who respond (e.g. if younger doctors are more likely to respond to email). Our previous research in the same context has shown that mixed modes (mail-online and online-mail) exhibited evidence of response bias, and that young, male doctors working in remote areas are more likely to complete the online survey [11,23].

Our definition of a complete response (at least one question answered from Section B) is quite conservative compared to what is generally recommended in the literature [24]. However, some responses to current working status (Section A of the survey), such as ‘retired’ or ‘not working in clinical practice’, meant that respondents were not required to complete the rest of the survey or were directed to complete only certain sections, e.g. demographics. Nevertheless, for respondents who answered at least one question from Section B, item response rates are over 90% for all sections of the survey across all 11 waves [18].

As we were uncertain about the impact of email on response rates, we took a cautious approach by not using email in the first and main mailout, and instead used email in the second reminder, where potential adverse impacts on the overall response rate would be minimised. Our results are therefore conservative estimates of the impact of using an email approach versus a mailed paper letter. If the intervention had been delivered during the main mailout, the statistical power would have been higher. Although we do have a pilot survey every year and could have tested our hypothesis using the pilot sample of around 2,000 doctors, the sample size in the pilot was not large enough, and most pilot responses are merged into the main wave if the surveys do not change, so still count towards the main response rate.

Conclusions

The use of an emailed approach and online completion avoids printing and postage costs and the costs of manual data entry for those who responded to the second reminder. However, the lower response rate means that costs increase for the third mailed reminder, as a higher number of participants needed to be approached. It remains unclear what advice to provide researchers, as one might be willing to accept a lower response rate if costs are also lower. Often budgets for such surveys are very limited and email is the only option. In this case, it is important to optimise other survey characteristics (e.g. survey length) that can increase response rates, using existing evidence [1,25] and new research in the context of physicians. In addition, qualitative research is helpful to better understand response behaviours [23]. However, we do recommend that researchers attempt to negotiate larger budgets for physician surveys to ensure that mailed paper letters are used where possible, so that survey responses are externally valid. This is necessary to be able to make valid recommendations from survey research. Further research should test a mix of methods to approach potential respondents, with appropriate sample size calculations, and where the same subjects are approached by email as well as mailed letter or other types of contact such as text messaging and social media, though these methods might be less effective in older physician populations.

Acknowledgments

We thank the doctors who participated in the MABEL survey.

References

  1. Daikeler J, Bošnjak M, Lozar Manfreda K. Web Versus Other Survey Modes: An Updated and Extended Meta-Analysis Comparing Response Rates. Journal of Survey Statistics and Methodology. 2020;8(3):513–39.
  2. Johnson TP, Wislar JS. Response rates and nonresponse errors in surveys. JAMA. 2012;307(17):1805–6. pmid:22550194.
  3. Cho YI, Johnson TP, VanGeest JB. Enhancing Surveys of Health Care Professionals: A Meta-Analysis of Techniques to Improve Response. Evaluation & the Health Professions. 2013;36(3):382–407. pmid:23975761.
  4. Klabunde CN, Willis GB, McLeod CC, Dillman DA, Johnson TP, Greene SM, et al. Improving the Quality of Surveys of Physicians and Medical Groups: A Research Agenda. Evaluation & the Health Professions. 2012;35(4):477–506. pmid:22947596.
  5. Galea S, Tracy M. Participation Rates in Epidemiologic Studies. Ann Epidemiol. 2007;17(9):643–53. pmid:17553702.
  6. Pit S, Vo T, Pyakurel S. The effectiveness of recruitment strategies on general practitioner’s survey response rates—a systematic review. BMC Medical Research Methodology. 2014;14:76. pmid:24906492; PubMed Central PMCID: PMC4059731.
  7. McLeod CC, Klabunde CN, Willis GB, Stark D. Health care provider surveys in the United States, 2000–2010: a review. Eval Health Prof. 2013;36(1):106–26. pmid:23378504.
  8. VanGeest JB, Johnson TP, Welch VL. Methodologies for Improving Response Rates in Surveys of Physicians: A Systematic Review. Eval Health Prof. 2007;30(4):303–21. pmid:17986667.
  9. Beebe TJ, Jacobson RM, Jenkins SM, Lackore KA, Rutten LJF. Testing the Impact of Mixed-Mode Designs (Mail and Web) and Multiple Contact Attempts within Mode (Mail or Web) on Clinician Survey Response. Health Services Research. 2018. pmid:29355920.
  10. Weaver L, Beebe TJ, Rockwood T. The impact of survey mode on the response rate in a survey of the factors that influence Minnesota physicians’ disclosure practices. BMC Medical Research Methodology. 2019;19(1). pmid:30940087.
  11. Scott A, Jeon S-H, Joyce C, Humphreys J, Kalb G, Witt J, et al. A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors. BMC Medical Research Methodology. 2011;11:126. pmid:21888678.
  12. Beebe TJ, Locke GR, Barnes SA, Davern ME, Anderson KJ. Mixing Web and Mail Methods in a Survey of Physicians. Health Services Research. 2007;42(3p1):1219–34. pmid:17489911.
  13. Beebe TJ, McAlpine DD, Ziegenfuss JY, Jenkins S, Haas L, Davern ME. Deployment of a Mixed-Mode Data Collection Strategy Does Not Reduce Nonresponse Bias in a General Population Health Survey. Health Services Research. 2012;47(4):1739–54. pmid:22250782.
  14. Lusk C, Delclos GL, Burau K, Drawhorn DD, Aday LA. Mail versus Internet surveys—Determinants of method of response preferences among health professionals. Evaluation & the Health Professions. 2007;30(2):186–201. pmid:17476030.
  15. Smyth JD, Olson K, Millar MM. Identifying predictors of survey mode preference. Social Science Research. 2014;48:135–44. pmid:25131280.
  16. Taylor T, Scott A. Do Physicians Prefer to Complete Online or Mail Surveys? Findings From a National Longitudinal Survey. Evaluation & the Health Professions. 2019;42(1):41–70. pmid:30384770.
  17. Schulz KF, Altman DG, Moher D, for the CONSORT Group. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. BMC Medicine. 2010;8(1):18.
  18. Szawlowski S, Harrap B, Leahy A, Scott A. Medicine in Australia: Balancing Employment and Life (MABEL). MABEL User Manual: Wave 11 Release. Melbourne: Melbourne Institute: Applied Economic and Social Research, The University of Melbourne; 2020.
  19. Joyce C, Scott A, Jeon S, Humphreys J, Kalb G, Witt J, et al. The "Medicine in Australia: Balancing Employment and Life (MABEL)" longitudinal survey—Protocol and baseline data for a prospective cohort study of Australian doctors’ workforce participation. BMC Health Services Research. 2010;10(1):50. pmid:20181288.
  20. ABS. Socio-Economic Indexes for Areas (SEIFA). Technical Paper 2033.0.55.001. Canberra: Australian Bureau of Statistics; 2016.
  21. Department of Health. Modified Monash Model. Canberra: Australian Government; 2021.
  22. StataCorp. Stata Statistical Software: Release 14. College Station, TX: StataCorp LP; 2015.
  23. Taylor T, Scott A. Do Physicians Prefer to Complete Online or Mail Surveys? Findings From a National Longitudinal Survey. Evaluation & the Health Professions. 2019;42(1):41–70. pmid:30384770.
  24. American Association for Public Opinion Research. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 9th ed. American Association for Public Opinion Research; 2016.
  25. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Wentz R, Kwan I, et al. Methods to increase response to postal and electronic questionnaires. Cochrane Database of Systematic Reviews. 2009;(3):MR000008.