
Unit Nonresponse in a Population-Based Study of Prostate Cancer

  • Evrim Oral ,

    Affiliation Biostatistics Program, LSUHSC School of Public Health, New Orleans, Louisiana, United States of America

  • Neal Simonsen,

    Affiliation Consultant Epidemiologist, New Orleans, Louisiana, United States of America

  • Christine Brennan,

    Affiliation Health Policy and Systems Management Program, LSUHSC School of Public Health, New Orleans, Louisiana, United States of America

  • Jennifer Berken,

    Affiliation Department of Mathematical Sciences, McNeese State University, Lake Charles, Louisiana, United States of America

  • L. Joseph Su,

    Affiliation Department of Epidemiology, University of Arkansas for Medical Sciences, Little Rock, Arkansas, United States of America

  • James L. Mohler,

    Affiliations Department of Urology, Roswell Park Cancer Institute, Buffalo, New York, United States of America, Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America

  • Jeannette T. Bensen,

    Affiliation Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America

  • Elizabeth T. H. Fontham

    Affiliation Epidemiology Program, LSUHSC School of Public Health, New Orleans, Louisiana, United States of America


Abstract
Low unit response rates can increase bias and compromise study validity. Response rates have continued to fall over the past decade despite all efforts to increase participation. Many factors have been linked to reduced response, yet relatively few studies have employed multivariate approaches to identify characteristics that differentiate respondents from nonrespondents, since it is difficult to collect information on the latter. We aimed to assess factors contributing to enrollment of prostate cancer (PCa) patients. We combined data from the North Carolina-Louisiana (LA) PCa Project’s LA cohort with additional sources such as US census tract and LA Tumor Registry data. We included specific analyses focusing on blacks, a group often identified as hard to enroll in health-related research. The ability to study the effect of Hurricane Katrina, which occurred amid enrollment, as a potential determinant of nonresponse makes our study unique. Older age (≥70 years) among blacks (OR 0.65) and post-Katrina study phase in both races (OR 0.59 for blacks, OR 0.48 for whites) were significant predictors of lower odds of participation. Neighborhood poverty also was a significant predictor among whites, but of higher odds of participation (OR 1.53). Among blacks, residence in Orleans parish was associated with lower odds of participation (OR 0.33) before Katrina; the opposite occurred in whites, with lower odds (OR 0.43) after Katrina. Overall, our results underscore the importance of tailoring enrollment approaches to the characteristics of the target population to confront the challenges posed by nonresponse. They also show that recruitment-related factors may change when outside forces bring major alterations to a population's environment and demographics.

Introduction
Nonresponse is an important source of nonsampling error and can occur at either the unit or the item level. This study focuses on unit nonresponse, which occurs when a sampled subject fails to participate in a survey, either because contact cannot be established or because the subject refuses to cooperate. Despite all efforts to minimize nonresponse, most population-based epidemiologic research suffers from substantial nonresponse, typically between 20 and 40% of the target population [1]. Nonresponse rates have been rising worldwide for several decades, regardless of the disease studied, geographic region, or age of the study population ([2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13]). Consequently, low response rates remain a major obstacle for researchers, especially in health-related studies where data are often collected via questionnaires ([14], [15]).

Nonresponse poses two main problems: it hinders attainment of sample sizes adequate for representative results from a given number of target subjects, and it increases the potential for selection bias through over- or under-representation of particular subpopulations [16]. Even with fairly high response rates, substantial bias can occur if nonrespondents differ markedly from respondents with respect to rare exposures or rare outcomes [17].

Although nonresponse rates may be decreased by raising contact rates through increased field efforts, this strategy loses effectiveness in studies where contact rates already approach 100 percent. Other strategies can be utilized to minimize nonresponse, such as using more experienced interviewers, mailing advance letters to notify sampled subjects about the study prior to contact, or providing monetary incentives ([18], [19], [20], [21]). However, if these strategies are more effective for some particular subpopulations compared to others, a reduction in nonresponse through these strategies can actually increase nonresponse bias.

Statistical techniques are also available to reduce the bias from nonresponse, such as applying appropriate weighting procedures, but these methods require some strong prior assumptions and knowledge about the population distribution. Post-survey adjustments are thus merely estimated remedies to the real underlying problem. The most direct and effective way to minimize this potential bias is to reduce nonresponse through appropriate measures built into study designs.
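The weighting idea can be sketched as follows. This is an illustrative example only, not a procedure used in PCaP: the subgroup labels and counts are hypothetical, and real adjustments (e.g., post-stratification or response-propensity weighting) require the prior knowledge about the population distribution noted above.

```python
# Illustrative nonresponse weighting sketch (hypothetical groups/counts):
# each respondent in a subgroup is weighted by the inverse of that
# subgroup's response rate, so under-responding groups count more.

def nonresponse_weights(eligible, respondents):
    """Weight per subgroup = eligible count / respondent count."""
    weights = {}
    for group, n_eligible in eligible.items():
        n_resp = respondents[group]
        if n_resp == 0:
            raise ValueError(f"no respondents in group {group!r}; cannot weight")
        weights[group] = n_eligible / n_resp
    return weights

# Hypothetical counts: group A responds at 50%, group B at 80%.
w = nonresponse_weights({"A": 1000, "B": 1000}, {"A": 500, "B": 800})
# Each group-A respondent represents 2 eligible subjects; group B, 1.25.
```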

Evaluation of factors that affect participation is thus important, but is often difficult due to missing information on nonrespondents. Even in studies where some information is available on nonrespondents, results regarding potential contributors to non-participation vary widely. For example, lack of personal benefit from participation has been cited as a major cause of refusals [22]. A negative attitude toward the health care system also has been cited as a cause of nonresponse [23]. Mistrust of research, especially in some ethnic minorities, has been shown to be another potential cause ([24], [25], [26]), which is particularly relevant for studies (such as ours) designed to compare blacks versus whites. Several studies have pointed to poor current health as the major cause of refusals ([5], [27], [28], [29]). Lack of time or interest also has been reported as a key reason for non-participation ([30], [31], [32]). Age, gender, and race have been linked to refusals as well, although results are not entirely consistent ([6], [33], [34]). Lower education levels and unemployment are other factors that have been associated with lower participation rates ([35], [36], [37], [38], [39], [40], [41], [42]). Some studies reported that individuals with a socially undesirable condition, such as a sexually transmitted disease, may be less likely to participate in studies related to that ailment ([6], [43]). Lack of participation also has been associated with being single ([41], [44], [45]) or a smoker ([46], [47], [48], [49]). Personality characteristics, mental health, and psychological factors have been examined as predictors of nonresponse as well ([33], [49], [50], [51], [52], [53]).

Nonresponse has thus been the subject of extensive literature, but definitive consensus regarding the mechanisms causing it remains elusive. Moreover, specifically in cancer epidemiology research, it is currently not well known how nonresponse affects the representativeness of cancer patients [54]. Because few studies focus on nonresponse in cancer research, we attempted to identify modifiable factors that could contribute to decreased participation in prostate cancer (PCa) studies.

PCa is the most commonly diagnosed cancer among U.S. men. The mortality associated with PCa has decreased over recent decades due to earlier detection and improved treatments ([55], [56], [57]); however, PCa-related morbidity, which is more pronounced in blacks, has increased [55]. Consequently, research on PCa continues to be of crucial importance, and a major key to the success of this effort is obtaining the most accurate estimates from PCa patient surveys and studies. Understanding the factors that influence PCa survivors to enroll in research studies will enable researchers to tailor their recruitment efforts accordingly and reduce the potential impact of nonresponse on study results. Therefore, we sought to assess factors affecting participation among contacted eligible men with PCa in a population-based study. We included specific analyses focused on blacks, a group often identified as among the hardest to enroll in health-related research. Because Hurricane Katrina struck in the midst of the recruitment process, we had the opportunity to include the occurrence of a major disaster among the potential determinants of participation, which makes our study unique.

Study Design and Population

The North Carolina (NC)–Louisiana (LA) Prostate Cancer Project (PCaP) is a multidisciplinary, population-based, case-only study designed to address racial differences in PCa survival through a complete evaluation of social, individual, and tumor-level influences on PCa aggressiveness. Eligible research subjects were defined as those residing in the NC and LA study areas with a first diagnosis of histologically confirmed adenocarcinoma of the prostate. Participants were required to be 40–79 years of age at diagnosis, be able to complete the study interview in English, live outside an institution, not be cognitively impaired or in a severely debilitated physical state, and not be under the influence of alcohol, severely medicated, or apparently psychotic at the time of the interview [58]. The PCaP-LA study arm began enrollment in September 2004 in thirteen parishes surrounding New Orleans. Hurricane Katrina forced suspension of accrual in August 2005. This portion of PCaP-LA is referred to as the pre-Katrina (pre-K) sample. Accrual resumed in September 2006 in an extended area that included eight additional parishes in southern LA to account for changes in regional demographics and dispersal of potential research subjects following the hurricane. Post-Katrina (post-K) enrollment was completed in August 2009. All PCaP-LA research subjects were identified through a rapid case ascertainment process utilizing LA Tumor Registry (LTR) contacts. Computer-generated random sampling algorithms were applied to under-sample white PCa patients to the degree necessary to achieve a 50:50 distribution of race within both the NC and LA samples. Further details about the PCaP study can be found in Schroeder et al. [58]; for demographic and socioeconomic characteristics of the LA cohort, see Brennan et al. [55].

The PCaP-LA study arm’s Microsoft Access-based rapid case ascertainment module included records of every PCa patient in LA, identified through pathology reports from over 100 different medical institutions and pathology labs in PCaP-LA’s catchment area. Patients whose tumors appeared to meet study criteria were entered into the subject tracking module, where additional information was obtained via electronic records. The Accurint database, other online searches, and consultations with the LTR and the relevant medical institutions provided further information. The combined data were used to identify patients who appeared to meet inclusion criteria, initiate physician notification for those not randomized out, and track progress with enrollment. Patients who appeared to be eligible and were approved for contact by their physicians received an advance letter and a brochure, followed by a phone call regarding the study. Compensation of $75 for time and effort was offered in the advance letter for participation in the three components of the study (interview, blood and fat specimens). Enrolled PCaP research subjects, here termed respondents, underwent extensive in-person interviews and provided tissue samples and medical record information during an in-home visit: a 749-question structured survey and biospecimens were administered and collected by well-trained registered nurses; proxy interviews were not allowed. The recruitment process for the PCaP-LA cohort is shown in Fig 1, which defines the ineligible, uncontacted, enrolled, and refused research subjects for this study.

Fig 1. Flow diagram for the recruitment process of the PCaP study, LA cohort.

a Diagnosing physicians provided consent to contact 98% of AA and 96% of CA potential subjects in pre-K and 97% of AA and 96% of CA in post-K. b The reasons included: i) They changed their mind about enrollment after a visit was scheduled; ii) The scheduled interview ended up being cancelled. c The total number of ineligible, enrolled, refused and uncontacted cases were 273, 1234, 754 and 27, respectively.

Informed consent was obtained from all PCaP study research subjects prior to participation, and all study protocols were approved by participating institutions’ Institutional Review Boards. The current study, an analysis of a subset of PCaP research subjects, was also approved by the Louisiana State University Health Sciences Center-New Orleans Institutional Review Board (IRB #7971). It was exempted from additional consent requirements as minimal-risk research utilizing secondary data.

Data Sources

In order to compare respondents with nonrespondents, we required several detailed supplementary data sources, such as an “eligibility summary form” completed by recruiters during the initial phone contact to confirm eligibility and solicit participation. Key components of this form included:

  • Race of the patient (asked regardless of eligibility or enrollment status)
  • Eligibility status
  • If found to be ineligible, the reason for ineligibility
  • If found to be eligible, the scheduled date and time of the visit (This item provided information on nonrespondents from the PCaP-LA cohort.)
  • If found to be eligible but refused to enroll in the study, the reason for declining enrollment

These forms were scanned for each patient and the outcomes were integrated into our analyses. This data source, however, lacked information on race for some patients who could not be contacted or declined to provide it. Detailed clinical and demographic information, such as income, was unavailable for uncontacted patients and nonrespondents. Thus, we linked identified patients with LTR data collected through 2011 to obtain missing race and tumor stage; we also used geocoded addresses to determine the census tract in which the patient was residing at the time of diagnosis. Finally, we obtained the percentage living in poverty and the population density (persons per square mile) from the 2000 U.S. census for each patient’s tract of residence. Combining these data sources gave us a greater level of information on nonrespondents than is available in most epidemiologic studies.

Statistical Analyses and Results

In this study, respondents were defined as eligible patients who completed the home visit, and nonrespondents were defined as eligible patients who either directly refused to participate in the study or for whom a home visit was never arranged despite contact; see Fig 1. We chose to focus on recruitment of “contacted” eligible PCa patients largely for practical reasons, such as availability of more detailed information on them. A total of 273 men who appeared to be eligible based on initially available information did not qualify for recruitment. Of these, 86 were never contacted because their physicians denied permission to do so. The remaining 187 men entered the recruitment process but proved to be ineligible for various reasons (Table 1).

Table 1. Reasons for ineligibility of contacted patients in the PCaP-LA cohort, stratified by race and study phase.

First rows indicate counts, second rows indicate percentages (%).

The most common reason for ineligibility was race, followed by being cognitively impaired. The same two reasons also were among the most common reasons for ineligibility when the sample was stratified by study phase.

A total of 2015 men met study eligibility criteria for the pre-K and post-K phases of recruitment within LA. In the course of the study, 1234 of these men enrolled (respondents), 754 refused (nonrespondents), and 27 were never contacted. The overall response rate was 61.2%; the overall cooperation, refusal, and contact rates were 62.1%, 37.4%, and 98.7%, respectively, calculated from the formulas given by the American Association for Public Opinion Research (AAPOR) [59]. Note that slight discrepancies in participation numbers with previously published PCaP manuscripts/reports are due to differences in the definitions of eligibles, respondents, nonrespondents, and noncontacts used specifically in this study.
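The reported rates follow, to one decimal place, from the counts given in this paragraph. The sketch below uses simplified forms of the AAPOR definitions (the full AAPOR formulas also account for categories, such as cases of unknown eligibility, that do not arise here):

```python
# Reproducing the reported rates from the counts in the text
# (simplified AAPOR definitions; all 2015 men were known to be eligible).
enrolled, refused, uncontacted = 1234, 754, 27
eligible = enrolled + refused + uncontacted           # 2015

response_rate    = enrolled / eligible                # completes / eligibles
cooperation_rate = enrolled / (enrolled + refused)    # completes / contacted
refusal_rate     = refused / eligible
contact_rate     = (enrolled + refused) / eligible    # contacted / eligibles

print(f"{response_rate:.1%}, {cooperation_rate:.1%}, "
      f"{refusal_rate:.1%}, {contact_rate:.1%}")
# → 61.2%, 62.1%, 37.4%, 98.7%
```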

The reasons given for refusal among those who were contacted are summarized in Table 2. A large majority (78.1%) simply stated that they were not interested in the study. Being too busy was cited by 13.3%, followed by being too ill at 2.9%. Whites were more likely than blacks to refuse on the grounds of being too ill (p = 0.001), while refusing due to “not being interested” occurred more often among blacks (p = 0.016).

Table 2. Reasons cited for refusal among eligible research subjects contacted in the PCaP-LA cohort, stratified by race.

First rows indicate counts, second rows indicate percentages (%).

Selected characteristics of the respondents, nonrespondents, and the total eligible population, along with nonresponse bias estimates, are presented in Table 3. Categorical variables were created for some of the continuous covariates. Age was included in the analyses as three categories. We evaluated two tumor characteristics: Gleason sums were dichotomized as high (≥ 8) vs. low (< 8), and Surveillance, Epidemiology, and End Results (SEER) tumor summary stages, ranging from 1 (local) to 7 (distant), were also examined. Since summary stages above 2 (non-local cancers) occurred in only 4% of the research subjects, tumor stage was collapsed into two groups: higher (stage 2 or more) vs. lower (stage 1). Census tract poverty was categorized as < 5, 5–10, 10–20, and ≥ 20 percent of the households within the tract living in poverty. Population density was used to create a dichotomous covariate for rural tracts (< 1000 persons per square mile); another indicator was created for urban tracts (> 2500 persons per square mile). Residence at diagnosis was categorized into three groups: Orleans parish (the most populous parish before Hurricane Katrina), East Baton Rouge parish, or elsewhere.
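The covariate coding described above can be summarized in a short sketch. Cutpoints are taken from the text; the function names and the age-band labels (the 40–59 referent group, 60–69, and the oldest 70–79 group, given the 40–79 eligibility range) are ours:

```python
# Covariate coding for the analysis (cutpoints from the text; names ours).

def age_group(age):                 # three age categories, 40-79 eligible
    return "40-59" if age < 60 else ("60-69" if age < 70 else "70-79")

def gleason_high(gleason_sum):      # high (>= 8) vs. low (< 8)
    return gleason_sum >= 8

def stage_higher(seer_stage):      # SEER summary stage 2+ vs. 1 (local)
    return seer_stage >= 2

def poverty_category(pct):          # % of tract households in poverty
    if pct < 5:
        return "<5"
    if pct < 10:
        return "5-10"
    if pct < 20:
        return "10-20"
    return ">=20"

def rural_tract(density):           # persons per square mile
    return density < 1000

def urban_tract(density):
    return density > 2500
```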

Table 3. Comparison of percentages of respondents, nonrespondents, and the total contacted sample by specific characteristics.

Older cases were over-represented among nonrespondents (p = 0.026). Blacks comprised 54% of the eligible population but 58% of those who did not enroll. While most of the eligible population was identified during the post-K phase of the study (85%), the percentage of respondents recruited post-K was significantly lower (83%, p < 0.0001). Tumor stage, Gleason score, percentage of census tract households in poverty, population density, and parish of residence showed no significant association with enrollment status. Nonresponse bias values were calculated for these characteristics from the formulas given in Groves and Couper [60], assuming a fixed response model (Table 3). A positive bias indicates that the characteristic proportion is higher among respondents than nonrespondents, in which case the respondent proportion overestimates the population proportion. A negative bias indicates that the respondent proportion underestimates the population proportion. Overall, the sizes of the biases were small; the highest nonresponse bias was observed for the proportion estimator of race (2.69%).
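The fixed-response-model bias has a simple closed form: the bias of a respondent-based proportion equals the nonrespondent fraction times the respondent-nonrespondent difference. The sketch below illustrates the formula with rounded, illustrative proportions, not the study's exact tabulations:

```python
# bias(p_r) = (m / n) * (p_r - p_m)   [Groves & Couper, fixed response model]
# p_r, p_m: proportions with the characteristic among respondents and
# nonrespondents; m nonrespondents out of n total eligibles.

def nonresponse_bias(p_resp, p_nonresp, n_resp, n_nonresp):
    nonresp_fraction = n_nonresp / (n_resp + n_nonresp)
    return nonresp_fraction * (p_resp - p_nonresp)

# Illustrative: a trait held by 51% of respondents but 58% of
# nonrespondents, with this study's counts (1234 respondents,
# 754 + 27 = 781 nonrespondents/noncontacts).
b = nonresponse_bias(0.51, 0.58, 1234, 781)
# b < 0: the respondent proportion understates the population proportion.
```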

We calculated overall and race-specific cooperation rates for each characteristic (Table 4); see AAPOR [59] for mathematical formulas. While the oldest age group had the lowest cooperation rate in both races, cooperation was significantly lower for blacks than for whites in that group (51 vs. 63%, p = 0.004). Cooperation decreased substantially post-K for both races, but remained significantly lower among blacks than whites (57 vs. 64%, p = 0.003). Cooperation rates among PCa patients with low Gleason scores were significantly lower in blacks than whites (59 vs. 66%, p = 0.003). Similarly, cooperation rates among men with lower tumor stage were lower in blacks than whites (60 vs. 66%, p = 0.007). The overall cooperation rate was lowest when the percentage of households in poverty exceeded 20%; however, opposite patterns emerged between the races. Blacks in higher poverty tracts (58 vs. 72%, p < 0.001), in rural residences (57 vs. 67%, p = 0.003), and in non-urban tracts (57 vs. 67%, p = 0.001) cooperated less than their white counterparts. Cooperation rates peaked in Orleans parish for blacks, whereas they were lowest there for whites.

Table 4. Cooperation rates: overall and stratified by race (%).

Multivariate logistic regression modeling was used to assess the factors affecting participation status. Race-stratified models assessed whether there were differences between blacks and whites (Table 5). Membership in the oldest age group and recruitment in the post-K phase were the two statistically significant predictors for blacks, with ORs of 0.65 and 0.59, respectively. Membership in the oldest age group had a nonsignificant association with participation for whites. Living in a tract with over 20% of the population in poverty reached statistical significance as a predictor for whites, who were 1.5 times as likely to enroll in the study. Post-K enrollment was another factor that reached statistical significance for white men (OR 0.48). Models substituting residence in an urban tract for residence in a rural one yielded similar results for most variables (not reported).
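As a reminder of how the odds ratios in these models are read, the sketch below computes a crude (unadjusted) odds ratio and its 95% Wald confidence interval from a 2×2 enrollment table with hypothetical counts. The ORs in Table 5 come from multivariable models, so this illustrates the measure itself, not the paper's adjusted estimates:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b: enrolled/refused in the index group; c, d: in the referent."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: 100 enrolled / 80 refused in the oldest age group vs.
# 300 enrolled / 150 refused in the 40-59 referent group.
or_, lo, hi = odds_ratio_ci(100, 80, 300, 150)
# or_ = 0.625: enrollment odds are lower in the older group.
```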

Table 5. Odds ratios for participation according to specific characteristics in multiple logistic regression models, stratified by race.

The 40–59 year age group was the referent category.

We also assessed the factors affecting participation status via models stratified by race and study phase (Table 6). Cooperation remained poorest post-K among blacks aged 70 and older (OR 0.63). After stratification by study phase, neighborhood poverty became a nonsignificant factor for whites in both study phases, as the post-K odds ratio decreased compared with the previous model. Gleason scores were not associated with participation in either study phase among blacks or whites. Before Katrina, whites with lower stage tumors were less likely to enroll in PCaP than those with higher stage tumors (OR 0.51), while blacks with lower stage PCa were more likely to participate (OR 1.94); however, these odds ratios lacked statistical significance, reflecting the relative rarity of higher stage PCa and the much smaller number of eligible research subjects during that period. After Katrina, the racial difference in odds of participation for men with lower versus higher stage PCa diminished, and both ORs approached 1 (0.94 for blacks vs. 1.10 for whites). Residence in Orleans parish showed a statistically significant negative association with cooperation pre-K among blacks and post-K among whites, and nonsignificant positive associations otherwise.

Table 6. Odds ratios for participation according to specific characteristics in multiple logistic regression models, stratified by race and study phase.

The 40–59 year age group was the referent category for age in all models.

Discussion
The results showed that older age for blacks (≥70 years), neighborhood poverty for whites, and study phase with respect to Hurricane Katrina for both races were significant predictors of nonresponse among eligible PCaP-LA research subjects. Neighborhood poverty was not a significant factor in the participation of black PCa patients overall. One potential explanation is that whites in low-income areas found the compensation for participants' time and effort more attractive than blacks did. Another is greater mistrust of research among blacks in those areas ([61], [62], [63]). When stratified by study phase, however, both races had positive but nonsignificant associations between neighborhood poverty and participation pre-K, which weakened substantially (even becoming negative for blacks) post-K. The change after Katrina could be the result of patients' preoccupation with recovery activities, which reduced their willingness to devote time to the study. Orleans parish, which is coterminous with the city of New Orleans, was the most populous parish in the state and home to a large black community before Katrina. The heavy damage inflicted by the hurricane caused displacement and migration of a large part of the population, particularly lower socioeconomic status (SES) blacks ([64], [65], [66]). Orleans parish thus provided an ideal area to examine how recruitment-related factors may change when outside forces bring major alterations to a population's environment and demographics. One of the most notable changes was the significantly lower odds of participation among blacks living in Orleans parish (OR 0.33) pre-K, which rose above one (though nonsignificantly) post-K. The opposite occurred for whites: those residing in Orleans had nonsignificantly higher odds of participation before the hurricane, which became significantly lower afterward (OR 0.43).
One potential contributor may be the disproportionate displacement of lower SES blacks, who tended to live in the areas of New Orleans most heavily impacted by Katrina and who were less likely to return ([65], [66], [67], [68]). This may have shifted demographics in a direction more favorable to black participation after Katrina [67]. Whites, on the other hand, were more likely to return to repair or rebuild their damaged homes after the hurricane, and on average they returned sooner ([65], [66], [67]); however, changes in their priorities may have reduced the perceived utility of participation and decreased their likelihood of joining the study. Univariate logistic regression analyses (not shown) indicated that blacks overall were less likely than whites to participate in PCaP (OR 0.75, 95% CI 0.63–0.90), consistent with some previous research ([27], [34]).

The primary strength of this study is its use of extensive supplementary data collected from multiple sources to provide more detail about nonrespondents. However, it has several limitations. We relied on LTR data for some of the additional detail on nonrespondents, but information from questionnaires could provide a more representative characterization of these patients. We were limited to census tract data in the absence of specific individual-level characteristics, such as income. Tract characteristics are imperfect surrogates for individual ones, and neighborhood characteristics could exert their own independent influence upon participation, but neighborhood data may be all that is available to researchers when planning a study. A substantial proportion of the population moved to a different parish within a year of diagnosis. Nevertheless, movement tended to be of limited distance (e.g., from one parish to a neighboring parish), and sensitivity analyses excluding research subjects with a change in parish between diagnosis and the last known residential address during the recruitment period yielded the same conclusions. PCaP is a population-based case-only study whose population consists entirely of research subjects with the same disease; in a case-control study, factors affecting nonresponse may differ for controls. PCa is most common in older men, many of whom have multiple comorbidities. We examined tumor grade and stage as markers of more aggressive or extensive cancer, but lack of data precluded addressing overall health or comorbidities as potential factors. Finally, while the study population's balanced racial mix provided greater power for racial comparisons, limited sample sizes constrained some analyses involving pre- and post-K comparisons.

This study contributes to the understanding of factors affecting nonresponse in PCa studies despite its limitations. For example, we observed that high neighborhood poverty increased the likelihood of cooperation by whites, but it did not alter blacks’ participation overall. Thus, researchers might try tactics other than increasing monetary incentives to boost participation of black men in PCa studies, such as utilizing refusal conversion strategies ([69], [70], [71]) tailored specifically to black men. Offering participants free medications or free treatment as an alternative to monetary incentives, where feasible, may increase participation of black PCa survivors [61]. Alternatively, incentives that appeal to participants’ family members might be offered as an option. Recruiters could explain in detail how the study could specifically benefit the black community. Researchers might also consider employing black community members to recruit black research subjects. Likewise, priority should be given to strategies that will increase participation of elderly black men in PCa studies. Mixed survey modes are known to increase response rates in the right contexts ([72], [73], [74], [75], [76]); for example, in a PCa study where research subjects are surveyed via computer-assisted telephone interviewing (CATI), older black men could be offered the option of computer-assisted personal interviewing (CAPI) [72]. CAPI generally yields a higher response rate than CATI because of the face-to-face interaction between interviewer and respondent. Face-to-face interviews carry a greater risk of social desirability bias, but this problem can be mitigated by using computer-assisted self-interviewing (CASI) for sensitive questions (i.e., a CAPI-CASI mix) [75].
Mode equivalence can be maximized in mixed-mode survey designs through attention to instrument design and careful implementation of survey protocol ([75], [76], [77]).

Our findings also provide insight for researchers about potential effects when unpredictable events bring major changes to their study population's environment and demographics in the middle of a recruitment process. It is crucial for study managers to monitor the survey process in real time and implement changes as needed to optimize response rates. Furthermore, we believe research on nonresponse must be conducted specific to the disease under study, because every disease-specific population is unique. One cannot, for example, simply assume that the factors determining participation in a pancreatic cancer study would be identical in a PCa study. Finally, an approach that works with a specific group in a specific study may not work with a similar group under different circumstances. For example, researchers might consider offering jazz concert tickets for relatives of elderly black PCa survivors as an alternative to monetary incentives in Louisiana, while researchers in other states might need to consider alternative incentives that would be attractive to the black communities in their study areas.

Further research is needed to confirm this study's findings. More detail on nonparticipant characteristics in studies of prostate and other cancers is critically needed to tailor enrollment approaches and minimize the challenges posed by nonresponse.


Acknowledgments

The authors thank the staff, advisory committees and research subjects participating in the PCaP study for their important contributions.

Author Contributions

  1. Conceptualization: EO NS CB JB EF.
  2. Formal analysis: EO NS JB.
  3. Methodology: EO NS JB CB EF.
  4. Software: EO NS JB.
  5. Validation: EO NS JB CB EF.
  6. Visualization: EO NS CB JB LJS JLM JTB EF.
  7. Writing – original draft: EO NS.
  8. Writing – review & editing: EO NS CB JB LJS JLM JTB EF.


References

  1. Stang A. Nonresponse research-an underdeveloped field in epidemiology. Eur J Epidemiol. 2003; 18: 929–931. pmid:14598921
  2. Bakke PS. Nonresponse in epidemiological studies–How to cope with it? Respir Med. 2010; 104: 323–324.
  3. Brick JM, Williams D. Explaining rising nonresponse rates in cross-sectional surveys. Ann Am Acad Polit SS. 2013; 645: 36–59.
  4. Curtin R, Presser S, Singer E. Changes in telephone survey nonresponse over the past quarter century. Public Opin Quart. 2005; 69(1): 87–98.
  5. Drivsholm T, Eplov LF, Davidsen M, Jorgensen T, Ibsen H, Hollnagel H et al. Representativeness in population-based studies: A detailed description of nonresponse in a Danish cohort study. Scand J Public Health. 2006; 34: 623–631. pmid:17132596
  6. Galea S, Tracy M. Participation rates in epidemiologic studies. Ann Epidemiol. 2007; 17: 643–653. pmid:17553702
  7. Hartge P. Participation in population studies. Epidemiology. 2006; 17(3): 252–254. pmid:16617271
  8. Heraty S, Kent P, O’Mahony E. Smoking behaviours, attitudes and exposure to smoking cessation information in a psychiatric setting; a cross-sectional comparative investigation, 13th Annual Research Conf. 2012; 36.
  9. Johnson T, Owens L. Survey response rate reporting in the professional literature. Paper presented at the 58th Annual Meeting of the American Association for Public Opinion Research, Nashville, TN; 2003.
  10. Mazloum M, Bailey HD, Heiden T, Armstrong BK, De Klerk N, Milne E. Participation in population-based case-control studies: does the observed decline vary by socioeconomic status? Paediatr Perinat Epidemiol. 2012; 26: 276–279.
  11. Morton L, Cahill J, Hartge P. Reporting participation in epidemiologic studies: a survey of practice. Am J Epidemiol. 2005; 163(3): 197–203. pmid:16339049
  12. Slattery ML, Edwards SL, Caan BJ, Kerber RA, Potter JD. Response rates among control subjects in case-control studies. Ann Epidemiol. 1995; 5: 245–249. pmid:7606315
  13. Tolonen H, Helakorpi S, Talala K, Helasoja V, Martelin T, Prattala R. 25-year trends and socio-demographic differences in response rates: Finnish adult health behaviour survey. Eur J Epidemiol. 2006; 21(6): 409–415. pmid:16804763
  14. Hall AE, Sanson-Fisher RW, Lynagh MC, Threlfall T, D’Este CA. Format and readability of an enhanced invitation letter did not affect participant rates in a cancer registry-based study: a randomized controlled trial. J Clin Epidemiol. 2013; 66: 85–94. pmid:23102853
  15. May L, Gudger G, Armstrong P, Brooks G, Hinds P, Bhat R et al. Multisite exploration of clinical decision making for antibiotic use by emergency medicine providers using quantitative and qualitative methods. Infect Control Hosp Epidemiol. 2014; 35(9): 1114–1125.
  16. Bethlehem JG, Kersten HMP. On the treatment of nonresponse in sample surveys. Journal of Official Statistics. 1985; 1: 287–300.
  17. Kessler R, Little R, Groves RM. Advances in strategies for minimizing and adjusting for survey nonresponse. Epidemiol Rev. 1995; 17(1): 192–204. pmid:8521937
  18. DeLeeuw E, Callegaro M, Hox J, Korendijk E, Lensvelt-Mulders G. The influence of advanced letters on response in telephone surveys: A meta-analysis. Public Opin Quart. 2007; 71(3): 413–443.
  19. Link M, Mokdad A, Town M, Weiner J, Roe D. Improving response rates for the BRFSS: Use of lead letters and answering machine messages. American Assoc for Public Opinion Research. 2003; 141–148.
  20. Singer E, Groves RM, Dillman DA, Eltinge JL, Little RJA, eds. The use of incentives to reduce nonresponse in household surveys. Wiley-Interscience; 2002; 163–177.
  21. Sinibaldi J, Jackle A, Tipping S, Lynn P. Interviewer characteristics, their doorstep behaviour, and survey co-operation. AAPOR, May 14–17, 2009; 5955–5969.
  22. Bakke P, Gulsvik A, Lilleng P, Overa O, Hanoa R, Eide GE. Postal survey on airborne occupational exposure and respiratory disorders in Norway: Causes and consequences of nonresponse. J Epidemiol Community Health. 1990; 44(4): 316–320. pmid:2277255
  23. Tibblin G. A population study of 50-year old men. An analysis of the non-participation group. Acta Med Scand. 1965; 178: 453–459. pmid:5838318
  24. Ford J, Howerton MW, Lai GY, Gary TL, Bolen S, Gibbons MC et al. Barriers to recruiting underrepresented populations to cancer clinical trials: a systematic review. Cancer. 2008; 112: 228–242. pmid:18008363
  25. Smith G, Thomas S, Williams M, Moody-Ayers S. Attitudes and beliefs of African Americans toward participation in medical research. J Gen Intern Med. 1999; 14(9): 537–546. pmid:10491242
  26. Symonds RP, Lord K, Mitchell AJ. Recruitment of ethnic minorities into cancer clinical trials: experience from the front lines. Br J Cancer. 2012; 107(7): 1017–1021. pmid:23011540
  27. Jackson R, Chambless LE, Yang K, Byrne T, Watson R, Folsom A et al. Differences between respondents and nonrespondents in a multicenter community-based study vary by gender and ethnicity. J Clin Epidemiol. 1996; 49(12): 1441–1446. pmid:8970495
  28. Sogaard AJ, Selmer R, Bjertness E, Thelle D. The Oslo health study: The impact of self-selection in a large, population-based survey. Int J Equity Health. 2004; 3:3. pmid:15128460
  29. Wilhelmsen L, Ljungbert S, Wedel H, Werko L. A comparison between participants and non-participants in a primary preventive trial. J Chron Dis. 1976; 29(5): 331–339. pmid:939796
  30. Couper M. Survey introductions and data quality. Public Opin Quart. 1997; 61(2): 317–338.
  31. Hussain A, Weisaeth L, Heir T. Nonresponse to a population-based postal questionnaire study. J Trauma Stress. 2009; 22(4): 324–328. pmid:19644976
  32. Rönmark E, Lundqvist A, Lundback B, Nystrom L. Non-responders to a postal questionnaire on respiratory symptoms and diseases. Eur J Epidemiol. 1999; 15(3): 293–299. pmid:10395061
  33. Camara RJA, Begre S, Von Kanel R. Avoidance and inhibition do not predict nonrespondent bias among patients with inflammatory bowel disease. J Epidemiol. 2011; 21(1): 44–51. pmid:21088371
  34. Moorman PG, Newman B, Millikan RC, Tse CK, Sandler DP. Participation rates in a case-control study: The impact of age, race, and race of interviewer. Ann Epidemiol. 1999; 9: 188–195.
  35. Burg JA, Allred SL, Sapp JH. The potential for bias due to attrition in the National Exposure Registry: an examination of reasons for nonresponse, nonrespondent characteristics, and the response rate. Toxicol Ind Health. 1997; 13(1): 1–13. pmid:9098946
  36. Cunradi CB, Moore R, Killoran M, Ames G. Survey nonresponse bias among young adults: the role of alcohol, tobacco, and drugs. Subst Use Misuse. 2005; 40: 171–185. pmid:15770883
  37. Eagan TM, Eide GE, Gulsvik A, Bakke PS. Nonresponse in a community cohort study: predictors and consequences for exposure-disease associations. J Clin Epidemiol. 2002; 55: 775–781. pmid:12384191
  38. Hille ET, Elbertse L, Gravenhorst JB, Brand R, Verloove-Vanhorick SP. Dutch POPS-19 Collaborative Study Group. Nonresponse bias in a follow-up study of 19-year-old adolescents born as preterm infants. Pediatrics. 2005; 116: e662–e666. pmid:16263980
  39. O’Neil MJ. Estimating the nonresponse bias due to refusals in telephone surveys. Public Opin Q. 1979; 43: 218–232.
  40. Partin MR, Malone M, Winnett M, Slater J, Bar-cohen A, Caplan L. The impact of survey nonresponse bias on conclusions drawn from a mammography intervention trial. J Clin Epidemiol. 2003; 56: 867–873. pmid:14505772
  41. Richiardi L, Boffetta P, Merletti F. Analysis of nonresponse bias in a population-based case-control study on lung cancer. J Clin Epidemiol. 2002; 55: 1033–1040. pmid:12464380
  42. Shahar E, Folsom AR, Jackson R. The effect of nonresponse on prevalence estimates for a referent population: insights from a population-based cohort study. Atherosclerosis Risk in Communities (ARIC) Study Investigators. Ann Epidemiol. 1996; 6: 498–506. pmid:8978880
  43. Lahaut VM, Jansen HA, Van de Mheen D, Garretsen HF. Non-response bias in a sample survey on alcohol consumption. Alcohol and Alcoholism. 2002; 37(3): 256–260. pmid:12003914
  44. Fejer R, Harvigsen J, Kyvik KO, Jordan A, Christensen HW, Hoilund-Carlsen PF. The Funen neck and chest pain study: analyzing nonresponse bias by using national vital statistics data. Eur J Epidemiol. 2006; 21(3): 171–180. pmid:16547831
  45. Reijneveld SA, Stronks K. The impact of response bias on estimates of health care utilization in a metropolitan area: the use of administrative data. Int J Epidemiol. 1999; 28: 1134–1140.
  46. Aanerud M, Braut H, Wentzel-Larsen T, Eagan TML, Bakke PS. Nonresponse in telephone surveys of COPD patients does not introduce bias. J Telemed Telecare. 2013; 1–5.
  47. Boström G, Hallqvist J, Haglund BJ, Romelsjo A, Svanstrom L, Diderichsen F. Socioeconomic differences in smoking in an urban Swedish population. The bias introduced by non-participation in a mailed questionnaire. Scand J Soc Med. 1993; 21(2): 77–82. pmid:8367686
  48. Hill A, Roberts J, Ewings P, Gunnell D. Nonresponse bias in a lifestyle survey. J Public Health Med. 1997; 19(2): 203–207. pmid:9243437
  49. Vink JM, Willemsen G, Stubbe JH, Middeldorp CM, Ligthart RS, Baas KD et al. Estimating nonresponse bias in family studies: Application to mental health and lifestyle. Eur J Epidemiol. 2004; 19: 623–630. pmid:15461193
  50. Bennet CM, Hill RE. A comparison of selected personality characteristics of responders and nonresponders to a mailed questionnaire study. J Educ Res. 1964; 58(4): 178–180.
  51. Cope RG. Nonresponse in survey research as a function of psychological characteristics and time of response. J Exp Educ. 1968; 36(3): 32–35.
  52. Gersen JA, McCreary CP. Personality comparisons of responders and nonresponders to a mailed personality inventory. Psychol Rep. 1983; 52: 555–562.
  53. Jacomb PA, Jorm AF, Korten AE, Christensen H, Henderson AS. Predictors of refusal to participate: a longitudinal health survey of the elderly in Australia. BMC Public Health. 2002; 2(4).
  54. Abel GA, Saunders CL, Lyratzopoulos G. Post-sampling mortality and non-response patterns in the English Cancer Patient Experience Survey: Implications for epidemiological studies based on surveys of cancer patients. Cancer Epidemiol. 2016; 41: 34–41.
  55. Brennan C, Oral E, Fontham E, Mohler J, Bensen J, Mishel M et al. The differences in quality of life in prostate cancer project: methods and design of a multidisciplinary population-based follow-up study. American Medical Journal. 2012; 3(2): 104–114.
  56. DeSantis CE, Lin CC, Mariotto AB, Siegel RL, Stein KD, Kramer JL et al. Cancer treatment and survivorship statistics. CA: A Cancer Journal for Clinicians. 2014; 64: 252–271.
  57. Gomella L, Johannes J, Trabulsi E, Thaler H. Current prostate cancer treatments: Effects on quality of life. Urology. 2009; 73(s5): s28–s35.
  58. Schroeder J, Bensen J, Su L, Mishel M, Ivanova A, Smith GJ et al. The North Carolina-Louisiana Prostate Cancer Project (PCaP): Methods and design of a multidisciplinary population-based cohort study of racial differences in prostate cancer outcomes. Prostate. 2006; 66: 1162–1176.
  59. AAPOR. Standard definitions: Final dispositions of case codes and outcome rates for surveys. 7th ed. The American Association of Public Opinion Research; 2011.
  60. Groves RM, Couper MP. Nonresponse in Household Interview Surveys. Wiley-Interscience; 1998.
  61. Corbie-Smith G, Thomas SB, Williams MV, Moody-Ayers S. Attitudes and beliefs of African Americans toward participation in medical research. J Gen Intern Med. 1999; 14: 537–546. pmid:10491242
  62. Durant RW, Legedza AT, Marcantonio ER, Freeman MB, Landon BE. Different types of distrust in clinical research among Whites and African Americans. Journal of the National Medical Association. 2011; 103: 123–130. pmid:21443064
  63. Scharff DP, Mathews KJ, Jackson P, Hoffsuemmer J, Martin E, Edwards D. More than Tuskegee: understanding mistrust about research participation. J Health Care Poor Underserved. 2010; 21: 879–97. pmid:20693733
  64. Falk WW, Hunt MO, Hunt LL. Hurricane Katrina and New Orleanians’ sense of place: Return and reconstitution or ‘Gone with the Wind’? DuBois Review. 2006; 3: 115–128.
  65. Elliott JR, Bellone-Hite A, Devine J. Unequal return: The uneven resettlements of New Orleans’s uptown neighborhoods. Organization & Environment. 2009; 22(4): 410–421.
  66. Fussell E, Sastry N, VanLandingham M. Race, socio-economic status, and return migration to New Orleans after Hurricane Katrina. Population & Environment. 2010; 31(1–3): 20–42.
  67. Fussell E. Constructing New Orleans, constructing race: A population history of New Orleans. Journal of American History. 2007; 94: 846–855.
  68. Logan JR. The impact of Katrina: Race and class in storm-damaged neighborhoods. Working paper, Spatial Structures in the Social Sciences, Brown University; 2006.
  69. Calderwood L, Plewis I, Ketende SC, Taylor R. Experimental testing of refusal conversion strategies in a large-scale longitudinal study. Working Paper. 2010; Centre for Longitudinal Studies, Institute of Education, University of London.
  70. Fricker SS. The Relationship between Response Propensity and Data Quality in the Current Population Survey and the American Time-use Survey. PhD Diss. University of Maryland; 2007.
  71. Lynn P, Clarke P, Martin J, Sturgis P. The effects of extended interviewer efforts on nonresponse bias. In Survey Nonresponse, eds. Groves RM, Dillman DA, Eltinge JL and Little RJA. New York: Wiley; 2002.
  72. Bethlehem J, Cobben F, Schouten B. Handbook of Nonresponse in Household Surveys. Wiley, Hoboken, New Jersey; 2011.
  73. Biemer PP, Lyberg LE. Introduction to Survey Quality. John Wiley and Sons, Hoboken, New Jersey; 2003.
  74. Buelens BJ, Van den Brakel J. On the necessity to include personal interviewing in mixed-mode surveys. Survey Practice. 2010; 3(5).
  75. DeLeeuw E. To mix or not to mix data collection modes in surveys. Journal of Official Statistics. 2005; 21(2): 233–255.
  76. Safir A, Goldenberg K. Mode effects in a survey of consumer expenditures. Proceedings of the American Statistical Association, Section on Survey Research Methods, Alexandria, VA: American Statistical Assoc. 2008; 4436–4443.
  77. Martin E, Childs J, DeMaio T, Hill J, Reiser C, Gerber E et al. Guidelines for Designing Questionnaires for Administration in Different Modes. U.S. Census Bureau, Washington, DC 20233; 2007.