Previous studies have shown that survey methodology can greatly influence prevalence estimates for alcohol and illicit drug use. The aim of this article is to assess the effect of data collection modes on alcohol misuse and drug use reports by comparing national estimates from computer-assisted telephone interviews (CATI) and audio computer-assisted self-interviews (A-CASI).
Design: Two national representative surveys conducted in 2005 in France by CATI (n = 24,674) and A-CASI (n = 8,111).
Measurements: Alcohol misuse according to the CAGE test, cannabis use (lifetime, last year, 10+ uses in the last month), and experimentation with cocaine, LSD, heroin, amphetamines and ecstasy were measured with the same questions and wordings in the two surveys.
Multivariate logistic regressions controlling for sociodemographic characteristics (age, educational level, marital status and professional status) were performed. Analyses were conducted on the whole sample and stratified by age (18–29 and 30–44 years old) and gender. Data for respondents aged 45–64 were not analysed because of limited numbers.
Overall national estimates were similar for 9 out of the 10 examined measures. However, after adjustment, A-CASI yielded higher reported use for most types of illicit drugs among the youngest men (adjusted odds ratio, or OR, of 1.64 [1.08–2.49] for cocaine, 1.62 [1.10–2.38] for ecstasy, 1.99 [1.17–3.37] for LSD, 2.17 [1.07–4.43] for heroin, and 2.48 [1.41–4.35] for amphetamines), whereas reported use amongst women was similar in CATI and A-CASI, except for LSD in the 30–44 age group (OR = 3.60 [1.64–7.89]). Reported alcohol misuse was higher with A-CASI for all ages and genders.
Citation: Beck F, Guignard R, Legleye S (2014) Does Computer Survey Technology Improve Reports on Alcohol and Illicit Drug Use in the General Population? A Comparison Between Two Surveys with Different Data Collection Modes in France. PLoS ONE 9(1): e85810. https://doi.org/10.1371/journal.pone.0085810
Editor: Natalie Walker, The National Institute for Health Innovation, New Zealand
Received: June 21, 2013; Accepted: December 2, 2013; Published: January 22, 2014
Copyright: © 2014 Beck et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: The authors have no support or funding to report.
Competing interests: The authors have declared that no competing interests exist.
More and more general population surveys using representative samples are attempting to assess adult drug use in European countries. In France, various surveys have been conducted since the 1990s. The National Health Barometer (HB) is the primary source of information about prevalence, correlates, and trends in substance use and misuse in France. This survey collects data every five years from a nationally representative sample on patterns and correlates of licit and illicit drug use and related problems, with a focus on cannabis use. It is a telephone-administered interview survey. The last but one iteration of this cross-sectional survey was conducted in 2005. A few months later, the Life Events and Health Survey (LEHS) was conducted, a face-to-face interview survey mainly dealing with violence and including an audio computer-assisted self-administered section (the A-CASI) to assess licit and illicit drug use. The questions in the two surveys were the same, based on the European Model Questionnaire elaborated in the late 1990s. Both surveys generated prevalence estimates for lifetime substance use and past-year substance use. The aim of this paper is to compare the results of these two national surveys in order to assess the impact of the technologies used on the measurement of sensitive issues.
Although a comparison between two French surveys among adolescents in the late 1990s has already been published, this is the first time that such a comparison for the whole population has been possible in France, with two large samples and two different data collection methods. The availability of these two large surveys carried out during the same year with nationally representative samples presents a unique opportunity to address the issue of data collection mode effects.
A number of studies in the literature have shown how methodology can influence prevalence estimates for alcohol or drugs in the adult population, as well as among adolescents. Although it seems difficult to draw conclusions about areas of disagreement in comparisons of this type, most authors agree that increased reporting of substance use is a sign of improved validity in the methodology, because these behaviours tend to be underreported. When the comparison is made on data collected in the same period of time, increased reporting should be attributed to the data collection mode implemented, as it cannot reflect a real increase.
A large number of studies over several decades have fairly clearly shown that self-administered questionnaires are more suitable than other data collection modes for collecting reports on sensitive behaviours. This result is particularly convincing among adolescents and young adults. Indeed, answers given to self-administered questionnaires, whether completed with paper and pen or on a laptop, seem more reliable, and are particularly well suited to reporting behaviours that could compromise the respondent in some way, such as intimate or painful feelings or illegal behaviours. In particular, this could be attributable to the absence of a “direct witness,” which ensures the respondent's anonymity. This data collection mode also decreases the inhibitory effect of the immediate environment (in particular family, when the survey takes place within a household). Undoubtedly, the choice of data collection mode cannot act as an absolute guarantee of quality. It requires strict compliance with a specific protocol (being alone in the room, preferably being questioned outside the home, etc.) to prevent the psychological or material environment in which the respondent is placed from undermining the advantages expected from this method.
Several studies on drug use have shown that self-administered pen-and-paper questionnaires and CASI provide similar results, both in the United States and in Europe. However, many studies argue for computer survey technology, especially for the A-CASI in adults and in adolescents, because this mode is presented as the best way to explicitly provide confidentiality and anonymity to the respondent.
In several other studies on methodology for general population surveys, the telephone interview has also been presented as a compelling alternative, as it is often considered more anonymous than face-to-face interviews with an investigator for health surveys, for sensitive topics, and especially for drug use. Phone surveys are also known to considerably reduce costs compared with face-to-face interviews.
Nevertheless, some issues have received less consideration. Our main research question was the following: Is a telephone interview equivalent to A-CASI for collecting information on drug use? From another perspective, we were also interested in measuring potential gender or age effects in these results, because these variables can more easily be taken into account than others in a multimode survey design in order to select the most appropriate data collection mode for each respondent. Methodological studies rarely receive sufficient funding to establish such thorough comparisons. But in the study of drug use, where accuracy is always questionable and where there are strong gender and age effects, these issues are nevertheless worth investigating. Age and gender differences in reported substance use prevalence rates can lead to different representations of those behaviours; these differences are probably also often linked, however, to differing propensities to declare such behaviours.
It is generally accepted that self-administration leads to more honest responses about sensitive behaviours or attitudes than interviewer administration. We thus expected that the A-CASI, which is often presented as the most private data collection mode, would correspondingly encourage more frequent reporting of these behaviours. The aim of this paper is to assess the differences in responses on drug use behaviours in A-CASI and telephone surveys. To do so, we compared two large random cross-sectional surveys conducted six months apart in France in 2005 and containing the same questions on drug use. We performed multivariate analyses stratified by age and gender, with special attention paid to young subjects (18–29 years old), among whom illicit substance use is most prevalent.
Because there are several differences between the HB and the LEHS in sampling procedures, sampling frame, interviewers and questionnaire, such a comparison cannot, stricto sensu, be considered an experimental design. These differences are possible sources of variation between HB and LEHS results. However, several researchers were involved in the preparation of both surveys, and this methodological project was designed prior to the surveys. Although everything was done in the survey design to favour the reliability of this comparison, it has not been possible to definitively untangle all these different effects. These points are discussed below. Nevertheless, a number of important studies on data collection mode effects have been conducted following the same procedure, which makes it possible to benefit from very large sample sizes compared to experimental designs.
All the data were analyzed anonymously. Participants provided verbal informed consent at the beginning of the questionnaire, which interviewers recorded by keyboard as part of the computer-assisted interviews. For participants under 18, verbal consent was obtained from the head of the family in the same way. All procedures underwent ethical review by the appropriate national agency, the National Commission for Data Processing and Liberties (CNIL), and were approved. The CNIL considered that written consent was not required for such surveys.
Both surveys used a household-based two-stage sampling procedure (household, then an individual selected using the next-birthday method), and the final samples were large: 9,953 individuals aged 18–75 (of whom 8,111 were aged 18–64) were interviewed by A-CASI, and 30,514 aged 12–75 (of whom 24,674 were aged 18–64) were interviewed by CATI. A letter was sent prior to the survey to improve response rates. Given that the questionnaire was long (45 minutes on average for the CATI and 1 hour for the A-CASI) and sometimes sensitive, the interviewers had received specific training. The illicit substance use part of the questionnaires was restricted to respondents aged 18–64.
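For intuition, the next-birthday rule used at the second sampling stage can be sketched as follows. This is an illustrative sketch only, not the surveys' actual software; the function names and the dictionary structure are ours:

```python
from datetime import date

def days_to_next_birthday(birth: date, today: date) -> int:
    """Days from `today` until the next anniversary of `birth` (0 if today)."""
    year = today.year
    try:
        nxt = birth.replace(year=year)
    except ValueError:            # Feb 29 birthday in a non-leap year
        nxt = date(year, 3, 1)
    if nxt < today:               # birthday already passed this year
        try:
            nxt = birth.replace(year=year + 1)
        except ValueError:
            nxt = date(year + 1, 3, 1)
    return (nxt - today).days

def select_respondent(members: dict, today: date) -> str:
    """Pick the household member whose birthday comes soonest after `today`.
    `members` maps a member identifier to a date of birth."""
    return min(members, key=lambda m: days_to_next_birthday(members[m], today))
```

For example, in a household visited on 1 November 2005, a member born on 25 December would be selected over one born on 1 June, since the December birthday comes first.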
The Health Barometer (HB) was collected by a private firm on behalf of the National Institute for Prevention and Health Education (Inpes), a public establishment created in 2002 whose mission is to implement prevention and health education policies. The HB is a repeated cross-sectional telephone survey (CATI) representative of the non-institutionalized, civilian population of France. Data were collected between November 2004 and February 2005. To ensure representativeness, a subsample of 3,842 individuals with only a cell phone was added to the 26,672 individuals possessing a land-line in their homes. For this cell phone sample, the letter was not sent before the call, but the interviewer offered to send it. The participation rate was 65%. The data were weighted according to the number of telephone lines and the number of individuals in the household, to compensate for the greater probability of being selected in households with several telephone lines and the lesser probability in households made up of many individuals. Data were then adjusted to match the French population structure in terms of gender, age group, region, town size and education, as derived from the 2005 French census, using a calibration procedure based on a general regression (GREG) estimator carried out with R software. Precise descriptions of sampling and other survey procedures are available elsewhere. The final sample used here comprised 24,674 persons aged 18–64.
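The line-and-size weighting logic described above can be sketched as an inverse-probability computation. This is a simplified illustration under our own assumptions (the normalisation step and function names are ours; the actual HB weighting also involved the GREG calibration, which is not shown):

```python
def base_weight(phone_lines: int, household_size: int) -> float:
    """Inverse-probability design weight: a household's selection
    probability grows with its number of phone lines, and the selected
    individual's probability within the household is 1/household_size,
    so the weight is proportional to household_size / phone_lines."""
    if phone_lines < 1 or household_size < 1:
        raise ValueError("phone_lines and household_size must be >= 1")
    return household_size / phone_lines

def normalised_weights(sample):
    """Scale base weights so they sum to the sample size.
    `sample` is a list of (phone_lines, household_size) tuples."""
    raw = [base_weight(lines, size) for lines, size in sample]
    factor = len(raw) / sum(raw)
    return [w * factor for w in raw]
```

So a person in a four-member household with two lines gets twice the base weight of a person in a two-member household with two lines, before calibration.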
The Life events and health survey (LEHS) was collected by National Institute of Statistics and Economic Studies (INSEE) interviewers for the Research, Studies, Evaluation and Statistics Directorate (DREES) of the French Ministry of Health and Sport. It was a face-to-face survey mainly exploring health and difficult life events. This cross-sectional survey is representative of the non-institutionalised, civilian population of France, aged 18–75, based on the most recent national census. Data were collected from September 2005 to November 2005. The participation rate was 72%. The most sensitive questions (sexual behaviour, alcohol and drug use) were asked via A-CASI. The respondents' privacy was ensured as the interviewer, who was alone with the respondent, left the room for the A-CASI part of the questionnaire. Out of 9,953 people interviewed face-to-face, 9,538 (95.8%) agreed to complete the interview with the A-CASI. Data were also adjusted to the French population structure as obtained from the “Continuous employment survey” carried out in 2005, both for the whole sample and for respondents who answered the A-CASI questionnaire. Precise descriptions of sampling and other survey procedures are available elsewhere.
Both the HB and LEHS surveys rely on complex sampling: two stages of simple random sampling (SRS) for the HB (phone number, then individual), and stratified three-stage sampling for the LEHS (stratification according to the size and socio-economic status of the area of residence; urban area for the first stage, household for the second, then individual for the third). Both surveys aimed to interview the same population, but with different sampling strategies, and any individual could in principle have been interviewed in both surveys, although this would be very rare (less than 0.001%). The two datasets were pooled in order to estimate adjusted differences of A-CASI (LEHS) vs CATI (HB) in multivariate logistic regressions. The sampling design (stratification and clustering in the LEHS) and the sampling weights were taken into account in the analyses by considering the HB as a separate stratum containing as many clusters as individuals.
The two surveys employed similar assessments to estimate prevalence for lifetime alcohol and drug use, problematic alcohol use, and 12-month and 30-day cannabis use. The CAGE test was used to assess problematic alcohol use. This test aims to measure the risk of alcohol dependency. It is made up of four simple binary questions:
- Have you ever felt you needed to Cut down on your drinking?
- Have people Annoyed you by criticizing your drinking?
- Have you ever felt bad or Guilty about drinking?
- Have you ever felt you needed a drink first thing in the morning (Eye-opener) to steady your nerves or to feel better?
People with two or more positive answers have a high probability of being excessive alcohol drinkers or alcohol-dependent.
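Scoring the CAGE screen is thus a simple count over the four binary items against a cut-off of two. A minimal sketch (the function name and parameter names are ours):

```python
def cage_positive(cut_down: bool, annoyed: bool, guilty: bool,
                  eye_opener: bool, threshold: int = 2) -> bool:
    """True when at least `threshold` of the four binary CAGE items are
    answered positively; the conventional cut-off is two or more."""
    score = sum((cut_down, annoyed, guilty, eye_opener))
    return score >= threshold
```

For instance, a respondent reporting only the need to cut down would screen negative, while one also reporting guilt about drinking would screen positive.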
Bivariate analyses with qualitative variables were performed using design-based Pearson chi-squared tests, for the whole sample and by gender. Logistic regressions controlling for age, gender, marital and employment status, and educational level were used to compute adjusted odds ratios (aOR) by data collection mode for a large set of substance use indicators. Analyses were first performed on the whole sample (18–64 years old), and then stratified by gender and age group (18–29 and 30–44 years old), using SAS V9.3.2 and Stata V10.1. This strategy makes it possible, first, to take into account the strong age- and gender-related variations in psychoactive substance use and, second, to allow for possible interactions between these variables and the data collection mode. Data from respondents aged 45–64 were not analysed because of very low illicit drug use prevalences.
Differences were considered significant at the 0.05 level and 95% confidence intervals were computed.
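For intuition about the form of these estimates, a crude (unadjusted) odds ratio and its 95% Wald confidence interval can be computed from a 2×2 mode-by-outcome table as below. This is only an illustrative sketch: the figures reported in this paper are model-adjusted ORs from design-based logistic regressions, not crude ratios, and the cell labels here are our own:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Crude odds ratio with a Wald confidence interval from a 2x2 table:
    a = positives in mode 1, b = negatives in mode 1,
    c = positives in mode 2, d = negatives in mode 2.
    CI is computed on the log-odds scale, then exponentiated."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

The interval is asymmetric around the point estimate because it is symmetric on the log scale, which matches the shape of the intervals reported in the tables.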
Table 1 shows that several differences between respondents to the CATI survey and respondents to the A-CASI survey remained even after weighting. There were no differences in gender and age structure between the two samples. However, with respect to level of education, in the A-CASI there were slightly fewer individuals with less than the Baccalauréat (final secondary school diploma) among 18- to 64-year-olds (33% versus 36% on the CATI), although this variable was part of the weighting process. This may be explained by differences in survey and weighting methods, the weighting process having been performed on the whole sample in each case (15–75 years old in HB, 18–75 years old in LEHS). Subjects interviewed in the LEHS were more likely to be employed and to live with a partner.
With regard to substance use prevalences, the most significant differences were with respect to alcohol-related behaviours (Table 2, Table 3 and Table 4), both for the samples overall and by gender. Among A-CASI respondents who reported lifetime alcohol use, 15.4% were CAGE-positive, whereas this was the case for only 10.8% of CATI respondents. Among the four CAGE questions, the main difference was in responses relating to the need to cut down on alcohol use: 20.4% responded positively to this question when it was posed via A-CASI, vs. only 13.7% when it was asked by CATI. Concerning illicit drugs, the differences were particularly small and never significant, whether taken separately or as a whole.
Table 5 shows that after adjustment for age, gender, marital and employment status and educational level in logistic regressions, differences in CAGE test responses remained: compared to the CATI, the A-CASI yielded more positive responses for the whole sample (aOR = 1.60 [1.38–1.77]) and for both age groups (aOR = 1.51 [1.16–2.00] in 18–29 and aOR = 1.31 [1.07–1.60] in 30–44). In the male subsample, ORs were significant for the whole sample (aOR = 1.63 [1.41–1.89]) and for both age groups (aOR = 1.48 [1.07–2.04] in 18–29 and aOR = 1.37 [1.07–1.74] in 30–44); in the female subsample, the OR was significant only for the whole sample (aOR = 1.41 [1.13–1.75]) and for the 18–29 age group (aOR = 1.61 [1.02–2.55]). In addition, people more often reported regular cannabis use on the A-CASI (aOR = 1.32 [1.04–1.67]), particularly women in the 30–44 age group (aOR = 2.56 [1.11–5.88]), while men aged 30 to 44 more often reported having ever used cannabis on the CATI (aOR = 0.81 [0.68–0.97]). Lifetime use of illicit drugs other than cannabis was less often reported by men in the 30–44 age group on the A-CASI (aOR = 0.61 [0.42–0.89]). People aged 18 to 29 more often reported lifetime use of illicit drugs via A-CASI (aOR = 1.45 [1.09–1.91]), especially men (aOR = 1.66 [1.18–2.33]). Thus, men aged 18–29 more frequently reported having used ecstasy, amphetamines, LSD, cocaine and heroin on the A-CASI questionnaire. Among women, results were close for lifetime use of illicit drugs other than cannabis, except for LSD, which they reported more often on the A-CASI (aOR = 3.60 [1.64–7.89]).
Although all representative surveys across Europe show age and gender disparities in drug use, few methodological studies have been undertaken to test the influence of data collection mode on the drug prevalences reported in surveys, particularly in different age and gender groups. A major feature of this analysis was our use of two large samples from the non-institutionalized French population to compare prevalences of drug use according to data collection mode in different age groups, covering a large part of the population (18–64 years), and in both genders.
We found that, while overall prevalences were very similar for the two data collection modes, differences appeared according to the age and gender of the respondents. Males between the ages of 18 and 29 more often responded positively on the CAGE questionnaire and more often reported lifetime use of ecstasy, amphetamines, LSD, cocaine and heroin via A-CASI than via CATI. Females aged 30 to 44 were more likely to report regular cannabis use and lifetime LSD use via A-CASI.
According to our results, the A-CASI seems particularly suitable for young men, whereas telephone surveys seem well suited to people aged 30 and over, particularly men. Reporting of cannabis use (current and regular) does not appear to be influenced by data collection mode. Differences between data collection modes were less pronounced among women, especially for lifetime use of illicit drugs other than cannabis.
Comparison with other studies
It is generally assumed that under-reporting can be a serious problem for highly sensitive or illegal behaviours such as alcohol misuse or illicit drug use, that under-reporting can be reduced by greater privacy in the mode of survey administration, and that self-administration leads to more honest responses on sensitive behaviours. Our results suggest that this general pattern may be true for young males, but not for all age and gender groups.
A differential effect of the data collection mode by gender has already been reported in a Belgian study. This comparison of self-administered pen-and-paper questionnaires and A-CASI among school children (aged – years old) showed that using a computer yielded higher prevalences among girls and lower prevalences among boys. These results were found in adolescents, while ours were derived from young adults; notably, they are not in accordance with the results for our youngest age group (18–29 years old). This suggests that differential gender effects should be analysed more deeply in future research.
Interpretations of the findings
Apart from data collection mode, which the literature considers a major influence on the way respondents answer sensitive questions, several other methodological differences could influence prevalence estimates for substance use. Nevertheless, the sampling procedures and the sampling weights were taken into account in the analyses, by considering the HB as a separate stratum containing as many clusters as individuals.
Some of the more salient differences between the Health Barometer and the LEHS include privacy considerations (presence of the interviewer or of other people) and response rate. On the one hand, the LEHS was performed face-to-face by interviewers from the French National Institute for Statistics and Economic Studies (INSEE), using computerized self-administration (A-CASI) to collect drug-related data. On the other hand, the HB was a telephone survey conducted by a private polling agency. It is difficult to know, however, how much the type of interviewer influences respondents' perception of privacy and confidentiality. Effects could differ by gender and age group. For example, older people may be more mistrustful of computers, perceiving them as more likely to enable breaches of confidentiality, or they may simply have poorer computing skills. More generally, men may be more mistrustful of a face-to-face interviewer, as they are generally less trusting of surveys and less willing to participate than women; it could also be because they are more prone than women to use illicit drugs. Conversely, women may be more reluctant to admit to problems and deviant behaviours, particularly drug use: these are seen as male behaviours that generally expose women to formal or informal reproach. A setting free from direct interaction with another person could therefore be favourable. Our results are in accordance with these hypotheses, but additional investigations are needed to explore why the results vary according to substance.
Although our study is based on large representative samples using the same questions, it is subject to several limitations that are common in such methodological comparisons. The interviews were conducted by trained interviewers and the response rates were satisfactory for such health surveys. However, selection bias cannot be ruled out, and some populations, especially the most deprived such as homeless people, are likely to be under-represented in both surveys, even though some were interviewed thanks to the HB subsample based on mobile phone numbers. However, due to the small size of such groups, this has only a small effect on population-level estimates, as has already been shown for alcohol prevalence.
In general population surveys, recall bias in substance use reporting is a major concern. This is considered a threat to survey measurements of alcohol consumption in general. Indeed, survey questions invite error because they do not help respondents recall all their drinking occasions extensively, or because respondents must answer based on standard drink sizes that often do not match their own drinking style. Concerning illicit drugs, recall bias is probably less important than for alcohol consumption. In studies examining agreement between timeline follow-back self-reports and biological measures for illicit substances, agreement rates were considered high. However, longitudinal cohort studies have suggested that re-interviews about drug use often lead to recanting, resulting in decreased reports of lifetime substance use.
Several of the substance use behaviours are very rare: for example, lifetime LSD and heroin use do not exceed 1% among women. This could explain the wide 95% confidence intervals obtained for the lifetime LSD use odds ratio for women aged 30–44 ([1.64–7.89]) and for the lifetime heroin use odds ratio for women aged 18–29 ([0.25–5.33]). However, this affects neither the salience of the approach nor the overall quality of the models.
Differences were considered significant at the 0.05 level, and no adjustment for multiple comparisons was made; the present analysis, however, serves a descriptive purpose.
The few months between the two surveys could be put forward as a possible explanatory factor for the differences, in particular concerning seasonal behaviours such as alcohol intake, with differences between holidays and normal working weeks, New Year's Eve parties, etc. Several studies have shown a seasonal effect in alcohol use; this effect is particularly strong for specific populations such as professional sportsmen, and is mainly a “January effect” in most European countries and the Northern Hemisphere. But this factor probably plays a minor role, because the HB fieldwork was interrupted from 20 December 2004 to 10 January 2005 in order to minimize the influence of both overindulgence during the holiday season and New Year's resolutions toward temperance. Moreover, except for regular cannabis use, which is calculated on the last 30 days, the indicators relied on long periods of time (lifetime or last 12 months, including all seasons for all respondents). Such indicators are much less affected by seasonal effects than daily or recent use of alcohol or drugs.
The sampling frames used in the two surveys (persons living in households in the LEHS, members of households equipped with a landline or mobile telephone in the HB) could also explain part of the differences in prevalences. However, individuals without a landline but with a mobile phone were included in the HB sample, and households with no telephone are quite rare in France (about 1%). Moreover, a study conducted on several editions of the National Household Survey on Drug Abuse (NHSDA) found very similar levels of drug use in reachable populations (about 80,000 individuals) and among individuals with no telephone (5,800 individuals). The differences, although they varied according to the substance, were always small.
The LEHS had a slightly higher response rate than the HB (72% vs. 65%), a factor that would generally be expected to increase substance use estimates, as difficult-to-reach respondents have been shown to have higher rates of risky behaviours, and of substance use in particular. Recent results show, however, that this effect is probably negligible compared with the effect of the data collection mode. Moreover, despite the efforts made to reach households (up to 20 calls before giving up on a phone number), 8% were unreachable in the phone survey.
There could also be a halo effect: in other words, the influence of questions occurring earlier in the questionnaires. One study showed that having opportunities to report positive behaviours at the beginning of a survey legitimised the later reporting of practices that are difficult to admit to. In addition, the closer the theme covered by the survey was to a theme considered sensitive by public opinion (such as violence or suicide), the higher the reported levels of alcohol use. The results of another study suggested that the larger the number of questions on a given theme, the greater the probability of obtaining a positive answer concerning deviant behaviour on that theme.
Indeed, although the questions in the drug module were designed to be identical, the questionnaires on the LEHS and the HB nevertheless slightly differed. In both surveys, the drug module was placed toward the end of the questionnaire, but the questions preceding this module were not the same, which could have intensified a possible halo effect. It is particularly difficult to untangle a factor of this kind from other possible effects, in particular those related to data collection mode. Nevertheless, there were so few differences in the design or in the wordings of the questions that this issue is unlikely to have affected drug use prevalence estimates.
The results of this study are twofold. First, despite slight differences in the main themes broached in the questionnaires and different data collection modes, prevalences of illicit drug use appeared quite similar between the surveys, unlike alcohol misuse. Second, there were marked differences according to gender and age. On one hand, computer survey technology improved reports on alcohol misuse for the whole population, and reports on illicit drug use for young men only. On the other hand, the telephone survey yielded very similar results for women for most indicators. The methodological research discussed above demonstrated the impact of the mode of data collection on the quality of responses some time ago. The prevalences found in our study do not, however, clearly demonstrate the unilateral superiority of one mode for collecting data on sensitive topics. From the current point of view, according to which higher prevalences are a sign of more reliable reporting, the A-CASI seems more suitable for young men, whereas the telephone interview offers convincing results for women and people aged 30 and over. Thus, our results may support multi-mode approaches as suitable solutions to improve response quality in general population surveys, although they may lead to very complex designs. Further research is thus needed to understand gender and age differences and to replicate this study. Beyond the two modes chosen for these surveys, other promising modes such as Telephone Audio Computer-Assisted Self-Interviewing (T-ACASI) should certainly be considered in the European setting, following recent experiments. Finally, our findings underline that while computer survey technology seems to improve reports on alcohol and illicit drug use in the general population, CATI remains an efficient mode, sometimes probably even more suitable for elderly people or for women.
We would like to thank Catherine Cavalin (LEHS coordinator), Nicolas Razafindratsima (INED) and Jean-Baptiste Richard (INPES) for their valuable comments and assistance.
Conceived and designed the experiments: FB. Analyzed the data: RG SL. Contributed reagents/materials/analysis tools: FB RG SL. Wrote the paper: FB RG SL. Conducted the literature review: FB.
- 1. Beck F, Gautier A, Guilbert P (2007) Baromètre santé 2005. Attitudes et comportements de santé. Saint-Denis: INPES. 608 p.
- 2. Cavalin C (2009) L'élaboration du questionnaire et du protocole de collecte: innovations et précautions méthodologiques. In: Beck F, Cavalin C, Maillochon F, editors. Violences et santé en France: Etats des lieux. Paris: La documentation française.
- 3. Bless R, Korf D, Riper H, Diemel S (1997) Improving the Comparability of General Population Surveys on Drug Use in the European Union. Lisbon: EMCDDA.
- 4. Beck F, Peretti-Watel P (2002) The Impact of Data Collection Methodology on the Reporting of Illicit Drug Use by Adolescents. Population 57: 571–591.
- 5. Gmel G (2000) The effect of mode of data collection and of non-response on reported alcohol consumption: a split-sample study in Switzerland. Addiction 95: 123–134.
- 6. Midanik LT, Greenfield TK (2003) Telephone versus in-person interviews for alcohol use: results of the 2000 National Alcohol Survey. Drug Alcohol Depend 72: 209–214.
- 7. Greenfield TK, Midanik LT, Rogers JD (2000) Effects of telephone versus face-to-face interview modes on reports of alcohol consumption. Addiction 95: 277–284.
- 8. Miller JW, Gfroerer JC, Brewer RD, Naimi TS, Mokdad A, et al. (2004) Prevalence of adult binge drinking: a comparison of two national surveys. Am J Prev Med 27: 197–204.
- 9. Aquilino WS, Lo Sciouto LA (1990) Effect of interview mode on self-reported drug use. Public Opinion Quarterly 54: 362–393.
- 10. Grucza RA, Abbacchi AM, Przybeck TR, Gfroerer JC (2007) Discrepancies in estimates of prevalence and correlates of substance use and disorders between two national surveys. Addiction 102: 623–629.
- 11. Turner CF, Ku L, Rogers SM, Lindberg LD, Pleck JH, et al. (1998) Adolescent sexual behavior, drug use, and violence: increased reporting with computer survey technology. Science 280: 867–873.
- 12. Gfroerer J, Wright D, Kopstein A (1997) Prevalence of youth substance use: the impact of methodological differences between two national surveys. Drug Alcohol Depend 47: 19–30.
- 13. Kann L, Brener ND, Warren CW, Collins JL, Giovino GA (2002) An assessment of the effect of data collection setting on the prevalence of health risk behaviors among adolescents. J Adolesc Health 31: 327–335.
- 14. Turner CF, Lessler JT, Devore JW (1992) Effects of mode of administration and wording on reporting drug use. In: Turner C, Lessler J, Gfroerer J, editors. Survey measurement of drug use, methodological issues. Washington, DC: US Department of Health and Human Services, Government Printing Office.
- 15. Hochstim JR (1967) A Critical Comparison of Three Strategies of Collecting Data from Households. Journal of the American Statistical Association 62: 976–989.
- 16. Wiseman F (1971) Methodological bias in public opinion surveys. Public Opinion Quarterly 36: 105–108.
- 17. Siemiatycki J, Campbell S, Richardson L, Aubert D (1984) Quality of response in different population groups in mail and telephone surveys. Am J Epidemiol 120: 302–314.
- 18. Smart RG (1985) When to do cross-sectional studies. In: Robins L, editor. Studying drug abuse. New Brunswick, NJ: Rutgers University Press. pp. 47–56.
- 19. Aquilino WS (1994) Interview mode effects in surveys of drug and alcohol use: A field experiment. Public Opinion Quarterly 58: 210–240.
- 20. Kraus L, Augustin R (2001) Measuring alcohol consumption and alcohol-related problems: comparison of responses from self-administered questionnaires and telephone interviews. Addiction 96: 459–471.
- 21. Beebe TJ, McRae JA Jr, Harrison PA, Davern ME, Quinlan KB (2005) Mail surveys resulted in more reports of substance use than telephone surveys. J Clin Epidemiol 58: 421–424.
- 22. Needle RH, Jou SC, Su SS (1989) The impact of changing methods of data collection on the reliability of self-reported drug use of adolescents. Am J Drug Alcohol Abuse 15: 275–289.
- 23. Needle R, McCubbin H, Lorence J, Hochhauser M (1983) Reliability and validity of adolescent self-reported drug use in a family-based study: a methodological report. Int J Addict 18: 901–912.
- 24. Fendrich M, Johnson TP (2001) Examining prevalence differences in three national surveys of youth: Impact of consent procedures, mode, and editing rules. Journal of Drug Issues 31: 615–642.
- 25. Babor TF, Kranzler HR, Lauerman RJ (1989) Early detection of harmful alcohol consumption: Comparison of clinical, laboratory, and self-report screening procedures. Addictive Behaviors 14: 139–157.
- 26. Tourangeau R, Rips LJ, Raskinki K (2000) The Psychology of Survey Response. Cambridge, UK: Cambridge University Press.
- 27. Rouse BA, Kozel NJ, Richards LG (1985) Self-Report Methods of Estimating Drug Use: Meeting Current Challenges to Validity. Rockville: Department of Health and Human Services (NIDA).
- 28. Tourangeau R, Smith TW (1996) Asking Sensitive Questions: The Impact Of Data Collection Mode, Question Format, And Question Context. Public Opinion Quarterly 60: 275–304.
- 29. Beebe TJ, Harrison PA, Mcrae JA Jr, Anderson RE, Fulkerson JA (1998) An Evaluation of Computer-Assisted Self-Interviews in a School Setting. Public Opinion Quarterly 62: 623–632.
- 30. Webb PM, Zimet GD, Fortenberry JD, Blythe MJ (1999) Comparability of a computer-assisted versus written method for collecting health behavior information from adolescent patients. J Adolesc Health 24: 383–388.
- 31. Hallfors D, Khatapoush S, Kadushin C, Watson K, Saxe L (2000) A comparison of paper vs computer-assisted self interview for school alcohol, tobacco, and other drug surveys. Evaluation and Program Planning 23: 149–155.
- 32. Bless R, Korf D, Riper H, Diemel S, Arvidsson O, et al. (1999) Comparability of general population surveys. Lisbon: EMCDDA.
- 33. Vereecken CA, Maes L (2006) Comparison of a computer-administered and paper-and-pencil-administered questionnaire on health and lifestyle behaviors. J Adolesc Health 38: 426–432.
- 34. Newman JC, Des Jarlais DC, Turner CF, Gribble J, Cooley P, et al. (2002) The differential effects of face-to-face and computer interview modes. Am J Public Health 92: 294–297.
- 35. Rodgers SM, Gribble JN, Turner CF, Miller HG (1999) [Computer-administered self-interviews and the measurement of sensitive behaviors]. Population 54: 231–250.
- 36. Thornberry OT Jr (1987) An experimental comparison of telephone and personal health interview surveys. Vital Health Stat 2: 1–4.
- 37. Biemer PP (2001) Nonresponse Bias and Measurement Bias in a Comparison of Face to Face and Telephone Interviewing. Journal of Official Statistics 17: 295–320.
- 38. Brogger J, Bakke P, Eide GE, Gulsvik A (2002) Comparison of telephone and postal survey modes on respiratory symptoms and risk factors. Am J Epidemiol 155: 572–576.
- 39. ACSF (1992) Analysis of sexual behaviour in France (ACSF). A comparison between two modes of investigation: telephone survey and face-to-face survey. ASCF principal investigators and their associates. AIDS 6: 315–323.
- 40. Mangione T, Hingson R, Barrett J (1982) Collecting Sensitive Data: A Comparison of Three Survey Strategies. Sociological Methods and Research 10: 337–346.
- 41. Czaja R (1987) Asking Sensitive Behavioral questions in Telephone Interviews. Int Q Community Health Educ 8: 23–32.
- 42. de Leeuw ED, Van der Zouwen J (1988) Data quality in telephone and face to face surveys: a comparative meta-analysis. In: Groves RM, Biemer PP, Lyberg LE, Massey JT, Nicholls WL, et al., editors. Telephone Survey Methodology. New York: John Wiley & Sons. pp. 283–299.
- 43. Sykes W, Collins M (1988) Effects of mode of interview: experiments in the UK. In: Groves RM, Biemer PP, Lyberg LE, Massey JT, Nicholls WL, et al., editors. Telephone Survey Methodology. New York: John Wiley & Sons. pp. 301–320.
- 44. Gfroerer JC, Hughes AL (1991) The feasibility of collecting drug abuse data by telephone. Public Health Rep 106: 384–393.
- 45. Aquilino WS, Wright DL (1996) Substance use estimates from RDD and area probability samples: Impact of differential screening methods and unit nonresponse. Public Opinion Quarterly 60: 563–573.
- 46. Guilbert P, Baudier F, Arwidson P (1999) [Comparison of two types of behavior and attitude surveys on alcohol, tobacco and illegal drug use]. Rev Epidemiol Sante Publique 47: 129–138.
- 47. Beck F (2000) [The temptation of representativeness in general population surveys on drug use]. Psychotropes, revue internationale des toxicomanies 6: 7–25.
- 48. Mayfield D, McLeod G, Hall P (1974) The CAGE questionnaire: validation of a new alcoholism screening instrument. Am J Psychiatry 131: 1121–1123.
- 49. EMCDDA (2011) 2011 Annual report on the state of the drugs problem in Europe. Lisbon: EMCDDA. 105 p.
- 50. Beck F, Chaker S, Legleye S (2005) [Gender differences in psychoactive substance use]. In: 2005 National report (2004 data) to the EMCDDA by the Reitox National Focal Point France: New developments, trends and in-depth information on selected issues (French version). St Denis: OFDT. pp. 73–91.
- 51. Makela P, Huhtanen P (2010) The effect of survey sampling frame on coverage: the level of and changes in alcohol-related mortality in Finland as a test case. Addiction 105: 1935–1941.
- 52. Gmel G, Daeppen JB (2007) Recall bias for seven-day recall measurement of alcohol consumption among emergency department patients: implications for case-crossover designs. J Stud Alcohol Drugs 68: 303–310.
- 53. Kaskutas LA, Graves K (2000) An alternative to standard drinks as a measure of alcohol consumption. J Subst Abuse 12: 67–78.
- 54. Hjorthøj C, Hjorthøj A, Nordentoft M (2012) Validity of Timeline Follow-Back for self-reported use of cannabis and other illicit substances-systematic review and meta-analysis. Addict Behav 37: 225–233.
- 55. Fendrich M, Mackesy-Amiti ME (2000) Decreased drug reporting in a cross-sectional student drug use survey. J Subst Abuse 11: 161–172.
- 56. Dietze PM, Fitzgerald JL, Jenkinson RA (2008) Drinking by professional Australian Football League (AFL) players: prevalence and correlates of risk. Med J Aust 189: 479–483.
- 57. Carpenter C (2003) Seasonal variation in self-reports of recent alcohol consumption: racial and ethnic differences. J Stud Alcohol 64: 415–418.
- 58. Weir E (2003) Seasonal drinking: let's avoid the “January effect”. CMAJ 169: 1186.
- 59. McAuliffe WE, LaBrie R, Woodworth R, Zhang C (2002) Estimates of potential bias in telephone substance abuse surveys due to exclusion of households without telephones. Journal of Drug Issues 32: 1139–1154.
- 60. Nelson DE, Powell-Griner E, Town M, Kovar MG (2003) A comparison of national estimates from the National Health Interview Survey and the Behavioral Risk Factor Surveillance System. Am J Public Health 93: 1335–1341.
- 61. Cottler LB, Zipp JF, Robins LN, Spitznagel EL (1987) Difficult-to-recruit respondents and their effect on prevalence estimates in an epidemiologic survey. American Journal of Epidemiology 125: 329–339.
- 62. Beck F, Legleye S, Peretti-Watel P (2004) Using the telephone in general population surveys on drugs. In: Decorte T, Korf DJ, editors. European studies on drugs and drug policy Proceedings of the 14th International Conference of the European Society for Social Drug Research (ESSD) in Ghent from 2–4 October 2003. Brussels: VUB Press. pp. 113–140.
- 63. Fowler FJ Jr, Stringfellow VL (2001) Learning From Experience: Estimating Teen Use of Alcohol, Cigarettes, and Marijuana From Three Survey Protocols. Journal of Drug Issues 31: 643–664.
- 64. Brittingham A, Tourangeau R, Kay W (1998) Reports of smoking in a national survey: data from screening and detailed interviews, and from self- and interviewer-administered questions. Ann Epidemiol 8: 393–401.
- 65. Moskowitz JM (2004) Assessment of Cigarette Smoking and Smoking Susceptibility among Youth: Telephone Computer-Assisted Self-Interviews versus Computer-Assisted Telephone Interviews. Public Opinion Quarterly 68: 565–587.
- 66. Villarroel MA, Turner CF, Rogers SM, Roman AM, Cooley PC, et al. (2008) T-ACASI reduces bias in STD measurements: the National STD and Behavior Measurement Experiment. Sex Transm Dis 35: 499–506.
- 67. Turner CF, Villarroel MA, Rogers SM, Eggleston E, Ganapathi L, et al. (2005) Reducing bias in telephone survey estimates of the prevalence of drug use: a randomized trial of telephone audio-CASI. Addiction 100: 1432–1444.
- 68. Turner CF, Al-Tayyib A, Rogers SM, Eggleston E, Villarroel MA, et al. (2009) Improving epidemiological surveys of sexual behaviour conducted by telephone. Int J Epidemiol 38: 1118–1127.
- 69. Harmon T, Turner CF, Rogers SM, Eggleston E, Roman AM, et al. (2009) Impact of T-ACASI on Survey Measurements of Subjective Phenomena. Public Opinion Quarterly 73: 255–280.