
The REporting of Studies Conducted Using Observational Routinely-Collected Health Data (RECORD) Statement: Methods for Arriving at Consensus and Developing Reporting Guidelines

  • Stuart G. Nicholls,

    Affiliations Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, Canada, Children's Hospital of Eastern Ontario Research Institute, Ottawa, Canada

  • Pauline Quach,

    Affiliation Institute for Clinical Evaluative Sciences, Toronto, Canada

  • Erik von Elm,

    Affiliation Cochrane Switzerland, Institute of Social and Preventive Medicine, University of Lausanne, Lausanne, Switzerland

  • Astrid Guttmann,

    Affiliations Institute for Clinical Evaluative Sciences, Toronto, Canada, Hospital for Sick Children, Department of Paediatrics and Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Canada

  • David Moher,

    Affiliations Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, Canada, Ottawa Hospital Research Institute, Ottawa, Canada

  • Irene Petersen,

    Affiliation Department of Primary Care and Population Health, University College London, London, United Kingdom

  • Henrik T. Sørensen,

    Affiliation Department of Clinical Epidemiology, Aarhus University, Aarhus, Denmark

  • Liam Smeeth,

    Affiliation London School of Hygiene and Tropical Medicine, London, United Kingdom

  • Sinéad M. Langan,

    Contributed equally to this work with: Sinéad M. Langan, Eric I. Benchimol

    Affiliation London School of Hygiene and Tropical Medicine, London, United Kingdom

  • Eric I. Benchimol

    Contributed equally to this work with: Sinéad M. Langan, Eric I. Benchimol

    Affiliations Department of Epidemiology and Community Medicine, University of Ottawa, Ottawa, Canada, Children's Hospital of Eastern Ontario Research Institute, Ottawa, Canada, Institute for Clinical Evaluative Sciences, Toronto, Canada, Department of Pediatrics, Children's Hospital of Eastern Ontario, University of Ottawa, Ottawa, Canada



Abstract

Background

Routinely collected health data, gathered for administrative and clinical purposes without specific a priori research questions, are increasingly used for observational research, comparative effectiveness research, health services research, and clinical trials. The rapid evolution and availability of routinely collected data for research have brought to light specific issues not addressed by existing reporting guidelines. The aim of the present project was to determine the priorities of stakeholders in order to guide the development of the REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) statement.


Methods

Two modified electronic Delphi surveys were sent to stakeholders. The first survey identified themes deemed important to include in the RECORD statement and was analyzed using qualitative methods. The second survey quantitatively prioritized those themes based on categorization by manuscript headings. The surveys were followed by a meeting of the RECORD working committee and by re-engagement with stakeholders via an online commentary period.


Findings

The qualitative survey (76 responses from 123 surveys sent) generated 10 overarching themes and 13 themes derived from existing STROBE categories. The highest-rated overall items for inclusion were: Disease/exposure identification algorithms; Characteristics of the population included in databases; and Characteristics of the data. In the quantitative survey (71 responses from 135 surveys sent), the importance assigned to each of the compiled themes varied depending on the manuscript section to which it was assigned. Following the working committee meeting, online ranking by stakeholders provided feedback and resulted in revision of the final checklist.


Conclusions

The RECORD statement incorporated the suggestions provided by a large, diverse group of stakeholders to create a reporting checklist specific to observational research using routinely collected health data. Our findings point to unique aspects of studies conducted with routinely collected health data and to the perceived need for better reporting of methodological issues.


Introduction

The entry of health care into the electronic age has led to clinical benefits[1–3], as well as to the proliferation of large data repositories containing routinely collected health data. These are defined as data collected for administrative and clinical purposes, without specific a priori research questions[3, 4]. Examples of such routine data collection include, but are not limited to, health claims data, primary care and hospital electronic health records, and disease registries such as those established for audit purposes.

While these data are collected primarily for healthcare administration or clinical management, their nature and scale make them potentially exciting resources for research. Routinely collected data are now being used for observational, comparative effectiveness, and health services research, as well as for clinical trials[3, 5]. The Canadian Institutes of Health Research (CIHR), among other research organizations, has outlined and endorsed the use of administrative health databases for outcomes research as one strategy for enhancing patient-oriented research[6] and improving health care efficiency and delivery. However, as with any new research tool, increasing concerns have been raised about the limitations, biases, and methods associated with research using routine health data[4]. Adequate and clear reporting of research methods and results is needed to enable the research consumer to judge studies' strengths and limitations.

At present, researchers who conduct observational research are encouraged to use the STRengthening the Reporting of OBservational studies in Epidemiology (STROBE) statement as guidance when reporting their research[7]. Transparent reporting facilitates decision making by all readers and reproducibility of methods by interested researchers[8]. However, the rapid evolution and availability of routinely collected data for research have brought to light specific issues not addressed by STROBE. This gap was acknowledged by a large group of scientists (including five members of the STROBE Steering Committee) at a meeting following the 2012 Primary Care Database Symposium (27 January 2012 in London, UK). The group identified the need to expand the STROBE statement to encompass studies based on routinely collected health data, most of which are observational (non-randomized) in design. Since stakeholders in research relying on routine health data are diverse (including researchers, clinicians, health policymakers, and representatives of the pharmaceutical industry), meeting attendees recommended that a wide range of stakeholders participate in expanding the reporting guidelines to ensure adequate representation of interests and views.

Since not all stakeholders can attend a face-to-face meeting to create reporting guidelines, a Delphi exercise is often used to obtain information to inform those who write the guidelines[9]. Such an exercise can be conducted using web and social media technologies, allowing creation of a document that incorporates input from a large and diverse group of stakeholders. The purpose of the project reported here was to determine the interests and priorities of stakeholders in research conducted using routine health data, in order to guide the development of the REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) statement, an extension of the STROBE statement. We sought to identify issues important to stakeholders in order to create the most representative set of reporting guidelines possible.


Methods

We used a three-stage process to develop an extension of the STROBE guidelines specific to observational studies conducted using routinely collected health data, i.e., the RECORD statement. The first stage was a modified Delphi exercise to elicit the priorities of a large, diverse group of stakeholders through two surveys (Fig 1). The second stage consisted of a face-to-face meeting of the RECORD working committee members, who reviewed the survey results and processed stakeholder recommendations in order to create checklist items and explanatory text. The third stage consisted of review of the draft checklist and explanatory document, posted on an online message board on the RECORD website, by a wide group of research stakeholders.

Fig 1. Flow diagram of steps used to elicit stakeholder priorities for RECORD.

Participants in the surveys

Eligible survey participants were stakeholders in the broad context of use of routinely collected health data for research, including both researchers and users of research results. Stakeholders included, but were not limited to, clinicians, clinical and academic researchers, biomedical journal editors, policymakers, and pharmaceutical industry representatives.

Multiple methods were used for recruitment. Members of the RECORD Steering Committee each identified 5–10 experts in the field, who were contacted directly. Additional participants were recruited through snowball sampling[10], in which the initially invited participants were asked to identify others. In addition, stakeholders were recruited through appropriate electronic mailing lists (listservs) including the Cochrane Collaboration, AcademyHealth, and the Agency for Healthcare Research and Quality. Interest was also garnered through editorials in the Journal of Clinical Epidemiology[11] and Clinical Epidemiology[12]. The editorials outlined the need for expanded reporting guidelines, requested stakeholder involvement, and provided contact information for the study. Stakeholders' interest in research based on routinely collected health data was ascertained in order to ensure the relevance of their input. All stakeholders who expressed an interest were included in the surveys. See S1 Appendix for a complete list of stakeholders who participated in various stages of the surveys and provided message board feedback.


A two-stage modified Delphi process was used to generate items for inclusion in the RECORD guidelines and to rank responses. The first stage was used to generate an extensive list of potential items, and the subsequent stage focused on reducing and prioritizing these items through a consensus process of rating each item in terms of its importance.

In the first survey round, participants were asked to identify specific themes that should be included in the RECORD reporting guidelines (see example question in Fig 2A). This question sought to generate overall themes important to consider in the reporting of research based on routinely collected health data, and allowed participants to propose broad groupings of items. Participants were also presented with the existing STROBE guideline categories[7] [e.g., title and abstract, introduction (background/rationale), introduction (objectives), methods (study design), methods (setting), etc.] and were asked to list additional items needed to report research based on routinely collected health data and thus important for inclusion in the RECORD statement. Open-ended free-text responses were collected to allow for full elaboration of meaning and rationale.

Fig 2.

Examples of layout from (A) first survey (free-text responses) and (B) second survey (Likert-scale quantitative ranking).

In the second survey round, respondents received a list of themes emerging from the first round (see example in Fig 2B). To provide participants with clear examples of the items relating to each theme, the second survey presented themes with example phrases of individual components. For both the overarching themes and the items within existing STROBE categories, rating was performed on a 5-point Likert scale (1 = strongly disagree to 5 = strongly agree).

Links to the online surveys were emailed to participants with a specified deadline. Reminder emails were sent one week prior to the deadline, and a two-week extension was provided for both stages to ensure maximum participation. Both surveys were conducted using SurveyMonkey (Palo Alto, CA, USA), and ethics approval was granted by the Children's Hospital of Eastern Ontario (CHEO) Research Ethics Board (#13/45X).

Data Analysis of Surveys

Suggestions and comments procured from the first stage were imported into qualitative data analysis (QDA) software for analysis. Comments provided by respondents under each section of the STROBE checklist were coded using a process of qualitative description[13]. In this low-inference approach to coding qualitative data, the focus was a descriptive account of the text, as opposed to generation of theory. This was in keeping with our objectives for the Delphi process, i.e., to group elements into broad themes or constructs and to rank them, rather than to develop a theory concerning attitudes towards reporting of studies using routinely collected health data.

Initial coding for both overarching themes and category-specific items was undertaken by one investigator (SN). The initial list of themes and items then was refined and reduced through discussion with the steering committee. The themes generated within each STROBE category were then compiled across categories to provide an overall list of themes to be considered in the reporting guideline extension.
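The compilation step above can be sketched in a few lines of code. This is purely illustrative: the response codes and theme names below are hypothetical stand-ins, not codes from the actual analysis.

```python
from collections import Counter

# Hypothetical mapping from individual response codes to overarching themes
code_to_theme = {
    "icd_code_validity": "Validity of diagnostic codes",
    "linkage_method": "Linkage",
    "linkage_error": "Linkage",
    "denominator_population": "Characteristics of the population included in databases",
}

# Hypothetical coded free-text responses from the first survey round
coded_responses = ["linkage_method", "icd_code_validity", "linkage_error",
                   "denominator_population", "linkage_method"]

# Collate codes into themes and count how often each theme was raised
theme_counts = Counter(code_to_theme[c] for c in coded_responses)
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n} coded response(s)")
```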

During the second (quantitative) stage of the Delphi process, we determined the mean, median, and interquartile range (IQR) for individual themes under each STROBE category and a rank order for each category. This step provided a prioritized list of themes on which to base the RECORD guidelines.
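The per-theme summary statistics described here (mean, median, and IQR of Likert ratings) can be sketched as follows. The ratings are invented for illustration, and the quartile convention (median of the lower and upper halves) is an assumption, since the paper does not state which one was used.

```python
from statistics import mean, median

def quartiles(values):
    """Return (Q1, Q3) using the median-of-halves convention on sorted data."""
    s = sorted(values)
    half = len(s) // 2
    lower = s[:half]
    upper = s[half + 1:] if len(s) % 2 else s[half:]
    return median(lower), median(upper)

def summarize(ratings):
    """Mean, median, and interquartile range for one theme's Likert ratings."""
    q1, q3 = quartiles(ratings)
    return {"mean": round(mean(ratings), 2),
            "median": median(ratings),
            "iqr": q3 - q1}

# Hypothetical 5-point Likert ratings for a single theme
# (1 = strongly disagree, 5 = strongly agree)
ratings = [5, 4, 5, 3, 4, 5, 5, 4, 2, 5]
print(summarize(ratings))
```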

Working Committee Meeting

The working committee (which included members of the steering committee, journal editors, researchers, and other stakeholders) met in Lausanne, Switzerland from the 22nd to 24th of October, 2013. Working committee members consisted of three groups: 1) internationally recognized scientists who use routinely collected health data for research, 2) members of the STROBE working committee and/or EQUATOR Network who are experts in reporting guideline development, and 3) editors from journals that frequently review and publish observational research using routinely collected health data (BMJ, CMAJ, Health Services Research, PLoS Medicine, with input in absentia from editors of Clinical Epidemiology and Journal of Clinical Epidemiology). A list of working committee members is presented in S2 Appendix. The list of themes produced by the stakeholder surveys was presented in ranked order by observational research theme (as categorized in STROBE) to all members. The committee was divided into working groups by theme and asked to review stakeholder comments and quantitative rankings from the surveys. The committee created draft statements and themes for inclusion in the RECORD checklist and explanatory document. The full working committee then reviewed these draft statements and voted on agreement. Statements were discussed and revised until >80% agreement was achieved. Live polling was conducted using Poll Everywhere (San Francisco, CA).
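The >80% agreement rule can be expressed as a one-line check; the vote tallies below are hypothetical, not records of the actual polling.

```python
def consensus_reached(votes, threshold=0.8):
    """True if the proportion of 'agree' votes strictly exceeds the threshold."""
    agree = sum(1 for v in votes if v == "agree")
    return agree / len(votes) > threshold

# Hypothetical tallies from a 17-member committee:
# 15/17 (about 88%) exceeds the 80% threshold, so the statement is accepted;
# 13/17 (about 76%) does not, so the statement is revised and re-voted.
accepted = consensus_reached(["agree"] * 15 + ["disagree"] * 2)
revised = not consensus_reached(["agree"] * 13 + ["disagree"] * 4)
print(accepted, revised)
```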

Post-Meeting Evaluation of Draft Statements

The draft RECORD statement and explanatory documentation written by the working committee were made available for stakeholder review from the 8th of September to the 14th of November, 2014. The draft checklist items with explanatory text were posted on a password-protected message board on the RECORD website, grouped by STROBE categories. Stakeholders were assigned a username and password and encouraged to engage in discussion. Those who did not provide written comments on a statement were asked to rate the statement on a 10-point scale.


Results

Fig 1 provides a flow diagram of steps used to assess stakeholder priorities for RECORD. For the first survey, 98 stakeholders, nine steering committee members, and 16 working committee members were invited to complete the survey during April and May 2013. Of the 123 potential participants, 76 responded (response rate of 61.8%), and, of these, 68 responded to the optional demographic questions. For the second survey, 106 stakeholders, nine steering committee members, and 20 working committee members were invited to complete the survey between July and September 2013. Of the 135 potential participants, 71 responded to the second survey (response rate of 52.6%), and, of these, 56 responded to the optional demographic questions. Participant demographics are shown in Table 1.
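The reported response rates follow directly from the counts given above:

```python
def response_rate(responded, invited):
    """Response rate as a percentage, rounded to one decimal place."""
    return round(100 * responded / invited, 1)

# Figures reported for the two surveys
print(response_rate(76, 123))  # first survey: 61.8
print(response_rate(71, 135))  # second survey: 52.6
```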

First Stage Survey (Qualitative Analysis and Development of Themes)

Open-text responses for both the overarching themes and STROBE-specific categories indicated a focus on methodological issues. Comments reflected a need for information not only on the specific contents of the database and the proposed operational definitions, but also other contextual information such as whether the data were drawn from a publicly funded healthcare system or one based on private medical insurance.

The first stage open-text survey generated 311 responses relating to overarching themes to be included in the RECORD statement. These were collated into 131 individual codes. From these codes, a total of 10 overarching themes were derived. A list of the overall themes, together with examples of items coded within the themes, is provided in Table 2.

Table 2. Overall themes suggested by respondents in the first survey and mean ratings from the second survey.

Within each original STROBE category, the total number of responses and codes varied, ranging from 15 responses generating 10 unique codes for the STROBE category ‘Results-Main Results’, to 149 responses generating 86 distinct codes for the category ‘Methods-Setting’. Compiling the themes generated within each STROBE category provided a total of 13 themes (Table 3). These themes broadly reflected the overarching themes, with the majority of themes being associated with relevant STROBE categories.

Table 3. Mean rating of themes by manuscript section, as defined by the STROBE reporting checklist.

Second Stage Survey (Quantitative Analysis of Themes)

Ratings from the second stage survey indicated that priorities for RECORD checklist items centered on reporting of methodological aspects, as opposed to reporting of results. The highest rated overall themes for inclusion in the RECORD reporting guidelines were: (i) Disease/exposure identification algorithms (mean 4.78); (ii) Characteristics of the population included in databases (mean 4.76); (iii) Characteristics of the data (mean 4.63); (iv) Linkage (mean 4.62); and (v) Validity of diagnostic codes (mean 4.56) (see Table 2).
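For illustration, the ranking of the top overall themes by mean rating can be reconstructed from the figures quoted above (values as reported; the code itself is only a sketch of the prioritization step).

```python
# Mean Likert ratings for the five top-rated overall themes, from the text above
theme_means = {
    "Disease/exposure identification algorithms": 4.78,
    "Characteristics of the population included in databases": 4.76,
    "Characteristics of the data": 4.63,
    "Linkage": 4.62,
    "Validity of diagnostic codes": 4.56,
}

# Sort themes from highest to lowest mean rating to obtain the priority order
ranked = sorted(theme_means, key=theme_means.get, reverse=True)
for rank, theme in enumerate(ranked, start=1):
    print(f"{rank}. {theme} (mean {theme_means[theme]:.2f})")
```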

Within existing STROBE categories, the importance assigned to each of the compiled themes varied, as indicated by the mean rating. For example, describing the ‘Characteristics of data quality, data source/setting, data collection’ was rated more highly in relation to the STROBE categories of Methods (Bias) (mean 4.17) and Discussion (Limitations) (mean 4.47) than for Methods (Setting) (mean 3.69) or Results (Outcome Data) (mean 3.28). Table 4 presents summary data on the top-rated themes within each existing STROBE category. The raw data for both surveys are also available on the RECORD website and from the Dryad Digital Repository.

Working Committee Meeting and Post-Meeting Feedback

During the working committee meeting, its members processed and prioritized stakeholder comments to create the draft RECORD checklist. Consensus of >80% was achieved for each checklist item. The resulting checklist will be made available in the main RECORD publication and on the RECORD website, along with explanations and examples for each item. Working committee meeting minutes are available upon request.

Following the working committee meeting, draft checklist items and explanatory text were posted on a message board for review by stakeholders. During the review period, 311 users accessed the website, of whom 66.5% were new users. The most common countries of origin of website users were the United Kingdom (20.6%), Brazil (14.5%), Canada (14.5%), Italy (11.9%), and the United States (9.8%) (per Google Analytics). Ratings by message board participants are provided in Table 5.

Table 5. Summary of post-meeting message board activity for draft statements.


Discussion

The quality of reporting of research based on routinely collected health data has been suboptimal[14, 15], potentially resulting in misinterpretation or misapplication of research findings. We conducted two inclusive, comprehensive surveys of stakeholders to determine the topics of highest priority for inclusion in the RECORD guidelines checklist for studies using routinely collected health data. The results of these surveys have been used to ensure that the RECORD guidelines adequately reflect the priorities of individuals who make scientific, policy, and clinical decisions based on these data. Our findings point to unique aspects of studies conducted with routinely collected health data and confirm the need for clarity regarding reporting of methodological issues.

Reporting guidelines most often take the form of a checklist, flow diagram, or explicit text designed to assist authors in reporting a specific type of research. Such guidelines may allow for transparency and reproducibility of research methods and findings, and may also improve the completeness of reporting[16]. Reporting guidelines have been demonstrated to improve reporting of studies when they are adopted by the research community and relevant journals[17–20], although their effectiveness varies[20, 21] and may depend on implementation by journals rather than simple endorsement[12]. Creation of guidelines typically involves consensus to determine a minimum set of criteria for reporting research[22, 23]. Concomitantly, adherence to and implementation of guidelines may aid in the conduct of systematic reviews and meta-analyses.

In response to the increasing number of reporting guidelines available or under development, the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network has published guidance on how to develop a reporting guideline[9]. The document recommends a Delphi survey as part of the development process to allow inclusion of participants unable to attend face-to-face meetings. RECORD undertook expansive stakeholder surveys and used social media to solicit the opinions of a large, geographically diverse group of stakeholders from multiple disciplines. This work allowed us to prioritize the draft items of a reporting guideline checklist so that they better reflect the needs of these stakeholders.

The qualitative data generated by participants in our Delphi survey were illuminating, both in the specific nature of the comments received and in their implications for the topics to be emphasized in the RECORD guidelines. In particular, the survey generated many comments regarding broad themes for reporting of methodology, but far fewer regarding reporting of results or discussion. This emphasis on the methodological aspects of research may reflect the fact that RECORD was conceived as an extension of the original STROBE statement. Respondents may have perceived that the strength of RECORD would lie in addressing specific methodological concerns regarding research based on routinely collected health data that were not addressed within the more general STROBE statement. Prioritization of data linkage methods and code validation reflects the activity of the research community in these fields, including work performed by members of the RECORD group[24, 25]. Other themes represent new concerns not previously reported in the literature, such as the need to report on characteristics of the database itself, the underlying population from which the database was derived, and issues of governance and availability of the data to other researchers. Finally, some themes prioritized by respondents represented general research methods not specific to studies based on routinely collected health data (e.g., missing data, potential confounding, and data security issues), but ones that are particularly salient in this form of research. While the RECORD guidelines address as many concerns as possible, they will remain specific to studies conducted using routinely collected health data. The two surveys described above allowed the RECORD working committee to prioritize checklist items for inclusion in the RECORD statement and to guide the placement of the items within the publication's structure.


Strengths and Limitations

A limitation of the modified Delphi approach is that findings are specific to the group of individuals surveyed and may not necessarily represent opinion across the broad range of stakeholders. To strengthen the validity of our results, we identified potential survey participants using both active and passive selection. The actively selected group was approached based on expert recommendation, to ensure breadth of experience and interest in the methodological and substantive content of research publications. The passively selected group approached us following a wide-ranging awareness campaign. One criticism that may be levelled at the stakeholder group is the apparent dominance of academic scholars. However, this belies the multiple roles occupied by participants: many respondents also held editorships at journals serving the target population of end users of health administrative data, or served policymakers in consultative capacities. As such, we believe that the stakeholder composition is robust both in terms of academic rigor and in terms of encouraging quality decision-making within a learning healthcare environment. In particular, a strength of our approach was the commitment of panelists to completing the Delphi process. A limitation was that the stakeholder group originated predominantly from North America, Europe, and Australia, although participation was possible without regard to geographic region. English-language advertisements and editorials requesting participation may explain this pattern of input; in addition, the majority of medical research using routine health data originates from these regions. We thus believe that our stakeholder group is representative of the broad community of researchers using such data. It is noteworthy that the most frequent visitors to the website were from both English- and non-English-speaking countries.

Use of a password-protected message board to elicit post-meeting feedback may have also limited the generalizability of this feedback. However, we aimed to receive comments on the draft statements from the same stakeholder group who provided feedback prior to statement creation. Approximately 10% of persons viewing the draft statements on the message board provided written comments, while the remainder gave numerical ratings. This level of contribution follows the “1% rule” of internet culture, i.e., that 1% of internet users create content, 9% of users modify or comment on that content, and 90% of users consume or observe internet activity.[26] Despite this, the contributions to the message board led to valuable pre-publication discussion and revision.

Another potential limitation involved use of a traditional working committee meeting to draft the checklist and explanatory document. While we attempted to select the working committee from a wide range of stakeholders, particularly those with the greatest enthusiasm during the survey stages, only 17 committee members could participate in the face-to-face meeting. However, survey responses reflected the interests and input of the larger stakeholder group. Future reporting guideline initiatives should consider use of more advanced forms of social media and internet-based conferencing to ensure involvement of a broader group of stakeholders in creating the checklist.

Next Steps—the RECORD Statement

In summary, we elicited input from a broad range of stakeholders to specify priorities for a definitive reporting guideline checklist for research conducted using routinely collected health data. The RECORD statement incorporated the suggestions provided by stakeholders to create a reporting checklist specific to observational research using routine health data. The draft checklist and explanatory document were made available to the stakeholder group for comment and revision via an online message board.

While participants held a range of academic and non-academic roles across diverse health-related interests, post-publication activities will seek to further engage stakeholders from a range of perspectives, including journal editors, policy decision-makers, and those involved in the development of health administrative datasets. Following peer review and publication, the final RECORD checklist will be available for comment on the RECORD website. We will also seek endorsement of the guidelines by appropriate journals, utilizing the stakeholder membership to facilitate this process, while undertaking an active dissemination process that presents the guidelines at appropriate academic and non-academic meetings. This will include translation of the guidelines into non-English-language versions to enhance uptake internationally. Furthermore, we will evaluate the impact of the RECORD guidelines through a planned systematic review comparing adherence within articles published by journals that do and do not endorse the guidelines. Using these diverse methods, we anticipate that RECORD will improve the clarity of reporting of research conducted using routinely collected health data.

Supporting Information

S1 Appendix. List of stakeholders who participated in the two surveys.


Author Contributions

Conceived and designed the experiments: SGN EvE AG DM IP HTS LM SML EIB. Performed the experiments: SGN PQ SML EIB. Analyzed the data: SGN PQ SML EIB. Contributed reagents/materials/analysis tools: SGN PQ SML EIB. Wrote the paper: SGN PQ EvE AG DM IP HTS LM SML EIB.

