
A science impact framework to measure impact beyond journal metrics

  • Mary D. Ari ,

    Roles Conceptualization, Data curation, Investigation, Methodology, Project administration, Validation, Writing – original draft, Writing – review & editing

    Affiliation Centers for Disease Control and Prevention (CDC), Atlanta, Georgia, United States of America

  • John Iskander,

    Roles Conceptualization, Methodology, Validation, Writing – review & editing

    Affiliation Centers for Disease Control and Prevention (CDC), Atlanta, Georgia, United States of America

  • John Araujo,

    Roles Conceptualization, Data curation, Methodology, Writing – review & editing

    Current address: Drug Enforcement Administration, Department of Justice, Arlington, Virginia, United States of America

    Affiliation Centers for Disease Control and Prevention (CDC), Atlanta, Georgia, United States of America

  • Christine Casey,

    Roles Conceptualization, Methodology, Validation, Writing – review & editing

    Affiliation Centers for Disease Control and Prevention (CDC), Atlanta, Georgia, United States of America

  • John Kools,

    Roles Validation, Writing – review & editing

    Affiliation Centers for Disease Control and Prevention (CDC), Atlanta, Georgia, United States of America

  • Bin Chen,

    Roles Validation, Writing – review & editing

    Affiliation Centers for Disease Control and Prevention (CDC), Atlanta, Georgia, United States of America

  • Robert Swain,

    Roles Data curation, Writing – review & editing

    Affiliation Centers for Disease Control and Prevention (CDC), Atlanta, Georgia, United States of America

  • Miriam Kelly,

    Roles Validation, Writing – review & editing

    Affiliation Centers for Disease Control and Prevention (CDC), Atlanta, Georgia, United States of America

  • Tanja Popovic

    Roles Conceptualization, Methodology, Supervision, Writing – review & editing

    Affiliation Centers for Disease Control and Prevention (CDC), Atlanta, Georgia, United States of America


Measuring the impact of public health science or research is important, especially when it comes to health outcomes. Achieving the desired health outcomes takes time and may be influenced by several contributors, making attribution of credit to any one entity or effort problematic. Here we offer a science impact framework (SIF) for tracing and linking public health science to events and/or actions with recognized impact beyond journal metrics. The SIF was modeled on the Institute of Medicine’s (IOM) Degrees of Impact Thermometer, but differs in that the SIF is not incremental, is not chronological, and has an expanded scope. The SIF recognizes five domains of influence: disseminating science, creating awareness, catalyzing action, effecting change and shaping the future (a scope that differs from the IOM’s). For public health, the goal is to achieve one or more specific health outcomes. What is unique about this framework is that the focus is not just on the projected impact or outcome but rather on the effects that are occurring in real time, with the recognition that the measurement field is complex and that it takes time for the ultimate outcome to occur. The SIF is flexible and can be tailored to measure the impact of any scientific effort, from complex initiatives to individual publications. The SIF may be used to measure impact prospectively, for an ongoing or new body of work (e.g., research, guidelines and recommendations, or technology), or retrospectively, for completed and disseminated work, by linking events using indicators that are known and have been used for measuring impact. Additionally, linking events offers an approach to both tell our story and acknowledge other players in the chain of events. The value added by science can then be readily relayed to the scientific community, policy makers and the public.


Many frameworks have been used to measure programs, research, and other aspects of science and technology advancement [1–7]. Commonly used measures of science and research impact are often based on publication metrics [3]. The scientific community’s heavy dependence on quantitative measures has driven the value of journal metrics, with various indices having been developed to credit publication contributions to knowledge [3, 6]. This is not unusual, as scientific, peer-reviewed publications are recognized as one of the most important formal outputs or deliverables of a research project and can be used to infer the quality and impact of the underpinning science. In addition, journal metrics such as citations and impact factor are relatively easy to collect, and they are valuable indicators of the reach of the research in terms of how widely it is disseminated and taken up. But they do not characterize the influence created, such as the resulting actions or changes, or the way in which the research knowledge is used.

Funders continue to grapple with how to assign measurable criteria of more practical value to research under review in proposals and awards [8–10]; said another way, how to assess the impact of science and research efforts beyond just the publication of findings [4, 11], because a metrics-only approach will not suffice to capture broader societal impacts on economic, technologic and innovative advancement [2]. While this idea is welcomed by some, others express reservations driven by the concern that innovative research may be stifled this way [12, 13]. How and when to use these measures is a subject of intense debate [3, 10, 14].

While CDC has a framework for program evaluation in public health that is widely used for public health programs [15], that evaluation framework has not been conducive to assessing the impact of science and research efforts. Other frameworks used for evaluation of public health interventions are mostly very specific and narrow in scope, limiting broad applicability [16–19]. For example, the Reach, Effectiveness, Adoption, Implementation and Maintenance (RE-AIM) framework is narrowly focused on evaluating behavior change in health interventions; it does so effectively and has been adapted for use in evaluating built environment strategies [20]. However, it is only flexible enough to be applied to the evaluation of similar applications within the scope of its design. Instead, we turned our attention to a broader assessment of how to describe the role of science in contributing to the improvement of public health, for which we developed the Science Impact Framework.

Materials and methods

Developing the science impact framework (SIF)

To develop this framework, a literature review was undertaken to identify frameworks previously developed or used [1–7, 21]. Next, we considered those elements of the identified frameworks that would best demonstrate the impact of CDC science. One example is the Payback Framework, which has been in existence since the 1990s and is applied to medical and health services; several other frameworks developed later are based on it [2, 3]. The Research Excellence Framework (REF) and the Research Quality Framework (RQF) emerged more recently. The existing frameworks we studied are mostly research or health services frameworks. In order to capture other science efforts, such as developing guidelines and recommendations that contribute to health outcomes, we defined science more broadly than research. We embraced some of the concepts we highlighted from these frameworks (Table 1), but as our primary model we adapted the Institute of Medicine (IOM) “Degrees of Impact” Thermometer [21]. The key attraction of the IOM model was its focus on influences. However, we needed to extend and expand these concepts: the IOM serves in an advisory role, so the scope of its work is in the realm of knowledge diffusion (the user-pull end of the spectrum), while CDC has a broader scope covering diffusion of knowledge, applied research, technology creation, capacity building, and program/initiative implementation. And whereas the IOM model suggests an incremental progression of processes and actions, our model fundamentally differs in this respect as well.

Description of the science impact framework

The Science Impact Framework (SIF) consists of five domains of impact, each with key indicators for the specific domain (Table 2).

Table 2. The science impact framework–Examples of indicators and data sources*.

The resulting SIF is a collection of logically related or associated elements (influences). Influence is the term used here to describe the evidence of impact within each domain of the SIF, as described by the key indicators. The domains of impact are described as follows:

  1. DISSEMINATING SCIENCE: This represents producer push and may include the publication of findings in peer-reviewed journals or other reports, or presentations at conferences or through other media channels.
  2. CREATING AWARENESS: This represents user pull and may include awards, general awareness, or acceptance of a concept or findings by the scientific community or policy makers, or generating new discussion based on shared science.
  3. CATALYZING ACTION: This represents actions taken as a result of the science and may include partnerships and collaborations, technology creation, new funding, congressional hearings or bills, or introduction into practice.
  4. EFFECTING CHANGE: This represents changes that occur as a result of the science or the actions taken, and may include building public health capacity, legal/policy change, cultural/social/behavior change, or economic change.
  5. SHAPING THE FUTURE: This represents additional considerations (a scope beyond the IOM “Degrees of Impact” Thermometer) that affect the future direction, drive further progress in understanding of the science, or drive implementation in practice, and may include new hypotheses or strategies, implementation of new programs/initiatives, or quality improvement.
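For readers who want to operationalize the list above, the domains and their example indicators can be captured as a simple lookup structure. This is a purely illustrative sketch; the `SIF_DOMAINS` mapping and `classify` helper are our own hypothetical names, not part of the published framework:

```python
# Illustrative only: the five SIF domains with example indicators
# drawn from the descriptions above.
SIF_DOMAINS = {
    "Disseminating Science": [
        "publication in peer-reviewed journals or other reports",
        "presentations at conferences or through other media channels",
    ],
    "Creating Awareness": [
        "awards",
        "acceptance of a concept or findings by the scientific community or policy makers",
        "new discussion based on shared science",
    ],
    "Catalyzing Action": [
        "partnerships and collaborations",
        "technology creation",
        "new funding",
        "congressional hearings or bills",
        "introduction into practice",
    ],
    "Effecting Change": [
        "building public health capacity",
        "legal/policy change",
        "cultural/social/behavior change",
        "economic change",
    ],
    "Shaping the Future": [
        "new hypotheses or strategies",
        "implementation of new programs/initiatives",
        "quality improvement",
    ],
}

def classify(indicator: str) -> list:
    """Return the domains whose example indicators mention the given term."""
    term = indicator.lower()
    return [domain for domain, indicators in SIF_DOMAINS.items()
            if any(term in i for i in indicators)]
```

A keyword lookup like this is only a starting point; as described below, placement of real events within domains relies on reviewer judgment and expert validation.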

Fig 1, The Science Impact Framework, illustrates the SIF’s five domains of scientific impact, which express the scope and type of influence generated by the scientific undertaking. The degree of impact is not necessarily a linear sequence of progression through the five domains; therefore, captured events may not be reflected in every domain and may not occur chronologically. The model also portrays the complexity of the measurement environment, with other influences beyond the ones described by the domains of the SIF. For example, there may be other influences that may or may not work synergistically with the desired influence for the work under consideration, thereby positively or negatively affecting the ability to achieve the desired outcome. Our model uses both quantitative and qualitative measures. The types of impact of interest range from impact on the field to broader societal impacts, including impact on policy and practice; for CDC, the goal is health outcomes.

Fig 1. The science impact framework.

Health outcomes are the ultimate goals, driven by the five domains of influence shown in Fig 1. Health outcomes include, for example, positive effects on prevalence and incidence (e.g., frequency of outbreaks, trends); reduction in morbidity and mortality; increased life expectancy; and improved quality of life.

For the purpose of applying the SIF, we define CDC science broadly to include (1) basic and applied epidemiology, (2) laboratory studies, (3) surveillance, and (4) other scientific outputs such as models, methods, meta-analyses, and guidelines and recommendations developed to improve prevention and control or to improve the practice of public health. Tracing and linking actual instances of scientific influence through the framework involves either identifying points of impact and tracing backward through events related to the science or research, or going back to the original scientific work (which may include synergistic efforts) and tracing forward through events that link to that work [5]. The latter approach ensures clear linkages can be made, and it is feasible to identify effects within 2–5 years of dissemination, since these effects may be formative and do not have to be the ultimate outcome. The SIF relies on user judgment as expert opinion, which could be supported by peer reviews or interviews, to identify credible links that can be traced through the framework. This is how it works:

The reviewer identifies a point of scientific significance and places it within one of the SIF’s five domains of influence based on alignment with the key indicators for that domain. Using the framework, the reviewer further:

  • Identifies forward and/or backward events or activities that link or can be associated (logically or empirically) with the point of scientific significance,
  • Validates the links with peer review (which could be internal or involve external partners) or expert opinion,
  • Assigns or reassigns the linked events to one of the five domains of influence.
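The steps above can be sketched as a minimal data model. This is a hypothetical illustration only: the `Event` class and the `link` and `validate` helpers are invented names, and in the actual SIF these steps are performed by human reviewers supported by peer review and expert opinion, not by code:

```python
from dataclasses import dataclass, field

# Domain names come from the SIF; everything else here is hypothetical.
DOMAINS = ("Disseminating Science", "Creating Awareness",
           "Catalyzing Action", "Effecting Change", "Shaping the Future")

@dataclass
class Event:
    description: str
    domain: str              # one of DOMAINS, assigned (or reassigned) by the reviewer
    validated: bool = False  # set after peer review / expert opinion
    links: list = field(default_factory=list)  # forward/backward linked events

def link(anchor: Event, other: Event) -> None:
    """Associate a forward or backward event with the point of significance."""
    anchor.links.append(other)

def validate(event: Event, expert_concurs: bool) -> None:
    """Record the outcome of peer review or expert opinion on a linked event."""
    event.validated = expert_concurs

# Usage: a publication (the point of significance) linked to a later policy change.
paper = Event("Original CDC publication", "Disseminating Science")
policy = Event("State law revised following the findings", "Effecting Change")
link(paper, policy)
validate(policy, expert_concurs=True)
```

The structure makes explicit that links form a graph rather than a fixed pipeline: a linked event may land in any domain, not necessarily the next one in sequence.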

Because the SIF can be used to track impact retrospectively or monitor it prospectively, it represents a culture change from focusing on outputs or journal metrics. It allows investigation of what changes occurred, or are occurring, because of the work. Ultimately, the impact can be traced to individuals, groups or entire societies.

Validating the framework using case studies

Once the SIF was developed, it was evaluated for functional utility using publications of research and findings covering various areas of public health: basic research, laboratory science, epidemiology, guidelines and recommendations, surveillance, infectious diseases, non-communicable diseases, meta-analyses, and evaluation (Table 3). These topical areas were selected from (a) the Morbidity and Mortality Weekly Report (MMWR) published by the CDC, (b) papers competing for CDC’s Charles C. Shepard Science Award [22], and (c) topics from CDC’s Public Health Grand Rounds (PHGR), monthly webcasts that address key public health challenges [23]. The data for tracing influence in each domain using the identified key indicators included citation analysis and subject matter experts’ opinions (Table 2); a combination of these sources was used to establish effects and linkages. In addition, current journal metrics used for measuring impact, such as number of citations and impact factor, were assessed for each of the original manuscripts of our case studies to compare with the influences and impact identified by the SIF model.

Table 3. Case studies used to validate the science impact framework.

In addition, a case study approach was used to test the SIF; a total of 11 case studies were conducted (Table 3). The starting point for each case study was a publication describing the findings or output of interest. Three of the eleven case studies had more than one relevant publication as a starting point: the tuberculosis (TB) and Group B streptococcal (GBS) case studies each had three, and the pediatric cough and cold medication (CCM) case study had two (Table 3). Findings from these publications worked in synergy with each other to produce the documented impact. Citations of the original publications were identified, and each was reviewed to assess and understand the way the disseminated knowledge or output was used in the citing publication. In the 11 case studies, the time frame assessed ran from the time of dissemination of the scientific knowledge forward to 2011 (Table 3). This resembles a systematic review; in this case, qualitative analysis was undertaken to investigate evidence of the importance of the original published work. Identified effects or influences were placed under the relevant domain of influence as described by the SIF. When tracing the events, it was important to research in more detail the role/influence of the original CDC manuscript(s) in these events, to establish documented links between the manuscript(s) and the events, and to identify a link to the appropriate domain of influence based on alignment with the key indicators for that domain. Not all domains of influence are utilized, and a link does not have to be to the immediately following domain as listed in the SIF model. Appropriateness of the links and the placement of events under each domain were validated through expert opinion.

Results and discussion

Summary of key findings from case studies

A summary of key findings using the SIF is presented in Table 3. Bibliometric analysis was done on the original manuscript(s) to show the number of primary and secondary citations. Naturally, the number of citing sources was minimal when an original paper was recently published or when the topic was of professional interest only to a narrow audience. A few of the case studies have been presented at the CDC PHGR [66] and on the Office of Science web page.

Further considerations based on the case studies

It is important to disseminate findings through publications, but publication does not represent the end-product of research; rather, it is the beginning of further influence. Impact beyond publications could be in the form of products and technology; however, we recognize that ultimately information about these products, programs, initiatives and advancements can be provided in the form of publications, and thus dissemination of science was used as one of the domains of influence in the SIF. Publication metrics such as number of citations would likely underestimate the impact of the work. A careful review of citation data for our case studies suggests that the community of users drives the number of citations. Consider a comparison of publications from two of our case studies (Table 3): the publication on cochlear implants had 97 citations, 761 second-generation citations, a 5-year impact factor of 52.36, and 9.70 average citations per year, versus the publication on pneumococcal vaccine, which had 1,035 citations, 23,415 second-generation citations, a 5-year impact factor of 52.36, and 103.6 average citations per year. These were published in the same journal in the same year, and consequently have the same impact factor; yet there is a significant difference in both the primary and second-generation citations. Hence, the impact factor of the journal does not seem to be the driving factor. The size of the community that needs the science or information can vary significantly and therefore can influence the number of citations. For example, the cochlear implant publication will be of interest predominantly to manufacturers of the device, physicians, and patients who use them, and as a result the numbers are small compared with an infectious disease such as pneumococcal pneumonia that affects a significantly larger number of people. Merely counting citations does not reveal the way the science was used, e.g., as background information or as foundational to the steps or actions taken.
Furthermore, just because an article is cited does not mean it is for a positive reason; sometimes articles are cited as examples of bad or flawed science [67, 68]. There is ample evidence that even publications that have been retracted as bad science or due to scientific misconduct continue to be cited [69]. Newer measures, such as Altmetric, have similarly been found not to reflect broad societal impacts [70] but can provide data on the reach of publications and be a good resource in using the SIF. The main reason all the aforementioned indicators are attractive is that they are quantitative and readily available. Perceptions of participants in a recent evaluation suggest that incentivizing publications may come at the expense of generating broader impacts [71]. Scientific work is generally not linked to the dollar investment or time required to produce results. However, assessment using the SIF would prompt the question: was the investment of dollars, time and effort worth it? Just because a publication is infrequently cited does not discount the potential magnitude of its contribution. For example, in the West Nile case study (Table 3), the bibliometrics of the publication [52] showed 193 citations; however, the findings were instrumental to the development of an animal vaccine and subsequently a human vaccine [66].
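The cochlear implant versus pneumococcal vaccine comparison rests on simple arithmetic: average citations per year is total citations divided by years since publication. A small sketch using the reported figures (the roughly 10-year accrual window is our assumption, chosen because it approximately reproduces the reported averages):

```python
def average_cites_per_year(total_citations, years_since_publication):
    """Average primary citations per year for a publication."""
    return total_citations / years_since_publication

# Reported figures for the two case-study publications (same journal, same year):
cochlear = average_cites_per_year(97, 10)        # ~9.7, matching the reported 9.70
pneumococcal = average_cites_per_year(1035, 10)  # ~103.5, close to the reported 103.6

# Same journal (and therefore the same impact factor), yet a more than 10-fold
# difference in citation rate, consistent with the size of the user community
# driving citations.
ratio = pneumococcal / cochlear
```

The arithmetic makes the point directly: the journal-level impact factor cancels out of the comparison entirely, and only the community-driven citation counts differ.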

The body of work with impact on practice and policy, especially, is rarely captured in peer-reviewed publications. Currently, there is no easy way to get to this type of information; most of the citations are to peer-reviewed journals, with a few books and conference proceedings. Hence, impact on practice and policy is rarely captured. CDC’s scope of work as the United States’ premier public health agency leads to technology creation, such as laboratory and analytical methods; in addition, the knowledge generated from research informs further actions such as policy, practice and future research. In our search for information related to the key indicators that define each domain of influence in the SIF, we found that the peer-reviewed literature was not necessarily the best venue for the information we sought. It took a combination of discussions with subject matter experts and internet searches for progress in the topical area to identify non-quantitative measures. Indicators that are qualitative in nature are more difficult to find without a deliberate effort and a system in place to capture such information. Examples include policy changes, ongoing dialogue, and changes in practice.

It is important to measure the broader impact of CDC science on research, technology, practice and, ultimately, health outcomes. Interest in evaluating science and research arises both as an interesting problem in scholarship and for public value. The challenge for scholarship lies in the complex environment of the science and research enterprise, as well as in how knowledge is accumulated and disseminated. Making these relationships even more complicated is an agency’s portfolio of science and research, in which more than one project yields results that are commingled into a single output without clear specification or linkage of how to attribute the contributions of the various lines of science and research to the combined output [72].

Applicability of the science impact framework

The issue of how to measure the impact of science or research is not simple, and it is even more complex for public health science. Measurement, especially when it comes to health outcomes, is complex because it can take years and involve multiple actions and multiple players, which raises the issue of where to assign credit. Because impact is frequently the result of the synergy of many factors, it is important not to overestimate a single contributor; highlighting the importance of collaborators can lead to stronger partnerships, and professional networks have been shown to be effective in promoting uptake of research findings [73]. Because there are other players, and other strategies and approaches as well, that may or may not be synergistic, the SIF allows a view of both positive and negative effects.

Finally, deciding what to measure to provide the most value, and where to obtain data for such measurement, is a challenge. Systems with the capacity to capture all interactions, including outputs and interim impacts, have been suggested as a possible approach, since those provide a network of data [74]. The clear delineation of potential domains of science influence inherent in the SIF provides a useful construct. It helps science initiatives to be viewed through the lens of practice and, as a result, to ask and answer the same kinds of questions as more traditional implementation efforts: what impact can be made or is being made, and which changes will produce bigger effects or yield the most distal outcomes. Therefore, the SIF complements and strengthens traditional evaluation [15]. The SIF serves both planning and evaluation functions; it helps us think about the myriad outcomes that result from science and research efforts and that, singly or jointly, allow our efforts to contribute in the longer term to improved health outcomes. The SIF provides a very useful description of the sequence of potential outcomes for a science effort without assuming that the same sequence will hold for all science efforts or that all science efforts will affect the most distal health outcomes in the framework. As such, it allows for constructive discussions with stakeholders and skeptics alike who wonder about the extent of our accomplishments where a direct relationship to health outcomes is difficult to demonstrate. Likewise, by laying out an expected sequence of outcomes for an effort, it is possible to look for low-hanging fruit, so that if expected outcomes are not being achieved, there is room to stop and examine in real time how to make the efforts more powerful or to re-calibrate to stay on the right trajectory.
Furthermore, the variety of outcomes allows us to compare successful science efforts side by side, to determine whether there are patterns of influence through the SIF that most quickly or powerfully affect health outcomes. The SIF provides for an iterative process that continues to give; assessment of the work can continue on a continuum into the future. Thus, a retrospective assessment can be continued as prospective monitoring for the foreseeable future. Perhaps some of our cases would by now have registered further impacts beyond what we found at the time we conducted these case studies, and the respective programs can build on our findings to continue to monitor progress. New technologies such as machine learning and artificial intelligence can make SIF assessments faster and easier, but human input will still be required to understand what the output means [75]; as other technologies become available, they can be leveraged as well. Several programs within CDC are beginning to use the SIF to measure program and public health impact. The SIF was used by a CDC cooperative agreement recipient in assessing the uptake of CDC good laboratory practice recommendations in the biochemical genetic testing and newborn screening communities and in developing plans to advance the impact [76]. In addition, a few findings using the SIF have been published in peer-reviewed journals [77–79].


In this paper, we present an approach to measuring science impact that goes beyond journal metrics. The initial development of the SIF and the case studies were based on CDC science, but the framework has application beyond CDC. The SIF has the flexibility to assess retrospectively or monitor prospectively different efforts: those of established scientific programs, projects and research; specific scientific documents, such as publications and guidelines; and even an individual scientist’s body of work. The focus of public health is reducing morbidity and mortality with the goal of improving quality of life and wellbeing. The essence of CDC science is to create actionable knowledge that produces positive impact to keep people safe and healthy. It is important to know whether work we believe has the potential to make an impact actually produces the anticipated impact. The SIF can serve as a framework for focusing on and monitoring the broader impact of science, beyond the impact of individual publications and products. With the SIF, a choice can be made as to what to monitor to show the broader impact of the science. What is unique about this approach is that the focus is not just on the projected impact or outcome but rather on the effects that are occurring in real time, with the recognition that the measurement field is complex. It can promote a culture change from assessing the impact of science primarily through journal metrics to a more robust approach that captures qualitative data measuring the changes occurring because of science. Currently, it is rare to find a single source of evidence data. That may be more feasible for prospective monitoring, as data sources can be determined in the planning stages of the work, such as which systems to leverage to obtain data that substantiate impact. It is for this reason that prospective monitoring using the SIF is considered easier than retrospective assessment.
However, once a retrospective assessment is done, future impacts can be tracked prospectively for that work; essentially, the SIF is evergreen in nature. Information generated from these assessments can be used to produce annual reports or other communication products to relay the value added by science to the scientific community, policy makers and the public. The framework is broad enough and adaptable to address many areas of science. Almost anyone can tailor it to the work they do; all that is needed is to define relevant key indicators for each of the domains of influence. Using the SIF will allow the value of science and research to be translated for the public in a simplified manner that is more likely to be of interest to them than a peer-reviewed publication.

We are interested in the further dissemination and use of the SIF within the public health community and in other venues.


The authors would like to thank Tom Chapel for his perspective on how this framework complements the traditional CDC evaluation framework, Dr. Gerardo Garcia-Lerma for input in creating the Science Impact Framework figure and Dr. Betty Wong for review and comments on the case studies.


  1. 1. Rukmani R. Measures of impact of science and technology in India: Agriculture and rural development. Current Science. 2008;95(12):1694–8.
  2. 2. Donovan C. State of the art in assessing research impact: introduction to a special issue. Research Evaluation. 2011;20(3):175–179.
  3. 3. Bollen J, Van de Sompel H, Hagberg A, & Chute R. A principal component analysis of 39 scientific impact measures. PLoS ONE. 2009;4(6). pmid:19562078
  4. 4. Buykx P, Humphreys J, Wakerman J, Perkins D, Lyle D, McGrail M, et al. 'Making evidence count': a framework to monitor the impact of health services research. Aust J Rural Health. 2012;20(2):51–58. pmid:22435764
  5. 5. Ruegg R & Jordan G. Overview of Evaluation Methods for R&D Programs: A Directory of Evaluation Methods Relevant to Technology Development Programs. 2007. Booklet.
  6. 6. King DA. The scientific imapct of nations. Nature. 2004;430:311–316 pmid:15254529
  7. 7. Research Excellence Framework. (2014).
  8. 8. Berg J. Measuring the scientific output and impact of NIGMS grants. 2010.
  9. 9. Mervis J. Beyond the data. Science. 2011;334:169–170. pmid:21998363
  10. 10. Editorial. The maze of impact metrics. Nature. 2013;502: 271 pmid:24137834
  11. 11. Dance A. Impact: pack a punch. Nature. 2013;502(7471): 397–8 Available from:
  12. 12. Owens B. Judgment day. Nature. 2013;502: 288–290 pmid:24132272
  13. 13. Gordon LG & Bartley N. Views from senior Australian cancer researchers on evaluating the impact of their research: results from a brief survey. Health Research Policy & Systems. 2016; 14 (2):1–8. pmid:26754325
  14. 14. Van Noordeen R. A profusion of measures. Nature. 2010;863–666.
  15. 15. Chaidez V, Kaiser LL. Framework for program evaluation in public health. MMWR. 1999; 48: RR-11.
  16. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322–1327. pmid:10474547
  17. Wilson KM, Brady TJ, Lesesne C. An organizing framework for translation in public health: The Knowledge to Action Framework. Prev Chronic Dis. 2011;8(2):A46. pmid:21324260
  18. Fielding JE, Teutsch SM. So what? A framework for assessing the potential impact of intervention research. Prev Chronic Dis. 2013;10:120160. pmid:23327829
  19. Harris JR, Cheadle A, Hannon PA, Forehand M, Lichiello P, Mahoney E, et al. A framework for disseminating evidence-based health promotion practices. Prev Chronic Dis. 2012;9:E22. pmid:22172189
  20. King DK, Glasgow RE, Leeman-Castillo B. Reaiming RE-AIM: using the model to plan, implement, and evaluate the effects of environmental change approaches to enhancing population health. Am J Public Health. 2010;100(11):2076–2084. pmid:20864705
  21. Fineberg HV. The Institute of Medicine: What makes it great. 2013.
  22. Centers for Disease Control and Prevention. Charles Shepard Award.
  23. Iskander J, Ari M, Chen B, Hall S, Ghiya N, Popovic T. Public Health Grand Rounds at the Centers for Disease Control and Prevention: Evaluation Feedback from a Broad Community of Learners. J Public Health Manag Pract. 2014;20(5):542–50. pmid:24100242
  24. Shults RA, Elder RW, Sleet DA, Nichols JL, Alao MO, Carande-Kulis VG, et al; Task Force on Community Preventive Services. Reviews of evidence regarding interventions to reduce alcohol-impaired driving. Am J Prev Med. 2001;21(4 Suppl):66–88. pmid:11691562
  25. Sleet DA, Mercer SL, Cole KH, Shults RA, Elder RW, Nichols JL. Scientific evidence and policy change: lowering the legal blood alcohol limit for drivers to 0.08% in the USA. Global Health Promotion. 2011;18(1):23–26. pmid:21721296
  26. National Highway Traffic Safety Administration. Traffic safety facts 2009 data: Alcohol-impaired driving. DOT HS 811 385. Washington, DC: US Department of Transportation; 2010.
  27. Bergen G, Shults RA, Rudd RA. Scientific evidence and policy change: Lowering the legal blood alcohol limit for drivers to 0.08% in the USA. MMWR Vital Signs. 2011;60(39):1351–6.
  28. Cain KP, McCarthy KD, Heilig CM, Monkongdee P, Tasaneeyapan T, Kanara N, et al. An algorithm for tuberculosis screening and diagnosis in people with HIV. N Engl J Med. 2010;362:707–716. pmid:20181972
  29. Monkongdee P, McCarthy KD, Cain KP, Tasaneeyapan T, Nguyen HD, Nguyen TN, et al. Yield of acid-fast smear and mycobacterial culture for tuberculosis diagnosis in people with human immunodeficiency virus. Am J Respir Crit Care Med. 2009;180:903–908. pmid:19628775
  30. Samandari T, Agizew TB, Nyirenda S, Tedla Z, Sibanda T, Shang N, et al. 6-month versus 36-month isoniazid preventive treatment for tuberculosis in adults with HIV infection in Botswana: a randomised, double-blind, placebo-controlled trial. Lancet. 2011;377:1588–98. pmid:21492926
  31. Getahun H, Kittikraisak W, Heilig CM, Corbett EL, Ayles H, Cain KP, et al. Development of a standardized screening rule for tuberculosis in people living with HIV in resource-constrained settings: individual participant data meta-analysis of observational studies. PLoS Med. 2011;8(1):e1000391.
  32. World Health Organization. Guidelines for intensified tuberculosis case-finding and isoniazid preventive therapy for people living with HIV in resource-constrained settings. Geneva, Switzerland; 2011.
  33. MMWR. Guidelines for field triage of injured patients: Recommendations of the National Expert Panel on Field Triage. MMWR. 2009;58:1–35.
  34. Lerner BE, Shah MN, Swor RA, Cushman JT, Guse CE, Brasel K, et al. Comparison of the 1999 and 2006 trauma triage guidelines: where do patients go? Prehosp Emerg Care. 2011;15:12–7. pmid:21054176
  35. Faul M, Wald MM, Sullivent EE, Sasser SM, Kapil V, Lerner BE, et al. Large cost savings realized from the 2006 field triage guideline: Reduction in overtriage in U.S. trauma centers. 2012.
  36. US Department of Transportation, Federal Highway Administration. Safe, Accountable, Flexible, Efficient Transportation Equity Act: a legacy for users. 2006. 42 USC § 300d-4
  37. McKibben L, Horan T, Tokars JI, Fowler G, Cardo DM, Pearson ML, et al. Guidance on public reporting of healthcare-associated infections: Recommendations of the Healthcare Infection Control Practices Advisory Committee. Am J Infect Control. 2005;33:217–26. pmid:15877016
  38. Srinivasan A, Craig M, Cardo D. The Power of Policy Change, Federal Collaboration, and State Coordination in Healthcare-Associated Infection Prevention. Clin Infect Dis. 2012;55(3):426–31. pmid:22523266
  39. Dixon R. Control of Healthcare-Associated Infections, 1961–2011. MMWR. 2011;60:58–63. pmid:21976167
  40. MMWR. Infant deaths associated with cough and cold medications. MMWR. 2007;56:1–4. pmid:17218934
  41. Schaefer MK, Shehab N, Cohen AL, Budnitz DS. Adverse events from cough and cold medications in children. Pediatrics. 2008;121:783–7. pmid:18227192
  42. Consumer Healthcare Products Association. Makers of OTC Cough and Cold Medicines Announce Voluntary Withdrawal of Oral Infant Medicines. 2007.
  43. Shehab N, Schaefer MK, Kegler SR, Budnitz DS. Adverse events from cough and cold medications after a market withdrawal of products labeled for infants. Pediatrics. 2010;126:1100–7. pmid:21098150
  44. U.S. Department of Health and Human Services. Medical Product Safety.
  45. Decker SL. Changes in Medicaid physician fees and patterns of ambulatory care. Inquiry. 2009;46:291–304. pmid:19938725
  46. Bauhoff S, Hotchkiss DR, Smith O. Responsiveness and Satisfaction with Providers and Carriers in a Safety Net Insurance Program: Evidence from Georgia’s Medical Insurance for the Poor. Health Policy. 2011;102:286–294. pmid:21820197
  47. Centers for Medicare and Medicaid Services (CMS). Application of AARP for leave to file Amicus Curiae brief and proposed brief urging affirmance of CMS Decision disapproving proposed California state plan amendments. 2011.
  48. Reefhuis J, Honein MA, Whitney CG, Chamany S, Mann EA, Biernath KR, et al. Risk of bacterial meningitis in children with cochlear implants. N Engl J Med. 2003;349:435–45. pmid:12890842
  49. Wei BPC, Shepherd RK, Robins-Browne RM, Clark GM, O’Leary SJ. Pneumococcal Meningitis: Development of a New Animal Model. Otol Neurotol. 2006;27(6):844–854. pmid:16936571
  50. MMWR. Notice to Readers: Pneumococcal Vaccination for Cochlear Implant Recipients. MMWR. 2002;51(41):931.
  51. MMWR. Notice to Readers: Limited Supply of Pneumococcal Conjugate Vaccine: Suspension of Recommendation for Fourth Dose. MMWR. 2004;53(05):108–109.
  52. Davis BS, Chang GJ, Cropp B, Roehrig JT, Martin DA, Mitchell CJ, et al. West Nile virus recombinant DNA vaccine protects mouse and horse from virus challenge and expresses in vitro noninfectious recombinant antigen that can be used in enzyme-linked immunosorbent assays. J Virol. 2001;75:4040–4047. pmid:11287553
  53. Chang GJ, Davis BS, Stringfield C, Lutz C. Prospective immunization of the endangered California condors (Gymnogyps californianus) protects this species from lethal West Nile virus infection. Vaccine. 2007;25(12):2325–2330. pmid:17224209
  54. Martin JE, Pierson TC, Hubka S, Rucker S, Gordon IJ, Enama ME, et al. A West Nile Virus DNA Vaccine Induces Neutralizing Antibody in Healthy Adults during a Phase 1 Clinical Trial. J Infect Dis. 2007;196(12):1732–40. pmid:18190252
  55. Whitney CG, Farley MM, Hadler J, Harrison LH, Bennett NM, Lynfield R, et al; Active Bacterial Core Surveillance of the Emerging Infections Program Network. Decline in invasive pneumococcal disease after the introduction of protein-polysaccharide conjugate vaccine. N Engl J Med. 2003;348:1737–46. pmid:12724479
  56. MMWR. Direct and Indirect Effects of Routine Vaccination of Children with 7-Valent Pneumococcal Conjugate Vaccine on Incidence of Invasive Pneumococcal Disease—United States, 1998–2003. MMWR. 2005;54(36):893–897. pmid:16163262
  57. Pilishvili T, Lexau C, Farley MM, Hadler J, Harrison LH, Bennett NM, et al; Active Bacterial Core Surveillance/Emerging Infections Program Network. Sustained Reductions in Invasive Pneumococcal Disease in the Era of Conjugate Vaccine. J Infect Dis. 2010;201(1):32–41. pmid:19947881
  58. Stephens D. Protecting the herd: the remarkable effectiveness of the bacterial meningitis polysaccharide-protein conjugate vaccines in altering transmission dynamics. Trans Am Clin Climatol Assoc. 2011;122:115–23. pmid:21686214
  59. MMWR. Prevention of perinatal group B streptococcal disease: a public health perspective. Centers for Disease Control and Prevention. MMWR Recomm Rep. 1996;45(RR-7):1–24. pmid:8637497
  60. Schrag S, Gorwitz R, Fultz-Butts K, Schuchat A. Prevention of perinatal group B streptococcal disease. Revised guidelines from CDC. MMWR. 2002;51(RR-11):1–22. pmid:12211284
  61. Verani JR, McGee L, Schrag SJ; Division of Bacterial Diseases. Prevention of perinatal group B streptococcal disease—revised guidelines from CDC. MMWR Recomm Rep. 2010;59(RR-10):1–36. pmid:21088663
  62. Stoll BJ, Hansen NI, Sánchez PJ, Faix RG, Poindexter BB, Van Meurs KP, et al; Eunice Kennedy Shriver National Institute of Child Health and Human Development Neonatal Research Network. Early onset neonatal sepsis: the burden of group B Streptococcal and E. coli disease continues. Pediatrics. 2011;127(5):817–26.
  63. Verani JR, Schrag SJ. Group B streptococcal disease in infants: progress in prevention and continued challenges. Clin Perinatol. 2010;37(2):375–92. pmid:20569813
  64. MMWR. Use of WHO and CDC growth charts for children aged 0–59 months in the US. MMWR Recomm Rep. 2010;59:1–5.
  65. Mei Z, Ogden CL, Flegal KM, Grummer-Strawn LM. Comparison of the prevalence of shortness, underweight, and overweight among US children aged 0 to 59 months by using the CDC 2000 and the WHO 2006 growth charts. J Pediatr. 2008;153(5):622–628. pmid:18619613
  66. Centers for Disease Control and Prevention. Public Health Grand Rounds. 2014.
  67. Moore A. Bad science in the headlines. EMBO Rep. 2006;7(12):1193–1196. pmid:17139292
  68. Flanders WD, O'Brien TR. Inappropriate comparison of incidence and prevalence in epidemiologic research. Am J Public Health. 1989;79(9):1301–1303.
  69. Budd JM, Sievert M, Schultz TR. Phenomena of retraction: reasons for retraction and citations to the publications. JAMA. 1998;280:296–297. pmid:9676689
  70. Chan TM, Kuehl DR. On Lampposts, Sneetches, and Stars: A Call to Go Beyond Bibliometrics for Determining Academic Value. Academic Emergency Medicine. 2019;1–7.
  71. Deeming S, Reeves P, Ramanathan S, Attia J, Nilsson M, Searles A. Measuring research impact in medical research institutes: a qualitative study of the attitudes and opinions of Australian medical research institutes towards research impact assessment frameworks. Health Research Policy & Systems. 2018;16(28):1–20. pmid:29548331
  72. Alston JM, Pardey PG. Attribution and Other Problems in Assessing the Returns to Agricultural R&D. Agricultural Economics. 2001;25(2–3):141–152.
  73. Reed RI, McIntyre E, Jackson-Bowers E, Kalucy L. Pathways to research impact in primary healthcare: What do Australian primary healthcare researchers believe works best to facilitate the use of their research findings? Health Research Policy & Systems. 2017;15(17):1–8. pmid:28253903
  74. Penfield T, Baker MJ, Scoble R, Wykes MC. Assessment, evaluation, and definitions of research impact: A review. Research Evaluation. 2013;23:21–32.
  75. Broussard M. Artificial Unintelligence: How Computers Misunderstand the World. The MIT Press; 2018.
  76. Association of Public Health Laboratories and CDC. Utilization of CDC Recommendations for Good Laboratory Practices in Biochemical Genetic Testing and Newborn Screening for Inherited Metabolic Diseases: Current Status, Lessons Learned and Next Steps to Advance and Evaluate Impact. 2014.
  77. Trivers KF, Rodriguez JL, Cox SL, Crane BE, Duquette D. The Activities and Impact of State Programs to Address Hereditary Breast and Ovarian Cancer, 2011–2014. Healthcare. 2015;3:948–963. pmid:27417805
  78. Green RF, Ari M, Kolor K, Dotson DW, Bowen S, Habarta N, et al. Evaluating the role of public health in implementation of genomics-related recommendations: a case study of hereditary cancers using the CDC Science Impact Framework. Genetics in Medicine. 2019;21(1):28–37. pmid:29907802
  79. Ko LK, Jang SH, Friedman DB, Glanz K, Leeman J, Hannon PA, et al. An application of the Science Impact Framework to the Cancer Prevention and Control Research Network from 2014–2018. Preventive Medicine. 2019;129(Suppl):105821.