Abstract
The number of prediction models developed for use in emergency departments (EDs) has been increasing in recent years to complement traditional triage systems. However, most of these models have only reached the development or validation phase, and few have been implemented in clinical practice. There is a gap in knowledge on the real-world performance of prediction models in the ED and how they can be implemented successfully into routine practice. Existing reviews of prediction models in the ED have also mainly focused on model development and validation. The aim of this scoping review is to summarize the current landscape and understanding of the implementation of prediction models in the ED. This scoping review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist. We will include studies that report implementation outcomes and/or contextual determinants according to the RE-AIM/PRISM framework for prediction models used in EDs. We will include outcomes or contextual determinants studied at any point in time in the implementation process, except for effectiveness, where only post-implementation results will be included. Conference abstracts, theses and dissertations, letters to editors, commentaries, non-research documents and non-English full-text articles will be excluded. Four databases (MEDLINE (through PubMed), Embase, Scopus and CINAHL) will be searched from their inception using a combination of search terms related to the population, intervention and outcomes. Two reviewers will independently screen articles for inclusion, with any discrepancy resolved by a third reviewer. Results from included studies will be summarized narratively according to the RE-AIM/PRISM outcomes and domains. Where appropriate, a simple descriptive summary of quantitative outcomes may be performed.
Citation: Chan SL, Lee JW, Ong MEH, Siddiqui FJ, Graves N, Ho AFW, et al. (2022) Implementation of prediction models in the emergency department from an implementation science perspective—Determinants, outcomes and real-world impact: A scoping review protocol. PLoS ONE 17(5): e0267965. https://doi.org/10.1371/journal.pone.0267965
Editor: Jason Scott, Northumbria University, UNITED KINGDOM
Received: September 15, 2021; Accepted: April 19, 2022; Published: May 12, 2022
Copyright: © 2022 Chan et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: NL is supported by the Duke-NUS Signature Research Programme funded by the Ministry of Health, Singapore. The funders had no role, and will have no role, in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Patients presenting to the emergency department (ED) have varying needs for urgent medical attention, and limited hospital resources often necessitate prioritization of some patients over others [1]. Overcrowding of the ED is a common and growing problem and can lead to adverse patient outcomes [2]. EDs therefore need to quickly determine the urgency and level of care required for each patient in order to optimize the allocation of scarce hospital resources [3]. To achieve this, most modern EDs have a triage process to assess patients’ severity of illness or injury upon arrival, assign priorities and then provide appropriate treatment [3,4]. Currently, ED triage is most commonly guided by semi-subjective scale-based systems, with some notable examples being the Emergency Severity Index (ESI) [5] and the Canadian Triage and Acuity Scale (CTAS) [6]. Using a mixture of qualitative and quantitative metrics, these scale-based protocols guide the healthcare practitioner in assigning the patient a label that reflects their required level of care. Although scale-based systems have been widely implemented and have shown their usefulness, their accuracy is highly dependent on the triaging doctor’s or nurse’s experience [3]. In recent years, various prediction models have been developed for ED patients that could complement subjective scale-based triage processes and further optimize the management of patients in the ED [7,8]. These models are typically derived from real-world data and utilize statistical and machine learning tools ranging from traditional regression models to cutting-edge neural networks [9]. Some examples include models predicting in-hospital mortality [10], intensive care unit (ICU) admission or readmission [11–13]. However, among the many models developed, few were externally validated and even fewer had their impact on clinical practice analysed [7].
Nevertheless, some of these prediction models may have been implemented into routine clinical practice with the increasing emphasis on harnessing big data and building learning healthcare systems [14]. While the area under the curve (AUC) and other quantitative summary statistics are used in model development and validation, they do not entirely capture the actual consequences of model implementation [15]. Predictive analytics promise to improve patient outcomes, but actual patient benefit depends on several intervening steps, culminating in providers responding appropriately to model outputs [16]. Studying the implementation process and its impact on outcomes can identify potential barriers and facilitators to implementation, as well as strategies that may promote implementation [17].
There have been systematic reviews providing informative overviews of prediction models in the ED, primarily in terms of model structure, development and performance [7,12,18]. One review of clinical decision support systems for triage in the ED found that fewer than half of the included studies had an implementation phase, even though the majority showed promising potential in the validation phase [7]. These findings suggest that a host of barriers unrelated to model performance exists, and that despite the paucity of implementation, these barriers can be overcome with proper knowledge and execution. Furthermore, to the best of our knowledge, there has been no review of the process and outcomes of implementing these models into routine clinical practice. Therefore, a gap still exists in our current understanding of the logistical and administrative challenges involved in prediction model implementation in the ED. This gap in the current body of knowledge also extends to an understanding of how the outcomes of such implemented models are assessed and perceived by healthcare providers. This points to a need for a scoping review on this topic, a type of systematic evidence synthesis whose intent is to assess and understand the extent of knowledge or map the concepts in a particular field, rather than to answer a specific clinical question to aid decision making, as is the case for a systematic review [19].
To understand both the contextual determinants and the outcomes affecting implementation success in routine clinical practice, we will use the revised, enhanced Reach, Effectiveness, Adoption, Implementation, Maintenance (RE-AIM)/Practical, Robust, Implementation, and Sustainability Model (PRISM) 2019 framework to guide this review (Fig 1) [20]. The revised RE-AIM/PRISM framework retains the original 5 RE-AIM domains, which can be used as evaluation outcomes for an implementation effort, and includes multilevel contextual determinants from PRISM that can explain these implementation outcomes [20]. This will be useful not only in guiding future implementation studies, but also in providing valuable insight into the factors that influence the success of implementing prediction models in the ED [21,22].
This review also focuses on the implementation of prediction models into routine ED practice. Implementation is a process, and implementation studies can occur at any stage from pre-implementation to maintenance. Studies focused on implementation outcomes at any stage are therefore relevant for our review. Prediction models typically progress from development, internal validation and external validation to impact assessment before being implemented [23,24]. Model performance prior to routine implementation is therefore theoretical, even if assessed on actual patient data. For effectiveness, we therefore chose to include only results obtained after implementation, as these represent ‘real-world effectiveness’.
The aim of this scoping review is to summarize the current landscape and understanding of the implementation of prediction models in the ED from an implementation science perspective. Specifically, apart from a descriptive summary of the characteristics of prediction models implemented in clinical practice, we will summarize the implementation outcomes and contextual determinants affecting implementation success according to the revised RE-AIM/PRISM framework [20].
Methods
This scoping review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist [25]. At the time of writing, we have completed the search, title and abstract screening, and full-text screening.
Inclusion and exclusion criteria
We expected that reports on prediction models implemented in the ED would have varying degrees of focus on implementation outcomes and reporting of implementation strategies. We also recognized that not all implementation studies may be comparative. For clarity, we combined the PICOT (Population, Intervention, Comparison, Outcome and Time) framework and the Standards for Reporting Implementation Studies (StaRI) guidelines [26] to differentiate between the intervention and the implementation strategy of interest. The intervention refers to the type of prediction model of focus in this review, while the implementation strategy refers to the approach used to put the model into routine practice. We therefore included studies that satisfied the PICOT elements for both the intervention and the implementation strategy (Table 1). For the implementation strategy, the intervention and comparison elements were treated as optional to accommodate non-comparative studies.
All types of primary research studies were included (e.g., randomized, quasi-experimental, observational, qualitative). For reviews that met our inclusion criteria, we included the relevant primary studies within them instead of the review itself, allowing more flexibility in data extraction and synthesis.
We excluded conference abstracts and papers, theses and dissertations, letters to editors and commentaries as these were unlikely to contain sufficient information to contribute meaningfully to the review. We also excluded non-research documents in the grey literature and studies with no full-text in English due to lack of resources for searching and translation, respectively.
A wide variety of predictive tools may be used in the ED. For the sake of clarity, the following types of interventions were not included:
- Triage models that rely on subjective judgement and/or do not involve quantitative variables, including simple criteria-based rules.
- Models used in the ED but not applied on patients (e.g., for operations planning, staff roster planning, analysis of scans or reports, etc).
- Models for prediction or prognostication based on a single predictor or diagnostic procedure (e.g., troponin, CT scan, etc).
- Treatment protocols, guidelines, or pathways with multiple decision points/actions. Although some of these may include triage or prediction, they are typically only one of many components of a complex intervention. It is therefore difficult to assess the implementation aspects of the prediction component alone. We will focus on studies where the prediction model is the primary focus for implementation.
- Models or tools for diagnosis or measuring a single construct (e.g., pain, alcohol intake, etc).
- Models focused on improving operational efficiency (e.g., quality improvement studies).
Information sources
We searched 4 electronic databases from the time of their inception until 30 June 2021: MEDLINE (through PubMed), Embase, Scopus and CINAHL. In addition, the reference lists of relevant reviews and articles included at the full-text screening stage will be screened for any additional studies.
Search strategy
A list of keywords and index terms for informative PICOT elements for the intervention and implementation strategy was generated. The index terms for each database were searched and curated according to the controlled vocabulary of that database. For example, for PubMed, MeSH terms were searched using the keywords and the most relevant ones chosen. The keywords and index terms within each concept were then combined using the Boolean operator ‘OR’ and searched in all databases. The results from the 3 concepts were then combined using the Boolean operator ‘AND’ to narrow the search. The team then reviewed a sample of the initial search results and updated the search terms with additional keywords found in relevant articles. The following filters were applied: English, “Full text” and “Journal Article”, to remove conference abstracts and other non-research articles. The final search terms are shown in Table 2.
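The OR-within-concept, AND-across-concepts structure described above can be sketched programmatically. This is an illustrative example only, not the actual search strategy: the placeholder terms below are hypothetical and the final terms are those in Table 2.

```python
def build_query(concepts):
    """Combine search terms: OR within each concept, AND across concepts."""
    grouped = []
    for terms in concepts:
        # Quote each term and join with OR inside parentheses
        grouped.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(grouped)

# Hypothetical placeholder terms for the 3 concepts (population,
# intervention, outcomes) -- not the review's final search terms.
concepts = [
    ["emergency department", "emergency room"],
    ["prediction model", "risk score"],
    ["implementation", "adoption"],
]
print(build_query(concepts))
# ("emergency department" OR "emergency room") AND ("prediction model" OR "risk score") AND ("implementation" OR "adoption")
```

The same structure applies in each database, with the keyword lists swapped for that database's controlled vocabulary.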
Selection of sources of evidence
The entire team developed and piloted the search strategy. After the search strategy was finalized, the actual search was performed by one reviewer in all the databases (SLC). The results from searches in all databases were combined and duplicates removed using EndNote. The resultant list of citations was then imported into Rayyan.ai for screening [27]. In the first level of screening, two reviewers (SLC and JWL) screened the titles and abstracts independently and selected articles for full-text review. Any discrepancies were resolved by consensus with a third reviewer (NL). Next, two reviewers (SLC and JWL) screened the full-texts of articles selected for inclusion and any discrepancies were resolved by consensus with a third reviewer (NL). The reference lists of included articles were then scanned for further relevant articles. These additional articles were also subject to the same screening process as the initially included articles. The search will be repeated prior to writing up the results to capture any new articles that may be eligible.
Data charting process
Two reviewers (SLC and JWL) will independently extract information from included articles using a charting form. The initial form may be revised to include additional relevant fields after the first 5 articles. Information extracted by both reviewers will then be combined and summarized. Any substantial discrepancies will be resolved by consensus with a third reviewer (NL).
Data items
The initial variables that will be extracted are:
- Citation details (authors, title, year of publication, journal, volume, issue, pages)
- Country
- Context (institution name, type of hospital, type of setting, hospital size)
- Study design
- Study period
- Participants (description)
- Intervention details (model name, type, outcome(s), performance, validation status)
- Intervention strategy (actor, action, action target, temporality, dose)
- Methods (for each outcome, including sample size)
- Results
- Other information relevant to the RE-AIM/PRISM framework
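For illustration, the charting form fields listed above could be represented as a simple record structure. This is a hypothetical sketch of our own; the field names paraphrase the list above and do not correspond to any published schema.

```python
from dataclasses import dataclass

@dataclass
class ChartingRecord:
    """One row of the data charting form (illustrative only)."""
    citation: str               # authors, title, year, journal, volume, issue, pages
    country: str
    context: str                # institution name, hospital type, setting, size
    study_design: str
    study_period: str
    participants: str
    intervention: str           # model name, type, outcome(s), performance, validation status
    intervention_strategy: str  # actor, action, action target, temporality, dose
    methods: str                # methods for each outcome, including sample size
    results: str
    reaim_prism_notes: str = "" # other information relevant to RE-AIM/PRISM

# Hypothetical example entry (not from an actual included study)
record = ChartingRecord(
    citation="Doe J, et al. 2020",
    country="Singapore",
    context="Tertiary hospital ED",
    study_design="Observational",
    study_period="2018-2019",
    participants="Adult ED patients",
    intervention="Hypothetical triage prediction model",
    intervention_strategy="Nurse-delivered alert at triage",
    methods="Retrospective cohort, n=1000",
    results="Summary of implementation outcomes",
)
```

In practice the charting form will be a shared spreadsheet or similar, with fields revised after piloting on the first 5 articles as described above.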
Synthesis of results
Included studies will be described in terms of the characteristics of the ED (country, context, participants), study design and model characteristics. Results and any information relevant to the RE-AIM/PRISM framework will be categorized according to the elements of the framework. The definitions of each outcome or domain given by the originators of the RE-AIM and PRISM frameworks, respectively, will be used to guide the categorization [28,29]. We will summarize the results by type of RE-AIM outcome and PRISM domain by providing a descriptive summary (for quantitative outcomes, as appropriate) or a narrative summary (for qualitative outcomes). The revised RE-AIM/PRISM framework also emphasizes fit among context, intervention and implementation strategies, and explicitly includes costs and adaptations under overarching issues. Where appropriate, we will also provide a narrative summary of these aspects [30]. Results may be presented by type of study, intervention, type of setting, and presence, type or dose of implementation strategies, as appropriate. As ‘real-world effectiveness’ will be influenced by both the efficacy of the model (i.e., its accuracy) and implementation success, we may also discuss the results in light of validation results and implementation strategies (if present) where appropriate.
Discussion
This scoping review is the first to focus on the implementation process and real-world outcomes of prediction models implemented in the ED. The body of existing reviews on prediction models in the ED has principally focused on model development and validation [7,12,18]. This review aims to fill a gap in the current literature by providing an overview of how validated prediction models in routine clinical use are implemented and evaluated, thereby complementing existing reviews that focus only on the performance of prediction models. Additionally, considering the burgeoning prevalence of high-performing artificial intelligence (AI) based models [31] and the increasing adoption of computerized clinical decision support systems [32], an inquiry into the implementation of prediction models carries substantial practical implications. This is especially so in the ED, where a range of undesirable outcomes such as overcrowding [33], patient readmission [34] and septic shock [35] persist even after patients leave the ED and are not directly addressed by triage itself.
Despite advancements in the development and validation of ED-based prediction models, there has been comparatively little progress in the implementation and integration of such models into clinical practice. One major barrier to the adoption of prediction models is the lack of evidence of clinical utility, which requires the model to provide information over and above what is already known, thereby prompting actions that lead to improved outcomes compared to without the model [36]. A host of other challenges, such as data barriers, lack of transparency, regulation and certification, ethics, and the need for education and training, exist especially for models designed to harness electronic medical records to produce real-time predictions [37]. This review will summarize the collective experience of implementing various types of prediction models in EDs around the world using an implementation science framework, and provide a sense of what factors promote or hinder implementation, what strategies might work in promoting implementation, how models perform in the real world and which areas are lacking in implementation research. For EDs considering implementing certain prediction models in their setting, this review can provide valuable information on a model’s potential clinical impact, key strategies to maximize implementation success and what implementation studies to perform while doing so. This review can also reveal pitfalls and gaps in the implementation of certain prediction models. Taken together, a collection of both positive and negative real-world experiences can provide a holistic perspective of ED prediction model implementation, potentially aiding and streamlining the implementation process for future prediction models.
There are some strengths and limitations to our study. The key strength is that this review focuses on implementation, which is currently a gap in the review literature. Another strength is that we intend to include a broad range of models, which increases the applicability of the findings. The first limitation is the exclusion of protocols, guidelines or pathways that include a prediction component. Prediction models are often used not in isolation but as part of a care plan. However, the focus of this review is to inform how to implement new prediction models that are likely to be additions to current clinical workflows, rather than the creation of a whole new workflow, although that might be necessary in some cases. The second limitation is that the summary of implementation outcomes may require our interpretation and categorization of study findings. This is inevitable and necessary, as terminology within implementation science is not standardized [38]. Moreover, many studies may not have explicitly utilized implementation science methods or tools.
In conclusion, this scoping review will be a valuable resource for informing future implementation studies of prediction models in the ED.
References
- 1. Aacharya RP, Gastmans C, Denier Y. Emergency department triage: an ethical analysis. BMC Emerg Med. 2011;11:16. Epub 2011/10/11. pmid:21982119; PubMed Central PMCID: PMC3199257.
- 2. Lynn SG, Kellermann AL. Critical decision making: Managing the emergency department in an overcrowded hospital. Annals of Emergency Medicine. 1991;20(3):287–92. pmid:1996824
- 3. FitzGerald G, Jelinek GA, Scott D, Gerdtz MF. Emergency department triage revisited. Emerg Med J. 2010;27(2):86–92. Epub 2010/02/17. pmid:20156855.
- 4. Iserson KV, Moskop JC. Triage in medicine, part I: Concept, history, and types. Ann Emerg Med. 2007;49(3):275–81. Epub 2006/12/05. pmid:17141139.
- 5. Green NA, Durani Y, Brecher D, DePiero A, Loiselle J, Attia M. Emergency Severity Index version 4: a valid and reliable tool in pediatric emergency department triage. Pediatr Emerg Care. 2012;28(8):753–7. Epub 2012/08/04. pmid:22858740.
- 6. Bullard MJ, Musgrave E, Warren D, Unger B, Skeldon T, Grierson R, et al. Revisions to the Canadian Emergency Department Triage and Acuity Scale (CTAS) Guidelines 2016. CJEM. 2017;19(S2):S18–S27. Epub 2017/08/02. pmid:28756800.
- 7. Fernandes M, Vieira SM, Leite F, Palos C, Finkelstein S, Sousa JMC. Clinical Decision Support Systems for Triage in the Emergency Department using Intelligent Systems: a Review. Artif Intell Med. 2020;102:101762. Epub 2020/01/26. pmid:31980099.
- 8. Goldstein BA, Navar AM, Pencina MJ, Ioannidis JP. Opportunities and challenges in developing risk prediction models with electronic health records data: a systematic review. J Am Med Inform Assoc. 2017;24(1):198–208. Epub 2016/05/18. pmid:27189013; PubMed Central PMCID: PMC5201180.
- 9. Beam AL, Kohane IS. Big Data and Machine Learning in Health Care. JAMA. 2018;319(13):1317–8. Epub 2018/03/14. pmid:29532063.
- 10. Xie F, Ong MEH, Liew J, Tan KBK, Ho AFW, Nadarajan GD, et al. Development and Assessment of an Interpretable Machine Learning Triage Tool for Estimating Mortality After Emergency Admissions. JAMA Netw Open. 2021;4(8):e2118467. Epub 2021/08/28. pmid:34448870.
- 11. Artetxe A, Beristain A, Grana M. Predictive models for hospital readmission risk: A systematic review of methods. Comput Methods Programs Biomed. 2018;164:49–64. Epub 2018/09/10. pmid:30195431.
- 12. Brabrand M, Folkestad L, Clausen NG, Knudsen T, Hallas J. Risk scoring systems for adults admitted to the emergency department: a systematic review. Scand J Trauma Resusc Emerg Med. 2010;18:8. Epub 2010/02/12. pmid:20146829; PubMed Central PMCID: PMC2835641.
- 13. Huang Y, Talwar A, Chatterjee S, Aparasu RR. Application of machine learning in predicting hospital readmissions: a scoping review of the literature. BMC Med Res Methodol. 2021;21(1):96. Epub 2021/05/07. pmid:33952192; PubMed Central PMCID: PMC8101040.
- 14. Krumholz HM. Big data and new knowledge in medicine: the thinking, training, and tools needed for a learning health system. Health Aff (Millwood). 2014;33(7):1163–70. Epub 2014/07/10. pmid:25006142; PubMed Central PMCID: PMC5459394.
- 15. Jung K, Kashyap S, Avati A, Harman S, Shaw H, Li R, et al. A framework for making predictive models useful in practice. J Am Med Inform Assoc. 2021;28(6):1149–58. Epub 2020/12/24. pmid:33355350; PubMed Central PMCID: PMC8200271.
- 16. Amarasingham R, Patzer RE, Huesch M, Nguyen NQ, Xie B. Implementing electronic health care predictive analytics: considerations and challenges. Health Aff (Millwood). 2014;33(7):1148–54. Epub 2014/07/10. pmid:25006140.
- 17. Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Republished research: Implementation research: what it is and how to do it. British Journal of Sports Medicine. 2014;48(8):731–6. pmid:24659611
- 18. Nannan Panday RS, Minderhoud TC, Alam N, Nanayakkara PWB. Prognostic value of early warning scores in the emergency department (ED) and acute medical unit (AMU): A narrative review. Eur J Intern Med. 2017;45:20–31. Epub 2017/10/11. pmid:28993097.
- 19. Peters MDJ, Marnie C, Tricco AC, Pollock D, Munn Z, Alexander L, et al. Updated methodological guidance for the conduct of scoping reviews. JBI Evid Synth. 2020;18(10):2119–26. pmid:33038124.
- 20. Glasgow RE, Harden SM, Gaglio B, Rabin B, Smith ML, Porter GC, et al. RE-AIM Planning and Evaluation Framework: Adapting to New Science and Practice With a 20-Year Review. Front Public Health. 2019;7:64. Epub 20190329. pmid:30984733; PubMed Central PMCID: PMC6450067.
- 21. Longoni C, Bonezzi A, Morewedge CK. Resistance to Medical Artificial Intelligence. Journal of Consumer Research. 2019;46(4):629–50.
- 22. Scott I, Carter S, Coiera E. Clinician checklist for assessing suitability of machine learning applications in healthcare. BMJ Health Care Inform. 2021;28(1). Epub 2021/02/07. pmid:33547086; PubMed Central PMCID: PMC7871244.
- 23. Moons KG, Kengne AP, Woodward M, Royston P, Vergouwe Y, Altman DG, et al. Risk prediction models: I. Development, internal validation, and assessing the incremental value of a new (bio)marker. Heart. 2012;98(9):683–90. Epub 20120307. pmid:22397945.
- 24. Moons KG, Kengne AP, Grobbee DE, Royston P, Vergouwe Y, Altman DG, et al. Risk prediction models: II. External validation, model updating, and impact assessment. Heart. 2012;98(9):691–8. Epub 20120307. pmid:22397946.
- 25. Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann Intern Med. 2018;169(7):467–73. Epub 2018/09/05. pmid:30178033.
- 26. Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ. 2017;356:i6795. Epub 20170306. pmid:28264797; PubMed Central PMCID: PMC5421438.
- 27. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan-a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210. Epub 2016/12/07. pmid:27919275; PubMed Central PMCID: PMC5139140.
- 28. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322–7. pmid:10474547; PubMed Central PMCID: PMC1508772.
- 29. Feldstein AC, Glasgow RE. A Practical, Robust Implementation and Sustainability Model (PRISM) for Integrating Research Findings into Practice. The Joint Commission Journal on Quality and Patient Safety. 2008;34(4):228–43. pmid:18468362
- 30. Glasgow RE, Estabrooks PE. Pragmatic Applications of RE-AIM for Health Care Initiatives in Community and Clinical Settings. Prev Chronic Dis. 2018;15:E02. Epub 20180104. pmid:29300695; PubMed Central PMCID: PMC5757385.
- 31. Yu KH, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng. 2018;2(10):719–31. Epub 20181010. pmid:31015651.
- 32. Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med. 2020;3:17. Epub 20200206. pmid:32047862; PubMed Central PMCID: PMC7005290.
- 33. Hoot NR, Aronsky D. Systematic review of emergency department crowding: causes, effects, and solutions. Ann Emerg Med. 2008;52(2):126–36. Epub 2008/04/25. pmid:18433933; PubMed Central PMCID: PMC7340358.
- 34. Schwab C, Hindlet P, Sabatier B, Fernandez C, Korb-Savoldelli V. Risk scores identifying elderly inpatients at risk of 30-day unplanned readmission and accident and emergency department visit: a systematic review. BMJ Open. 2019;9(7):e028302. Epub 2019/08/01. pmid:31362964; PubMed Central PMCID: PMC6677948.
- 35. Nguyen HB, Rivers EP, Abrahamian FM, Moran GJ, Abraham E, Trzeciak S, et al. Severe sepsis and septic shock: review of the literature and emergency department management guidelines. Ann Emerg Med. 2006;48(1):28–54. Epub 2006/06/20. pmid:16781920.
- 36. Harris AH. Path From Predictive Analytics to Improved Patient Outcomes: A Framework to Guide Use, Implementation, and Evaluation of Accurate Surgical Predictive Models. Ann Surg. 2017;265(3):461–3. Epub 2016/10/21. pmid:27735825; PubMed Central PMCID: PMC5645012.
- 37. Amarasingham R, Audet AM, Bates DW, Glenn Cohen I, Entwistle M, Escobar GJ, et al. Consensus Statement on Electronic Health Predictive Analytics: A Guiding Framework to Address Challenges. EGEMS (Wash DC). 2016;4(1):1163. Epub 2016/05/04. pmid:27141516; PubMed Central PMCID: PMC4837887.
- 38. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76. Epub 2010/10/20. pmid:20957426; PubMed Central PMCID: PMC3068522.