Implementation strategies in emergency management of children: A scoping review

Background: Implementation strategies are vital for the uptake of evidence to improve health, healthcare delivery, and decision-making. Medical or mental health emergencies can be life-threatening, especially in children, whose unique physiological needs complicate care in emergency departments (EDs). Practice change in EDs attending to children therefore requires evidence-informed consideration of the best approaches to implementing research evidence. We aimed to identify and map the characteristics of implementation strategies used in the emergency management of children.

Methods: We conducted a scoping review using Arksey and O'Malley's framework. We searched four databases [Medline (Ovid), Embase (Ovid), Cochrane Central (Wiley), and CINAHL (EBSCO)] from inception to May 2019 for implementation studies in children (≤21 years) in emergency settings. Two pairs of reviewers independently selected studies for inclusion and extracted the data. We performed a descriptive analysis of the included studies.

Results: We included 87 studies from a total of 9,607 retrieved citations. Most of the studies used a before and after design (n = 68, 78%) and were conducted in North America (n = 63, 72%); less than one-tenth of the included studies (n = 7, 8%) were randomized controlled trials (RCTs). About one-third of the included studies used a single strategy to improve the uptake of research evidence. Dissemination strategies were the most commonly utilized (n = 77, 89%), compared with the other implementation strategies: process (n = 47, 54%), integration (n = 49, 56%), and capacity building and scale-up (n = 13, 15%). Studies that adopted capacity building and scale-up as part of their strategies were the most effective (100%), compared to dissemination (90%), process (88%), and integration (85%).

Conclusions: Studies on implementation strategies in the emergency management of children have mostly been non-randomized. This review suggests that dissemination is the most common strategy used and that capacity building and scale-up are the most effective strategies. Higher-quality evidence from randomized controlled trials is needed to accurately assess the effectiveness of implementation strategies in the emergency management of children.


Introduction
Although a separate pediatric emergency department (ED) would be ideal, not all hospitals have one, and a significant number of children present to general EDs [1,2]. The requirements for managing pediatric emergencies differ from those for adults because of children's unique needs in medication, equipment, staffing, and pediatric-specific policies and protocols [3]. As new evidence emerges from well-designed research studies aimed at improving the health outcomes of children visiting EDs, it is critical to identify effective strategies for implementing these research findings in practice [4].
In brief, implementation strategies are methods or techniques used to enhance the uptake and sustainability of research findings in routine practice [4]. They can be categorized into the following classes: (1) dissemination strategies: actions that target healthcare providers' awareness, knowledge, attitudes, and intention to adopt an evidence-based intervention (EBI) [5]; (2) process strategies: activities or processes related to quality improvement in planning, selecting, and integrating an EBI into practice [6]; (3) integration strategies: activities or actions taken to address factors that positively or negatively influence optimal integration of a specific EBI into practice [5]; and (4) capacity building and scale-up strategies: strategies that target the general capacity of individuals to execute implementation process strategies [5], including training, technical assistance, tools, and opportunities for peer networking. An implementation strategy is described as successful or effective when it increases the uptake or utilization of guidelines, protocols, or evidence in routine practice [7]. However, it remains unclear whether study design plays a role in the observed effectiveness of implementation.
A growing number of implementation studies report on various implementation strategies used in the emergency management of children, with inconclusive evidence on the effectiveness of the strategies used [8][9][10][11][12]. Apart from patient-measured outcomes, implementation studies are expected to also examine healthcare professional and organizational behavior toward accepting or utilizing evidence-based practices [7], but some studies have neither investigated nor reported on it [13]. It is unclear what the characteristics of successful implementation strategies in EDs are. Thus, the aim of this scoping review is to identify and map the characteristics of implementation strategies in the emergency management of children.

Methods
We used Arksey and O'Malley's 5-stage framework to conduct our scoping review [14]. An a priori protocol for this study is available on the Open Science Framework platform (https://osf.io/h6jv2). Our review question was: What are the characteristics of successful implementation strategies used in the emergency management of children? We reported this review in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Extension for Scoping Reviews checklist (S1 Table) [15].

Study eligibility criteria
This review included implementation studies conducted in EDs managing children (≤21 years) [16]. Our intervention of interest was the use of any of the implementation strategies described earlier [5, 6]. We focused on controlled studies, defined here as studies that implemented at least one guideline, protocol, or specific treatment plan and compared outcomes with those before implementation or with another setting in which the implementation strategy was not applied. Citations were limited to peer-reviewed, full-text articles published in English. There was no limit on the date of publication.

Search strategy
A medical librarian (N.A.) designed and executed a literature search strategy in MEDLINE (Ovid) from inception through May 2019 (S2 Table). The search strategy was adapted for the other bibliographic databases: Embase (Ovid), Cochrane Central (Wiley), and CINAHL (EBSCO). All retrieved citations were imported into EndNote X8 (Clarivate Analytics, Philadelphia, PA).

Study selection
Two pairs of reviewers (A.A., G.N.O., M.M.J., and O.L.) independently screened the identified citations for eligibility using a two-stage sifting approach covering the title, abstract, and full-text article. Disagreements between reviewers arose for a few studies (<1%) and were resolved through discussion between reviewers or, on two occasions, by involving another reviewer (A.M.A.S.). We have reported the study selection process using the PRISMA flow diagram (Fig 1).

Data extraction
Data were extracted using standardized pilot-tested forms, entered into MS Excel (Microsoft Corporation, Redmond, WA, USA) by one reviewer (A.A., G.N.O., M.M.J., or O.L.), and verified for accuracy and completeness by a second reviewer. Disagreements were resolved by discussion between reviewers or, when necessary, by involving another reviewer (A.M.A.S.). We extracted data on study details (first author, year of publication, study period, country, study design, study objective, and area of study), the intervention (use of any of the implementation strategies described earlier [5,6]), the effectiveness of the implementation strategies with respect to healthcare provider-measured and patient-measured outcomes, and the number of strategies used.

Risk of bias assessment
We did not appraise the risk of bias of the included studies, which is consistent with established scoping review methods [17].

Data analysis
We performed screening and data management using MS Excel (Microsoft Corporation, Redmond, WA, USA). We conducted a descriptive analysis of the different implementation strategies and their effectiveness, presented in tabular and narrative formats. We judged an implementation strategy effective in a given study when a positive change was reported in the study's outcomes; studies in which no change or a negative change was observed were labelled accordingly. The effectiveness (%) of an implementation strategy was computed by dividing the number of studies that reported a positive effect on the study outcome using that strategy by the total number of studies that used the strategy [18, 19].
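As a minimal illustration of this calculation (using hypothetical counts, not data from this review), the effectiveness of a strategy can be sketched as:

```python
# Sketch of the effectiveness calculation described above.
# The counts in the example call are hypothetical, for illustration only.

def effectiveness(positive_studies: int, total_studies: int) -> float:
    """Percentage of studies using a strategy that reported a positive effect."""
    if total_studies <= 0:
        raise ValueError("the strategy must have been used by at least one study")
    return 100.0 * positive_studies / total_studies

# e.g., a strategy used by 10 studies, 9 of which reported a positive effect:
print(effectiveness(9, 10))  # 90.0
```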
Results

Included studies adopted one (n = 27, 31%), two (n = 27, 31%), or three (n = 27, 31%) implementation strategies, while less than one-tenth of the studies used four strategies (n = 6, 7%). The details of how many and which strategies were used in each included study are presented in S3 Table. Dissemination strategies were utilized by most studies (n = 77, 89%), compared with the other implementation strategies: process (n = 47, 54%), integration (n = 49, 56%), and capacity building and scale-up (n = 13, 15%) (Fig 2). Studies that adopted capacity building and scale-up as part of their implementation strategies were the most effective (100%), compared to dissemination (90%), process (88%), and integration (85%) (Table 2). A similar pattern of effectiveness was observed when each strategy was adopted alone. Among non-randomized studies focused on healthcare provider-measured outcomes, capacity building and scale-up showed the highest effectiveness (100%), compared with the other strategies (dissemination, 92%; process, 90%; integration, 87%) (Table 3). In contrast, the effectiveness of these strategies was low in RCTs reporting patient-measured outcomes. On average, the effectiveness of strategies was higher for healthcare provider-measured outcomes than for patient-measured outcomes in both RCTs (29.3% versus 8.3%) and non-randomized studies (92.3% versus 38.3%). The reduction in effectiveness between healthcare provider-measured and patient-measured outcomes, however, was greater in RCTs than in non-randomized studies. The change in the number of

Discussion
To the best of our knowledge, this is the first systematic scoping review to identify the various implementation strategies used in the emergency management of children. Most evidence on implementation strategies came from non-randomized studies, which showed that dissemination strategies were the most commonly used but capacity building and scale-up strategies were the most effective in the emergency management of children. About a third of the included studies used one implementation strategy, roughly two-thirds used two or three strategies, and less than 10% used four. The effectiveness of the implementation strategies varied by study design and study outcome (e.g., healthcare provider-measured versus patient-measured outcomes). We observed the highest effectiveness in non-randomized studies that used capacity building and scale-up with a focus on healthcare provider-measured outcomes. Timely intervention in the emergency management of children is crucial to attaining an optimal level of care, and the skills and resources needed to manage children in EDs require the rapid implementation of up-to-date research. Different types of implementation

PLOS ONE
strategies have been used in the emergency management of children [4-6], but questions remain as to which ones are effective and how many strategies are needed in the real world. Although dissemination strategies were the most commonly used in the included studies, capacity building and scale-up strategies were the most effective. A before and after study [9] investigated the effect of implementing a simulation-based training program on healthcare provider confidence in team-based management of severely injured pediatric trauma patients and found a positive response: healthcare provider confidence improved with long-term exposure. The authors used dissemination strategies, in which healthcare providers underwent a 40-minute structured debriefing with trained debriefers after a training session. They also adopted capacity building and scale-up strategies, in which various pediatric simulators and tools were used to support the implementation process and achieve a positive effect on healthcare providers.
It is also crucial to identify implementation strategies that may not produce the desired results. As observed in our scoping review, a few implementation studies found no significant benefit following implementation. For example, Tavarez et al. [10] evaluated the effects of implementing e-mail-only, provider-level performance feedback on physicians' admission practice variation and reported no significant impact on management practices. They used integration strategies in which each physician's performance data were highlighted in red if they fell within the lowest quartile among all physicians and in blue if they fell within the highest quartile.
The success of the implementation strategies appeared to be influenced, in part, by study design. Most of the included studies were non-randomized, and less than one-tenth were RCTs. Arguably, this preponderance of non-randomized designs may have inflated the overall effectiveness observed for the implementation strategies. That said, our review showed that the highest level of effectiveness was observed for capacity building and scale-up in non-randomized studies that focused on healthcare provider-measured outcomes. What was consistent across both RCTs and non-randomized studies was that the effectiveness of the strategies was higher in studies focused on healthcare provider-measured outcomes than in those focused on patient-measured outcomes.
Although the ultimate goal of implementation research is to improve the overall quality of healthcare, the success of implementation strategies lies in positively influencing healthcare professionals' and organizations' behavior toward accepting or utilizing evidence-based practices [7]. Lee et al. [11] conducted a before and after study investigating the effect of implementing a clinical pathway to decrease the period of observation following the management of anaphylaxis in EDs and thereby reduce the admission rate. They found a positive effect on the overall admission rate, which fell from 58% to 25% following the implementation. While they reported a positive effect on healthcare provider-measured outcomes, they found no benefit on the patient-measured outcome (the percentage of patients who returned to the ED within 72 hours). Their findings, and those of other included studies in this review, suggest that patient-measured outcomes alone may not be a good marker of a successful implementation strategy. We also observed that the reduction in effectiveness between healthcare provider-measured and patient-measured outcomes was greater in RCTs than in non-randomized studies, which suggests that the effectiveness of implementation strategies may be exaggerated in non-randomized studies.
The strengths of this review include the use of an a priori protocol that followed standard accepted methods for scoping reviews, and reporting in accordance with the PRISMA Extension for Scoping Reviews guidelines. A multidisciplinary team, including experienced systematic reviewers and experts in implementation science, clinical epidemiology, and pediatric emergency management, provided guidance to the reviewers during study selection, data extraction, and interpretation of the results. Our study is not without limitations. Most of the included studies were from North America; worldwide generalizability of our results may therefore be limited by cultural variations that affect behavior toward implementing research evidence. We did not appraise the risk of bias of the included studies, in keeping with scoping review methods [17]. Because most of the included studies were non-randomized, with possible exaggeration of the effectiveness of strategies, we acknowledge that more robust implementation RCTs with rigorous methodological approaches are needed to confirm or refute our findings. We performed only descriptive statistical analysis, consistent with our a priori protocol. Although we searched multiple bibliographic databases for completeness, we acknowledge that we may not have captured all relevant studies because of our inclusion criteria.
Further research is needed to determine the barriers to adopting implementation strategies that appeared to be more effective but were not commonly used. More data are also required to determine the optimal time to implement these strategies and their long-term effects. Our scoping review summarizes the available evidence on implementation strategies in the emergency management of children and highlights the characteristics of successful ones.
In conclusion, studies on implementation strategies in the emergency management of children have mostly used non-randomized designs, with possibly exaggerated effect sizes. More rigorous study designs, such as RCTs, are needed to compare implementation strategies. This review suggests that dissemination is the most common strategy and that capacity building and scale-up strategies are the most effective.