Are we prepared? The development of performance indicators for public health emergency preparedness using a modified Delphi approach

Background: Disasters and emergencies arising from infectious diseases, extreme weather and anthropogenic events are increasingly common. While risks vary for different communities, disaster and emergency preparedness is recognized as essential for all nation-states. Evidence to inform measurement of preparedness is lacking. The objective of this study was to identify and define a set of public health emergency preparedness (PHEP) indicators to advance performance measurement for local/regional public health agencies.

Methods: A three-round modified Delphi technique was employed to develop indicators for PHEP. The study was conducted in Canada with a national panel of 33 experts and completed in 2018. A list of indicators was derived from the literature. Indicators were rated by importance and actionability until consensus was achieved.

Results: The scoping review resulted in 62 indicators being included for rating by the panel. Panel feedback provided refinements to indicators and suggestions for new indicators. In total, 76 indicators were proposed for rating across all three rounds; of these, 67 were considered to be important and actionable PHEP indicators.

Conclusions: This study developed a set of 67 PHEP indicators, aligned with a PHEP framework for resilience. The 67 indicators represent important and actionable dimensions of PHEP practice in Canada that can be used by local/regional public health agencies and validated in other jurisdictions to assess readiness and measure improvement in their critical role of protecting community health.


Introduction
The global experience with recent public health emergencies, such as outbreaks of Ebola Virus Disease and the differential impacts of climate change, has public health workers and the general public asking: Are we prepared? The burden of morbidity and mortality from emergencies and disasters can be severe, resulting in public health systems investing substantial time and resources toward preparedness [1]. The public health system is the lead in responding to outbreaks and in minimizing the impact of diverse emergencies on health [2,3]. Public health sector activities in infectious disease emergencies include leading other emergency management organizations during outbreaks, conducting surveillance and investigation, implementing control measures, developing guidance for health-care practitioners, and communicating risks [3]. In addition, public health is the lead sector in preparing for the population health effects of non-infectious events incited by natural or anthropogenic hazards. Emergency preparedness levels have been a concern globally in past emergencies; for example, Canada's response to the 2003 Severe Acute Respiratory Syndrome outbreak revealed clear gaps in preparedness: lack of surge capacity in the clinical and public health systems, difficulties with timely access to laboratory testing and results, and weak links between public health and the health care system [3]. Recognizing complex and system-level challenges that affect emergency preparedness efforts globally, the World Health Organization (WHO) has called for all countries to create resilient, integrated systems that can be responsive and proactive to any future threat, although how to achieve this remains a knowledge gap [4,5]. While risks vary for different communities, disaster and emergency preparedness is recognized as essential for all nation-states [4,6].
Local and regional public health agencies aim to mitigate risks and protect population health; however, they face challenges to ensure readiness for potential emergencies ranging widely in likelihood and impact. Further, investments change over time with economic and policy priorities, which can influence the resources available for this purpose. Thus, the ability to define and measure essential elements of public health emergency preparedness (PHEP) is important for local and/or regional public health agencies.
Measurement and reporting of performance indicators have been shown to impact system performance [7]. In Canada, the Canadian Institute for Health Information and Statistics Canada report indicators of health status and health care system performance [8]; in addition, performance measurement has been used in Canada to inform health system decision-making [9]. The precise ways in which measurement and reporting influence health systems, however, remain unclear [10]. In recent years, increasing attention has been paid to performance measurement for the public health system [11,12]. While preparedness metrics are few in the literature [13][14][15], the pressure on public health agencies to articulate their degree of preparedness is increasing. Globally, countries are asked to meet targets aimed at reducing disaster risks in their communities, including health impacts [6], and the International Health Regulations (IHR) require that all nations report on indicators aligned with the IHR [16,17]. As nation-states examine their own readiness, indicators for relevant jurisdictional levels have been developed by some countries. For example, the United States (US) has examined aspects of preparedness in the context of national health security and emergency planning [18,19], including the concept of resilience [20,21], but measurement considering resilience that is relevant to and actionable for practice in local/regional public health is lacking.
Approaches in PHEP include event- or risk-based planning, such as planning for the health impacts of an international sporting event, and all-hazards planning, which aims to achieve preparedness for a range of possible hazards, both infectious (e.g. influenza) and non-infectious (e.g. natural disasters). The all-hazards approach is viewed as essential for public health system-level readiness, enabling effective and efficient preparedness that accounts for the difficulty in predicting the type and severity of events [14,22,23]. The conventional cycle of emergency management includes four phases: (1) prevention/mitigation, (2) preparedness, (3) response and (4) recovery; public health agency activities relate to all four phases [24]. In this study, we focus on preparedness as upstream activities and actions that promote enhanced public health system capacity and resilience throughout all four phases. It is important to note that in Canada, PHEP addresses population-level preparedness, distinct from clinical care and health care facility preparedness. Communication and integration of preparedness activities between sectors such as health care, government and the community is, however, often a responsibility of public health agencies. Relevant levels of the public health system in Canada are local or regional (varies by province/territory), provincial/territorial, and federal. We consider all three as the public health system, and we identify local/regional public health agencies as the primary locus of public health service delivery in Canada [3,25].
Defining a PHEP framework, establishing indicators, measuring performance, and supporting quality improvement (QI) can be viewed as a continuum that supports building system resilience. Conceptual frameworks or maps serve as a starting point for performance measurement and QI [7]. "Indicators only indicate" and will never entirely capture the complexity of a system, making clarity and conceptualization about what the system is aiming to do essential [7]. To address the important task of ensuring readiness and creating resilient systems, our previous work developed a framework which identifies the essential elements of PHEP relevant to Canada, and considers the complexity of the public health system and emergency context [26]. The framework for resilience includes eleven essential elements and constitutes an evidence-based approach to defining PHEP for local/regional public health agencies and supporting practice for community health protection from disaster risks. In developing the framework, we noted that promoting resilience for public health systems requires consideration of complex aspects of preparedness such as social infrastructure [26,27]; for example, assessment of workforce capacity is influenced by individual workers' willingness to respond [28]. In addition, addressing challenges across these systems may require measuring dimensions such as network strength or "connectivity" of relevant stakeholders [29]. The framework for resilience thus conceptualizes the essential elements to consider in measuring PHEP. The objective of this study is to identify and define a set of PHEP indicators aligned with the framework to advance performance measurement for local/regional public health agencies.

Approach
The modified Delphi method is an iterative survey and consultation process useful for indicator development in health research fields with a limited evidence base, such as PHEP [30,31]. We used a modified Delphi technique with two rounds of online surveys, based on a scoping review and (in the second round only) on indicators suggested by the panel [31]. Using existing literature to inform the first round is an established modification of the Delphi method that avoids a time-consuming first round consisting only of open-ended questions [31]. Reporting details according to standards for Delphi studies are found in S1 Table [30]. The study used an Integrated Knowledge Translation (iKT) approach and a steering committee of knowledge users, defined as professionals likely to use the results, that was consulted at key milestones [32]. Research ethics approval was obtained from the Public Health Ontario and University of Ottawa Ethics Review Boards.

Panel selection
This national study was conducted in Canada, where health services and programs are provided at the provincial/territorial level for ten provinces and three territories. In Canada, regional health authorities or networks generally include more than one municipality, while locally-organized health services are based at the municipal level [33]. Leaders involved in PHEP in Canada include local public health officials, provincial public health and health emergency management partners, and federal public health and health system partners. Purposive sampling augmented by snowball recruitment was employed to deliberately select PHEP experts for a national sample of public health leaders and decision-makers [34]. The rationale for this sample definition was to ensure that key PHEP indicators were identified by individuals with knowledge and experience specifically in PHEP, who hold leadership roles and/or have clear responsibility for PHEP within their health unit, agency or jurisdiction, and for whom indicators would be relevant [31]. Medical Officers of Health (MOHs), Associate MOHs, Environmental Health Officers, and other leaders or decision-makers with experience and/or expertise in PHEP from the federal, provincial and municipal levels were recruited. We aimed to identify 20-30 PHEP experts across Canada and establish a heterogeneous panel composition [31,35]. In the performance measurement indicator literature, selection of expert participants is described through a process of nomination, which we employed to recruit established experts in PHEP [36]. A nomination process by email was thus used to identify experts in the field of PHEP based on experience, scholarship or reputation in their organization or jurisdiction [31,36].
The nomination process resulted in 48 PHEP nominees. Thirty-eight nominees were invited to participate based on geographic and professional diversity. Five nominees declined the invitation due to lack of availability. Consistent with the criteria for nominations, the final Delphi panel comprised 33 experts in senior-level positions spanning all jurisdictional levels across 12 of 13 provinces and territories. Self-reported areas of expertise included public health preparedness, response and management (63.6%) and health services emergency preparedness, response and management (57.6%). Other key areas of expertise included communicable diseases (42.4%) and environmental health (39.4%). The majority of the panel (78.7%) had over ten years of experience, with 42.4% having 20+ years of experience in their field. A profile of the expert panel characteristics is found in Table 1.

Data collection and analysis
A scoping review was used to identify and extract existing indicators for PHEP from the literature [37]. A librarian-assisted search strategy was developed and four databases were searched for relevant, English-language, peer-reviewed literature. Grey literature searches included web searches, government research reports and key documents collected from knowledge users. The search strategies and related keywords for peer-reviewed and grey literature are found in S1 Appendix Tables 1 and 2, respectively. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach was used to map the number of records identified, included and excluded, and the reasons for exclusion. The study selection process was followed by data extraction and data charting according to the descriptive numerical summary approach, conducted by two team members. Quality appraisal was conducted using the Meta Quality Appraisal Tool, a tool specific to public health research [38]. The tool was used to qualitatively appraise the strengths and weaknesses of included studies by assessing relevancy, reliability, validity, and applicability to public health. Grades of high, moderate or low were assigned based on qualitative assessment of these dimensions, with a focus on the validity of the development process for existing indicators (including description of the methodology used), and were reported in the data charting table.
The data from the final group of articles were synthesized with a hybrid approach of deductive and inductive thematic analysis, using NVivo 10. Themes were identified from extracted indicators, corresponding with each framework element. Extracted indicators corresponding to the PHEP framework were assessed for relevance to local/regional public health agency practice. Themes were used by the research team to develop and refine lists of indicators for inclusion in the round one survey by framework element.
Panel members were asked to rate each indicator based on criteria for quality indicators [7]. The United Kingdom's National Health Service Institute for Innovation and Improvement has established a systematic approach to developing indicators using criteria of importance, validity, possibility, meaning and implications. The knowledge user steering committee provided feedback on these criteria; importance and actionability were deemed most relevant for the early stage of indicator development and were included for indicator rating. Importance and actionability were defined respectively as: (1) this indicator is a key priority in public health preparedness for emergencies; and (2) this indicator is under the control of the local or regional public health agency. The survey asked participants to rate each indicator on both criteria on a seven-point Likert scale. Open-ended questions augmented the round one survey to elicit suggestions for additional indicators and obtain feedback on indicator clarity. The round one survey was entered into the web-based platform Acuity4. The survey was piloted with experts who were not panel members but met the criteria for PHEP expertise. Piloting aimed to assess clarity of the data collection instrument, functionality of the online format, and relevance of companion documents. Survey administration was managed by a research coordinator; participants were emailed a personalized URL and a companion document explaining the PHEP framework and indicator extraction/development. Three weekly attempts were made to contact non-respondents [31].
Responses were exported to Microsoft Excel for analysis. Ratings of agreement (5-7) and disagreement (1-4) were converted to a percentage reflecting the level of panel consensus for each criterion statement by indicator. An a priori consensus cut-off of 70% was used based on published ranges [31]. Indicators that achieved 70% consensus as both important and actionable were retained as PHEP indicators after round one. Indicators that reached consensus as both not important and not actionable (disagreement consensus of 70%) were discarded. Finally, indicators that achieved 70% consensus on importance or actionability but not both were deemed unclear and were retained for revision according to panel feedback. Sensitivity analyses were carried out to examine the thresholds for consensus [31,36]. New indicators suggested by the panel during round one were extracted and analyzed using thematic analysis, as there may be multiple descriptors of the same indicator [39]. First, multiple reviews of the raw data were conducted. Second, manual coding was completed and a set of unique themes (i.e. indicators) produced. Based on the resultant themes, a group of new indicators was developed for rating in round two.
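As a minimal sketch of the retain/discard/revise rule described above (the study used Microsoft Excel; the function names and example ratings here are illustrative, not from the study data):

```python
# Round-one consensus classification: agreement = ratings 5-7,
# disagreement = ratings 1-4, with an a priori 70% cut-off.

def agreement_share(ratings, low, high):
    """Fraction of panellists whose rating falls in [low, high]."""
    return sum(1 for r in ratings if low <= r <= high) / len(ratings)

def classify(importance, actionability, cutoff=0.70):
    """Classify one indicator from two lists of 7-point Likert ratings.

    - "retain":  >= 70% agreement (5-7) on BOTH criteria
    - "discard": >= 70% disagreement (1-4) on BOTH criteria
    - "revise":  consensus on one criterion but not both (unclear)
    """
    agree_imp = agreement_share(importance, 5, 7) >= cutoff
    agree_act = agreement_share(actionability, 5, 7) >= cutoff
    if agree_imp and agree_act:
        return "retain"
    disagree_imp = agreement_share(importance, 1, 4) >= cutoff
    disagree_act = agreement_share(actionability, 1, 4) >= cutoff
    if disagree_imp and disagree_act:
        return "discard"
    return "revise"

# Illustrative panel of 33: strong agreement on importance only,
# so the indicator would be revised and re-rated in round two.
imp_ratings = [7] * 25 + [6] * 5 + [3] * 3   # 30/33 ~ 91% agree
act_ratings = [5] * 15 + [4] * 10 + [2] * 8  # 15/33 ~ 45% agree
print(classify(imp_ratings, act_ratings))    # -> revise
```

The same rule was applied to round-two ratings, with the round-three meeting polls resolving the indicators that remained unclear.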
The round two survey included revised versions of the indicators with unclear consensus for re-rating and the new suggested indicators. A summary of panel feedback and results of round one accompanied the round two survey link. Open-ended questions enabled participants to comment on the indicators. Consensus level of agreement was analyzed based on round two responses. Indicators from round two rating were retained, discarded or deemed to have unclear consensus. The third round was a meeting of the panel, with both web-conferencing and in-person participation. A summary of round two panel feedback was distributed in advance. Indicators with unclear consensus were revised and discussed to achieve final consensus to retain or discard. Anonymous rating was conducted using the polling feature in Adobe Connect to achieve final consensus for retaining or discarding indicators. The meeting was audio-recorded and transcribed to document panel feedback.
In keeping with the iKT approach in this study, the steering committee was consulted at key milestones. These included development of indicators from the scoping review; survey piloting; interpretation of survey results; and review and feedback on the final indicator list.

Search results
The librarian-assisted search yielded 4,516 articles and 117 grey literature sources. After screening, a total of six peer-reviewed articles and thirteen grey literature sources were included in the final group for indicator extraction. The flow of selection is outlined by a PRISMA diagram in S1 Appendix Fig 1. The data charting table, descriptive summary, and quality assessments are found in S2 Table. From the literature, 397 indicators spanning 62 themes were extracted and classified by the 11 PHEP framework elements [26]. Themes and indicators extracted from the literature relevant to PHEP are summarized in Table 2. Based on the themes, 62 indicators were identified for round one panel rating.

Modified Delphi
Three rounds of data collection occurred between November 2017 and January 2018. The response rate for round one was 100%. Of the 62 indicators proposed for rating, 41 achieved 70% consensus agreement on both importance and actionability and were retained after the first round. The remaining 21 indicators had unclear consensus: nineteen achieved consensus on importance but not actionability, and two reached consensus on actionability but not importance. Comments pertaining to actionability generally related to jurisdictional responsibility and/or resource/financial constraints outside local/regional control. The results of round one by indicator are provided in S2 Appendix Tables 1 and 2. Indicators with unclear consensus were revised; however, indicators were not modified to address actionability comments if the indicator reached consensus for importance. Panel suggestions resulted in an additional 14 new indicators. A list of indicators suggested by the panel is found in S2 Appendix Table 3. A total of 35 indicators were incorporated into the round two survey.
Round two also achieved a 100% response rate. Of the 35 indicators, 23 reached the 70% level of consensus on both importance and actionability; the remaining 12 indicators had unclear consensus (S2 Appendix Tables 4 and 5). Feedback on the 12 indicators was reviewed and indicators revised accordingly, with the 12 indicators forming the basis for discussion at the final meeting. During the course of the half-day round three meeting, participation ranged from 22 to 28 members (67-85%). Analyses of indicator ratings were adjusted according to the number of votes received in each poll. At the meeting, three indicators reached consensus and two indicators were discarded (S2 Appendix Tables 6 and 7). Seven indicators were deemed to be important but not actionable (S2 Appendix Table 8). Summary qualitative comments from round three are provided in S2 Appendix Table 9. Fig 1 outlines the modified Delphi process used to identify PHEP indicators relevant to local/regional public health agencies. The results of the analyses of the final set of indicators are presented in Table 3.
Over the three rounds of the survey, indicators were confirmed or identified for all domains of the PHEP framework. There was, however, a range in number of indicators identified per element, with Governance and Leadership having the most indicators identified at 12, followed by Communication with 11. Learning and Evaluation had the fewest at three; Surveillance and Monitoring, Collaborative Networks, and Community Engagement had four. The number of indicators per other element ranged from five to seven. In total, 76 indicators were proposed for rating across all three rounds; of these, 67 were considered to be important and actionable PHEP indicators.

Discussion
The objective of our study was to identify and define a set of indicators to advance PHEP performance measurement and guide quality improvement for local/regional public health agencies. A total of 67 indicators were developed and categorized according to an empirically derived PHEP framework. This development of indicators by a locally-based, nationally representative expert panel represents a potentially valuable contribution to evidence-informed public health practice, with particular relevance to local/regional public health. PHEP indicator sets have previously been developed for various jurisdictions. Generally, these have been oriented around accountability for funding and resource allocation for preparedness [18]. However, recent research on resilient health systems indicates that funding accountability-focused metrics may not capture a meaningful conceptualization of PHEP to answer the question 'Are we prepared?' when it comes to protecting community health [20,21]. Further, while improved preparedness has been demonstrated in organizations with experience managing a disaster [40], indicators and more consistent measurement can enhance learning and improvement after real or simulated events. Continuous QI is an important part of public health practice, and an emphasis on learning is a cornerstone of resilience-oriented approaches [4,6]. This study advances the PHEP measurement literature in that it aligns with existing targets and regulations, but furthers it through the lens of tools to support monitoring, learning and improvement.
Some local/regional public health agency PHEP indicator sets use existing datasets [18]. Although this has benefits for feasibility in creating snapshots of preparedness, it poses challenges for QI. For example, the indicators may not be part of a model anchored around the agency as the focus and thus may not be specific to this context. Further, indicators may not be aligned with activities within agency jurisdiction and control. Our set of indicators aligns with a PHEP framework comprising essential elements identified based on empiric data for local/regional public health agencies [26]. The indicators correspond with the essential elements and were assessed through this study as relevant to PHEP, achieving high consensus agreement and consistency for importance. Our list of indicators contributes to the applied public health literature in that they represent actionable aspects of PHEP practice for public health agencies. While specific to this context, our work contributes to global efforts to gauge preparedness given the indicators were derived from existing global indicators, such as the Joint External Evaluation tool [17].

Table 3 (excerpt). Final PHEP indicators by framework element. Where ratings are shown, they are given as median (IQR), n, % agreement for importance, then for actionability.

Governance and Leadership (12 indicators)
1. The public health agency is a member of a local/regional structure for health-sector emergency management that aims to coordinate health system preparedness for emergencies. Network partners involved in this structure may include, for example, acute care, primary care, or emergency medical services, depending on the jurisdiction. [Importance: 7 (1), 32, 97; Actionability: 6 (2), 29, 87.9]
2. The public health agency's policies describe the authority and procedures under which it would respond to an emergency as the lead agency. [Importance: 6 (1), 32, 97; Actionability: 6 (2), 28, 84.8]
3. The public health agency's policies define the conditions and procedures for using incident management structures and processes to coordinate agency activities in emergencies. [Importance: 6 (1), 32, 97; Actionability: 6 (2), 27, 81.8]
4. The public health agency aligns its emergency plans and/or protocols with provincial, territorial and/or federal policy on public health and emergency management. [Importance: 6 (1), 31, 93.9; Actionability: 6 (1), 31, 93.9]
5. The public health agency's policies describe the authority and procedures under which it would respond to an emergency in a supportive role to the lead agency. [Importance: 6 (1), 31, 93.9; Actionability: 6 (0), 29, 87.9]
6. The public health agency's policies define the conditions and procedures for escalating response to an emergency, including processes for declaring an event multi-jurisdictional. [Importance: 6 (1), 31, 93.9; Actionability: 6 (1), 25, 75.8]
7. The public health agency is a member of a local/regional multidisciplinary structure that aims to reduce community risks to emergencies and disasters. Network partners involved in this structure may include transportation, planners, industry, and local/regional elected officials. [Importance: 6 (1), 31, 93.9; Actionability: 5 (2), 24, 72.7]
8. The public health agency's policies align with requirements for reporting to the provincial/territorial and/or federal public health authority on community health risks in the context of an emergency; for example, radio-nuclear, chemical or biosecurity events. [Importance: 6 (2), 31, 93.9; Actionability: 5 (1), 28, 84.8]
9. The public health agency engages with policy-makers to address gaps in policy and/or legislation that pertain to the effectiveness of its emergency management plans and/or protocols.

Planning Process (6 indicators)
13. The public health agency reviews its emergency plans and/or protocols with involved departments and/or programs internal to the agency. [Importance: 6 (1), 33, 100; Actionability: 6 (1), 33, 100]
14. The roles and responsibilities of the public health agency for responding to all-hazards emergencies are defined in agency plans and/or protocols.
17. The public health agency's emergency management plans and/or protocols relate to all phases of a disaster (i.e. prevention/mitigation, preparedness, response, and recovery).

Risk Assessment (5 indicators)
19. The public health agency uses the results of the risk assessment to inform relevant plans/protocols for emergency management, business continuity and/or risk reduction. [Importance: 6 (1), 32, 97; Actionability: 6 (1), 30, 90.9]
20. The public health agency's risk assessment process includes an analysis of organizational capacity to manage the identified risks.

Resources (6 indicators)
24. The public health agency has established procedures to facilitate timely dispensing of physical resources to the community in the context of emergencies (e.g. may include medical prophylaxis and/or treatment).
27. The public health agency has or has access to a system to support management of physical resources relevant to emergencies; for example, equipment, supplies or medical prophylaxis and/or treatment (e.g. may include tracking, monitoring and/or reporting components). [Importance: 6 (1), 31, 93.9; Actionability: 5 (1), 25, 75.8]
28. The public health agency is familiar with established procedures for the exceptional procurement of physical resources relevant to the emergency context, including procedures for procurement outside of business hours; for example, equipment, supplies or medical prophylaxis and/or treatment from the provincial, territorial or federal government.
33. The public health agency has mutual aid agreements in place with health-sector network partners that describe how resources and/or services will be shared during an emergency, including meeting demands for surge capacity.

Community Engagement (4 indicators)
34. The public health agency provides and/or endorses education programs directed at the public to raise awareness about preparedness for relevant community risks. [Importance: 6 (1), 30, 90.9; Actionability: 5 (1), 28, 84.8]
35. The public health agency dedicates time for the continuous development of relationships with community organizations relevant to preparedness for local risks and the agency context; for example, building relationships with members of the public and/or advocacy groups that represent the public. [Importance: 6 (1), 27, 81.8; Actionability: 6 (1), 28, 84.8]
36. The public health agency has or participates in an established structure to facilitate inclusion of community considerations in relevant aspects of public health emergency management; for example, a community advisory committee to inform emergency mitigation, planning and/or recovery including members of the public and/or advocacy groups that represent the public.

Communication (11 indicators)
38. The public health agency has a mechanism to formally or informally coordinate joint messaging with relevant network partners in a timely manner.
There are limitations to this study. As with much indicator development, the evidence underlying the metrics is limited and largely reliant on grey literature. There were few examples in the literature of rigorously derived and validated indicators. Given the broad scope of PHEP, our literature review may not have been exhaustive. This was mitigated by conducting an in-depth search of peer-reviewed and grey literature, contacting experts to request documents, and examining key websites in the field. Indeed, new knowledge emerged as our study was in progress. Specifically, the European Centre for Disease Prevention and Control (ECDC) released a report describing PHEP core competencies for European Union member states in 2017 [41]. While a new approach for the ECDC, this work was an adaptation of a US-based model published previously [15,26,41], and the indicators corresponding with the model were derived from similar documents [17,27]. Our indicator development process used a breadth of sources and aligned with an empirically-developed conceptual framework. Further, the panelists evaluated each of the proposed indicators and had the opportunity to suggest additional ones. Future work will benefit from validation of these indicators in practice.
Our study results have implications for policy and practice. Public health agencies can establish and use these indicators to create a baseline and measure PHEP. While the final list confirmed 67 important and actionable indicators, another seven indicators were found to be important but not actionable. This additional group of indicators is highly relevant to PHEP practice given its high importance ratings; however, these seven indicators highlight the complexity of measuring PHEP and the PHEP system. For example, the Governance and Leadership indicator "Provincial/territorial authorities and local/regional public health agencies jointly develop policies and/or structures defining the agency mandate in public health emergency management" met consensus at 88.9% for importance but only 50% for actionability. The "joint" aspect of this indicator was identified as key to its importance; however, it may not be actionable in the context of a single agency and may be most useful for local public health agencies as they assess the collective readiness of their region and advocate and plan to increase readiness.
The indicators are many and varied, which may raise concerns about the feasibility of QI and burden of reporting. While challenging, this reflects the diversity of risks, actors and organizations with which emergency preparedness planners engage. The range for the number of indicators by element was likely influenced by the literature as there were more existing indicators for concepts such as governance, communication and resources, while other concepts such as collaboration and learning were less explored. It is important to note, however, that in keeping with a complex system, the elements are seen as interconnected and adaptive. For example, aspects of collaboration are captured through other elements, including Governance and Leadership, Planning Process and Communication.
Future research should address the usefulness of these indicators in practice. It will be important to assess gaps in indicators that relate to key elements of the PHEP framework. Further, some indicators (around communication and community engagement in particular) require multiple perspectives for validation. Research should be directed toward developing standardized measurement tools that are relevant across organizations. Another approach uses a logic model or strategy map, in which lead indicators, or those likely to change earlier (often process indicators), can be compared against lag indicators, or those likely to change later (often outcome indicators). Our framework suggests that success across all elements is likely necessary for successful response to disasters and emergencies [26], making examination of correlations between elements or indicators challenging. To further advance the science of performance measurement for PHEP, field-based piloting and validation of the indicators will be beneficial.

Implications
1. This study presents relevant and useful indicators for local/regional public health agencies to assess practice in PHEP and guide improvement.