Abstract
Valid, reliable, and acceptable tools for assessing self-reported competence in evidence-informed decision-making (EIDM) are required to provide insight into the current status of EIDM knowledge, skills, attitudes/beliefs, and behaviours for registered nurses working in public health. The purpose of this study was to assess the validity, reliability, and acceptability of the EIDM Competence Measure. A psychometric study design was employed, guided by the Standards for Educational and Psychological Testing and general measurement development principles. All registered nurses working across 16 public health units in Ontario, Canada were invited to complete the newly developed EIDM Competence Measure via an online survey. The EIDM Competence Measure is a self-reported tool consisting of four EIDM subscales: 1) knowledge; 2) skills; 3) attitudes/beliefs; and 4) behaviours. Acceptability was measured by completion time and percentage of missing data for the original 40-item tool. The internal structure of the tool was first assessed through item-subscale total and item-item correlations within subscales for potential item reduction of the original 40-item tool. Following item reduction, which resulted in a revised 27-item EIDM Competence Measure, a principal component analysis using an oblique rotation was performed to confirm the four-subscale structure. Validity based on relationships to other variables was assessed by exploring associations between EIDM competence attributes and individual factors (e.g., years of nursing experience, education) and organizational factors (e.g., resource allocation). Internal reliability within each subscale was analyzed using Cronbach’s alphas. Across the 16 participating public health units, 201 nurses (mean years as a registered nurse = 18.1; predominantly female, n = 197, 98%) completed the EIDM Competence Measure. Overall missing data were minimal: 93% of participants completed the entire original 40-item tool (i.e., no missing data), and 7% of participants had one or more items with missing data. Only one participant (0.5%) had >10% missing data (i.e., more than 4 of the 40 items with data missing). Mean completion time was 7 minutes and 20 seconds for the 40-item tool. Extraction of a four-factor model based on the 27-item version of the scale showed substantial factor loadings (>0.4) that aligned with the four EIDM subscales of knowledge, skills, attitudes/beliefs, and behaviours. Significant relationships between EIDM competence subscale scores and education, EIDM training, EIDM project involvement, and supportive organizational culture were observed. Cronbach’s alphas exceeded minimum standards for all subscales: knowledge (α = 0.96); skills (α = 0.93); attitudes/beliefs (α = 0.80); and behaviours (α = 0.94).
Citation: Belita E, Fisher K, Yost J, Squires JE, Ganann R, Dobbins M (2022) Validity, reliability, and acceptability of the Evidence-Informed Decision-Making (EIDM) competence measure. PLoS ONE 17(8): e0272699. https://doi.org/10.1371/journal.pone.0272699
Editor: Joseph Telfair, Georgia Southern University, UNITED STATES
Received: September 3, 2021; Accepted: July 25, 2022; Published: August 5, 2022
Copyright: © 2022 Belita et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All data files are available from Scholars Portal Dataverse repository https://doi.org/10.5683/SP3/SE3YUP.
Funding: The author(s) received no specific funding for this work.
Competing interests: I have read the journal’s policy and the authors of this manuscript have the following competing interests: Dr. Jennifer Yost is an independent contractor with the American College of Physicians. The other authors confirm they have no competing interests. This does not alter our adherence to PLOS ONE policies on sharing data and materials.
Introduction
The crucial need for implementing evidence-informed public health interventions that are effective and cost-efficient is increasingly demonstrated amid emerging communicable diseases, sustained rates of chronic conditions, climate change, and other public health threats to populations, communities, and individuals [1]. The concept of basing public health decision making on diverse forms of high quality, available evidence is recognized as evidence-informed decision-making (EIDM) [2]. EIDM in public health practice involves the identification, appraisal, and application of evidence related to research, along with professional expertise, local context, and client and community characteristics, needs, and preferences [2–4].
Across the functions of public health preparedness, prevention, protection, and promotion [5], there is considerable evidence of high-quality, effective, and cost-efficient interventions [6, 7]. Despite this evidence base, a persistent gap exists where research evidence is not consistently used by public health professionals to inform decision-making in practice [8, 9]. Findings across multiple studies highlight this research-to-practice gap among the public health workforce, of which nurses are the largest professional discipline [10]. In two studies exploring EIDM capacity building among public health professionals, participants reported that only 58–72% of public health programs offered by their state and local organizations were informed by research evidence [11, 12]. Furthermore, in a review of 33 studies exploring public health professionals’ information needs for evidence-informed decision-making, Barr-Walker [13] reported that colleagues served as a primary information source more often than online databases. In another study investigating public health decision-making among local governments in Australia [14], published literature such as journal articles or reports by academic institutions was reported to be the least influential and least useful resource in decision-making. Findings from international studies centred specifically on nurses in public health further validate this research-to-practice gap. Authors of studies in Canada [15] and Norway [16] have reported that nurses frequently used knowledge from clients, other clinical experts, and professional development trainings to inform their professional practice, and in fewer instances relied on evidence from published research.
Deficits in EIDM public health practice relate largely to low levels of confidence, knowledge, and skills in performing EIDM related tasks [17, 18], along with organizational barriers, such as work cultures resistant to change, insufficient resource allocation, or lack of protected time for EIDM work [17, 19]. Strategies to encourage sustained organization-wide EIDM uptake include supportive nursing leadership and mentorship, a focus on competence development at the individual practitioner level through professional development opportunities [17, 20], integration of EIDM as a strategic priority in organizational missions or visions, and explicit indicators of EIDM responsibilities and expectations for practitioners and health care leaders [20–22]. Regarding the latter, the articulation of EIDM expectations provides organizations and health care professionals with shared and consistent language around standards that can be integrated into competence assessment measures for use in real-world practice.
While many tools exist to measure EIDM, only a few specifically seek to assess EIDM competence among health care professionals. The Evidence Based Practice Evaluation Competence Questionnaire (EBP-COQ) is a 25-item tool with demonstrated internal reliability and construct and discriminant validity, which assesses self-reported competence in EIDM among undergraduate nursing students [23]. Developers of this instrument defined competence as the ability to choose and use knowledge, skills, and attitudes with the intention of performing a task in a specific context [23]. The Fresno test has also been labelled by its developers as a competence assessment measure for EIDM knowledge and skills [24]. This objective measure has been tested among family practice residents and EIDM educators [24], and an adapted version has been used among pediatric nurses [25]. The Fresno test requires responses to short-answer questions based on hypothetical clinical scenarios. Inter-rater reliability, internal consistency, and content and construct validity have all been established through its psychometric testing [24]. While the original developers do not explicitly discuss their established conceptual definition of competence [24], the author of the adapted Fresno test discusses the conceptual meaning of competence as related to knowledge and skill acquisition [25].
Missing from the conceptual definitions of competence used in these existing measures is the inclusion of all attributes that comprise competence, which includes knowledge, skills, attitudes/values, and behaviours [26–28]. Competence has been defined as the integration of these four attributes with a focus on the quality of task performance related to a specific standard [26, 28, 29]. Specifically, in relation to competence in EIDM, these four attributes are well described across the literature (see Table 1).
This conceptual underrepresentation of EIDM competence attributes has also been confirmed in a recent systematic review [35] of 35 measures assessing EIDM knowledge, skills, attitudes, and/or behaviours in which the authors reported that only three measures assessed all four competence attributes: a) the Evidence-Based Practice Questionnaire [36], b) the School Nursing Evidence-Based Practice Questionnaire [37], and c) a self-developed measure by Chiu et al. [38]. In addition, there were limitations among these three measures, including a lack of assessment of the quality of the competence attributes, a critical component of competence, particularly with respect to EIDM behaviour items [35].
To address these conceptual gaps among EIDM measures, this study aimed to develop and psychometrically test a comprehensive self-report EIDM competence measure assessing the quality of knowledge, skills, attitudes/beliefs, and behaviours among nurses in public health. The first phase of this study, described elsewhere [39], focused on the process of item development, content validation (analyzing the relationship between test content and the construct under measure) [40], and response process assessment (exploring how thought processes used in responding to a scale align with a construct under measure) [40] of the new EIDM Competence Measure. The purpose of this paper is to describe the second phase of the study: assessing the acceptability, internal reliability, and validity evidence of the EIDM Competence Measure, based on its internal structure and the relationships of its scores to other variables.
Methods
Design
A psychometric study design was used for this study. It was guided by general measure development principles [41] and the Standards for Educational and Psychological Testing which provide established criteria for the development and psychometric evaluation of tests, with a focus on assessment of validity evidence [40]. Full ethical approval was granted for the study by the Hamilton Integrated Research Ethics Board.
Sample and recruitment
Recruitment occurred from November 2019 to March 2020. A multi-stage sampling design was used: health units were recruited at the first stage and individual nurses at the second stage. A convenience sample of public health units in Ontario was recruited, with existing relationships with Medical Officers of Health used to identify organizations. Thirty-two Ontario health units were invited to participate through an email sent to each Medical Officer of Health informing them about and requesting support for the study. Champions (e.g., Chief Nursing Officers, nursing managers) were identified by Medical Officers of Health and were responsible for sending emails to all registered nurses working in their respective public health units, inviting them to voluntarily participate in the study. Invitation emails included a study introductory letter with a link to the written participant consent form and anonymous online survey.
All registered nurses within the participating health units were invited to enrol in the study. The target sample size was 400 participants based on a recommended 10:1 respondent to item ratio for factor analysis [42] and given the original EIDM competence tool consists of 40 items. Inclusion criteria were: 1) licensure as a registered nurse; 2) employment in an Ontario public health unit; and 3) working in any nursing or non-nursing role (e.g., health promoter) within the health unit. Those who participated in the first phase of the study (response process testing) were not eligible to participate, as they would have had familiarity with the measure, with potential to bias the results.
Measures
Data collection occurred from December 2019 to March 2020. Upon confirming written consent via an online form, participants completed a one-time anonymous online survey via the LimeSurvey platform. The self-report survey consisted of three sections: 1) demographics; 2) organizational factors; and 3) the 40-item self-report EIDM Competence Measure (see S1 File). Demographic information collected included: number of years worked as a registered nurse and in public health; gender; current role; primary area of work specialization; highest earned degree; completion of formal EIDM training; and involvement in EIDM projects/work. Regarding organizational factors, 12 of the 19 items from the Organizational Culture and Readiness for System-Wide Implementation of EBP (OCR-SIEP) scale [43] that were relevant to a public health setting were used, with participants responding to items on a 5-point Likert scale (1 = not at all to 5 = very much). Since the OCR-SIEP employs a summative score, the same scoring process was applied to the subset of 12 selected items, with higher scores denoting greater organizational readiness for EIDM. Permission for use of scale items was granted by the original developer, Dr. Bernadette Melnyk [44]. The original 40-item self-report EIDM Competence Measure, which was assessed for content and response process validity in the first phase of the study [39], consisted of four subscales: knowledge (11 items), skills (10 items), attitudes/beliefs (7 items), and behaviours (12 items). Participants responded to all subscale items on a 7-point Likert scale: knowledge (1 = poor to 7 = excellent); skills (1 = beginner to 7 = expert); attitudes/beliefs (1 = strongly disagree to 7 = strongly agree); and behaviours (1 = not competent to 7 = highly competent). Given that the instrument was capturing different constructs, an overall total score was not computed; instead, a total for each of the four subscales was computed. Reverse coding was conducted for only one item in the EIDM attitudes/beliefs subscale (i.e., “I believe EIDM is difficult”). Higher total subscale scores denoted higher competence for that EIDM competence attribute.
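As a concrete illustration of this scoring scheme, the minimal Python sketch below computes the four subscale totals. The item labels (K1–K11, S1–S10, A1–A7, B1–B12) and the label A7 for the reverse-coded item are hypothetical placeholders, not the survey's actual column names.

```python
import pandas as pd

# Hypothetical item labels; a real survey export would carry its own names.
SUBSCALES = {
    "knowledge":  [f"K{i}" for i in range(1, 12)],   # 11 items
    "skills":     [f"S{i}" for i in range(1, 11)],   # 10 items
    "attitudes":  [f"A{i}" for i in range(1, 8)],    # 7 items
    "behaviours": [f"B{i}" for i in range(1, 13)],   # 12 items
}
REVERSED = ["A7"]  # assumed label for "I believe EIDM is difficult"

def score_subscales(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one total per subscale per respondent; no overall total."""
    df = responses.copy()
    # On a 1-7 Likert scale, reverse coding maps x -> 8 - x (7 -> 1, 1 -> 7).
    for item in REVERSED:
        df[item] = 8 - df[item]
    # Higher subscale totals denote higher competence on that attribute.
    return pd.DataFrame({name: df[items].sum(axis=1)
                         for name, items in SUBSCALES.items()})
```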
Data analysis
Acceptability.
Acceptability is associated with the practicality of a measure and was operationalized by the amount of time respondents were required to complete the tool and the extent to which there was difficulty with completion as measured by the amount of missing data [45]. Mean completion times were analyzed for the entire original 40-item tool and each EIDM subscale. Acceptable completion time was identified as < 10 minutes [46, 47] for the entire original 40-item tool. Missing data were analyzed through: 1) percentage of participants who completed the entire 40-item tool (i.e., no missing data) [48]; 2) percentage of participants who had data missing for at least one item [49]; and 3) percentage of participants who had >10% of missing data [50] across the entire 40-item tool (i.e., more than 4 out of 40 items with missing data).
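These three missing-data criteria translate directly into simple dataframe operations. The following sketch assumes one row per respondent and one column per tool item; the function name and threshold parameter are illustrative only.

```python
import pandas as pd

def missing_data_summary(items: pd.DataFrame, threshold: float = 0.10) -> dict:
    """Apply the three missing-data criteria to a respondents-by-items frame."""
    missing_per_respondent = items.isna().sum(axis=1)
    return {
        # 1) completed every item (no missing data)
        "complete_pct": 100 * (missing_per_respondent == 0).mean(),
        # 2) data missing for at least one item
        "any_missing_pct": 100 * (missing_per_respondent > 0).mean(),
        # 3) more than 10% of items missing (>4 of 40 items here)
        "over_threshold_pct": 100 * (missing_per_respondent
                                     > threshold * items.shape[1]).mean(),
    }
```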
Validity evidence based on internal structure.
Validity evidence based on internal structure is defined as the “degree to which the relationships among test items and test components conform to the construct on which the proposed test score interpretations are based” [40]. Internal structure was assessed by performing the following analyses using SAS 9.4 statistical software: 1) item-subscale total polychoric correlations, representing the correlation of individual items with subscale totals [41]; 2) item-item polychoric correlations within each subscale [51]; and 3) exploratory factor analysis using principal components analysis (PCA) with an oblique rotation allowing for correlated factors, a common method used to extract potential latent variables/factors in the assessment of dimensionality and to reduce item components into more meaningful data [51–53]. Polychoric correlations were computed given that the response scales of all items in the tool consisted of ordinal data (i.e., 7-point Likert scales) [54, 55]. Compared to Pearson correlations, which require at least interval-level data, the use of polychoric correlations in the factor analysis of ordinal data yields results that demonstrate less error and better alignment with originally proposed theoretical models [54].
Conceptual literature was considered alongside statistical criteria in decision-making related to item deletions or item combinations [56]. Items with low item-subscale correlations (<0.30) were deleted [41]. Item-item correlations within subscales were analyzed, and those with low correlations (<0.30) were flagged for possible deletion [41]. High item-item correlations within subscales (>0.80) may signal redundancy [51] and, as such, were flagged for possible item deletion or item combination. Potential item deletions and mergers of similar items were discussed among co-investigators, given the team’s expertise in EIDM and public health nursing, and finalized with consideration of conceptual literature [51].
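The flagging logic above can be expressed compactly. The sketch below uses Spearman rank correlations only as a rough stand-in, since the study computed polychoric correlations in SAS and base Python has no standard polychoric routine; whether the item-total correlation excludes the item itself is also a design choice (the item-excluded total is used here, which the paper does not specify).

```python
import pandas as pd

LOW, HIGH = 0.30, 0.80  # thresholds from the criteria above

def flag_items(subscale: pd.DataFrame) -> dict:
    """Flag subscale items by item-total and item-item correlation thresholds."""
    flags = {"low_item_total": [], "low_item_item": [], "redundant_pairs": []}
    for item in subscale.columns:
        # Correlate each item with the total of the remaining subscale items.
        rest = subscale.drop(columns=item).sum(axis=1)
        if subscale[item].corr(rest, method="spearman") < LOW:
            flags["low_item_total"].append(item)          # candidate deletion
    corr = subscale.corr(method="spearman")
    cols = list(corr.columns)
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            if corr.loc[a, b] < LOW:
                flags["low_item_item"].append((a, b))     # weak relationship
            elif corr.loc[a, b] > HIGH:
                flags["redundant_pairs"].append((a, b))   # possible redundancy
    return flags
```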
Following the item reduction process, a revised EIDM competence measure was proposed and used in conducting the PCA. Prior to conducting the PCA, the Kaiser-Meyer-Olkin (KMO) test was performed to determine sampling adequacy [51, 52], with a value above 0.5 used to determine adequacy [42, 57]. PCA was performed using an oblique rotation, given that items were assumed to be interrelated [51]. A four-factor model solution was forced in the PCA analysis, consistent with the design of the tool, which was based on a priori conceptual literature [58] establishing that EIDM competence comprises the four key attributes of knowledge, skills, attitudes/beliefs, and behaviours. Items were assessed as loading onto a factor if loadings were ≥ 0.4 [51], with strong loadings indicated by values ≥ 0.5 [58, 59].
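For readers who want to reproduce this style of analysis, the sketch below uses the Python factor_analyzer package as a stand-in for the SAS workflow described above. Note that factor_analyzer operates on Pearson rather than polychoric correlations, so its results would not exactly replicate the study's; the function name is illustrative.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

def four_factor_solution(items: pd.DataFrame) -> pd.DataFrame:
    """KMO adequacy check, then a forced four-factor oblique solution."""
    complete = items.dropna()
    _, kmo_total = calculate_kmo(complete)
    assert kmo_total > 0.5, f"KMO {kmo_total:.2f} below the 0.5 adequacy cut-off"
    # Principal extraction with an oblimin (oblique) rotation, forcing four
    # factors per the a priori four-attribute model of EIDM competence.
    fa = FactorAnalyzer(n_factors=4, method="principal", rotation="oblimin")
    fa.fit(complete)
    loadings = pd.DataFrame(fa.loadings_, index=complete.columns,
                            columns=["F1", "F2", "F3", "F4"])
    # Keep loadings >= 0.4; values >= 0.5 would be read as strong loadings.
    return loadings.where(loadings.abs() >= 0.4)
```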
Validity evidence based on relations to other variables.
Validity evidence based on variable relationships was assessed using the revised EIDM competence measure. This source of validity evidence involves testing relationships between instrument subscale scores and socio-demographic and organizational factors to determine their consistency with the construct under measurement [40]. A large body of evidence has shown that measurement scales created from at least four Likert-response items often behave as linear, interval-level scales, even though the individual items themselves are ordinal [60]. This emergent property of measurement scales allows parametric methods to be used, provided other key parametric assumptions are met. We tested the underlying assumptions of the parametric tests involved in the subscale analyses (i.e., normality, linearity, constant variance). A moderate departure from normality was seen for the skills and attitudes subscales (each comprised of five items), a minor departure for the knowledge subscale (seven items), and no departure for the 10-item behaviour subscale. A linear relationship was seen between all four subscales and two continuous variables (years as an RN, organizational tool score). The constant variance assumption held for all four subscales for two group variables (education, professional role). On the basis of these results, parametric tests were used in the subscale analyses. Correlations, t-tests, and ANOVA analyses were performed using IBM SPSS version 26, with the level of significance set at alpha = 0.05 (2-sided), to explore variable relationships. The following relationships were hypothesized: 1) years of experience as a registered nurse would be positively correlated with EIDM knowledge, skills, attitudes, and behaviours [49]; 2) those working in a supervisory or management role would have higher EIDM knowledge, skills, attitudes/beliefs, and behaviours compared to frontline staff [61, 62]; 3) those with a higher education level would have higher EIDM knowledge, skills, attitudes/beliefs, and behaviour scores [61, 63]; 4) those who completed EIDM training or had involvement in EIDM projects/work would have higher EIDM knowledge, skills, attitudes/beliefs, and behaviour scores [63]; and 5) those who self-report higher organizational support for EIDM would have higher EIDM knowledge, skills, attitudes/beliefs, and behaviour scores compared to those with lower organizational support [17, 19].
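The analyses above were run in SPSS; purely as an illustration, the sketch below shows equivalent tests in Python with SciPy for a single subscale. All column names (years_rn, education, role) and group labels are hypothetical placeholders, not variables from the study dataset.

```python
import pandas as pd
from scipy import stats

def test_relationships(df: pd.DataFrame, subscale: str) -> dict:
    """Illustrative tests of three of the hypothesized relationships."""
    out = {}
    # Hypothesis 1: years as an RN correlates positively with the subscale.
    clean = df[["years_rn", subscale]].dropna()
    out["years_rn_corr"] = stats.pearsonr(clean["years_rn"], clean[subscale])
    # Hypothesis 3: master's-prepared nurses score higher than
    # bachelor's-prepared nurses (independent-samples t-test, 2-sided).
    masters = df.loc[df["education"] == "masters", subscale].dropna()
    bachelors = df.loc[df["education"] == "bachelors", subscale].dropna()
    out["education_ttest"] = stats.ttest_ind(masters, bachelors)
    # Hypothesis 2: subscale scores differ across professional role groups
    # (one-way ANOVA).
    groups = [g[subscale].dropna() for _, g in df.groupby("role")]
    out["role_anova"] = stats.f_oneway(*groups)
    return out
```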
Reliability.
Reliability was assessed by examining the measure’s internal consistency, which determines how well scale items are correlated with one another in order to yield similar scores [41, 64]. Cronbach’s alphas were computed for each individual subscale [41]. This is the most frequently used statistic for examining reliability in psychometric testing, can be determined with one administration, and is recommended for use with items that have more than two response options [41, 64]. Acceptable internal consistency for each sub-scale was determined as a Cronbach’s alpha (α) of ≥ 0.70 [65].
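Cronbach's alpha has a simple closed form, α = (k/(k−1))(1 − Σ item variances / total-score variance), so a minimal sketch suffices. The function below implements that standard formula for a dataframe of subscale items; it is not the SPSS/SAS routine the study used, but it computes the same statistic on complete cases.

```python
import pandas as pd

def cronbach_alpha(subscale: pd.DataFrame) -> float:
    """alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total))."""
    items = subscale.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# A subscale meets the pre-set criterion when cronbach_alpha(...) >= 0.70.
```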
Results
Demographics
Sixteen Medical Officers of Health agreed to support the study, yielding a response rate of 50% (16/32). Across the 16 participating public health units, 562 registered nurses opened the online survey. Of these, 201 respondents (35.8%) completed and submitted the survey. Participants were largely female (98.5%), primarily employed as a frontline public health nurse (87.2%), Bachelor’s degree prepared (73.1%), and worked across diverse specializations, with the majority having completed EIDM training (66.8%). See Table 2 for detailed demographics.
Acceptability
Completion time.
The average completion times for each EIDM competence subscale were similar in length: knowledge (1 minute and 37 seconds); skills (2 minutes and 11 seconds); attitudes/beliefs (1 minute and 18 seconds); and behaviours (2 minutes and 14 seconds). The mean completion time for the entire original 40-item EIDM Competence Measure was 7 minutes and 20 seconds.
Missing data.
The percentage of nurses fully completing the original 40-item EIDM measure (i.e., no missing data) was 93% (n = 187), with 7% of participants having data missing for at least one item across the entire 40-item tool. As well, only one participant (0.5%) had >10% missing data (i.e., more than 4 of the 40 items with data missing).
Validity evidence
Validity based on internal structure.
Item-subscale total correlations for the original 40 items all met the minimum criteria of >0.3 (see Table 3). Ranges of item-item correlations varied across subscales: knowledge (0.81–0.94); skills (0.81–0.92); attitudes/beliefs (0.04–0.87); and behaviours (0.80–0.90). Some item-item correlations fell below the minimum of 0.3 indicating weak relationships while others exceeded 0.8 indicating potential redundancy (see S1 Table for low and high item-item correlations). Based on low or high item-item correlations, along with consideration of conceptual literature, item deletions and item combinations were made within each EIDM subscale: 1) knowledge (deleted items K3, K4, K5, K7 based on redundancy); 2) skills (deleted items S2, S7, S8, S9 and combined items S3 and S4 based on redundancy); 3) attitudes (deleted item A7 due to irrelevance and combined items A4 and A6 due to redundancy); and 4) behaviours (deleted B5 and B6 due to redundancy).
Following the item reduction process, a revised EIDM competence measure was proposed consisting of 27 items.
A PCA with oblique rotation was then performed on the revised 27-item self-report Competence Measure. The Kaiser-Meyer-Olkin test verified sampling adequacy with a value of 0.8597. A four-factor model was extracted, with all factors accounting for 90.00% of the variance. Primary loadings were substantial across factors, ranging from 0.46 to 0.92. Items primarily loaded onto factors that aligned with the established conceptual framework of EIDM behaviours (Factor 1), knowledge (Factor 2), skills (Factor 3), and attitudes/beliefs (Factor 4). However, three items loaded onto factors to which they were not conceptually assigned: 1) attitude item 1 (believe can implement EIDM efficiently) loaded onto Factor 3 (skills, factor loading = 0.54); 2) attitude item 2 (believe can engage others to address EIDM barriers) loaded onto Factor 1 (behaviours, factor loading = 0.46); and 3) skills item 5 (ability to develop evaluation indicators) loaded onto Factor 1 (behaviours) with a value of 0.58 (see Table 4 for factor loadings). After reviewing these factor loadings, along with consideration of acceptable item-subscale total correlations (see Table 5) and conceptual literature, these three items were retained in the competence attribute under which they were originally categorized. The final proposed 27-item EIDM competence measure consisted of a varied number of items per subscale: knowledge (7 items); skills (5 items); attitudes (5 items); and behaviours (10 items).
Validity based on relationships to other variables.
While some non-significant relationships between EIDM competence attributes and other variables were revealed, statistically significant findings confirmed many of the hypothesized relationships, providing validity evidence for the EIDM competence measure.
Regarding number of years worked as a registered nurse, a statistically significant positive correlation was found with EIDM behaviours, indicating a weak relationship (r = 0.17; p = 0.008), although no significant relationships were found for EIDM knowledge, skills, and attitudes (see Table 6). Similarly, statistically significant differences in mean scores were found only for EIDM behaviours between professional role groups of public health nurse (M = 42.69; SD = 11.79), health promoter (M = 47.00; SD = 7.00), supervisor/manager (M = 49.75; SD = 8.87), director (M = 53.50; SD = 13.44), and other (M = 53.57; SD = 12.63); F(4, 187) = 2.80; p = 0.027 (see Table 7). However, when a follow-up post-hoc Tukey’s test was performed on EIDM behaviour scores to pinpoint specific group differences, no statistically significant pairwise differences were identified.
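A post-hoc comparison of this kind can be reproduced with statsmodels. The sketch below assumes a long-format dataframe with hypothetical columns behaviours (subscale total) and role (group label); it illustrates the procedure, not the study's actual SPSS output.

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def tukey_by_role(df: pd.DataFrame):
    """Pairwise Tukey HSD on behaviour subscale totals across role groups."""
    clean = df[["behaviours", "role"]].dropna()
    return pairwise_tukeyhsd(endog=clean["behaviours"],
                             groups=clean["role"], alpha=0.05)

# tukey_by_role(df).summary() lists each pair of roles with its adjusted
# p-value; in this study, no pairwise difference reached significance
# despite the significant omnibus F-test.
```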
Higher scores in EIDM knowledge, skills, attitudes/beliefs, and behaviours were found among nurses with master’s degree preparation compared to those with a bachelor’s degree (p < 0.0001; see Table 8 for mean scores). Differences between education groups were statistically significant for EIDM knowledge, t(194) = 4.80, p < 0.0001; EIDM skills, t(196) = 6.33, p < 0.0001; EIDM attitudes, t(197) = 4.53, p < 0.0001; and EIDM behaviours, t(195) = 5.50, p < 0.0001.
Higher scores were also found among nurses who completed training in EIDM compared to those who did not, across EIDM knowledge, t(192) = 6.49, p < 0.0001; EIDM skills, t(194) = 5.5, p < 0.0001; EIDM attitudes, t(195) = 3.69, p < 0.0001; and EIDM behaviours, t(193) = 4.86, p < 0.0001. Compared to those with no EIDM experience, participants with involvement in EIDM-related work or projects had higher scores in EIDM knowledge, t(188.03) = 6.06, p < 0.0001; EIDM skills, t(191.59) = 6.91, p < 0.0001; EIDM attitudes, t(197) = 3.69, p < 0.0001; and EIDM behaviours, t(195) = 6.72, p < 0.0001. Statistically significant positive correlations were found between total organizational factor scores and EIDM knowledge (r = 0.29; p < 0.001), skills (r = 0.27; p = 0.001), attitudes/beliefs (r = 0.26; p < 0.001), and behaviours (r = 0.22; p = 0.005).
In summary, statistically significant positive relationships were found between EIDM behaviours and numbers of years worked as a registered nurse, as well as professional role group. Higher scores across all four subscales were also found among nurses with higher education levels and among those who completed EIDM training and had EIDM work experience. Positive correlations were also found between higher organizational support scores and EIDM subscale scores.
Discussion
This study reports on the acceptability, reliability, and validity of the new EIDM Competence Measure, which assesses EIDM knowledge, skills, attitudes, and behaviours among public health nurses. Generally, existing EIDM measures developed, tested, and used among diverse groups of nurses have focused primarily on the assessment of a single competence attribute [35]. As well, limited attention has been given to EIDM competence assessment of nurses working in public health compared to other nursing practice areas such as acute care [35]. Of note, only one existing measure, the School-Nursing Evidence Based Practice Questionnaire [49], assesses all four competence attributes of knowledge, skills, attitudes, and behaviours and has been tested among a sample of public health nurses in the United States. However, that measure has conceptual limitations in that it does not assess the quality or competence of these attributes but focuses only on frequency of use. As well, its items are designed specifically for the nuanced field of school health nursing, with little applicability to other nursing practice areas in public health. The newly developed EIDM Competence Measure addresses these gaps within the EIDM competence literature and is applicable across all fields of public health nursing for assessing nurses’ EIDM competence.
Acceptability
Minimal missing data and the short completion time observed in this study suggest that our EIDM Competence Measure is ‘acceptable’, signifying it is not highly burdensome or challenging to complete [66]. The overall low percentage of missing data for the EIDM competence measure (7%) resembles rates of other EIDM measures that have been used or tested among public health nurses including the EBP Implementation Scale (6.3%) [67] and the School Nurse Evidence-Based Practice Questionnaire (5.2%) [49]. As well, the original 40-item EIDM Competence Measure appears to have a similar completion time to other EIDM measures of shorter length which assess only one competence attribute: 16-item EBP Beliefs Scale (~5–7 minutes) [68, 69]; 18-item EBP Implementation Scale (~6–8 minutes) [68, 69].
Validity evidence
Findings from this study provide beginning validity evidence related to the internal structure of the self-report EIDM Competence Measure for public health nursing and the relationships of its scores to other variables. Principal components analysis results for the EIDM competence measure supported a four-factor model that aligned with the conceptual understanding that EIDM competence comprises the four attributes of knowledge, skills, attitudes/beliefs, and behaviours. In comparison, results from the psychometric assessment of another self-report measure [36] addressing these same competence attributes yielded somewhat different factor compositions. Principal component analysis of the 24-item Evidence Based Practice Questionnaire (EBPQ) produced a three-factor model for the measure: practice of evidence-based practice (related to behaviour frequency); attitudes towards evidence-based practice; and knowledge/skills associated with evidence-based practice [36]. EIDM behaviours and attitudes emerged as separate entities in the EBPQ, similar to our EIDM competence measure. However, in the EBPQ, knowledge and skills items appeared to be highly related, loading together to comprise one factor [36], whereas in the analysis of our self-report EIDM Competence Measure, knowledge emerged as a distinct factor independent of the others. This difference may be attributable to the broad manner in which knowledge items were articulated in the EIDM Competence Measure, compared to the specificity used to formulate items in the skills, attitudes, and behaviours subscales; in the EBPQ, knowledge and skills items were worded and phrased similarly, possibly contributing to their emergence as one factor. In our study, there were also two instances in which attitude items (i.e., beliefs in implementing EIDM efficiently and engaging others to address EIDM barriers) loaded onto factors representing EIDM behaviours or skills. Looking at the phrasing of these two attitude items, since they relate to beliefs/perceptions about personal engagement in EIDM overall and about the ability to address EIDM barriers, it seems reasonable that, statistically, they might cluster with items phrased similarly to assess ability, participation, or performance of EIDM tasks. However, regardless of factor loadings, these items are conceptually better represented under the ‘attitudes’ attribute, defined as the values, perceptions, beliefs, or intentions related to EIDM, which may include acceptance of, motivation for, or self-efficacy in overall EIDM engagement [30, 33], as compared to categorization under EIDM skills (application of knowledge to perform discrete EIDM tasks in a practical setting) [30, 33, 34] or EIDM behaviours (EIDM performance in real-world practice) [30, 33, 34].
Results in this study also showed evidence to support validity based on relationships with other variables for the EIDM competence measure. In our study, there were statistically significant associations between all EIDM competence attributes and education level, EIDM training, and EIDM work experience. These findings are consistent with other literature indicating that EIDM engagement is heavily influenced by personal and professional characteristics such as having advanced formal education [63], exposure to multifaceted EIDM educational interventions [70, 71], and opportunities to participate in or lead EIDM projects in real-world practice [63, 72]. Our study findings also align with existing literature that consistently demonstrates organizational context as a strong predictor of EIDM uptake. Similar to other studies, EIDM knowledge, skills, attitudes/beliefs, and behaviours were all significantly related to work environments in which EIDM priorities were integrated into strategic plans [17, 73], EIDM champions were identified [74, 75], and critical resources necessary for carrying out EIDM activities were provided [76, 77].
There were, however, some findings in which the relationships we hypothesized were not supported. Our findings did not show a significant positive correlation between years of experience as a registered nurse and EIDM competence attributes. While many studies have determined that a longer duration of work experience is associated with more developed EIDM competence attributes [78–80], conflicting evidence has emerged, showing either no relationship between these variables [61] or higher EIDM competence attribute scores among those with less nursing experience [81]. Regarding the influence of professional role, our study findings demonstrated a significant relationship between role (e.g., public health nurse, supervisor/manager) and EIDM behaviours specifically, but not for knowledge, skills, or attitudes. This contrasts with other studies reporting that higher-level professional roles (e.g., management, advanced practice nurse) were associated with greater scores on EIDM knowledge, skills, attitudes, and behaviours [77, 81, 82]. An important note, however, is that these existing studies pertain to acute care settings, where discussions of advanced roles focused on clinical distinctions (i.e., frontline staff nurse versus educator or nurse practitioner). In comparison, our study, set in a public health context in which such clinical roles are less prominent, explored roles in relation to public health nurses, health promoters, and management, which may have contributed to differences in findings.
This study’s findings add to previously established validity evidence related to test content and response process [39] for the EIDM Competence Measure. Based on the Standards for Educational and Psychological Testing, which describe validity as a unified concept supported by multiple sources of evidence (including content, internal structure, response processes, and relationships to other variables), the study findings provide cumulative validity evidence for the EIDM Competence Measure.
Reliability
The EIDM Competence Measure also exhibited strong internal reliability, with Cronbach’s alphas for all subscales exceeding the minimum of 0.70 for new measures [65]. Given these high alphas, which may be indicative of further redundancy [41], there is opportunity for possible refinement of the tool in further psychometric testing with other populations and contexts. Similarly, original psychometric testing of the EBPQ also yielded high alphas for the EBP practice (behaviour) subscale (α = 0.85) and the combined knowledge/skills subscale (α = 0.91). Assessment of the Quick Evidence-Based Practice (EBP) Values, Implementation and Knowledge (VIK) survey, a 25-item self-report tool addressing EIDM knowledge, attitudes/beliefs, and behaviours, also demonstrated comparable internal consistency for its knowledge subscale (α = 0.93), although it had a lower alpha (0.76) for its implementation subscale (frequency of EIDM behaviours) [83]. This latter discrepancy may be attributed to item content differences, in which the behavioural items of the Quick-EBP-VIK survey are less specific and provide less coverage of all the EIDM steps compared to our EIDM competence measure or the EBPQ. What remains consistent across the psychometric literature are the reported alpha values for EIDM attitude/beliefs subscales across multi-dimensional measures: our EIDM competence measure (α = 0.80); the EBPQ (α = 0.79) [36]; and the Quick-EBP-VIK survey (α = 0.79) [83].
Limitations
While this study provides supporting evidence of the acceptability, validity, and reliability of a new EIDM competence measure that can be used in public health practice, there are limitations that require consideration. While the proposed study sample size was 400, only 50% (n = 201) of this projected sample was achieved. As such, given the original EIDM competence measure had 40 items, the frequently recommended ratio of 10:1 (subjects to items) in calculating sample size for factor analysis was not met [51]. Since the principal components analysis was instead run on the reduced 27-item tool, the 10:1 subject-to-item ratio implies an acceptable sample size of 270, which was also not met with 201 participants. However, the Kaiser-Meyer-Olkin test determined that there was acceptable sampling adequacy to conduct factor analysis in this study. As well, in other literature, a case-to-variable ratio of 5:1 has been deemed sufficient to conduct factor analysis [84], and Comrey, as cited in Taherdoost et al. [57], identifies 200 as a ‘fair’ sample, compared to 300 classified as ‘good’ and 500 respondents as ‘very good’. Additionally, the high non-response rate and the use of convenience sampling introduce potential non-response and self-selection bias, affecting the representativeness of the sample. The sample of public health nurses surveyed may not represent the true diversity of the public health nursing population employed throughout Ontario, influencing the generalizability of results. However, given challenges with accessing public health nurse employee lists across Ontario health units to support random sampling, convenience sampling was determined to be the most feasible choice. It is important to note that this study is exploratory in nature, and findings are to be interpreted with caution. A next step will be to use a larger national study sample to conduct confirmatory factor analysis with a split-sample approach, which was not feasible in this pilot study given the sample size [85].
Over the course of this study, particularly throughout the recruitment period, pivotal public health events had substantial bearing on the ability to recruit the proposed sample size. First, the provincial government of Ontario announced public health modernization plans proposing a critical restructuring of the public health system that would reduce the number of operating health units in Ontario from 36 to 10. Second, the beginning stages of the COVID-19 pandemic emerged during this time, with health units dedicating staff resources toward the pandemic response. These two events impacted study participation at both the public health unit and staff levels and illustrate some of the challenges of conducting research in a health sector that regularly responds to emerging crises. The changing nature of public health highlights the need to embed strategies within the research design that mitigate the impact of unforeseen and unavoidable circumstances.
Conclusions
The 27-item EIDM competence measure provides a comprehensive self-assessment of EIDM knowledge, skills, attitudes/beliefs, and behaviours for use among nurses in public health practice. The instrument has demonstrated beginning validity evidence based on internal structure and relations to other variables, as well as strong internal reliability. Given its ease of use and short completion time, there is great potential for its use in real-world public health practice by individual nurses, supervisors/managers, and organizations to provide insight into the status of EIDM competence among public health nurses. This sets the stage for improving clarity around EIDM expectations for nursing, and potentially other public health professions, and for strategic planning of resources and professional development interventions to facilitate improved EIDM engagement. Given the nature of this study as a pilot, there is opportunity to expand on the psychometric testing conducted, including confirmatory factor analysis with a split-sample approach using a national sample of nurses across health units in Canada, continuing to add to the reliability and validity evidence of the EIDM Competence Measure and exploring its acceptability across a diverse sample.
Supporting information
S1 Table. Low and high item-item correlations (40 items).
https://doi.org/10.1371/journal.pone.0272699.s002
(DOCX)
Acknowledgments
The authors gratefully acknowledge the many public health nurses who participated in the study, and the public health leaders who agreed to support staff participation in our study.
References
- 1. Public Health Agency of Canada. Addressing Stigma: Towards a More Inclusive Health System. The Chief Public Health Officer’s Report on the State of Public Health in Canada 2019. Ottawa, ON: Public Health Agency of Canada; 2019. 190383.
- 2. National Collaborating Centre for Methods and Tools. Evidence-informed public health 2020 [Web Page]. Available from: http://www.nccmt.ca/professional-development/eiph.
- 3. Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: A fundamental concept for public health practice. Annu Rev Public Health. 2009;30:175–201. pmid:19296775
- 4. Kohatsu ND, Robinson JG, Torner JC. Evidence-based public health: An evolving concept. Am J Prev Med. 2004;27(5):417–21. pmid:15556743
- 5. Canadian Public Health Association (CPHA). Public health in the context of health system renewal in Canada: Background document. 2019.
- 6. Grande GD, Oliveira CB, Morelhao PK, Sherrington C, Tiedemann A, Pinto RZ, et al. Interventions promoting physical activity among older adults: A systematic review and meta-analysis. Gerontologist. 2020;60(8):e583–e99. pmid:31868213
- 7. Masters R, Anwar E, Collins B, Cookson R, Capewell S. Return on investment of public health interventions: A systematic review. J Epidemiol Community Health. 2017;71(8):827–34. pmid:28356325
- 8. Brownson RC, Fielding JE, Green LW. Building capacity for evidence-based public health: Reconciling the pulls of practice and the push of research. Annu Rev Public Health. 2018;39:27–53. pmid:29166243
- 9. McAteer J, Di Ruggiero E, Fraser A, Frank JW. Bridging the academic and practice/policy gap in public health: Perspectives from Scotland and Canada. Journal of Public Health. 2019;41(3):632–7. pmid:30053047
- 10. Yeager VA, Wisniewski JM. Factors that influence the recruitment and retention of nurses in public health agencies. Public Health Rep. 2017;132(5):556–62. pmid:28792856
- 11. Dreisinger M, Leet T, Baker E, Gillespie K, Haas B, Brownson R. Improving the public health workforce: Evaluation of a training course to enhance evidence-based decision-making. J Public Health Manag Pract. 2008;14(2):138–43. pmid:18287919
- 12. Gibbert WS, Keating SM, Jacobs JA, Dodson E, Baker E, Diem G, et al. Training the workforce in evidence-based public health: An evaluation of impact among US and international practitioners. Prev Chronic Dis. 2013;10:E148. pmid:24007676
- 13. Barr-Walker J. Evidence-based information needs of public health workers: A systematized review. J Med Libr Assoc. 2017;105(1):69–79. pmid:28096749
- 14. Armstrong R, Waters E, Moore L, Dobbins M, Pettman T, Burns C, et al. Understanding evidence: A statewide survey to explore evidence-informed public health decision-making in a local government setting. Implement Sci. 2014;9:188. pmid:25496505
- 15. Baird LM, Miller T. Factors influencing evidence-based practice for community nurses. Br J Community Nurs. 2015;20(5):233–42. pmid:25993372
- 16. Weum M, Bragstad LK, Glavin K. How public health nurses use sources of knowledge. Norwegian Journal of Clinical Nursing / Sykepleien Forskning. 2018;12(64242).
- 17. Solomons NM, Spross JA. Evidence-based practice barriers and facilitators from a continuous quality improvement perspective: An integrative review. J Nurs Manag. 2011;19(1):109–20. pmid:21223411
- 18. Saunders H, Vehviläinen-Julkunen K. The state of readiness for evidence-based practice among nurses: An integrative review. Int J Nurs Stud. 2016;56:128–40. pmid:26603729
- 19. Williams B, Perillo S, Brown T. What are the factors of organisational culture in health care settings that act as barriers to the implementation of evidence-based practice? A scoping review. Nurse Educ Today. 2015;35(2):e34–41. pmid:25482849
- 20. Melnyk BM, Gallagher-Ford L, Long LE, Fineout-Overholt E. The establishment of evidence-based practice competencies for practicing registered nurses and advanced practice nurses in real-world clinical settings: Proficiencies to improve healthcare quality, reliability, patient outcomes, and costs. Worldviews Evid Based Nurs. 2014;11(1):5–15. pmid:24447399
- 21. Dobbins M, Traynor RL, Workentine S, Yousefi-Nooraie R, Yost J. Impact of an organization-wide knowledge translation strategy to support evidence-informed public health decision making. BMC Public Health. 2018;18(1). pmid:30594155
- 22. Peirson L, Ciliska D, Dobbins M, Mowat D. Building capacity for evidence informed decision making in public health: A case study of organizational change. BMC Public Health. 2012;12:137. pmid:22348688
- 23. Ruzafa-Martinez M, Lopez-Iborra L, Moreno-Casbas T, Madrigal-Torres M. Development and validation of the competence in evidence based practice questionnaire (EBP-COQ) among nursing students. BMC Med Educ. 2013;13:19. pmid:23391040
- 24. Ramos KD, Schafer S, Tracz SM. Validation of the Fresno test of competence in evidence based medicine. BMJ. 2003;326(7384):319–21. pmid:12574047
- 25. Laibhen-Parkes N. Web-based evidence based practice educational intervention to improve EBP competence among BSN-prepared pediatric bedside nurses: A mixed methods pilot study: Mercer University; 2014.
- 26. Cheetham G, Chivers G. The reflective (and competent) practitioner: A model of professional competence which seeks to harmonise the reflective practitioner and competence-based approaches. J Eur Ind Train. 1998;22(7):267–76.
- 27. Cheetham G, Chivers G. Towards a holistic model of professional competence. J Eur Ind Train. 1996;20(5):20–30.
- 28. Eraut M. Concepts of competence. J Interprof Care. 1998;12(2):127–39.
- 29. Eraut M. Developing professional knowledge and competence. Washington, D.C.: Falmer Press; 1994.
- 30. Buchanan H, Siegfried N, Jelsma J. Survey instruments for knowledge, skills, attitudes and behaviour related to evidence-based practice in occupational therapy: A systematic review. Occup Ther Int. 2016;23(2):59–90. pmid:26148335
- 31. Glegg SMN, Holsti L. Measures of knowledge and skills for evidence-based practice: a systematic review. Can J Occup Ther. 2010;77(4):219–32. pmid:21090063
- 32. Leung K, Trevena L, Waters D. Systematic review of instruments for measuring nurses’ knowledge, skills and attitudes for evidence-based practice. J Adv Nurs. 2014;70(10):2181–95. pmid:24866084
- 33. Tilson JK, Kaplan SL, Harris JL, Hutchinson A, Ilic D, Niederman R, et al. Sicily statement on classification and development of evidence-based practice learning assessment tools. BMC Med Educ. 2011;11:78. pmid:21970731
- 34. Shaneyfelt T, Baum KD, Bell D, Feldstein D, Houston TK, Kaatz S, et al. Instruments for evaluating education in evidence-based practice: A systematic review. JAMA. 2006;296(9):1116–27. pmid:16954491
- 35. Belita E, Squires JE, Yost J, Ganann R, Burnett T, Dobbins M. Measures of evidence-informed decision-making competence attributes: a psychometric systematic review. BMC Nurs. 2020;19(1):44. pmid:32514242
- 36. Upton D, Upton P. Development of an evidence-based practice questionnaire for nurses. J Adv Nurs. 2006;53(4):454–8. pmid:16448488
- 37. Adams S, Barron S. Development and testing of an evidence-based practice questionnaire for school nurses. J Nurs Meas. 2010;18(1):3–25. pmid:20486474
- 38. Chiu YW, Weng YH, Lo HL, Shih YH, Hsu CC, Kuo KN. Impact of a nationwide outreach program on the diffusion of evidence-based practice in Taiwan. Int J Qual Health Care. 2010;22(5):430–6. pmid:20716552
- 39. Belita E, Yost J, Squires J, Ganann R, Dobbins M. Development and content validation of a measure to assess evidence-informed decision-making competence in public health nursing. PLoS One. 2021;16:e0248330. pmid:33690721
- 40. American Educational Research Association, American Psychological Association, National Council on Measurement in Education. The Standards for Educational and Psychological Testing. Washington, D.C.: American Educational Research Association; 2014.
- 41. Streiner D, Norman G, Cairney J. Health Measurement Scales: A Practical Guide to their Development and Use. 5th ed. Oxford: Oxford University Press; 2015.
- 42. Yong AG, Pearce S. A beginner’s guide to factor analysis: Focusing on exploratory factor analysis. Tutor Quant Methods. 2013;9(2):79–94.
- 43. Fineout-Overholt E, Melnyk BM. Organizational Culture and Readiness for System Wide Implementation of EBP (OCRSIEP) Scale. Gilbert, AZ: ARCC llc Publishing; 2006.
- 44. Melnyk B. Permission to use Organizational Culture and Readiness for System-Wide Implementation of EBP (OCR-SIEP) scale. 2019.
- 45. Fitzpatrick R, Davey C, Buxton MJ, Jones DR. Evaluating patient-based outcome measures for use in clinical trials. Health Technology Assessment. 1998;2(14):1–86. pmid:9812244
- 46. Kost RG, de Rosa JC. Impact of survey length and compensation on validity, reliability, and sample characteristics for ultrashort-, short-, and long-research participant perception surveys. J Clin Transl Sci. 2018;2(1):31–7. pmid:30393572
- 47. Eigenmann CA, Colagiuri R, Skinner T, Trevena L. Are current psychometric tools suitable for measuring outcomes of diabetes education? Diabet Med. 2009;26:425–36. pmid:19388974
- 48. Squires JE, Estabrooks CA, Newburn-Cook CV, Gierl M. Validation of the conceptual research utilization scale: an application of the standards for educational and psychological testing in healthcare. BMC Health Serv Res. 2011;11(1):107. pmid:21595888
- 49. Adams SL. Understanding the variables that influence translation of evidence-based practice into school nursing: University of Iowa; 2007.
- 50. Squires JE, Hutchinson AM, Bostrom AM, Deis K, Norton PG, Cummings GG, et al. A data quality control program for computer-assisted personal interviews. Nurs Res Pract. 2012;2012:1–8. pmid:23304481
- 51. Field A. Discovering Statistics Using SPSS. Thousand Oaks, CA: SAGE Publications Inc.; 2009.
- 52. Beavers A, Lounsbury JW, Richards JK, Huck SW, Skolits GJ, Esquivel SL. Practical considerations for using exploratory factor analysis in educational research. Pract Assess. 2013;18(6):1–13.
- 53. Thompson B. Exploratory and confirmatory factor analysis: Understanding concepts and applications. Washington, DC: American Psychological Association; 2004.
- 54. Holgado-Tello FP, Chacón-Moscoso S, Barbero-García I, Vila-Abad E. Polychoric versus Pearson correlations in exploratory and confirmatory factor analysis of ordinal variables. Qual Quant. 2010;44(1):153–66.
- 55. Ozdemir HF, Toraman C, Kutlu O. The use of polychoric and Pearson correlation matrices in the determination of construct validity of Likert type scales. Turk J Educ. 2019;8(3):180–95.
- 56. UCLA Institute for Digital Research & Education Statistical Consulting. Factor analysis SAS annotated output 2020. Available from: https://stats.idre.ucla.edu/sas/output/factor-analysis/.
- 57. Taherdoost H, Sahibuddin S, Jalaliyoon N. Exploratory factor analysis; concepts and theory. Adv Pure Appl Math. 2014;375382:1–8.
- 58. Burton LJ, Mazerolle SM. Survey instrument validity part I: Principles of survey instrument development and validation in athletic training education research. Athl Train Educ J. 2011;6(1):27–35.
- 59. Costello AB, Osborne J. Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Pract Assess. 2005;10(7):1–9.
- 60. Carifio J, Perla R. Ten common misunderstandings, misconceptions, persistent myths and urban legends about Likert scales and Likert response formats and their antidotes. J Soc Sci. 2007;3.
- 61. Eizenberg MM. Implementation of evidence-based nursing practice: nurses’ personal and professional factors? J Adv Nurs. 2011;67(1):33–42. pmid:20969620
- 62. Sweetapple C. Change Adoption Willingness: Development of a measure of willingness to adopt evidence-based practice in registered nurses. Ann Arbor: Hofstra University; 2015.
- 63. Lizarondo L, Grimmer-Somers K, Kumar S. A systematic review of the individual determinants of research evidence use in allied health. J Multidiscip Healthc. 2011;4:261–72. pmid:21847348
- 64. Devon HA, Block ME, MoyleWright P, Ernst DM, Hayden SJ, Lazzara DJ, et al. A psychometric toolbox for testing validity and reliability. J Nurs Scholarsh. 2007;39(2):155–64. pmid:17535316
- 65. DeVellis RF. Scale development: Theory and application. 2nd ed. Thousand Oaks, CA: Sage; 2003.
- 66. Fitzpatrick R, Davey C, Buxton M, Jones D. Evaluating patient-based outcome measures for use in clinical trials. United Kingdom: NHS R&D HTA Programme; 2007.
- 67. Baxley M. School nurse’s implementation of evidence-based practice: A mixed method study. Ann Arbor: University of Phoenix; 2016.
- 68. Gallagher-Ford L. Implementing and sustaining EBP in real world healthcare settings: A leader’s role in creating a strong context for EBP. Worldviews Evid Based Nurs. 2014;11(1):72–4. pmid:24460658
- 69. Melnyk BM, Fineout-Overholt E, Mays MZ. The evidence-based practice beliefs and implementation scales: psychometric properties of two new instruments. Worldviews Evid Based Nurs. 2008;5(4):208–16. pmid:19076922
- 70. Haggman-Laitila A, Mattila L-R, Melender H-L. Educational interventions on evidence-based nursing in clinical practice: A systematic review with qualitative analysis. Nurse Educ Today. 2016;43:50–9. pmid:27286945
- 71. Young T, Rohwer A, Volmink J, Clarke M. What are the effects of teaching evidence-based health care (EBHC)? Overview of systematic reviews. PLoS One. 2014;9(1):e86706. pmid:24489771
- 72. Lovelace R, Noonen M, Bena JF, Tang AS, Angie M, Cwynar R, et al. Value of, attitudes toward, and implementation of evidence-based practices based on use of self-study learning modules. J Contin Educ Nurs. 2017;48(5):209–16. pmid:28459493
- 73. Dobbins M, Greco L, Yost J, Traynor R, Decorby-Watson K, Yousefi-Nooraie R. A description of a tailored knowledge translation intervention delivered by knowledge brokers within public health departments in Canada. Health Res Policy Syst. 2019;17(1):63. pmid:31221187
- 74. Varnell G, Haas B, Duke G, Hudson K. Effect of an educational intervention on attitudes toward and implementation of evidence-based practice. Worldviews Evid Based Nurs. 2008;5(4):172–81. pmid:19076918
- 75. Harper MG, Gallagher-Ford L, Warren JI, Troseth M, Sinnott LT, Thomas BK. Evidence-based practice and U.S. healthcare outcomes: Findings from a national survey with nursing professional development practitioners. J Nurses Prof Dev. 2017;33(4):170–9. pmid:28441160
- 76. Kaplan L, Zeller E, Damitio D, Culbert S, Bayley KB. Improving the culture of evidence-based practice at a Magnet hospital. J Nurses Prof Dev. 2014;30(6):274–80. pmid:25407970
- 77. Kim SC, Brown CE, Ecoff L, Davidson JE, Gallo A-M, Klimpel K, et al. Regional evidence-based practice fellowship program: Impact on evidence-based practice implementation and barriers. Clin Nurs Res. 2013;22(1):51–69. pmid:22645401
- 78. Ammouri AA, Raddaha AA, Dsouza P, Geethakrishnan R, Noronha JA, Obeidat AA, et al. Evidence-Based Practice: Knowledge, attitudes, practice and perceived barriers among nurses in Oman. Sultan Qaboos Univ Med J. 2014;14(4):e537–45. pmid:25364558
- 79. Hwang JI, Park HA. Relationships between evidence-based practice, quality improvement and clinical error experience of nurses in Korean hospitals. J Nurs Manag. 2015;23(5):651–60. pmid:26140291
- 80. Park JW, Ahn JA, Park MM. Factors influencing evidence-based nursing utilization intention in Korean practice nurses. Int J Nurs Pract. 2015;21(6):868–75. pmid:24689706
- 81. Gonzalez-Torrente S, Pericas-Beltran J, Bennasar-Veny M, Adrover-Barcelo R, Morales-Asencio J, De Pedro-Gomez J. Perception of evidence-based practice and the professional environment of primary health care nurses in the Spanish context: A cross-sectional study. BMC Health Serv Res. 2012;12:227. pmid:22849698
- 82. White-Williams C, Patrician P, Fazeli P, Degges MA, Graham S, Andison M, et al. Use, knowledge, and attitudes toward evidence-based practice among nursing staff. J Contin Educ Nurs. 2013;44(6):246–54; quiz 55–6. pmid:23565602
- 83. Connor L, Paul F, McCabe M, Ziniel S. Measuring nurses’ value, implementation, and knowledge of evidence-based practice: Further psychometric testing of the Quick-EBP-VIK Survey. Worldviews Evid Based Nurs. 2017;14(1):10–21. pmid:28152276
- 84. Streiner DL. Figuring out factors: The use and misuse of factor analysis. Can J Psychiatry. 1994;39(3):135–40. pmid:8033017
- 85. Harrington D. Confirmatory Factor Analysis. New York, NY: Oxford University Press; 2009.