
The HDR CARE Scale, Inpatient Version: A validated survey instrument to measure environmental affordance for nursing tasks in inpatient healthcare settings

  • Renae K. Rich ,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    renaerich@cox.com (RKR); jeri.brittin@hdrinc.com (JB)

    Affiliation Research, HDR Architecture, Inc., Omaha, Nebraska, United States of America

  • Francesqca E. Jimenez,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Resources, Validation, Writing – review & editing

    Affiliation Research, HDR Architecture, Inc., Seattle, Washington, United States of America

  • Cheryl Bohacek,

    Roles Investigation, Resources, Writing – review & editing

    Affiliation Oncology, Methodist Hospital, Omaha, Nebraska, United States of America

  • Alexandra Moore,

    Roles Investigation, Resources, Writing – review & editing

    Affiliation Oncology, Methodist Hospital, Omaha, Nebraska, United States of America

  • Abigail J. Heithoff,

    Roles Data curation, Formal analysis, Writing – review & editing

    Affiliation Research, HDR Architecture, Inc., Omaha, Nebraska, United States of America

  • Deborah M. Conley,

    Roles Supervision, Writing – review & editing

    Affiliation Geriatrics, Methodist Hospital, Omaha, Nebraska, United States of America

  • Jeri Brittin

    Roles Conceptualization, Funding acquisition, Supervision, Writing – review & editing

    renaerich@cox.com (RKR); jeri.brittin@hdrinc.com (JB)

    Affiliation Research, HDR Architecture, Inc., Boise, Idaho, United States of America

Abstract

Rigorous healthcare design research is critical to inform design decisions that improve human experience. Current limitations in the field include a lack of consistent and valid measures that provide feedback about the role of the built environment in producing desirable outcomes. Research findings about nurses’ efficiency, quality of care, and satisfaction related to inpatient unit designs have been mixed, and there was previously no validated instrument available to quantitatively measure nurses’ ability to work efficiently and effectively in their environment. The objective of this study was to develop, refine, and validate a survey instrument to measure affordance of the care environment to nurse practice, based on various aspects of their work in inpatient units. The HDR Clinical Activities Related to the Environment (CARE) Scale Inpatient Version was developed using item design, refinement, and reliability and validity testing. Psychometric methods from classical test theory and item response theory, along with statistical analyses involving correlations and factor analysis, and thematic summaries of qualitative data were conducted. The four-phase process included (1) an initial pilot study, (2) a content validation survey, (3) cognitive interviews, and (4) a final pilot study. Results from the first three phases of analysis were combined to inform survey scale revisions before the second pilot survey, such as a reduction in the number and rewording of response options, and refinement of scale items. The updated 9-item scale showed excellent internal consistency and improved response distribution and discrimination. The factor analysis revealed a unidimensional measure of nurse practice, as well as potential subscales related to integration, efficiency, and patient care. Within the healthcare design industry, this scale is much needed to generate quantitative and standardized data and will facilitate greater understanding about the aspects of an inpatient healthcare facility that best support nurses’ ability to provide quality patient care.

Introduction

Rigorous healthcare design research is critical to advancing knowledge to inform design decisions that support desirable human outcomes. Safe, high-quality healthcare delivery is a complex adaptive system including interactions of agents at multiple levels [1, 2]. Viewed through the lens of human-environment research, the built environment provides an important context to influence the complex adaptive system of healthcare delivery. Current limitations in the design research field include a lack of consistent and valid measures for constructs about the role of the built environment in producing desirable outcomes. While the literature includes various means of measuring the relationships between healthcare personnel [3, 4], between healthcare personnel and the patient care process [5], and between healthcare personnel and the organization [6, 7], no scales specifically measure the relationship between healthcare personnel and the built environment in terms of the delivery of quality inpatient care. Research findings about nurses’ efficiency, ability to provide quality care, and satisfaction related to inpatient unit designs have been mixed, and measurement has been heterogeneous. To date, there has been no validated instrument available to quantify nurses’ experience of the patient care environment related to unit design. The important link between the built environment, staff experience, and high-quality patient care needs to be explored and measured.

Review of the literature

Nursing researchers have developed scales to measure constructs such as collaboration and the related concepts of teamwork and communication. Dougherty and Larson [3] recommended five such scales: the Collaborative Practice Scale (CPS), Collaboration and Satisfaction about Care Decisions (CSACD), ICU Nurse-Physician Questionnaire, Nurses’ Opinion Questionnaire (NOQ), and the Jefferson Scale of Attitudes Toward Physician Nurse Collaboration. These authors went on to develop the Nurse-Nurse Collaboration (NNC) Scale [4]. Liao et al. [8] also developed a Nurse-Nurse Collaboration Behavior Scale to measure specific behaviors associated with nurse peer group relationships and interpersonal interaction within the process of patient-centered care, and Kenaszchuk et al. [9] developed and validated the Interprofessional Collaboration (IPC) scale to assess collaboration between multiple health provider groups, including nurse-nurse and nurse-provider collaboration.

Development of measurement tools that assess nurse perspective of the patient care process has continually evolved [2]. The Nursing Work Index (NWI) [10] was designed to measure nurse activities and practice. Scales derived from the NWI, such as the widely used Practice Environment Scale [6], measure the structure of the work environment rather than nurse practice. Kramer and Hafner [10] devised the Essentials of Magnetism (EOM) process measurement tool, which was later updated [7]. Versions of the EOM measure characteristics that staff nurses at magnet hospitals identified as essential to quality patient care and a satisfying work environment. These authors and other colleagues also tested the Essentials of Professional Nursing Practice (EPNP) scale to assess the extent to which nurses actually engage in these practices [2]. While these instruments consider physical layout and cleanliness of the hospital, as well as availability of equipment and supplies, they do not fully seek to understand the outcomes elicited through human-environment interaction or causal pathways linking the built environment to the delivery of quality care.

Given the ongoing problem of nurse staffing and retention, there have also been studies addressing nurse job satisfaction, burnout, and engagement. A review of instruments measuring job satisfaction in a hospital environment identified seven scales with adequate reliability and validity [11]. Simpson [12] identified the Work Life Model [13], the Job Demands-Resources Model [14], and the Profession Practice Model as useful for measuring employee work engagement [15]. However, given that environmental variables were not included in these models, this author recommended that researchers should conceptualize beyond current models [12].

A majority of studies engaging nurses to assess healthcare design utilize survey methods that are developed by the researchers with little to no testing or validation of the questions. These may ask about the experience or satisfaction with the environment [16–22] or nurse outcomes, such as job satisfaction [23–27], job stress and demands [24, 26, 28, 29], efficiency [25, 27, 29–31], or teamwork and communication [26, 27, 30, 32]. Some studies make use of survey instruments developed and published by other authors, but routine use of previously tested instruments across the healthcare design field is low. The most commonly employed and cited scales are the Maslach Burnout Inventory [23, 33–36], the Nursing Work Index, especially the Practice Environment Subscale [23, 36–39], the Perceived Stress Scale [38, 40], and the Physical Comfort subscale of the Work Environment Scale [16, 41, 42]. The use of nurse surveys in healthcare design research is common, but there is very little consistency and replication of instruments across studies.

Conceptual model and research aim

While some literature has posited potential direct associations between the factors and outcomes outside of their connection to nursing tasks [13, 26, 43–45], as shown with dashed arrows in Fig 1, it is highly plausible that nurses’ ability to complete tasks related to providing patient care is actually an important mediator in those pathways. Nurse characteristics, organizational factors, and the physical environment may affect nurses’ ability to provide quality patient care, which then affects outcomes for nurses, patients, families, and visitors. These associations are shown in Fig 1 with solid arrows. The means to measure the construct of nurse task affordance is an important step to testing this theory.

Fig 1. Conceptual diagram of theoretical framework.

Hypothesized relationships shown with solid arrows, previously studied direct effects between factors and outcomes as dashed arrows.

https://doi.org/10.1371/journal.pone.0258815.g001

This study operationalized an important link between the built environment, nursing staff experience, and high-quality patient care. The specific aim was to develop, refine, and validate a survey instrument to measure nurses’ ability to work efficiently and effectively in their environment, based on routine tasks to care for patients in inpatient units. The construct was intentionally focused on the effort required by nurses to effectively do their jobs rather than asking respondents to attribute the ease of tasks to their physical environment, and as such, relies on the application of affordance in healthcare architecture. Gibson coined the term affordance to describe the possibilities within the environment which affect perception and constrain action [46]. Affordance theory can help explain the interactions between people (i.e. occupants) and the built environment [47]. This scale was developed to specifically assess how well the healthcare built environment supports nurses’ work interactions and task performance. The survey instrument was developed using item design, refinement, and reliability and validity testing. The present study was necessary to ensure the instrument measures what it was intended to measure and can provide a credible and useful tool to healthcare research and design practitioners. The four-phase development and evaluation process resulted in a reliable, valid, and useful survey scale of environmental affordance for inpatient nurse activities, named the HDR Clinical Activities Related to the Environment (CARE) Scale Inpatient Version.

Materials and methods

Study design

A mixed-methods, multi-phased approach was used for the evaluation, revision, and validation of the survey instrument. Early survey item development was based on a review of existing post occupancy surveys in healthcare settings, and leaned particularly on the Center for the Built Environment Occupant Toolkit [48, 49]. Survey items underwent further iteration in response to research questions explored, and gaps identified, in the literature examining clinical caregivers’ relationship to the healthcare built environment [50, 51]. The initial survey items were written to assess the key nurse activities and interactions identified in the literature, which included nurse walking fatigue, charting, ability to spend time with patients, and teamwork, especially in emergent situations. The instrument was pilot tested for reliability and validity in the initial and updated stages. Content development included a content validity survey completed by subject matter experts and cognitive interviews with sample participants from the target population. The methodology for the development and evaluation of the survey instrument used best practices from the field of survey research and design, and recommendations in survey development publications by industry experts [52–55].

In the initial pilot survey phase, we assessed the preliminary survey content and performance and documented baseline measures for comparison to repeated measures in the final pilot survey phase. In the content validity survey phase, we collected quantitative and qualitative feedback about the initial survey items from a panel of experts in inpatient nursing and facilities planning. The aim of the cognitive interview phase was to understand the thought process and interpretation of nurses as they completed the survey questions. Finally, following instrument refinement, the final pilot survey phase assessed the updated survey scale content and performance.

Samples and procedures

The primary study population consisted of nursing staff providing patient care in inpatient hospital units. This includes those in nursing and nurse assistant roles across all types of inpatient care. Subject matter experts in inpatient nursing job types and healthcare built environments were also consulted in the content validity phase of the study.

In the initial pilot study phase, inpatient nursing staff from three hospitals responded to the survey scale in its preliminary state as part of larger studies related to design and construction of new or expanded inpatient hospital facilities. An online survey that included the initial survey instrument was conducted at Great Plains Health (GPH) in May-June 2017, Fremont Health (FH) in June-August 2017, and Parkland Hospital (PH) in June 2018. Respondents in nursing roles who received and answered the initial survey instrument question were included in the scale development study. The survey question asked respondents to specify their level of satisfaction with ten statements representing different nursing tasks (Table 1) using a 7-point response option scale from 1 = Very dissatisfied to 7 = Very satisfied. In addition to the initial nurse instrument, external survey questions for associated factors and outcomes in the pilot study phases consisted of psychometrically-validated scales or scales that were developed, tested, and refined for content validity by HDR Research based on clarity, relevance, and completeness [52].

Table 1. Initial survey instrument used in the initial pilot survey phase.

https://doi.org/10.1371/journal.pone.0258815.t001

In March-April 2019, 167 subject matter experts in nursing and healthcare design were invited to complete the online content validity survey. These included 143 nurse leaders and managers at Nebraska Methodist Hospital Women’s Hospital and 24 clinical or health architectural planners at HDR. Participants were presented with the initial nurse survey instrument and asked to respond to a series of questions about each scale item based on the components of content validity presented by DeVellis [52]. Content validity was evaluated in terms of three attributes: (1) Relevance: how well the items relate to the construct of interest (response options from 1 = Not at all relevant to 4 = Highly relevant), (2) Clarity: how understandable the items are to survey participants (response options from 1 = Very unclear to 4 = Very clear), and (3) Completeness: if all important aspects of the construct are covered by the items (response options 1 = Yes and 0 = No). The construct of interest was stated as “Nurses’ ability to work efficiently and effectively in an inpatient hospital environment” and was further defined as nurse perception of their efficiency, ability to provide quality care, and satisfaction related to the design of the inpatient unit in which they work.

Recruitment for the cognitive interviews was done at Nebraska Methodist Hospital’s main campus (NMH) and Women’s Hospital (MWH), and interviews were conducted at both sites. Inpatient nursing staff from each site were stratified by unit type and randomly selected proportional to strata size to be invited to participate. Participants first completed the online survey instrument on their own, then responded to semi-structured interview questions asked by a team of two researchers—a design researcher from HDR and a nurse researcher from Methodist. Interviews were scheduled for 30 minutes and were conducted in May 2019. Semi-structured interview questions (Table 2) and procedures of the cognitive interviews were based on recommendations for survey evaluation by Fowler [54], DeVellis [52], Dillman et al. [53], and Presser et al. [55], and asked participants to elaborate on the thought process as they completed each survey item, such as how they interpreted the meaning of the statement, whether they had any trouble interpreting it, and the reason behind the response choice they selected. Specific questions also sought to understand the implications of the response options and aspects of nursing that may not have been covered.

Table 2. Semi-structured questions for cognitive interviews.

https://doi.org/10.1371/journal.pone.0258815.t002

Following the analysis of results from the initial three phases, and revisions to the survey instrument based on these results, an online survey that included the revised survey instrument was conducted at NMH and MWH in September and October 2019. Nursing staff at both locations were invited to participate in the survey. In addition to the nurse survey scale, participants responded to demographic questions and other external questions consistent with those asked in the initial pilot phase and used for content validity assessment.

Research protocols that included data collection activities for the first phase of the study (initial pilot survey) were reviewed and deemed exempt by Western Institutional Review Board (IRB) (1-1003733-1, 1-1061894-1). The subsequent phases of the study were approved for continuing review by the Nebraska Methodist Hospital IRB (FWA 00003377). The IRBs waived the need for direct consent for survey participants in all phases, with the assumption that choosing to respond to the survey constituted implied consent. Written informed consent was obtained by all interview subjects prior to participation in the study.

Analysis

Validity and reliability are the standards by which survey-based scales are evaluated. An instrument is considered reliable if it produces consistent results and valid if it accurately measures what it is intended to measure [54, 56]. This implies that results from the instrument will not change unless there is an actual change in what is being measured, and thus, any observed differences based on survey results can be attributed to real differences between respondents [52]. In order to achieve this, respondents should understand survey questions in a way that is consistent with the meaning that was anticipated by the researchers, and associations with related constructs should hold true [54].

Quantitative data analysis of pilot survey results from the initial and final phases included exploratory factor analyses, tests of significant correlations for associations with other constructs, chi-square tests for differences in results between response groups, and exploration of item distribution and nonresponse. Analyses explored the univariate distribution of responses for each of the survey items and bivariate correlations between each pair of items. An overall Cronbach’s alpha value and item-total correlations measured the scale’s reliability and the individual contribution of each item [52, 57]. Factor analysis was used to determine the possible number of latent variables represented by the set of items in the scale, and factor rotation, as appropriate, improved factor interpretability [52, 57, 58]. Techniques from item response theory were employed to assess survey item discrimination and response option distribution [52, 57]. Phase 1 survey responses were weighted to account for the difference in sample size between the hospitals, and weighted responses were used throughout the multi-site analyses.
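
As a concrete illustration of the classical test theory portion of this analysis, the sketch below computes Cronbach's alpha and corrected item-total correlations. It is a minimal Python example on simulated data, not the SAS procedures used in the study; the simulated 10-item, 7-point responses and variable names are hypothetical.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Coefficient alpha: (k / (k - 1)) * (1 - sum of item variances / variance of the total score)
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def item_total_correlations(items: pd.DataFrame) -> pd.Series:
    # Corrected item-total correlation: each item against the sum of the remaining items
    items = items.dropna()
    return pd.Series({col: items[col].corr(items.drop(columns=col).sum(axis=1))
                      for col in items.columns})

# Hypothetical example: 100 respondents answering 10 items on a 7-point scale.
# Because these responses are random (uncorrelated), alpha will be near zero here.
rng = np.random.default_rng(0)
demo = pd.DataFrame(rng.integers(1, 8, size=(100, 10)),
                    columns=[f"item_{i}" for i in range(1, 11)])
print(f"Cronbach's alpha: {cronbach_alpha(demo):.2f}")
print(item_total_correlations(demo).round(2))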

In the final pilot study, expected associations of nursing tasks and delivery of patient care with physical environment factors and nurse outcomes, as outlined in the conceptual framework, were evaluated to test the reliability and construct validity of the survey instrument based on item correlations with individual scale items and mean scale scores [52, 57]. CARE Scale items were compared to results from six external scales. Nurse outcomes were measured using four validated and published scales of collaboration effectiveness [59], collaboration experience [59], and job satisfaction (working and interpersonal subscales) [60]. Two scales internally developed to measure efficiency of space layout (e.g., ability to locate needed items, patients, and staff quickly) and space availability (e.g., sufficient space for patients and families, job functions, supplies and equipment, and collaboration) were considered as related environmental aspects.

To measure the content validity of the survey instrument, the relevance, clarity, and completeness of the questions were evaluated. Coded content validity survey responses were assessed for indications of items in the survey instrument where improvements or revisions were warranted, based on identification by at least 15% of respondents, and transcriptions of the cognitive interview responses were assessed qualitatively for indications of problems with the survey instrument among at least 15% of participants [54]. In addition, content validity indices were calculated from the content validity survey responses related to both relevance and clarity of the items and the scale overall. Responses were dichotomized, and relevance responses of 3 = Moderately relevant and 4 = Highly relevant were considered “More relevant” and clarity responses of 3 = Somewhat clear and 4 = Very clear were considered “Clear.” An item-level content validity index (I-CVI) equaled the proportion of respondents who rated that item as “More relevant” or “Clear.” Items with an I-CVI greater than 0.79 were determined to be appropriate, between 0.70 and 0.79 needed revision, and less than 0.70 were considered for elimination [61]. The scale-level content validity index (S-CVI) equaled the average of all I-CVIs [62]. With such a large sample of experts, the risk of chance agreement was very low and not calculated. Descriptive results, such as mean, standard deviation, and proportion of specific response options, were also summarized for quantitative data. Qualitative responses were categorized by item and theme and reviewed for consistency across respondents.
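
To make the index definitions concrete, a minimal sketch of the I-CVI and S-CVI calculations is shown below. It assumes expert ratings on the 4-point relevance (or clarity) scale described above and uses hypothetical data rather than the study dataset.

import pandas as pd

def content_validity_indices(ratings: pd.DataFrame, cutoff: int = 3):
    # Dichotomize: ratings of 3 or 4 count as "More relevant" / "Clear"
    favorable = ratings >= cutoff
    i_cvi = favorable.mean(axis=0)   # item-level CVI: proportion of favorable expert ratings
    s_cvi = i_cvi.mean()             # scale-level CVI: average of the I-CVIs
    return i_cvi, s_cvi

# Hypothetical ratings: rows = 5 experts, columns = 3 items, values on the 1-4 scale
demo = pd.DataFrame({"item_1": [4, 4, 3, 4, 2],
                     "item_2": [3, 4, 4, 4, 4],
                     "item_3": [2, 3, 4, 3, 2]})
i_cvi, s_cvi = content_validity_indices(demo)
# Decision rules from the text: > 0.79 appropriate, 0.70-0.79 needs revision, < 0.70 consider eliminating
decision = i_cvi.apply(lambda v: "appropriate" if v > 0.79
                       else "needs revision" if v >= 0.70 else "consider eliminating")
print(pd.DataFrame({"I-CVI": i_cvi.round(2), "decision": decision}))
print(f"S-CVI: {s_cvi:.2f}")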

SAS v9.4 (SAS Institute, Cary, NC) was used for quantitative and statistical data analysis. Interview responses and open-ended survey comments were managed and analyzed using qualitative data management software NVivo v10 (QSR International Pty Ltd., 2014, Burlington, VT).

Results

Phase 1: Initial pilot survey

A total of 444 respondents who were in nursing roles and completed the nurse survey questions of interest were included in this study, 73 from GPH, 48 from FH, and 323 from PH. A majority of respondents had bedside nursing roles (RN/LPN/LVN), with 17% in other roles, including nurse management, advanced practice nurse, or nursing support. A large majority of respondents were employed full-time (89%) and female (90%). Just more than one-third of respondents (35%) said they supervise others. Detailed demographics by site are in Table 3.

Table 3. Phase 1 continuous participant demographics by site.

https://doi.org/10.1371/journal.pone.0258815.t003

CARE Scale item distributions were skewed to the positive response options (Table 4); all items except one (item 2) had at least 50% positive responses and four had 75% positive responses. Mardia’s test for multivariate normality was applied to obtain statistics of skewness (1270.35, p < 0.001) and kurtosis (34.25, p < 0.001) to measure departure from normality assumptions. The sample was not drawn from a normal distribution [63].
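
For reference, a minimal sketch of Mardia's multivariate skewness and kurtosis tests is given below. It is an illustrative Python implementation assuming a complete-case item-response matrix (the name care_items is a hypothetical placeholder), not the procedure run in SAS for this study.

import numpy as np
from scipy import stats

def mardia_test(X: np.ndarray):
    # Mardia's multivariate skewness and kurtosis, with chi-square and normal approximations
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))  # MLE (biased) covariance
    D = Xc @ S_inv @ Xc.T                                      # Mahalanobis cross-products
    b1 = (D ** 3).sum() / n ** 2                               # multivariate skewness
    b2 = (np.diag(D) ** 2).sum() / n                           # multivariate kurtosis
    skew_stat = n * b1 / 6
    skew_p = stats.chi2.sf(skew_stat, df=p * (p + 1) * (p + 2) / 6)
    kurt_stat = (b2 - p * (p + 2)) / np.sqrt(8 * p * (p + 2) / n)
    kurt_p = 2 * stats.norm.sf(abs(kurt_stat))
    return skew_stat, skew_p, kurt_stat, kurt_p

# e.g. skew, skew_p, kurt, kurt_p = mardia_test(care_items.dropna().to_numpy())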

In phase 1, a maximum of 4 out of 444 responses were missing on any individual scale item (0.9%), and 10 respondents were missing at least one item across the entire scale (2.25%). Little’s Missing Completely at Random (MCAR) test statistic was calculated to determine whether data were MCAR. Based on the test statistic (81.2, p = 0.0987), the missing data are MCAR [64].

All items were positively inter-correlated (p < .001) with estimated correlation coefficients of at least 0.25. Many items had moderate-to-high bivariate correlations with other items, and two (items 1 and 2) had pair-wise correlations greater than 0.80, which could indicate a problem with multicollinearity. The overall reliability coefficient of 0.89 indicated that a high proportion of the variance in the total scores is attributable to a true score value. As shown in Table 5, item-total correlations ranged from 0.41 to 0.79, and item-deleted alpha values slightly lower than the total alpha value indicated that the deletion of a single item would not have a great effect on the alpha coefficient value. However, item-total correlation for one item (the noise level in patient rooms) was considerably lower than all other items and could be considered for item deletion.

Table 5. Phase 1 Cronbach’s coefficient alpha and item-total correlations.

https://doi.org/10.1371/journal.pone.0258815.t005

Overall and individual Kaiser’s Measure of Sampling Adequacy values between 0.80 and 0.97 indicated that the data were appropriate for factor analysis. With only one eigenvalue greater than one, explaining 88% of the variance, the analysis indicated that a one-factor model might be most appropriate. However, an investigation of the partial residuals and root mean square residuals suggested that more factors may need to be extracted. Two- and three-factor models reduced these values and, thus, were also considered for rotation and theoretical interpretation. The one-factor model reflected the unidimensional aspect of the correlations between all items, with loadings ranging from 0.44 to 0.83 and accounting for 87% of the total common variance between the items. In the two-factor model, items relating more to nursing tasks and efficiency, such as integration between coworkers and time spent in different activities, loaded highly on the first factor, while items more related to patient care and experience loaded on the second factor (Table 6). The variance explained by each factor was close to even, with the first factor accounting for 56.2% and the second factor accounting for 46.6% of the common variance. The biggest difference between the three-factor and two-factor models was that the first factor of the two-factor model was separated into two different factors for nurse integration and task efficiency (Table 7). The third factor still represented patient care items. The amount of variance explained by each factor was relatively similar, with 42.8% explained by the first factor, 37.4% by the second, and 32.3% by the third. Responses from each site were also analyzed separately. While there were slight differences in the site-specific models, there were common themes between them and with the overall model that support the findings of the combined factor analysis. Factor analyses using polychoric correlations produced very similar factor loadings and the same interpretations.
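
The exploratory factor analysis steps described above can be sketched in Python as follows, assuming the third-party factor_analyzer package and a complete-case item-response data frame named care_items (a hypothetical name); the published analysis itself was run in SAS.

import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

def explore_factors(items: pd.DataFrame, n_factors: int) -> pd.DataFrame:
    # Sampling adequacy check (Kaiser's Measure of Sampling Adequacy)
    kmo_per_item, kmo_overall = calculate_kmo(items)
    print(f"Overall KMO: {kmo_overall:.2f}")
    # Exploratory factor analysis with varimax rotation
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
    fa.fit(items)
    eigenvalues, _ = fa.get_eigenvalues()
    print("Eigenvalues:", eigenvalues.round(2))
    variance, proportional, cumulative = fa.get_factor_variance()
    print("Proportion of variance by factor:", proportional.round(3))
    # Item loadings on each rotated factor
    return pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=[f"Factor{i + 1}" for i in range(n_factors)])

# e.g. loadings = explore_factors(care_items.dropna(), n_factors=2)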

Table 6. Phase 1 varimax rotated two-factor model loadings.

https://doi.org/10.1371/journal.pone.0258815.t006

Table 7. Phase 1 varimax rotated three-factor model loadings.

https://doi.org/10.1371/journal.pone.0258815.t007

A higher discrimination index indicates that an item is better able to differentiate between those with the highest and lowest overall scores. Many items had very high discrimination values, with indices ranging from 39.8 to 92.4. The three items (7, 8, and 10) with the lowest discrimination indices, between 39.8 and 57.7, were less discriminating among respondents with the lowest overall scores. Similarly, item discrimination curves (Fig 2) showed four items with values above 20% for the lowest CARE Scale mean score decile group, indicating difficulty in discriminating among lower-scoring respondents. Nearly all items, and five in particular, reached 90% or higher satisfaction well below the highest decile groups. This indicated a need for better discrimination in the upper range of responses for most items as well. In the category response curves (Fig 3), only three to five response options reached a local maximum across the combined distributions, signifying that there were more response options available to respondents than necessary.
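
The exact formula for the discrimination index is not spelled out in the text; one plausible implementation, consistent with the phase 4 description of contrasting upper- and lower-quartile groups on the scale mean score, is sketched below. The names phase1_items and the favorable cutoff are hypothetical illustrations.

import pandas as pd

def discrimination_index(items: pd.DataFrame, favorable_min: int) -> pd.Series:
    # Percentage-point gap in favorable responses (>= favorable_min) between respondents
    # in the upper and lower quartiles of the overall scale mean score
    items = items.dropna()
    mean_score = items.mean(axis=1)
    upper = items[mean_score >= mean_score.quantile(0.75)]
    lower = items[mean_score <= mean_score.quantile(0.25)]
    return 100 * ((upper >= favorable_min).mean() - (lower >= favorable_min).mean())

# e.g. on the 7-point satisfaction scale, treating responses of 5-7 as favorable:
# discrimination_index(phase1_items, favorable_min=5)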

Phase 2: Content validity survey

A total of 60 respondents completed the content validity survey, 21 from HDR and 39 from Methodist. All HDR respondents had experience in inpatient healthcare facilities planning and design (mean = 18.1 years), and some had experience in other areas, including nursing. All Methodist respondents had experience in nursing (mean = 21.0 years), two-thirds had experience in healthcare administration, and some had experience in healthcare planning or design or research. Nearly all HDR respondents had experience with all types of inpatient units listed (ICU/Critical Care, Med Surg/Acute Care, Progressive Care/Step-Down/Telemetry, Womens & Infants). At least some Methodist respondents had experience in each inpatient unit type, with the highest proportion in Med Surg/Acute Care (69.2%) and the lowest proportion in Womens & Infants (17.9%).

On a response scale from 1 = Not at all relevant to 4 = Highly relevant, seven out of ten items had a mean relevance score of 3.5 or higher, and all items had a mean score of at least 3.0 (Fig 4). Eight items were rated as moderately or highly relevant by at least 86% of respondents and highly relevant by at least 50%. Nine out of the 10 items had a relevance I-CVI greater than 0.80 (“appropriate”), and item 4 had an I-CVI of 0.733 (“needs revision”). The S-CVI in terms of relevance was 0.925.

Fig 4. Phase 2 relevance of CARE items response.

Frequency, relevance I-CVI, and mean score response.

https://doi.org/10.1371/journal.pone.0258815.g004

On a response scale from 1 = Very unclear to 4 = Very clear, all ten items had a mean clarity score of 3.5 or higher (Fig 5). Eight items were rated as somewhat or very clear by at least 90% of respondents and very clear by at least 75%. All 10 items had a clarity I-CVI greater than 0.80 indicating appropriateness, and the S-CVI in terms of clarity was 0.941. Respondents suggested changes to the item wording that they thought would improve understanding by survey participants. Many thought that the term “quality of care” and types of interactions between coworkers needed to be more specific. One-fourth of respondents thought there was some important aspect of the construct missing from the items; however, there were no consistent suggestions between respondents.

Fig 5. Phase 2 clarity of CARE items response.

Frequency, clarity I-CVI, and mean score response.

https://doi.org/10.1371/journal.pone.0258815.g005

Phase 3: Cognitive interviews

A total of 44 participants from both Methodist hospitals participated in the cognitive interviews, 61% from NMH and 39% from MWH. Participants were all in a nursing role, with an average of 6.8 years at the organization and 9.8 years of hospital experience; a majority were RNs (65.9%) and the remainder were CNAs or charge nurses. All except one unit (9N, Cardiac Critical Care) were represented by at least one participant. The responses to nearly all survey items were more positive than the combined weighted responses in the phase 1 survey. Two items (5 and 6) skewed slightly more towards negative responses among interview participants.

Generally, items were interpreted as intended. However, some nuanced interpretations were evident. For the items related to getting and providing help, most nurses thought of non-emergent situations and a general awareness of what was happening on their unit. The term “interactions with coworkers” was interpreted differently by nurse respondents, ranging from social interaction, like “chit chat,” to conversations centered on patients and their needs, to a combination of both professional and social communications. There was a holistic view of quality patient care, and many respondents expressed it in terms of the culture of their organization. When asked how they defined quality of care, most nurses explained that it encompassed the wide range of patient experience, from medical outcomes to personalized attention. Most respondents expressed that walking was an expected part of their job, and some stated that they enjoyed that their job keeps them physically active. Among nurse respondents, time spent with patients was interpreted to mean one-on-one interaction that usually includes some form of medical or medical-related care, such as help with activities of daily living, patient education, or walking, but rarely assessments or checking vitals. The time it takes to respond to patients’ needs was most commonly interpreted to mean response to the patient call light. The ability to visually monitor patients without disturbing them was seen as important and was interpreted to mean actually laying eyes on patients, not viewing them through a remote monitoring system. Nurse respondents expressed that they spend a lot of time charting because they cannot chart by exception, and their organization’s policies for charting were seen as having the most impact on the time they spend charting.

When asked if the survey questions missed any important aspect of their jobs, more than half of the respondents said no. However, staffing ratios and patient load were seen by some as an important aspect of the job left out of the questions. Further, design details that could impact patient safety were brought up, for example: phone cords that could be a tripping hazard, the placement of patient bathroom doors, and access to the headwall and equipment. Some of these concerns may be specific to the study site. There were mixed responses among nurses as to how they felt about the current survey response options. While some said the response options did not affect their answers, others felt that the satisfaction scale swayed them toward more positive responses. The option of responding on an agree-to-disagree or other type of scale seemed more objective to some respondents.

Survey instrument revision

Prior to the final survey phase, results from the first three phases of data collection and analysis were compiled and considered for updates and revisions to the survey instruments. The study team from HDR and Methodist met to review the results to date and discuss the structure and wording of the final survey scale. In order to reduce potential positive bias with satisfaction responses, the updated survey scale asked respondents to specify the amount of effort it typically takes to incorporate different nursing tasks into their work, using a 5-point response option scale from 1 = Is not possible even with much effort to 5 = Occurs naturally without effort (Table 8). These options more directly relate to the concept of affordance in that the ideal environment should naturally support and facilitate nursing tasks. The question stem and items were also reworded to frame the question around nurses’ work environments and clarify items that respondents found confusing or vague. Due to its lower connection to the other scale items and relevance to the construct, item 10 (noise level in patient rooms) was eliminated from the scale entirely.

Phase 4: Final pilot survey

In the final phase, a total of 357 respondents completed the survey containing the updated CARE Scale, 252 from NMH, and 105 from MWH. A majority of respondents were RNs (63.6%), 23.8% were CNAs, and the remaining 12.6% were in other roles, including nurse management, advanced practice nurse, or nursing support. A large majority of respondents were employed full-time (72.6%) and female (94.3%). One-fourth of respondents (24.8%) said they supervise others. Detailed demographics by site are in Table 9.

While distributions were still slightly skewed to the positive response options (Table 10), all except one item (regarding charting) had a more centralized distribution than in the cognitive interview survey responses. Mardia’s test for multivariate normality was applied and statistics of skewness (388.99, p < 0.001) and kurtosis (9.63, p < 0.001) were obtained. The sample was not drawn from a normal distribution [63].

In the phase 4 dataset, there was a maximum item nonresponse of 3 out of 357 respondents (0.8%), and 5 respondents were missing one or more items across the entire scale (1.4%). Item missingness was assessed using Little’s MCAR test (81.2, p = 0.099). Based on the results of this testing, missing data for phase 4 are MCAR [64].

All items were positively intercorrelated with estimated correlation coefficients of at least 0.33, and four out of nine items correlated with at least four other items at a 0.50 level or higher. Although a group of three items related to direct patient care were very highly correlated with each other (items 7, 8, and 9), with a maximum pair-wise correlation of 0.75, no correlations indicated a problem with multicollinearity. The overall reliability coefficient value of 0.89 indicated that a high proportion of the variance in the total scores is attributable to a true score value. Item-total correlations (Table 11) ranged from 0.53 to 0.75 and nearly consistent item-deleted alpha values indicated that the deletion of a single item would not have a great effect on the alpha coefficient value.

Table 11. Phase 4 Cronbach’s coefficient alpha and item-total correlations.

https://doi.org/10.1371/journal.pone.0258815.t011

Overall and individual Kaiser’s Measure of Sampling Adequacy values between 0.84 and 0.94 indicated that the data were very appropriate for a factor analysis. With only one eigenvalue greater than one, explaining 95% of the variance, the analysis indicated that a one-factor model might be most appropriate and the scale might be unidimensional. However, an investigation of the partial residuals and root mean square residuals suggested that more factors may need to be extracted. Two- and three-factor models reduced these values and, thus, were also considered for rotation and theoretical interpretation. The one-factor model reflected the unidimensional aspect of the correlations between all items, with loadings ranging from 0.56 to 0.81 and accounting for 94.4% of the total common variance between the items. In the two-factor model, items related more to direct patient care loaded highly on the first factor, while items more related to nurse integration and efficiency loaded on the second factor (Table 12). Slightly more variance was explained by the first factor than the second, accounting for 64.4% and 46.6%, respectively. The three-factor model separated the lowest-loading items in each of the two factors into a separate third factor, creating three factors representing direct patient care, nurse integration, and task efficiency (Table 13). The first factor accounted for a majority of the variance explained, with 55.5% explained by the first factor, 38.0% by the second, and 26.3% by the third. Factor analyses using polychoric correlations produced very similar factor loadings and the same interpretations.

Table 12. Phase 4 varimax rotated two-factor model loadings.

https://doi.org/10.1371/journal.pone.0258815.t012

Table 13. Phase 4 varimax rotated three-factor model loadings.

https://doi.org/10.1371/journal.pone.0258815.t013

All items had very high discrimination values (indices ranging from 71.4 to 92.6), with much higher proportions of responses in the highest categories (4 = Requires little effort and 5 = Occurs naturally without effort) among those with scores in the CARE Scale mean score upper quartile compared to those with scores in the lower quartile. In the item discrimination curves (Fig 6), only two items had proportions of highest-category responses at or above 20% for the lowest CARE Scale mean score decile group, and no items had proportions above 25% for this group, indicating good discrimination among lower-scoring respondents. Three items reached 90% or higher proportions by the 6th decile group, and four items reached 90% by the 7th decile group. These items do not discriminate as well among the upper range of responses. In the category response curves for all items (Fig 7), all five response options reached a local maximum across the combined distributions and in the logical order, indicating that there are an appropriate number of response options, ordered as expected.

All external scale scores, and nearly all external items, were significantly positively correlated with all CARE Scale items (Table 14). Collaboration experience was most highly correlated with nurses’ ability to get help as soon as needed (estimated correlation coefficient: r = 0.53), followed by ability to know what is happening with coworkers (r = 0.39), efficiently responding to patient needs (r = 0.39), frequently interacting with coworkers (r = 0.37), and spending enough time with patients (r = 0.37). Collaboration effectiveness and interpersonal job satisfaction were similar and also most highly correlated with getting help as soon as needed (r = 0.35 and r = 0.43, respectively). Working job satisfaction was significantly correlated with all CARE items and was more similar to the interpersonal correlations among items related to patient care and interaction. Wayfinding was most highly correlated with ability to move throughout the unit in an efficient way (r = 0.47), but also with response time to patient needs (r = 0.36), integration of charting into workflow (r = 0.35), and ability to visually monitor patients (r = 0.35). Space availability correlated most strongly with the CARE items on ability to integrate charting in workflow (r = 0.42) and spend enough time with patients (r = 0.41), followed by efficiency of moving throughout the unit (r = 0.39), responding to patient needs (r = 0.37), and getting help when needed (r = 0.37); all of these correlations were highly significant (p < 0.001).
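
For construct validity checks of this kind, a minimal sketch of item-by-external-scale Pearson correlations with significance tests is shown below; the data frame names care_items and external_scores are hypothetical placeholders rather than the study data, and the published analysis was performed in SAS.

import pandas as pd
from scipy import stats

def item_scale_correlations(items: pd.DataFrame, external: pd.DataFrame) -> pd.DataFrame:
    # Pairwise Pearson correlations (with p-values) between each scale item and each external score
    rows = []
    for item in items.columns:
        for scale in external.columns:
            pair = pd.concat([items[item], external[scale]], axis=1).dropna()
            r, p = stats.pearsonr(pair.iloc[:, 0], pair.iloc[:, 1])
            rows.append({"item": item, "external_scale": scale, "r": round(r, 2), "p": p})
    return pd.DataFrame(rows)

# e.g. item_scale_correlations(care_items, external_scores)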

Table 14. CARE scale item correlation with external scales.

https://doi.org/10.1371/journal.pone.0258815.t014

Discussion

The results of this study have direct potential impact on the practice of healthcare design and nursing. The ability to quantitatively measure nurse experience with job and patient care tasks provides a missing link for developing clear associations between the factors that influence these tasks, including the physical environment nurses work in, and important employee outcomes such as job satisfaction, absenteeism, engagement, and collaboration. Patient experience, often measured through surveys such as the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS), may also be linked to nurses’ ability to effectively provide care. When planning a new healthcare environment, these outcomes are often found in the list of goals and guiding principles. The HDR CARE Scale Inpatient Version will provide a critical link to understanding how the physical environment can influence these key outcomes. Future research conducted using the scale will feed into the cycle of evidence-based design by generating this knowledge and affecting the design strategies that are implemented.

The updated scale response wording in Table 8 was specifically chosen to reflect the focus on environmental affordance for nurse tasks, asking about the degree to which nurses are able to complete their work without hindrance. The scale items themselves do not ask about the physical environment, but rather focus on nursing tasks, mentioning the environment only in the question stem and framing of the question. This was done intentionally, to allow researchers to reach causal inferences about the role of the environment through analyses and associations within and between subjects based on aspects of their surroundings, rather than asking respondents to draw these conclusions themselves. The CARE Scale is intended to measure the nurse experience based on the extent of effort required to complete tasks, but it is not, on its own, meant to explain the reasons for differences in responses.

Although strong in many respects even in the earlier phases, the performance of the revised survey scale improved upon the initial scale testing. With high intercorrelations and Cronbach’s alpha, the scale displayed high reliability. Significant correlations in expected directions and magnitudes with several external scales related to collaboration, job satisfaction, layout, and space adequacy confirmed the validity of the scale questions, which were originally tested and refined based on the results from the content validity and cognitive interview data collection phases. The updated scale also showed improved and satisfactory performance in terms of item discrimination and response distribution.

A limitation of this study was the limited diversity of the sample of nurses and locations measured. As the hospital environment is common across those who work on the same unit, and even between different units in the same hospital, it will be important to continue to test this instrument in diverse settings with a wide range of respondents. It is expected that a more expansive and varied sample will not negate the results found in this study, but will reveal a better distribution of performance and capabilities. From the factor analysis results, it was concluded that the scale is a unidimensional measure of nurse practice affordance, although subscales such as nurse integration, efficiency, and/or direct patient care may be evident in larger and more diverse samples. This is an area for future testing and development, including a confirmatory factor study to test the goodness-of-fit of one or multiple factors.

Based on the scope and timing of this study, development of the HDR CARE Scale Inpatient Version was focused on nurses who work on inpatient hospital units. Expanding upon this work, it will be possible to further refine and test related instruments for use in other healthcare settings, such as ambulatory care. This future step would increase the flexibility and adaptability of the instrument and expand its use to more studies of healthcare built environments.

As a tool specifically intended for healthcare design research to quantitatively measure nurses’ experience working in the care environment, the CARE Scale will be valuable in generating quantitative and standardized data, making possible measurement of associations with other factors, such as organizational support and procedures, and comparisons across time and facilities. Within the healthcare design industry, this capability can lead to greater knowledge about the aspects of a healthcare facility that best support nurses’ work and that are associated with their ability to provide quality patient care. The development of this method to capture this information quantitatively is vital to future research that begins to address causal pathways between the healthcare built environment and human outcomes.

Supporting information

S2 Dataset. Phase 2 content validity survey data.

https://doi.org/10.1371/journal.pone.0258815.s002

(XLSX)

S3 Dataset. Phase 3 cognitive interviews survey data.

https://doi.org/10.1371/journal.pone.0258815.s003

(XLSX)

S4 Dataset. Phase 3 cognitive interviews coding summary by node.

https://doi.org/10.1371/journal.pone.0258815.s004

(XLSX)

S5 Dataset. Phase 3 cognitive interviews content node count.

https://doi.org/10.1371/journal.pone.0258815.s005

(XLSX)

Acknowledgments

The authors are grateful for the support of the following registered nurses at Methodist Health System who assisted with the recruitment and interviews of participants for phases 2–4: Kayleen Parys, Kelly Groux, Susan Rogers, and Sarah Fietz; and the nursing and research leadership at Parkland Health and Hospital System, Great Plains Health, and Fremont Health, who helped with the study development and participant recruitment of the data used in phase 1: Lonnie Roy, Jackline G. Opollo, Susan Partridge, Lori Schoenholtz, and Melinda Kentfield. We would also like to thank Susan E. Puumala for her contributions to the development and review of the study analysis methods and results.

References

  1. Deutsch ES. More than complicated, healthcare delivery is complex, adaptive, and evolving. Pennsylvania Patient Safety Advisory. 2016;13(1).
  2. Kramer M, Brewer BB, Halfer D, Hnatiuk CN, MacPhee M, Schmalenberg C. The evolution and development of an instrument to measure essential professional nursing practices. JONA J Nurs Adm. 2014 Nov 1;44(11):569–76. pmid:25340921
  3. Dougherty MB, Larson E. A review of instruments measuring nurse-physician collaboration. JONA J Nurs Adm. 2005 May 1;35(5):244–53. pmid:15891488
  4. Dougherty MB, Larson EL. The Nurse-Nurse Collaboration Scale. JONA J Nurs Adm. 2010 Jan 1;40(1):17–25. pmid:20010373
  5. Cossette S, Cara C, Ricard N, Pepin J. Assessing nurse–patient interactions from a caring perspective: Report of the development and preliminary psychometric testing of the caring Nurse–Patient Interactions Scale. Int J Nurs Stud. 2005 Aug 1;42(6):673–86. pmid:15982465
  6. Lake ET. Development of the practice environment scale of the nursing work index. Res Nurs Heal. 2002 Jun;25(3):176–88. pmid:12015780
  7. Schmalenberg C, Kramer M. Essentials of a productive nurse work environment. Nurs Res. 2008 Jan 1;57(1):2–13. pmid:18091287
  8. Liao C, Qin Y, He Y, Guo Y. The Nurse-Nurse Collaboration Behavior Scale: Development and psychometric testing. Int J Nurs Sci. 2015 Dec 1;2(4):334–9.
  9. Kenaszchuk C, Reeves S, Nicholas D, Zwarenstein M. Validity and reliability of a multiple-group measurement scale for interprofessional collaboration. BMC Health Serv Res. 2010;10(83). pmid:20353577
  10. Kramer M, Hafner LP. Shared values: Impact on staff nurse job satisfaction and perceived productivity. Nurs Res. 1989;38(3):172–177. pmid:2717441
  11. Van Saane N, Sluiter JK, Verbeek JH, Frings-Dresen MH. Reliability and validity of instruments measuring job satisfaction—a systematic review. Occup Med. 2003 May 1;53(3):191–200. pmid:12724553
  12. Simpson MR. Engagement at work: A review of the literature. Int J Nurs Stud. 2009 Jul 1;46(7):1012–24. pmid:18701104
  13. Leiter MP, Maslach C. Six areas of worklife: a model of the organizational context of burnout. J Health Hum Serv Adm. 1999 Apr 1;21(4):472–89. pmid:10621016
  14. Bakker AB, Demerouti E. The job demands-resources model: State of the art. Journal of Managerial Psychology. 2007;22(3):309–28.
  15. Aiken LH, Patrician PA. Measuring organizational traits of hospitals: the Revised Nursing Work Index. Nurs Res. 2000 May 1;49(3):146–53. pmid:10882319
  16. Berry LL, Parish JT. The impact of facility improvements on hospital nurses. Heal Environ Res Des J. 2008 Jan 1;1(2):5–13. pmid:21161892
  17. Chaudhury H, Mahmood A, Valente M. Nurses’ perception of single-occupancy versus multioccupancy rooms in acute care environments: an exploratory comparative assessment. Appl Nurs Res. 2006 Aug 1;19(3):118–25. pmid:16877190
  18. Friese CR, Grunawalt JC, Bhullar S, Bihlmeyer K, Chang R, Wood M. Pod nursing on a medical/surgical unit. J Nurs Adm. 2014;44(4):207–11. pmid:24662689
  19. Hadi K, Zimring C. Design to improve visibility: Impact of corridor width and unit shape. Heal Environ Res Des J. 2016 Jul;9(4):35–49. pmid:26747840
  20. Lo Verso VR, Caffaro F, Aghemo C. Luminous environment in healthcare buildings for user satisfaction and comfort: An objective and subjective field study. Indoor Built Environ. 2016 Aug;25(5):809–25.
  21. Davis RG, McCunn LJ, Wilkerson A, Safranek S. Nurses’ satisfaction with patient room lighting conditions: A study of nurses in four hospitals with differences in the environment of care. Heal Environ Res Des. 2020 Jul;13(3):110–24. pmid:31906715
  22. Gharaveis A, Yekita H, Shamloo G. The perceptions of nurses about the behavioral needs for daylighting and view to the outside in inpatient facilities. Heal Environ Res Des J. 2020 Jan;13(1):191–205. pmid:31122079
  23. Aiken LH, Clarke SP, Sloane DM, Lake ET, Cheney T. Effects of hospital care environment on patient mortality and nurse outcomes. J Nurs Adm. 2008 May;38(5):223–9. pmid:18469615
  24. Campos-Andrade C, Hernández-Fernaud E, Lima ML. A better physical environment in the workplace means higher well-being? A study with healthcare professionals. Psyecology. 2013 Jan 1;4(1):89–110.
  25. Fay L, Carll-White A, Schadler A, Isaacs KB, Real K. Shifting landscapes: The impact of centralized and decentralized nursing station models on the efficiency of care. Heal Environ Res Des J. 2017 Oct;10(5):80–94. pmid:28359162
  26. Hua Y, Becker F, Wurmser T, Bliss-Holtz J, Hedges C. Effects of nursing unit spatial layout on nursing team communication patterns, quality of care, and patient safety. Heal Environ Res Des J. 2012 Oct;6(1):8–38. pmid:23224841
  27. Xuan X, Chen X, Li Z. Impacts of nursing unit design on visibility and proximity and its influences on communication, privacy, and efficiency. Heal Environ Res Des J. 2020 Apr;13(2):200–17.
  28. Dendaas N. Environmental congruence and work-related stress in acute care hospital medical/surgical units: A descriptive, correlational study. Heal Environ Res Des J. 2011 Oct;5(1):23–42. pmid:22322634
  29. France D, Throop P, Joers B, Allen L, Parekh A, Rickard D, et al. Adapting to family-centered hospital design: changes in providers’ attitudes over a two-year period. Heal Environ Res Des J. 2009 Oct;3(1):79–96.
  30. Maben J, Griffiths P, Penfold C, Simon M, Anderson JE, Robert G, et al. One size fits all? Mixed methods evaluation of the impact of 100% single-room accommodation on staff and patient experience, safety and costs. BMJ Qual Saf. 2016 Apr 1;25(4):241–56. pmid:26408568
  31. Shepley MM. Predesign and postoccupancy analysis of staff behavior in a neonatal intensive care unit. Child Heal Care. 2002 Sep 1;31(3):237–53.
  32. Real K, Fay L, Isaacs K, Carll-White A, Schadler A. Using systems theory to examine patient and nurse structures, processes, and outcomes in centralized and decentralized units. Heal Environ Res Des J. 2018 Jul;11(3):22–37.
  33. Alimoglu MK, Donmez L. Daylight exposure and the other predictors of burnout among nurses in a University Hospital. Int J Nurs Stud. 2005 Jul 1;42(5):549–55. pmid:15921986
  34. Tyson GA, Lambert G, Beattie L. The impact of ward design on the behaviour, occupational satisfaction and well-being of psychiatric nurses. Int J Ment Health Nurs. 2002 Jun;11(2):94–102. pmid:12430190
  35. Terzi B, Azizoğlu F, Polat Ş, Kaya N, İşsever H. The effects of noise levels on nurses in intensive care units. Nurs Crit Care. 2019 Sep;24(5):299–305. pmid:30815931
  36. Mihandoust S, Pati D, Lee J, Roney J. Exploring the relationship between perceived visual access to nature and nurse burnout. Heal Environ Res Des J. 2021 Jul 1;14(3):258–73. pmid:33678050
  37. Lake ET, Friese CR. Variations in nursing practice environments: relation to staffing and hospital characteristics. Nurs Res. 2006 Jan 1;55(1):1–9. pmid:16439923
  38. Pati D, Harvey TE Jr, Barach P. Relationships between exterior views and nurse stress: An exploratory examination. Heal Environ Res Des J. 2008 Jan;1(2):27–38.
  39. Lake ET, Hallowell SG, Kutney-Lee A, Hatfield LA, Del Guidice M, Boxer B, et al. Higher quality of care and patient safety associated with better NICU work environments. J Nurs Care Qual. 2016 Jan;31(1):24–32. pmid:26262450
  40. Applebaum D, Fowler S, Fiedler N, Osinubi O, Robson M. The impact of environmental factors on nursing stress, job satisfaction, and turnover intention. J Nurs Adm. 2010 Jul;40(0):323–8.
  41. Djukic M, Kovner C, Budin WC, Norman R. Physical work environment: Testing an expanded model of job satisfaction in a sample of registered nurses. Nurs Res. 2010 Nov 1;59(6):441–51. pmid:21048486
  42. Djukic M, Kovner CT, Brewer CS, Fatehi F, Greene WH. Exploring direct and indirect influences of physical work environment on job satisfaction for early-career registered nurses employed in hospitals. Res Nurs Heal. 2014 Aug;37(4):312–25. pmid:24985551
  43. Hall ET. The Hidden Dimension. Garden City, NY: Doubleday; 1966.
  44. Hua Y, Loftness V, Heerwagen JH, Powell KM. Relationship between workplace spatial settings and occupant-perceived support for collaboration. Environ Behav. 2011 Nov 1;43(6):807–26.
  45. Ulrich RS, Berry LL, Quan X, Parish JT. A conceptual framework for the domain of evidence based design. Heal Environ Res Des J. 2010 Oct;4(1):95–114. pmid:21162431
  46. Gibson JJ. The theory of affordances. In: Shaw RE, Bransford J, editors. Perceiving, acting and knowing: toward an ecological psychology. Hilldale, NJ: Lawrence Erlbaum Associates, Inc.; 1977. p. 67–82.
  47. Maier JR, Fadel GM, Battisto DG. An affordance-based approach to architectural theory, design, and practice. Des Stud. 2009 Jul 1;30(4):393–414.
  48. The Center for the Built Environment [Internet]. Berkeley (CA): University of California, Berkeley; c2021 [cited 2021 Sep 11]. Occupant Survey Toolkit; [about 3 screens]. Available from: https://cbe.berkeley.edu/resources/occupant-survey/.
  49. Graham LT, Parkinson T, Schiavon S. Lessons learned from 20 years of CBE’s occupant surveys. Build Cities. 2021 Feb 11;2(1):166–84.
  50. Jimenez FE, Puumala SE, Apple M, Bunker-Hellmich LA, Rich RK, Brittin J. Associations of patient and staff outcomes with inpatient unit designs incorporating decentralized caregiver workstations: a systematic review of empirical evidence. Heal Environ Res Des J. 2019 Jan;12(1):26–43. pmid:30892962
  51. Ulrich RS, Zimring C, Zhu X, DuBose J, Seo HB, Choi YS, et al. A review of the research literature on evidence-based healthcare design. Heal Environ Res Des J. 2008 Apr;1(3):61–125. pmid:21161908
  52. DeVellis RF. Scale development: theory and applications. 4th ed. Thousand Oaks, CA: SAGE Publications, Inc.; 2016.
  53. Dillman DA, Smyth JD, Christian LM. Internet, phone, mail, and mixed-mode surveys: the tailored design method. 4th ed. John Wiley & Sons, Inc.; 2014.
  54. Fowler FJ Jr. Improving survey questions: design and evaluation. SAGE Publishing; 1995.
  55. Presser S, Couper MP, Lessler JT, Martin J, Rothgeb JM. Methods for testing and evaluating survey questions. Public Opin Q. 2004 Mar 1;68(1):109–30.
  56. Krosnick JA, Fabrigar LR. Designing rating scales for effective measurement in surveys. In: Lyberg L, Biemer P, Collins M, De Leeuw E, Dippo C, Schwarz N, et al. Survey measurement and process quality. John Wiley & Sons, Ltd; 1997. p. 141–64.
  57. Cappelleri JC, Lundy JJ, Hays RD. Overview of classical test theory and item response theory for quantitative assessment of items in developing patient-reported outcome measures. Clin Ther. 2014 May 1;36(5):648–62. pmid:24811753
  58. Pett MA, Lackey NR, Sullivan JJ. Making sense of factor analysis: The use of factor analysis for instrument development in health care research. SAGE Publications, Inc.; 2003.
  59. Hua Y. A model of workplace environment satisfaction, collaboration experience, and perceived collaboration effectiveness: A survey instrument. Int J Facil Manag. 2010 Oct;1(2):1–17.
  60. Chang CS. Moderating effects of nurses’ organizational support on the relationship between job satisfaction and organizational commitment. West J Nurs Res. 2015 Jun;37(6):724–45. pmid:24733230
  61. Zamanzadeh V, Ghahramanian A, Rassouli M, Abbaszadeh A, Alavi-Majd H, Nikanfar AR. Design and implementation content validity study: development of an instrument for measuring patient-centered communication. J Caring Sci. 2015 Jun 1;4(2):165–78. pmid:26161370
  62. Polit DF, Beck CT, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health. 2007 Aug;30(4):459–67. pmid:17654487
  63. Mardia KV. Applications of some measures of multivariate skewness and kurtosis in testing normality and robustness studies. Indian J Stat. 1974 May 1;36(2):115–28.
  64. Little RJ. A test of missing completely at random for multivariate data with missing values. J Am Stat Assoc. 1988 Dec 1;83(404):1198–202.