
Reliability and validity study of the Spanish adaptation of the “Educational Practices Questionnaire” (EPQ)

  • Mariona Farrés-Tarafa,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Supervision, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Campus Docent, Sant Joan de Déu—Fundació Privada, School of Nursing, University of Barcelona, Barcelona, Spain, Research Group GIES (Grupo de investigación en Enfermería, Educación y Sociedad), Barcelona, Spain, Member Research Group GRISIMula (Grupo emergente 2017 SGR 531; Grupo en Recerca Enfermera en Simulación), Barcelona, Spain, Secretaria Research Group GRISCA (Grupo en Recerca Enfermera en Simulación en Cataluña y Andorra), Barcelona, Spain

  • Juan Roldán-Merino ,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Resources, Software, Validation, Writing – original draft, Writing – review & editing

    jroldan@santjoandedeu.edu.es

    Affiliations Campus Docent, Sant Joan de Déu—Fundació Privada, School of Nursing, University of Barcelona, Barcelona, Spain, Research Group GIES (Grupo de investigación en Enfermería, Educación y Sociedad), Barcelona, Spain, Research Group GEIMAC (Consolidated Group 2017–1681: Group of Studies of Invarianza of the Instruments of Measurement and Analysis of Change in the Social and Health Areas), Barcelona, Spain, Coordinator Research Group GIRISAME (International Researchers Group of Mental Health Nursing Care), Barcelona, Spain

  • Urbano Lorenzo-Seva,

    Roles Formal analysis, Writing – review & editing

    Affiliation Universitat Rovira I Virgili, Tarragona, Spain

  • Barbara Hurtado-Pardos,

    Roles Data curation, Investigation, Resources, Writing – review & editing

    Affiliations Campus Docent, Sant Joan de Déu—Fundació Privada, School of Nursing, University of Barcelona, Barcelona, Spain, Research Group GIES (Grupo de investigación en Enfermería, Educación y Sociedad), Barcelona, Spain, Member Research Group GRIN (Grupo Consolidado de Recerca Infermeria, SRG:664), Barcelona, Spain

  • Ainoa Biurrun-Garrido,

    Roles Investigation, Methodology, Resources, Writing – review & editing

    Affiliations Campus Docent, Sant Joan de Déu—Fundació Privada, School of Nursing, University of Barcelona, Barcelona, Spain, Research Group GIES (Grupo de investigación en Enfermería, Educación y Sociedad), Barcelona, Spain

  • Lorena Molina-Raya,

    Roles Investigation, Methodology, Resources, Writing – review & editing

    Affiliations Campus Docent, Sant Joan de Déu—Fundació Privada, School of Nursing, University of Barcelona, Barcelona, Spain, Research Group GIES (Grupo de investigación en Enfermería, Educación y Sociedad), Barcelona, Spain

  • Maria-Jose Morera-Pomarede,

    Roles Investigation, Methodology, Resources, Writing – review & editing

    Affiliations Campus Docent, Sant Joan de Déu—Fundació Privada, School of Nursing, University of Barcelona, Barcelona, Spain, Research Group GIES (Grupo de investigación en Enfermería, Educación y Sociedad), Barcelona, Spain

  • David Bande,

    Roles Investigation, Methodology, Writing – review & editing

    Affiliation Anesthesiologist, Servicio Anestesiología, Reanimación y Tratamiento del Dolor, Parc de Salut Mar, Barcelona, Spain

  • Marta Raurell-Torredà,

    Roles Investigation, Methodology, Resources, Writing – review & editing

    Affiliations Universidad de Barcelona, Barcelona, Spain, Presidenta Sociedad Española de Enfermería Intensiva y Unidades Coronarias (SEEIUC), Madrid, Spain, President Research Group GRISIMula (Grupo emergente 2017 SGR 531; Grupo en Recerca Enfermera en Simulación), Barcelona, Spain

  • Irma Casas

    Roles Investigation, Methodology, Resources, Writing – review & editing

    Affiliations Universitat Autònoma de Barcelona, Barcelona, Spain, Preventive Medicine Service, Hospital Germans Trias i Pujol, Barcelona, Spain, Research Group Innovation in Respiratory Infections and Tuberculosis Diagnosis (Group Consolidat 2017 SGR 494)

Abstract

The Educational Practices Questionnaire (EPQ) is an instrument for assessing students' perceptions of best educational practices in simulation. As in other countries, validated rubrics are needed in Spain to measure the effects of simulation. The objective of this study was to translate and culturally adapt the Educational Practices Questionnaire into Spanish and to analyze its reliability and validity. The study was carried out in two phases: (1) adaptation of the questionnaire into Spanish, and (2) a cross-sectional study in a sample of 626 nursing students. Psychometric properties were analyzed in terms of reliability and construct validity using confirmatory and exploratory factor analysis. The exploratory and confirmatory factor analyses showed that the one-dimensional model is acceptable for both scales (presence and importance). The results show that participants' scores can be calculated and interpreted for the general factor and also for the four subscales. Cronbach's alpha and the Omega index were also suitable for both scales and for each of the dimensions. The Educational Practices Questionnaire is a simple, easy-to-administer tool to measure how nursing degree students perceive the presence and importance of best educational practices.

Introduction

The use of clinical simulation as a teaching tool, both for new nursing professionals during their university education and for existing professionals during continuing education, has grown exponentially in recent years [1]. Simulation is also regarded as an effective educational method for the delivery of clinical scenarios [2].

The literature states that, in order to obtain optimal learning results through simulation linked to the competencies that nurses must master in their clinical practice, a common international language must be used and these activities must be incorporated throughout the entire Nursing Degree curriculum [3]. In addition, the International Nursing Association for Clinical Simulation and Learning (INACSL) and several studies [4, 5] affirm that best practices must be established for the simulation methodology to be effective. In 2005, Pamela Jeffries, through the National League for Nursing (NLN) and together with Laerdal, developed a guide on simulation methodology for nursing education; it comprises five basic components necessary to conduct a simulation session: educational practices, the facilitator, participants, simulation design features and expected results [6].

Educational practices related to simulation were defined on the basis of the seven principles of good practice set out by Chickering and Gamson [7]. These practices are: 1) Active learning: through simulation, students learn actively, as they have the opportunity to participate directly in the activity, both when performing the scenario and in the subsequent debriefing [8]; 2) Feedback: simulation offers immediate feedback, from the instructor and classmates as well as from the human patient simulator (HPS), on the knowledge and skills demonstrated and the decisions made [9]; 3) Interaction: intercommunication between the university and the student fosters a climate of trust between the instructor and the students; together they can discuss and reflect on the learning process, in addition to designing individualized improvement action plans according to the needs of each student [10]; 4) Collaborative learning: simulation promotes collaborative learning, as it provides a realistic environment where all participants work together for the same purpose and share the decision-making process [11]. This can offer several advantages, allowing participants to learn from different disciplines and about teamwork; where participants have different levels of experience, novice nurses can even be given the opportunity to learn from experts [12]; 5) High expectations: it is important that expectations before performing the simulation are high, so that both students and instructors feel empowered to achieve greater learning in a safe environment [13]; 6) Diversity in learning: people have different learning needs depending on their personal characteristics, so it is important to implement different teaching methodologies in the curricula, including simulation [14]; and 7) Time: simulation can be used to train techniques so as to reduce times in real clinical practice [15].

Subsequently, in order to understand how educational practices on high fidelity simulation are perceived by participants, the NLN, along with Laerdal, developed the “Educational Practices Questionnaire” (EPQ) to evaluate perceptions of best educational practices in simulation [16].

In the Spanish educational context there are several valid and reliable tools for measuring student satisfaction with new teaching methodologies, including tutorial action [17, 18] and simulation plans [19, 20]. There are also tools to measure nursing competencies in a simulation scenario [21–23] and to evaluate debriefing through the DASH report [24]. However, in Spain, as in other countries, there are no validated instruments that evaluate perceptions of best educational practices in simulation [25]. It is therefore essential to have validated rubrics in Spain in order to evaluate the effects of simulation activities.

The objective of this study was to carry out a translation and cultural adaptation of the Educational Practices Questionnaire into Spanish and analyze its reliability and validity.

Methods

Design

A two-phase study was conducted. In the first phase, the instrument was adapted to Spanish; in the second phase, the psychometric properties of the EPQ questionnaire translated into Spanish were analyzed.

Participants and setting

The study sample consisted of 626 nursing students from the 2018–19 academic year at the Campus Docent Sant Joan de Déu Fundació Privada [Sant Joan de Déu Private Foundation Teaching Campus], a center affiliated with the Universidad de Barcelona [University of Barcelona]. Non-probability convenience sampling was used. Students who had performed a clinical simulation during the course were included; only those who were not present at the time of the simulation were excluded.

To calculate the sample size, the recommendations of Comrey and Lee [26] for validation studies were followed, which consider a sample of more than 500 participants to be good.

Variables and source of information

All items related to the EPQ questionnaire were collected as variables. It is a questionnaire made up of 16 items that are grouped into four dimensions (active learning, collaboration, different ways of learning, and expectations). For each item, the same questionnaire makes it possible to assess both the presence of best educational practices and the importance of best practices integrated in clinical simulation [16].

Each item is evaluated using a scale with five possible answers, where 1) is strongly disagree, 2) disagree, 3) undecided, 4) agree, and 5) strongly agree. A higher sum of item scores represents greater recognition of best educational practices in simulation.
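The scoring described above can be sketched in a few lines. Note that the exact assignment of item numbers to dimensions shown here is hypothetical and for illustration only; the item counts per dimension (10 for active learning, 2 for each of the other three) follow from the description of the instrument in the text.

```python
# Minimal EPQ scoring sketch. Each of the 16 items is rated 1 (strongly
# disagree) to 5 (strongly agree); higher totals reflect greater recognition
# of best educational practices.

# Hypothetical item-index mapping (illustration only; consult the instrument).
DIMENSIONS = {
    "active_learning": list(range(0, 10)),   # 10 items
    "collaboration": [10, 11],               # 2 items
    "diverse_ways_of_learning": [12, 13],    # 2 items
    "high_expectations": [14, 15],           # 2 items
}

def score_epq(responses):
    """Return the total and per-dimension sums for one respondent."""
    assert len(responses) == 16 and all(1 <= r <= 5 for r in responses)
    scores = {dim: sum(responses[i] for i in idx)
              for dim, idx in DIMENSIONS.items()}
    scores["total"] = sum(responses)
    return scores

example = score_epq([4] * 16)  # respondent answering "agree" to every item
# example["total"] is 64 out of a possible 80
```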

In the study conducted by Jeffries and Rizzolo [16], 'the presence of educational best practices' obtained a Cronbach's alpha of 0.86 and 'importance of best practices embedded in simulation' obtained 0.91. This reliability was comparable to that reported in another study (0.95) [27].

Other variables were also collected, such as age, sex, teaching shift, whether the student was working, work shift, and previous work experience in the health field.

Procedure

The study was conducted in two phases, the first of which consisted of translating and adapting the English version into Spanish through an independent committee of bilingual (English to Spanish and Spanish to English) translators.

Fig 1 shows the entire translation and back-translation process that was performed following the Standards for Educational and Psychological Testing [28]. Table 1 shows the semantic equivalence of items from English to Spanish.

Fig 1. Description of phase: Adaptation to Spanish of the “Educational Practices Questionnaire” (EPQ).

https://doi.org/10.1371/journal.pone.0239014.g001

Table 1. Semantic equivalence of items from English to Spanish that were metrically validated.

https://doi.org/10.1371/journal.pone.0239014.t001

Finally, a pilot test was done, and the participants (n = 15) concluded that it required little time to complete (5 to 10 minutes) and that it was easy to understand. In the second phase, the questionnaire was administered to the nursing students who had performed a simulation in order to analyze the psychometric properties of the Spanish version. The Spanish version was named EPQ-Sp.

Statistical analysis

Confirmatory factor analysis (CFA) models were estimated using structural equation modelling on the polychoric correlation matrix (EQS 6.2 for Windows, Multivariate Software, Inc., Encino, CA, USA). It is important to note that the Presence and Importance in Simulation scales were analyzed as separate scales (i.e., factor analyses were computed first for the 16 items of the Presence scale, and then for the 16 items of the Importance in Simulation scale).

A CFA was performed to analyze construct validity using the generalized least squares method. Goodness of fit was examined in terms of the normed Chi-square, defined as the ratio between the Chi-square value and the number of degrees of freedom (χ2/df), the Adjusted Goodness of Fit Index (AGFI), Goodness of Fit Index (GFI), Comparative Fit Index (CFI), Bentler-Bonett Non-Normed Fit Index (BBNNFI), Bentler-Bonett Normed Fit Index (BBNFI), Standardized Root Mean Square Residual (RMSR) and Root Mean Square Error of Approximation (RMSEA). Overall fit was considered good when χ2/df was between 2 and 6 [29]; AGFI, GFI, CFI, BBNNFI and BBNFI were ≥ .90; and RMSR and RMSEA were ≤ .06 [30–32]. Reliability was analyzed using Cronbach's alpha coefficient [33] as well as the Omega index [34], since the latter makes it possible to analyze the degree of internal consistency based on the factor loadings and, unlike the alpha coefficient, does not depend on the number of items; this is appropriate given that three of the four dimensions (D2. Collaboration, D3. Diverse ways of learning and D4. High expectations) have only two items.

The following were considered acceptable values: Cronbach's alpha values greater than 0.70 [33, 35] and Omega Index values greater than 0.80 [34].
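As a companion to the reliability thresholds above, the following sketch shows one standard way to compute Cronbach's alpha from item scores and McDonald's omega from standardized unidimensional factor loadings. The data and loadings are simulated for illustration; they are not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha; items is an (n_respondents, k_items) score array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def mcdonald_omega(loadings):
    """Omega from standardized one-factor loadings; unlike alpha it is
    based on the loadings rather than the number of items."""
    lam = np.asarray(loadings, dtype=float)
    uniq = 1 - lam ** 2  # uniquenesses under the one-factor model
    return lam.sum() ** 2 / (lam.sum() ** 2 + uniq.sum())

# Simulated 5-point Likert responses driven by one common factor.
rng = np.random.default_rng(0)
true_score = rng.normal(size=(300, 1))
data = np.clip(np.round(3 + true_score + rng.normal(scale=0.8, size=(300, 4))),
               1, 5)
alpha = cronbach_alpha(data)           # lies between 0 and 1 here
omega = mcdonald_omega([0.8, 0.75, 0.7, 0.85])
```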

The inspection of the estimated parameters of the CFA suggested that a unidimensional factor solution could also be a plausible option. In order to assess whether the scale could be considered essentially unidimensional, we computed the Explained Common Variance (ECV) and Unidimensional Congruence (UniCo) indices, which assess the degree of dominance of the general factor, or closeness to unidimensionality [36]. The ECV index essentially measures the proportion of common variance in the item scores that can be accounted for by the first canonical factor (i.e., the factor that explains most common variance). The UniCo index is the congruence between the actual loading matrix and the loading matrix that would be obtained if the unidimensional model were true: the closer to 1, the more the actual loading matrix resembles the unidimensional loading matrix. To conclude that a scale is essentially unidimensional, ECV should be larger than 0.850 [36] and UniCo larger than 0.950 [37]. Finally, we also computed the Optimal Implementation of Parallel Analysis (PA) [38].
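To make the ECV index concrete, one common formulation computes it from a bifactor loading matrix as the share of common variance carried by the general factor relative to the general plus group factors. The loadings below are invented for illustration.

```python
import numpy as np

def explained_common_variance(general, group):
    """ECV from a bifactor solution: general is a (k,) vector of
    general-factor loadings, group a (k, g) matrix of group-factor
    loadings. Returns the proportion of common variance explained
    by the general factor."""
    g2 = np.asarray(general, dtype=float) ** 2
    s2 = np.asarray(group, dtype=float) ** 2
    return g2.sum() / (g2.sum() + s2.sum())

# Invented loadings for a 4-item, 2-group-factor example.
general = np.array([0.70, 0.65, 0.72, 0.68])
group = np.array([[0.30, 0.00],
                  [0.25, 0.00],
                  [0.00, 0.35],
                  [0.00, 0.28]])
ecv = explained_common_variance(general, group)
# values above the 0.850 cut-off would be read as support for
# essential unidimensionality
```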

In order to explore the loading values of the items in a unidimensional solution, an exploratory factor analysis (EFA) was computed. Item scores were treated as ordered-categorical variables and the EFA was fitted to the inter-item polychoric correlation matrix [39]. The chosen fitting function was robust unweighted least squares, with mean-and-variance corrected fit statistics [40]. A single factor was extracted.

We were also interested in assessing a bifactor model for the Presence scale. Factor analysis applications to item scores are generally based on one of two models: (a) the unidimensional (Spearman) model or (b) the correlated-factors model. The bifactor model combines both: it allows the hypothesis of a general dimension to be maintained, while the additional common variance among the scores is modelled using group factors that are expected to approach a simple structure [41]. In particular, we computed the Pure Exploratory Bifactor (Pebi) model proposed by Lorenzo-Seva & Ferrando and implemented in the Factor software [42]. As the scale was aimed at measuring four factors that were expected to approach a simple structure, these factors were rotated using Robust Promin rotation [42].

Ethical considerations

The study was approved by the Clinical Investigation Ethics Committee of the Sant Joan de Déu Foundation with CEIC research code PIC-42-19. All participants were informed of the purpose of the study and freely gave their verbal and written consent to participate as volunteers. The translation was completed with the consent of the National League for Nursing (NLN), but the NLN is not responsible for its accuracy. The NLN holds the copyright to the original (English language) instrument and to the translated instrument in Spanish. Any request related to the translated instrument in Spanish must be addressed to the NLN. More information about research instruments and copyright is available on the NLN website (http://www.nln.org/professional-development-programs/research/tools-and-instruments).

Results

Demographic characteristics

A total of 626 nursing students were included in the study. The mean age was 22.9 years (SD 5.1), and 83.4% were women. More than half of the students (57.7%) were enrolled in the morning study schedule. 74.4% of the students declared that they were working at the time, and of these, 62.4% had temporary employment (Table 2).

Table 2. Sociodemographic characteristics of the study population.

https://doi.org/10.1371/journal.pone.0239014.t002

Construct validity

In the following subsections, we present the different analyses that we computed to assess construct validity: Confirmatory Factor Analysis (CFA), Essential Unidimensionality, and Exploratory Bifactor (PEBI).

Confirmatory Factor Analysis (CFA).

Confirmatory factor analysis was used to verify the internal structure of the questionnaire, proposing a four-dimensional model identical to the structure of the original version. Parameter estimation was performed using the least squares method. This method is typically used for ordinal measurement items and has the same properties as the maximum likelihood method, but under less stringent multivariate normality assumptions [43].

Dimensions 2, 3 and 4 have the highest factor loadings (saturations) for both assessments (presence and importance of simulation). All loadings were greater than 0.50. The correlations between the factors were high for both the presence and the importance of simulation (Figs 2 and 3, respectively).

Fig 2. Standardized model parameters for the presence of educational best practices.

https://doi.org/10.1371/journal.pone.0239014.g002

Fig 3. Standardized model parameters for the importance of educational best practices.

https://doi.org/10.1371/journal.pone.0239014.g003

The Chi-square test was statistically significant, but the normed fit ratio (χ2/df) was 4.17 for the presence of good practices and 5.32 for importance; values between 2 and 6 indicate a reasonably good fit [43]. Likewise, the rest of the indices analyzed show the same trend, so it can be concluded that the model fits well (Table 3).

Table 3. Indices of goodness of fit of the confirmatory model.

https://doi.org/10.1371/journal.pone.0239014.t003

Essential unidimensionality.

As can be observed in Figs 2 and 3, the CFA model was fitted with factors that were strongly correlated among themselves. This outcome was observed for both the Presence and Importance in Simulation scales. As the correlations were large, a unidimensional factor model could also be expected to fit properly. To evaluate this hypothesis, we computed an analysis to assess essential unidimensionality. For the Presence scale, the ECV and UniCo values were 0.845 and 0.973, respectively; for the Importance in Simulation scale, they were 0.849 and 0.979. The values of both indices suggested that a dominant factor runs through all 16 items in both scales. In addition, the first eigenvalue of the Presence and Importance in Simulation scales accounted for 50.9% and 58.2% of the common variance, respectively. PA suggested that the unidimensional solution is the most replicable in both cases.

Goodness of fit indices for the single-factor model are shown in Table 4. As can be observed, the fit is not as good as that of the multidimensional model tested in the CFA, but it is still acceptable. Finally, the Expected a Posteriori (EAP) reliability [44] of the single factor was 0.926 and 0.947 for the Presence and Importance in Simulation scales, respectively.

Table 4. Indices of goodness of fit of the exploratory unidimensional model.

https://doi.org/10.1371/journal.pone.0239014.t004

Exploratory bifactor (PEBI).

While the unidimensional solution is acceptable for both the Presence and Importance in Simulation scales, the fit indices inspected in the previous subsection show that the fit of the unidimensional solution is slightly worse for the Presence scale. This means that, while the general factor for Presence is strong, the four group factors may still play a substantial role in the factor model. To test this hypothesis, we fitted an exploratory bifactor model to the Presence scale. Goodness of fit indices for the bifactor model are shown in Table 5; as can be observed, the fit was very good. In addition, all items had a salient loading on the general factor and on the expected group factor (see the loading matrix in Table 6). It should be pointed out that some items expected to measure Active learning (items 4, 5, 7 and 9) also loaded on High expectations. The correlations among group factors ranged from 0.04 to 0.20. Finally, ORION reliabilities of the factors [44] ranged from 0.712 (High expectations) to 0.920 (Active learning). The general factor showed an ORION reliability of 0.881.

Table 5. Indices of goodness of fit of the exploratory bifactor model of Presence scale.

https://doi.org/10.1371/journal.pone.0239014.t005

Table 6. Loading matrix related to the exploratory bifactor solution.

https://doi.org/10.1371/journal.pone.0239014.t006

Conclusion of construct validity analyses.

The conclusion is that both models (unidimensional and multidimensional) are acceptable. From a practical point of view, this means that researchers can compute the overall scale score (i.e., the score obtained from the responses to all items), but also the scores on the four subscales when a more detailed description of participants' responses is needed.

Reliability

Cronbach's alpha internal consistency coefficient for the questionnaire total was 0.894 for the presence scale and 0.915 for the importance scale. The Omega (ω) coefficient for the questionnaire total was 0.922 (presence) and 0.945 (importance). All values obtained for each dimension and each coefficient were greater than 0.762 (Table 7).

Table 7. Internal consistency coefficient (Cronbach’s alpha and Omega) for the Educational Practices Questionnaire (EPQ).

https://doi.org/10.1371/journal.pone.0239014.t007

Discussion

This study describes the adaptation to Spanish and the psychometric analysis of the “Educational Practices Questionnaire” (EPQ), a 16-item questionnaire designed to evaluate both the presence of best educational practices and the importance of best practices integrated in simulation. The results show that the Spanish EPQ has adequate psychometric properties in terms of internal consistency and construct validity. Internal consistency calculated with Cronbach's alpha coefficient was adequate (α ≥ 0.70) for the questionnaire total and for each of the dimensions [35]. The highest alpha value was found for dimension D1. Active learning. For the remaining dimensions (D2. Collaboration, D3. Learning diversity and D4. High expectations), alpha varied between 0.762 and 0.836. Since several dimensions (D2, D3 and D4) have only two items, the Omega index was also calculated; internal consistency according to McDonald (2013) was adequate (ω ≥ 0.85). This instrument has been translated into other languages (Turkish and Portuguese), with reported values similar to those found in our study [27, 45, 46].

The CFA revealed an adequate fit of the 4-factor structure, consistent with the original version [16].

In our study, a confirmatory factor analysis was carried out using the generalized least squares method in order to determine whether the scores reproduced the four-dimensional structure on which the original questionnaire is based. The confirmatory factor analysis showed that all the items presented an adequate factor loading. With respect to the fit indices analyzed for the model, the absolute fit indices (GFI, RMSR, RMSEA), the incremental fit indices (AGFI, BBNFI, BBNNFI, CFI) and the parsimony indices such as the normed Chi-square all present an acceptable fit. The fit of the model was comparable to that in the study by Franklin et al. (2014) [27]. In addition, we computed an exploratory factor analysis and observed that the unidimensional factor solution is also acceptable for both scales of the test (Presence and Importance in Simulation). In the case of the Presence scale, it is interesting that a bifactor model can be fitted: this means that, while the scale seems to be essentially unidimensional, the four group factors still play a substantial role in the factor model. Our outcomes reinforce the idea that participants' scores can be computed and interpreted for the general factor, but also for the four subscales.

Limitations

Our study has several limitations. First of all, we selected a sample of convenience from a single university in Barcelona, and therefore, it is possible that our results cannot be generalized to all nursing students. However, the socio-demographic and work characteristics of the students in this study are similar to other universities in Spain and Europe.

Secondly, there may be response bias; that is, the power of the facilitator over the nursing student may have an impact on responses. This bias was minimized by administering the questionnaire anonymously, and additionally, none of the investigators participated in the simulation activity.

Finally, future studies should investigate the predictive capacity (sensitivity and specificity) of the EPQ-Sp questionnaire, as well as its temporal stability.

Conclusions

The Educational Practices Questionnaire is a simple, easy-to-administer tool for measuring nursing degree students' perceptions of the presence of best educational practices and the importance of best practices integrated into clinical simulation. The statistical techniques used in this study add solid evidence to support the use of the EPQ questionnaire in Spanish and to ensure that simulation judgments are reliable and valid.

Acknowledgments

We would like to thank all of the nursing students who participated in the study.

References

  1. 1. Tosterud R, Petzäll K, Hedelin B, Hall-Lord ML. Psychometric testing of the norwegian version of the questionnaire, student satisfaction and self-confidence in learning, used in simulation. Nurse Educ Pract. 2014;14(6):704–708. pmid:25458231
  2. 2. Nakayama N, Arakawa N, Ejiri H, Matsuda R, Makino T. Heart rate variability can clarify students’ level of stress during nursing simulation. PLoS One. 2018;13(4).
  3. 3. Rizzolo MA, Kardong-Edgren S, Oermann MH, Jeffries PR. The national league for nursing project to explore the use of simulation for high-stakes assessment: Process, outcomes, and recommendations. Nurs Educ Perspect. 2015;36(5):299–303. pmid:26521498
  4. 4. Groom JA, Henderson D, Sittner BJ. National League for Nursing-Jeffries Simulation Framework State of the Science Project: Simulation Design Characteristics. Clin Simul Nurs. 2013;10(7):337–344.
  5. 5. Zhu FF, Wu LR. The effectiveness of a high-fidelity teaching simulation based on an NLN/Jeffries simulation in the nursing education theoretical framework and its influencing factors. Chinese Nurs Res. 2016;3(3):129–132.
  6. 6. Jeffries PR. A framework for designing, implementing, and evaluating: Simulations used as teaching strategies in nursing. Nurs Educ Perspect. 2005;26(2):96–103. pmid:15921126
  7. 7. Chickering AW, Gamson ZF. Seven Principles for Good Practice in Undergraduate Education Seven Principles of Good Practice. AAHE Bull. 1987;(March):3–7.
  8. 8. Olaussen C, Heggdal K, Tvedt CR. Elements in scenario-based simulation associated with nursing students’ self-confidence and satisfaction: A cross-sectional study. Nurs Open. 2020;7(1):170–179.
  9. 9. Jeffries PR. Simulation in Nursing Education: From Conceptualization to Evaluation. Washington: National League for Nursing; 2012. http://books.google.fr/books/about/Simulation_in_Nursing_Education.html?id = f87EMQEACAAJ&pgis = 1
  10. Morrow MR. Monograph Review: The NLN Jeffries Simulation Theory (2016), edited by Pamela R. Jeffries. Nurs Sci Quart. 2018;31(4):392.
  11. Reese CE, Jeffries PR, Engum SA. Learning together: Using simulations to develop nursing and medical student collaboration. Nurs Educ Perspect. 2010;31(1):33–37.
  12. Lubbers J, Rossman C. Satisfaction and self-confidence with nursing clinical simulation: Novice learners, medium-fidelity, and community settings. Nurse Educ Today. 2017;48:140–144. pmid:27810632
  13. Rodriguez KG, Nelson N, Gilmartin M, Goldsamt L, Richardson H. Simulation is more than working with a mannequin: Student’s perceptions of their learning experience in a clinical simulation environment. J Nurs Educ Pract. 2017;7(7):30.
  14. Lioce L, Meakim CH, Fey MK, Chmil JV, Mariani B, Alinier G. Standards of Best Practice: Simulation Standard IX: Simulation Design. Clin Simul Nurs. 2015;11(6):309–315.
  15. Jeffries PR. Simulations Take Educator Preparation. Nurs Educ Perspect. 2008;29(2):70–73. pmid:18459620
  16. Jeffries PR, Rizzolo MA. Designing and Implementing Models for the Innovative Use of Simulation to Teach Nursing Care of Ill Adults and Children: A National, Multi-Site, Multi-Method Study. Washington: National League for Nursing; 2006.
  17. Roldán-Merino J, Roca-Capara N, Miguel-Ruiz D, Rodrigo-Pedrosa O. Development and psychometric properties of the assessment questionnaire for the process of the tutorial action plan. Nurse Educ Today. 2019;76:109–117. pmid:30776532
  18. Roldán-Merino J, Miguel-Ruiz D, Roca-Capara N, Rodrigo-Pedrosa O. Personal tutoring in nursing studies: A supportive relationship experience aimed at integrating curricular theory and professional practice. Nurse Educ Pract. 2019;37:81–87. pmid:31129529
  19. Alconero-Camarero AR, Romero AG, Sarabia-Cobo CM, Arce AM. Clinical simulation as a learning tool in undergraduate nursing: Validation of a questionnaire. Nurse Educ Today. 2016;39:128–134. pmid:27006044
  20. Díaz JL, Ramos-Morcillo AJ, Amo FJ, Ruzafa-Martínez M, Hueso-Montoro C, Leal-Costa C. Perceptions about the self-learning methodology in simulated environments in nursing students: A mixed study. Int J Environ Res Public Health. 2019;16(23):4646.
  21. Díaz Agea JL, Megías Nicolás A, García Méndez JA, Adánez Martínez M de G, Leal Costa C. Improving simulation performance through Self-Learning Methodology in Simulated Environments (MAES©). Nurse Educ Today. 2019;76:62–67. pmid:30771611
  22. Sánchez Expósito J, Leal Costa C, Díaz Agea JL, Carrillo Izquierdo MD, Jiménez Rodríguez D. Ensuring relational competency in critical care: Importance of nursing students’ communication skills. Intensive Crit Care Nurs. 2018;44:85–91. pmid:28969955
  23. Roldán-Merino J, Farrés-Tarafa M, Estrada-Masllorens JM, Hurtado-Pardos B, Miguel-Ruiz D, Nebot-Bergua C, et al. Reliability and validity study of the Spanish adaptation of the “Creighton Simulation Evaluation Instrument (C-SEI).” Nurse Educ Pract. 2019;35:14–20. pmid:30640046
  24. Muller-Botti S, Maestre JM, del Moral I, Fey M, Simon R. Linguistic Validation of the Debriefing Assessment for Simulation in Healthcare in Spanish and Cultural Validation for 8 Spanish Speaking Countries. Simul Healthc. 2020. Ahead of print.
  25. Raurell-Torredà M, Olivet-Pujol J, Romero-Collado À, Malagon-Aguilera MC, Patiño-Masó J, Baltasar-Bagué A. Case-Based Learning and Simulation: Useful Tools to Enhance Nurses’ Education? Nonrandomized Controlled Trial. J Nurs Scholarsh. 2015;47(1):34–42. pmid:25346329
  26. Comrey AL, Lee HB. A First Course in Factor Analysis. 2nd ed. London: Routledge; 2016.
  27. Franklin AE, Burns P, Lee CS. Psychometric testing on the NLN Student Satisfaction and Self-Confidence in Learning, Simulation Design Scale, and Educational Practices Questionnaire using a sample of pre-licensure novice nurses. Nurse Educ Today. 2014;34(10):1298–1304. pmid:25066650
  28. Frey BB. Standards for Educational and Psychological Testing. In: The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation. California: SAGE Publications, Inc; 2018. p. 1–1657.
  29. Hu L, Bentler P. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct Equ Modeling [Internet]. 1999;6(1):1–55. Available from: http://www.tandfonline.com/doi/abs/10.1080/10705519909540118
  30. Kline RB. Methodology in the Social Sciences. Principles and practice of structural equation modeling. 3rd ed. New York: Guilford Press; 2011.
  31. Byrne BM. Structural Equation Modeling With AMOS: Basic Concepts, Applications, and Programming. 3rd ed. London: Routledge; 2016.
  32. Brown TA. Confirmatory factor analysis for applied research. 2nd ed. New York: The Guilford Press; 2015.
  33. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297–334.
  34. McDonald RP. Reliability Theory for Total Test Scores. In: Test Theory: A Unified Treatment. United Kingdom: Psychology Press; 2013. p. 62–75.
  35. Nunnally JC, Bernstein IH. The theory of measurement error. In: Psychometric Theory. 3rd ed. New York: McGraw-Hill; 1994. p. 209–247.
  36. Ferrando PJ, Lorenzo-Seva U. On the Added Value of Multiple Factor Score Estimates in Essentially Unidimensional Models. Educ Psychol Meas. 2019;79(2):249–271. pmid:30911192
  37. Lorenzo-Seva U, ten Berge JMF. Tucker’s congruence coefficient as a meaningful index of factor similarity. Methodology. 2006;2(2):57–64.
  38. Timmerman ME, Lorenzo-Seva U. Dimensionality assessment of ordered polytomous items with parallel analysis. Psychol Methods. 2011;16(2):209–220. pmid:21500916
  39. Ferrando PJ, Lorenzo-Seva U. Unrestricted item factor analysis and some relations with item response theory. Technical Report. Tarragona: Universitat Rovira i Virgili; 2013. Available from: https://psico.fcep.urv.cat/utilitats/factor/documentation/technicalreport.pdf
  40. Ferrando PJ, Lorenzo-Seva U. Program FACTOR at 10: Origins, development and future directions. Psicothema. 2017;29(2):236–240. pmid:28438248
  41. Holzinger KJ, Swineford F. The Bi-factor method. Psychometrika. 1937;2(1):41–54.
  42. Lorenzo-Seva U, Ferrando PJ. A General Approach for Fitting Pure Exploratory Bifactor Models. Multivariate Behav Res. 2019;54(1):15–30. pmid:30160535
  43. Rial A, Varela J, Abalo J, Lévy JP. El análisis factorial confirmatorio [Confirmatory factor analysis]. In: Lévy JP, Varela J. Modelización con estructuras de covarianzas en ciencias sociales: temas esenciales, avanzados y aportaciones especiales. A Coruña: Gesbiblo S.L.; 2006. p. 119–154.
  44. Ferrando PJ, Lorenzo-Seva U. A note on improving EAP trait estimation in oblique factor-analytic and item response theory models. Psicologica. 2016;37(2):235–247.
  45. Guimaraes R, Mazzo A, Amado JC, Negrao RC, Berchelli F, Mendes IA. Validation to Portuguese of the scale of student satisfaction and self-confidence in learning. Rev Lat Am Enfermagem. 2015;23(6):1007–1013. pmid:26625990
  46. Unver V, Basak T, Watts P, Gaioso V, Moss J, Tastan S, et al. The reliability and validity of three questionnaires: The Student Satisfaction and Self-Confidence in Learning Scale, Simulation Design Scale, and Educational Practices Questionnaire. Contemp Nurse. 2017;53(1):60–74. pmid:28084900