
Actor feedback and rigorous monitoring: Essential quality assurance tools for testing behavioral interventions with simulation

  • Martha A. Abshire ,

    Roles Formal analysis, Writing – original draft, Writing – review & editing

    mabshir1@jhu.edu

    Affiliation School of Nursing, Johns Hopkins University, Baltimore, Maryland, United States of America

  • Xintong Li,

    Roles Formal analysis, Writing – review & editing

    Affiliation Department of Epidemiology, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, United States of America

  • Pragyashree Sharma Basyal,

    Roles Investigation, Project administration, Writing – review & editing

    Affiliations Outcomes After Critical Illness and Surgery (OACIS) Group, Johns Hopkins University, Baltimore, Maryland, United States of America, Division of Pulmonary and Critical Care Medicine, School of Medicine, Johns Hopkins University, Baltimore, Maryland, United States of America

  • Melissa L. Teply,

    Roles Conceptualization, Data curation, Writing – review & editing

    Affiliation Division of Geriatrics, Gerontology, and Palliative Medicine, Department of Internal Medicine, University of Nebraska Medical Center, Omaha, Nebraska, United States of America

  • Arun L. Singh,

    Roles Conceptualization, Data curation, Writing – review & editing

    Affiliation Division of Pediatric Palliative Medicine, Prisma Health Children’s Hospital – Upstate, University of South Carolina School of Medicine – Greenville, Greenville, South Carolina, United States of America

  • Margaret M. Hayes,

    Roles Conceptualization, Methodology, Writing – review & editing

    Affiliations Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, United States of America, Carl J. Shapiro Institute for Education and Research at Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, United States of America

  • Alison E. Turnbull

    Roles Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Supervision, Writing – review & editing

    Affiliations Department of Epidemiology, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, United States of America, Outcomes After Critical Illness and Surgery (OACIS) Group, Johns Hopkins University, Baltimore, Maryland, United States of America, Division of Pulmonary and Critical Care Medicine, School of Medicine, Johns Hopkins University, Baltimore, Maryland, United States of America

Abstract

Introduction

Simulation is a powerful tool for training and evaluating clinicians. However, few studies have examined the consistency of actor performances during simulation-based medical education (SBME). The Simulated Communication with ICU Proxies trial (ClinicalTrials.gov NCT02721810) used simulation to evaluate the effect of a behavioral intervention on physician communication. The purpose of this secondary analysis of data generated by the quality assurance team during the trial was to assess how quality assurance monitoring procedures affected rates of actor errors during simulations.

Methods

The trial used rigorous quality assurance to train actors, evaluate performances, and ensure the intervention was delivered within a standardized environment. The quality assurance team evaluated video recordings and documented errors. Actors received timely, formative feedback and participated in group feedback sessions.

Results

Error rates varied significantly across the three actors (H(2) = 8.22, p = 0.02). In adjusted analyses, the incidence of actor error decreased over time, and errors decreased sharply after the first group feedback session (Incidence Rate Ratio = 0.25, 95% confidence interval 0.14–0.42).

Conclusions

Rigorous quality assurance procedures may help ensure consistent actor performances during SBME.

Introduction

Simulation-based medical education (SBME), a current mainstay of medical education, provides opportunities for clinical training and evaluation in a safe, controlled environment. It also improves communication, teamwork, and patient outcomes such as time to first compressions in CPR. [1,2] SBME can include low-fidelity simple task trainers, high-fidelity mannequins, and, at the highest fidelity, ‘standardized patients’. [3–5] Typically, standardized patients are actors trained to replicate a specific clinical scenario. When simulation is used to test whether an intervention affects clinician behavior in the setting of a randomized trial, consistent actor performances are essential. Because the actors are given the same scenario or script, it is often assumed that their performances are similar. But without excellent consistency, it becomes impossible to determine whether differences in clinician behavior are attributable to the intervention or to differences in actor performances. In this situation, a rigorous quality assurance process is needed to ensure each actor’s performance adheres closely to the study scenario. While the importance of quality assurance is recognized, few studies have examined the consistency of actor performances over time and across actors. [3,6–8]

The Simulated Communication with ICU Proxies (SCIP) study was a double-blind randomized controlled trial (RCT) of an intervention designed to influence ICU physician communication behaviors (ClinicalTrials.gov NCT02721810). [9] The trial tested whether ICU physicians (intensivists) randomized to document prognosis for a hypothetical patient at high risk of death were more likely to discuss the option of comfort care in a simulated family meeting. Actors in the trial were required to react and respond to enrolled intensivists according to a detailed script during simulations. Deviations from the script were treated as errors. The aims of this secondary analysis of quality assurance data collected during the SCIP trial were to evaluate the simulation quality assurance program by 1) comparing the incidence of errors across three actors, 2) determining whether the incidence of errors changed over time, and 3) assessing whether the rate of actor errors decreased after group feedback sessions.

Methods

Auditions and hiring actors

The SCIP trial hypothesized that physicians who record prognosis for a patient at high risk of death or severely impaired functional recovery prior to a simulated family meeting are more likely to disclose prognosis and offer the option of care focused on comfort during the meeting. The Johns Hopkins Medicine Institutional Review Board approved the study (IRB 00082272). The actors portraying the hypothetical patient’s proxy decision-maker during the simulation are referred to as Standardized Family Members (SFM). The study team collaborated with the Johns Hopkins Simulation Center to identify experienced, female, African-American SFMs between the ages of 50 and 70. During auditions, candidate SFMs participated in an early version of the study scenario. They were also interviewed to discuss availability for rehearsals and study visits, compensation, and previous experiences as a patient or patient proxy. Critical to the auditioning process was the ability of the SFMs to receive critical feedback and incorporate it into their subsequent performances. Selected SFMs were invited to a second audition to test their ability to adapt their performance based on initial feedback. New physicians displaying different behaviors participated in the second round of auditions to demonstrate the diversity of physician behaviors the actors might encounter during the trial. Finally, three SFMs were hired based on their performances, responsiveness to feedback, and scheduling availability. All hired SFMs lived in the Baltimore metro region and had personal experience advocating for hospitalized family members. Three SFMs were hired to ensure availability at times convenient for study participants, including evenings and weekends, which was critical to meeting the trial’s recruitment target. However, hiring multiple SFMs created a need for vigilance regarding standardization of the performance and fidelity to the study script.

Developing the patient scenario

The scenario described an 81-year-old, African-American male with acute respiratory distress syndrome (ARDS) complicated by septic shock on day 3 of treatment in a medical ICU. The patient’s probability of in-patient mortality at admission was estimated at 64% using APACHE III. [10] By ICU day 3, his probability of in-patient mortality had climbed to 88% according to the Mortality Probability Model II-72 hours. [11,12] Although these model-based estimates were not provided to study participants, the scenario provided clear indicators that the patient was at high risk of in-hospital mortality and likely to require 24-hour nursing care in a residential facility if he survived hospitalization. The scenario was presented to physicians in a paper chart that included flow sheets, physical examination findings, past medical history, laboratory values, ventilator settings, a radiology report, and summary assessments and plans from the first three days of ICU admission. All documents and data were de-identified and adapted from the medical record of an actual patient to enhance the realism of the scenario.

Developing the simulation script

A high-fidelity simulation was essential to ensure valid trial results. The intervention being tested was expected to have the greatest effect on physician behavior during meetings with passive family members with low health literacy. Therefore, the simulation script called for the SFM to portray a daughter who had minimal understanding of her father’s clinical condition and was, at the meeting’s outset, unaware that her father was sick enough to die. The script called for the daughter to be passive and deferential, and to neither question nor challenge information provided by physicians. If the physician disclosed that the patient was sick enough to die, the scripted response was surprise and emotion, followed by a single clarifying question: “What do you think is most likely to happen?”

Study investigators created a list of 20 questions and statements that enrolled ICU physicians were likely to make based on previous studies of ICU physician behavior [13–15] and clinical experience. Responses to each statement and question were then drafted based on published research [16,17] and clinical experience. To further ensure a high degree of realism, quotes from transcripts of actual ICU family meetings recorded during a prior study [18] were included in the script as examples of authentic proxy responses. The script also included background information on the psychosocial and health history of the hypothetical patient and his daughter (see Table 1, S1 and S2 Tables). Finally, the script was reviewed by the SFMs, who edited responses for authenticity to local Baltimore colloquialisms and vernacular.

Table 1. Association between additional simulation performances and error incidence rate.

https://doi.org/10.1371/journal.pone.0233538.t001

Actor training

The first step in actors’ training was a read-through and discussion of the script with the study team. Second, each actor participated in a video-recorded rehearsal simulation with a physician who was ineligible for the study. Third, the study team reviewed the rehearsal videos with the actors as a group and provided feedback to help the actors identify physician statements requiring a scripted response. Steps 2 and 3 (rehearsal simulation and feedback session) were repeated 3 times until the study team was confident that actors could portray realistic emotional responses, adhere to the script, and provide appropriate scripted responses to key physician statements.

Roles and responsibilities of the quality assurance team

Two clinical fellows were recruited to serve as a Quality Assurance (QA) team tasked with monitoring actor performances during study simulations. In addition to helping draft the script, the QA team participated in SFM training, reviewed each simulation to identify actor errors, and decided whether each error could have influenced the enrolled physician’s behavior during the simulation. Errors which the QA team believed had the potential to influence one of the main study outcomes were designated as “major” errors. For example, when a SFM introduced herself and said “I guess I’m the decision-maker,” this was marked as a minor error and the SFM was instructed to reply “I’m his only child” instead. While referring to oneself as a decision-maker was unlikely to change how the physician behaved, the study script called for the SFM to portray a family member who is unaware of how proxy decision-makers are identified and unfamiliar with medical jargon. In contrast, the SFM response “Umm…I don’t know what all that means,” was marked as a major error since the study script instructed SFMs not to seek clarification, and doing so could have prompted the physician to provide clear information about prognosis, which was a main study outcome. An operations manual for reviewing simulation videos was developed by the QA team to ensure consistency over the year-long course of the study.

Both members of the quality assurance team independently reviewed the video recordings as well as the written transcription of each simulated family meeting. They used a Research Electronic Data Capture (REDCap) form to record performance evaluations via a standardized questionnaire. [19] Then, the Data Comparison Tool for Double Data Entry in REDCap was used to compare responses and identify discrepancies. Discrepancies were resolved via discussion between the QA Team, Principal Investigator, and Research Study Coordinator.
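As an illustration of this double-review reconciliation step, here is a generic sketch in R. The study itself used REDCap’s built-in Data Comparison Tool; review_a, review_b, and sim_id below are hypothetical names standing in for each reviewer’s independently completed evaluation form.

```r
# Two hypothetical data frames with identical columns, one row per simulation,
# completed independently by each QA reviewer
stopifnot(identical(names(review_a), names(review_b)))

# Locate every cell where the two reviewers disagree
a <- as.matrix(review_a)
b <- as.matrix(review_b)
hits <- which(a != b, arr.ind = TRUE)

# One row per discrepancy, ready for adjudication by discussion
discrepancies <- data.frame(
  simulation = review_a$sim_id[hits[, "row"]],
  field      = colnames(a)[hits[, "col"]],
  reviewer_a = a[hits],
  reviewer_b = b[hits]
)
discrepancies
```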

Performance feedback

Actors received feedback on their performances throughout the trial in three ways (Fig 1). First, each simulation was run by a study team member (principal investigator, study coordinator, or research assistant) who provided immediate feedback to the actor based on real-time observation of the simulation.

Fig 1. The three modes of feedback used to standardize actor (SFM) performances.

Abbreviations: PI, Principal Investigator; QA, Quality Assurance; SFM, Standardized Family Members.

https://doi.org/10.1371/journal.pone.0233538.g001

Second, the written report generated by the QA team was shared with the actor, study coordinator, and principal investigator. Each written report highlighted exactly where in the simulation transcript any errors occurred and suggested how the actor ideally should have responded (see Table 2, S1 and S2 Tables). After receiving these reports, actors had the opportunity to review the video and transcripts of their performances and discuss the report with the Principal Investigator or Research Study Coordinator. To minimize the risk of bias and preserve anonymity, QA team members did not communicate directly with SFMs. Finally, three 60-minute group feedback sessions involving all actors, the Principal Investigator, and the Research Study Coordinator were scheduled. Sessions were conducted after 9 (group feedback 1), 32 (group feedback 2), and 77 (group feedback 3) cumulative simulations. The first group feedback session was scheduled to ensure each actor had completed at least 2 study simulations before the session.

Table 2. Association between study periods and error incidence rate.

https://doi.org/10.1371/journal.pone.0233538.t002

Group feedback sessions provided a forum for actors to discuss improvised responses to unexpected questions not addressed in the study script and offer each other suggestions for handling challenging situations. In addition, feedback sessions were used to review summarized feedback provided by the participating doctors on perceived conflict and realism of the simulation, to update the actors regarding additions or amendments to the study script, and to point out trends or concerns identified by the QA team. These group sessions also served as an opportunity for actors to share their experiences with one another and discuss techniques they had developed for recognizing intensivist behaviors requiring specific responses.

Statistical analyses

The primary outcome of interest was the number of SFM errors identified by the QA team in each simulation. All analyses were repeated using the subset of errors identified by the QA team as having the potential to influence an enrolled physician’s behavior during the simulation. This subset of errors was designated as “major errors.” Error rates across the three SFMs were visualized using violin plots and then compared using the Kruskal-Wallis rank sum test. For exploratory purposes, the incidence of errors over time was displayed by plotting the number of errors in each simulation sequentially for each SFM (Fig 2), and then versus calendar date (Fig 3). Fig 2 includes loess-smoothed plots of each SFM’s performance errors, with 95% confidence intervals calculated using bootstrapped standard errors.
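To illustrate, a minimal sketch of these descriptive analyses in R is shown below. The data frame sims and its columns (sfm, order, errors) are hypothetical stand-ins for the study data, and ggplot2’s default loess confidence bands are used here in place of the bootstrapped standard errors reported in the paper.

```r
# Hypothetical data: one row per simulation, with the SFM's initial (sfm),
# the sequence number of that SFM's performance (order), and the QA error
# count (errors)
library(ggplot2)

# Compare error distributions across the three SFMs (Kruskal-Wallis rank sum test)
kruskal.test(errors ~ sfm, data = sims)

# Violin plot of errors per simulation by SFM (cf. S1 Fig)
ggplot(sims, aes(x = sfm, y = errors)) +
  geom_violin() +
  geom_jitter(width = 0.1, height = 0.1) +
  labs(x = "Standardized family member", y = "Errors per simulation")

# Errors in sequential simulations with loess smoothing (cf. Fig 2);
# note geom_smooth() draws standard-error bands, not bootstrapped ones
ggplot(sims, aes(x = order, y = errors, colour = sfm)) +
  geom_jitter(width = 0.2, height = 0.1) +
  geom_smooth(method = "loess") +
  labs(x = "Simulation number", y = "Errors per simulation")
```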

Fig 2. Number of errors in sequential simulations performed by each of the three standardized family members (SFM).

The SFM are represented by their first initials B, H, and K. Points are jittered to improve visualization. Trend for each SFM is shown using loess smoothing with 95% confidence intervals.

https://doi.org/10.1371/journal.pone.0233538.g002

Fig 3. Number of errors in each simulation by calendar date.

Point colors correspond to the three standardized family members who are represented by their first initials B, H, and K.

https://doi.org/10.1371/journal.pone.0233538.g003

To assess whether the risk of a SFM committing errors decreased with practice, we regressed total errors on the number of simulations each SFM had performed. To assess whether group feedback sessions decreased the incidence rate of errors, errors were regressed on a variable indicating the number of group feedback sessions which had occurred. Calendar dates were grouped into four periods corresponding to dates before any group feedback sessions (Period 0), dates between the first and second group feedback sessions (Period 1), etc. Dates before the first group feedback session were used as the reference period.
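One way to construct that period variable is sketched below, under the assumption of a hypothetical sims data frame with a date column and placeholder session dates fb1, fb2, and fb3 (none of these names come from the study data):

```r
# Hypothetical dates of the three group feedback sessions
feedback_dates <- as.Date(c(fb1, fb2, fb3))

# findInterval() returns 0 for simulations before the first session,
# 1 between the first and second, and so on, matching Periods 0-3
sims$period <- factor(findInterval(as.numeric(sims$date),
                                   as.numeric(feedback_dates)))
sims$period <- relevel(sims$period, ref = "0")  # Period 0 as reference
```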

Poisson regression was used to estimate incidence rate ratios, [20,21] with standard errors computed via the delta method. Models were adjusted for the SFM performing in the simulation and for whether the physician disclosed that the patient was sick enough to die. Physician disclosure was included in adjusted models because it required the SFM to perform an emotional response and ask the scripted clarifying question, which increased the likelihood of error. The null hypothesis of equidispersion (mean equal to variance) was tested for each model. As a sensitivity analysis, all analyses were repeated using major errors as the dependent variable. All analyses were performed with R 3.6.0 (R Core Team, 2019; Vienna, Austria).
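A minimal sketch of these models in R, assuming the hypothetical sims data frame introduced above (order, period, sfm, and disclosed are illustrative column names, not the study’s). Wald confidence intervals on the log scale, exponentiated, stand in here for the paper’s delta-method standard errors, and the AER package’s dispersiontest() is used as one way to test equidispersion:

```r
library(AER)  # provides dispersiontest() for fitted Poisson models

# Does the error rate fall with each additional performance?
m1 <- glm(errors ~ order + sfm + disclosed, family = poisson, data = sims)

# Does the error rate fall after group feedback sessions (Period 0 = reference)?
m2 <- glm(errors ~ period + sfm + disclosed, family = poisson, data = sims)

# Incidence rate ratios with 95% confidence intervals
exp(cbind(IRR = coef(m2), confint.default(m2)))

# Test the equidispersion assumption (mean equal to variance)
dispersiontest(m1)
dispersiontest(m2)
```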

Results

The three SFMs performed in 44, 47, and 25 simulations, respectively, due to physician scheduling and differences in SFM availability. There was a statistically significant difference between the total error rates of the three SFMs (H(2) = 8.22, p = 0.02) and between their rates of major errors (H(2) = 8.33, p = 0.02) (see S1 Fig). The number of errors in each sequential simulation stratified by SFM is displayed in Fig 2, with the greatest decline apparent during the first 12 simulations performed by each SFM. The timing of group feedback sessions and errors during simulations is presented in Fig 3, which shows that the frequency of errors decreased after the first group feedback session and then remained relatively constant.

In adjusted analyses, each additional performance by a SFM was associated with a statistically significant decrease in the risk of committing an error (Incidence Rate Ratio [IRR] = 0.95, 95% confidence interval [CI] 0.93–0.97, P<0.001) and in the risk of committing a major error (IRR = 0.96, 95% CI 0.93–0.98, P<0.001) (Table 1). The incidence of errors in simulations performed after one, two, or three group feedback sessions (Periods 1–3) was also significantly lower than the incidence prior to the first group feedback session (Table 2). This improvement was most evident after the first group feedback session, when the adjusted incidence rate of major errors decreased by 75% (IRR = 0.25, 95% CI 0.14–0.42).

Discussion

SBME is a well-established tool for formative assessment of healthcare providers and is gaining popularity as a summative assessment tool in some settings. [22] SBME supports the development of technical procedural skills, critical thinking, and communication strategies without threat of patient harm. [8,22] However, consistent actor performances are critical in trials of behavioral interventions. In this secondary analysis of the SCIP trial, we have presented a detailed QA protocol for assuring consistent SFM performances in the setting of a randomized trial. Error rates varied between SFMs, but improved with repetition, and improved significantly after the first group feedback session.

Without consistent, high-fidelity actor performances, simulation may be less effective for both teaching and testing interventions. In the setting of a randomized trial, it is not possible to know during the design phase what magnitude of effect an intervention will have, or how errors and inconsistencies in a simulation will affect the outcome. For an intervention with a small effect size, the bias introduced by variability in actor performances may be sufficient to change how the trial results are interpreted. Therefore, we viewed developing a rigorous QA program as a worthwhile opportunity to reduce variability in the study environment. In this way, reducing errors is similar to calibrating laboratory equipment.

While significant effort has gone toward fostering consistency in the way learners are evaluated, comparatively little research has been conducted into best practices for standardizing SBME performances. [6–8,23,24] The most recent guidelines for reporting on SBME provided no standards for describing the simulation aside from theoretical, conceptual, and situation-specific exposures, and included no recommendations for addressing intervention fidelity, reproducibility, or the training and assessment of actor performances. [23] The protocol outlined above provides an example framework for QA design and reporting, although further contributions to methodologic considerations for simulation are needed.

Simulation programs often struggle to maintain a highly qualified and thoroughly trained workforce. [3] This is attributable to the intermittent nature of the work, variations in institutional support of programs, and the high demand for simulation experiences as part of educational curricula and hospital-based learning. Many programs have informal monitoring methods, but financial constraints often limit the time and resources available for QA and feedback. Our findings suggest that rigorous QA is a useful tool for standardizing simulations, and this may be particularly important when actors are assigned to perform in multiple scenarios or have minimal experience. We recognize that many programs will not be able to support a dedicated, 2-person QA team in addition to immediate feedback from the study or teaching team. Therefore, we suggest designing a feasible QA program tailored to each program’s specific situation. Based on our experience, we suggest that the minimal components of a feasible QA program include a group feedback session after most actors have performed in at least 10 simulations, and individualized feedback on a random sample of 10% of all simulation performances, evaluated by a dedicated reviewer throughout a study or project.
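Selecting that 10% audit sample is straightforward to automate; below is a minimal sketch in R, where sim_ids is a hypothetical vector of completed-simulation identifiers (not from the study data):

```r
# Draw a reproducible random 10% sample of simulations for dedicated review
set.seed(2020)  # fix the seed so the audit sample can be re-created
n_review  <- ceiling(0.10 * length(sim_ids))
to_review <- sample(sim_ids, size = n_review)
to_review
```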

Although our data did not allow us to estimate the impact of immediate feedback on error rates, we did demonstrate that a group feedback session effectively reduced actor errors. In our study, we offered this first session after each actor had completed rigorous training and a small number of simulations. The relatively high error rate observed prior to this first group feedback session suggests that rehearsal simulations did not completely prepare the study actors for some of the situations encountered within the trial. Based on our experiences, we believe providing feedback to actors is similar to providing feedback to colleagues in the clinical environment or to participants in a simulation. [25] We propose strategies to maximize the effectiveness of a simulation QA program in Box 1.

Box 1. Strategies to maximize the effectiveness of a simulation QA program.

  • Set clear expectations about the nature and frequency of feedback
  • Critique the performance, not the person
  • Be self-aware while providing feedback: manage tone, body language, biases, etc.
  • Use pronouns that are inclusive (e.g., “We are working together”)
  • Ask open-ended questions to understand barriers to performance
  • Establish consistency and credibility in the observation approach
  • Provide actors with an approved way to challenge feedback when they disagree
  • Always explain what approach or action is preferred, not just what went wrong

This study has several limitations. Although the QA methodology was designed a priori, this is a secondary analysis of data from a randomized trial that was not designed specifically to evaluate the causal effect of any specific component of our QA program. Only 3 actors were involved in the study, and simulations were assigned to actors based on availability and scheduling. Finally, we used separate models to describe changes in error rates over time and in relation to feedback sessions, and did not attempt to model the impacts of these two exposures simultaneously. The study’s strengths include a rigorous quantitative evaluation of a QA plan deployed over more than 100 simulations in the course of a year, in the setting of a randomized trial where consistency was essential. In conclusion, we have provided a methodology for QA in SBME and in trials utilizing simulation, and demonstrated in a small sample that such methods can reduce errors in actor performances. Future studies should continue to employ and test QA methods to strengthen the value of simulation in healthcare.

Supporting information

S1 Table. Background information on the hypothetical patient and proxy provided to actors.

https://doi.org/10.1371/journal.pone.0233538.s001

(DOCX)

S2 Table. Example of written QA review including dialogue, errors and feedback.

https://doi.org/10.1371/journal.pone.0233538.s002

(DOCX)

S1 Fig. Distribution of total errors and major errors by standardized family member.

https://doi.org/10.1371/journal.pone.0233538.s003

(TIF)

Acknowledgments

This work should be attributed to the Division of Pulmonary and Critical Care Medicine, Department of Medicine, Johns Hopkins University.

References

  1. Rosen MA, Hunt EA, Pronovost PJ, Federowicz MA, Weaver SJ. In Situ Simulation in Continuing Education for the Health Care Professions: A Systematic Review. J Contin Educ Health Prof. 2012;32(4):243–54. Available from: http://content.wkhealth.com/linkback/openurl?sid=WKPTLP:landingpage&an=00005141-201232040-00004
  2. Hunt EA, Walker AR, Shaffner DH, Miller MR, Pronovost PJ. Simulation of in-hospital pediatric medical emergencies and cardiopulmonary arrests: highlighting the importance of the first 5 minutes. Pediatrics. 2008;121(1):e34–43. Available from: http://pediatrics.aappublications.org/cgi/doi/10.1542/peds.2007-0029
  3. Nestel D, Tabak D, Tierney T, Layat-Burn C, Robb A, Clark S, et al. Key challenges in simulated patient programs: An international comparative case study. BMC Med Educ. 2011;11(1):69. Available from: https://bmcmededuc.biomedcentral.com/articles/10.1186/1472-6920-11-69
  4. Munshi F, Lababidi H, Alyousef S. Low- versus high-fidelity simulations in teaching and assessing clinical skills. J Taibah Univ Med Sci. 2015;10(1):12–5. Available from: https://www.sciencedirect.com/science/article/pii/S1658361215000141
  5. Howard S. Increasing Fidelity and Realism in Simulation. Lippincott Nursing Education Blog. 2018. Available from: http://nursingeducation.lww.com/blog.entry.html/2018/09/19/increasing_fidelity-zEj0.html
  6. Cant RP, Cooper SJ. Simulation-based learning in nurse education: systematic review. J Adv Nurs. 2010;66(1):3–15. Available from: http://doi.wiley.com/10.1111/j.1365-2648.2009.05240.x
  7. Schaefer JJ, Vanderbilt AA, Cason CL, Bauman EB, Glavin RJ, Lee FW, et al. Literature Review: instructional design and pedagogy science in healthcare simulation. Simul Healthc. 2011;6(7):S30–41. Available from: https://insights.ovid.com/crossref?an=01266021-201108001-00006
  8. Kaplonyi J, Bowles K-A, Nestel D, Kiegaldie D, Maloney S, Haines T, et al. Understanding the impact of simulated patients on health care learners’ communication skills: a systematic review. Med Educ. 2017;51(12):1209–19. Available from: http://doi.wiley.com/10.1111/medu.13387
  9. Turnbull AE, Hayes MM, Brower RG, Colantuoni E, Basyal PS, White DB, et al. Effect of Documenting Prognosis on the Information Provided to ICU Proxies: A Randomized Trial. Crit Care Med. 2019;47(6):757–764.
  10. Knaus WA, Wagner DP, Draper EA, Zimmerman JE, Bergner M, Bastos PG, et al. The APACHE III Prognostic System: Risk Prediction of Hospital Mortality for Critically Ill Hospitalized Adults. Chest. 1991;100(6):1619–36. Available from: https://www.sciencedirect.com/science/article/pii/S0012369216528049
  11. Lemeshow S, Klar J, Teres D, Avrunin JS, Gehlbach SH, Rapoport J, et al. Mortality probability models for patients in the intensive care unit for 48 or 72 hours: a prospective, multicenter study. Crit Care Med. 1994;22(9):1351–8. Available from: http://www.ncbi.nlm.nih.gov/pubmed/8062556
  12. Lemeshow S, Teres D, Klar J, Avrunin JS, Gehlbach SH, Rapoport J. Mortality Probability Models (MPM II) Based on an International Cohort of Intensive Care Unit Patients. JAMA. 1993;270(20):2478. Available from: http://jama.jamanetwork.com/article.aspx?doi=10.1001/jama.1993.03510200084037
  13. White DB, Engelberg RA, Wenrich MD, Lo B, Curtis JR. The Language of Prognostication in Intensive Care Units. Med Decis Mak. 2010;30(1):76–83. Available from: http://journals.sagepub.com/doi/10.1177/0272989X08317012
  14. Douglas SL, Daly BJ, Lipson AR. Neglect of quality-of-life considerations in intensive care unit family meetings for long-stay intensive care unit patients. Crit Care Med. 2012;40(2):461–7. Available from: http://www.ncbi.nlm.nih.gov/pubmed/21963580
  15. Uy J, White DB, Mohan D, Arnold RM, Barnato AE. Physicians’ decision-making roles for an acutely unstable critically and terminally ill patient. Crit Care Med. 2013;41(6):1511–7. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23552510
  16. Fine E, Reid MC, Shengelia R, Adelman RD. Directly Observed Patient–Physician Discussions in Palliative and End-of-Life Care: A Systematic Review of the Literature. J Palliat Med. 2010;13(5):595–603. Available from: http://www.liebertpub.com/doi/10.1089/jpm.2009.0388
  17. Barnato AE, Arnold RM. The effect of emotion and physician communication behaviors on surrogates’ life-sustaining treatment decisions: a randomized simulation experiment. Crit Care Med. 2013;41(7):1686–91. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23660727
  18. Chiarchiaro J, Ernecoff NC, Scheunemann LP, Hough CL, Carson SS, Peterson MW, et al. Physicians Rarely Elicit Critically Ill Patients’ Previously Expressed Treatment Preferences in Intensive Care Units. Am J Respir Crit Care Med. 2017;196(2):242–5. Available from: http://www.atsjournals.org/doi/10.1164/rccm.201611-2242LE
  19. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—A metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377–81. Available from: https://www.sciencedirect.com/science/article/pii/S1532046408001226
  20. Frome EL, Checkoway H. Use of Poisson Regression Models in Estimating Incidence Rates and Ratios. Am J Epidemiol. 1985;121(2):309–23. Available from: https://academic.oup.com/aje/article/113959/USE
  21. Hilbe JM. Modeling Count Data. Cambridge: Cambridge University Press; 2014. Available from: http://ebooks.cambridge.org/ref/id/CBO9781139236065
  22. Soffler MI, Claar DD, McSparron JI, Ricotta DN, Hayes MM. Raising the Stakes: Assessing Competency with Simulation in Pulmonary and Critical Care Medicine. Ann Am Thorac Soc. 2018;15(9):1024–6. Available from: https://www.atsjournals.org/doi/10.1513/AnnalsATS.201802-120PS
  23. Motola I, Devine LA, Chung HS, Sullivan JE, Issenberg SB. Simulation in healthcare education: A best evidence practical guide. AMEE Guide No. 82. Med Teach. 2013;35(10):e1511–30. Available from: http://www.tandfonline.com/doi/full/10.3109/0142159X.2013.818632
  24. Bakogiannis A, Darling JC, Dimitrova V, Roberts TE. Simulation for communication skills training in medical students: Protocol for a systematic scoping review. Int J Educ Res. 2018. Available from: https://www.sciencedirect.com/science/article/pii/S0883035518313405?via%3Dihub
  25. Newman LR, Roberts DH, Frankl SE. Twelve tips for providing feedback to peers about their teaching. Med Teach. 2018;1–6. Available from: https://www.tandfonline.com/doi/full/10.1080/0142159X.2018.1521953