
Electronic charts do not facilitate the recognition of patient hazards by advanced medical students: A randomized controlled study

  • Friederike Holderried,

    Roles Conceptualization, Data curation, Investigation, Methodology, Project administration, Software, Validation, Writing – original draft, Writing – review & editing

    Affiliation Department of Anaesthesiology, University Hospital Tübingen, Tübingen, Baden-Württemberg, Germany

  • Anne Herrmann-Werner,

    Roles Data curation, Investigation, Methodology, Project administration, Resources, Software, Writing – review & editing

    Affiliation Department of Internal Medicine VI, Psychosomatic Medicine, University Hospital Tübingen, Tübingen, Baden-Württemberg, Germany

  • Moritz Mahling,

    Roles Formal analysis, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Department of Diabetology, Endocrinology, Nephrology, Section of Nephrology and Hypertension, University Hospital Tübingen, Tübingen, Baden-Württemberg, Germany

  • Martin Holderried,

    Roles Funding acquisition, Methodology, Software, Supervision, Writing – review & editing

    Affiliation Department of Quality Management, Medical and Business Development, University Hospital of Tübingen, Tübingen, Baden-Württemberg, Germany

  • Reimer Riessen,

    Roles Project administration, Resources, Supervision, Validation, Writing – review & editing

    Affiliation Department of Internal Medicine VIII, Intensive Care Unit, University Hospital Tübingen, Tübingen, Baden-Württemberg, Germany

  • Stephan Zipfel,

    Roles Project administration, Resources, Supervision, Validation, Writing – review & editing

    Affiliation Department of Internal Medicine VI, Psychosomatic Medicine, University Hospital Tübingen, Tübingen, Baden-Württemberg, Germany

  • Nora Celebi

    Roles Conceptualization, Data curation, Investigation, Methodology, Project administration, Validation, Writing – original draft, Writing – review & editing

    Affiliation PHV Dialysis Center Waiblingen, Waiblingen, Germany

Abstract

Chart review is an important tool to identify patient hazards. Most advanced medical students perform poorly during chart review but can learn how to identify patient hazards context-independently. Many hospitals have implemented electronic health records, which enhance patient safety but also pose challenges. We investigated whether electronic charts impair advanced medical students’ recognition of patient hazards compared with traditional paper charts. Fifth-year medical students were randomized into two equal groups. Both groups attended a lecture on patient hazards and a training session on handling electronic health records. One group reviewed an electronic chart with 12 standardized patient hazards and then reviewed another case in a paper chart; the other group reviewed the charts in reverse order. The two case scenarios (diabetes and gastrointestinal bleeding) were used as the first and second case equally often. After each case, the students were briefed about the patient safety hazards. In total, 78.5% of the students handed in their notes for evaluation. Two blinded raters independently assessed the number of patient hazards addressed in the students’ notes. For the diabetes case, the students identified a median of 4.0 hazards [25%–75% quantiles (Q25–Q75): 2.0–5.5] in the electronic chart and 5.0 hazards (Q25–Q75: 3.0–6.75) in the paper chart (equivalence testing, p = 0.005). For the gastrointestinal bleeding case, the students identified a median of 5.0 hazards (Q25–Q75: 4.0–6.0) in the electronic chart and 5.0 hazards (Q25–Q75: 3.0–6.0) in the paper chart (equivalence testing, p < 0.001). We detected no improvement between the first case [median 5.0 (Q25–Q75: 3.0–6.0)] and the second case [median 5.0 (Q25–Q75: 3.0–6.0); p < 0.001, test for equivalence]. Electronic charts do not seem to facilitate advanced medical students’ recognition of patient hazards during chart review and may impair expertise formation.

Introduction

Patient safety incidents are very common and likely to result in avoidable harm, especially diagnostic and prescribing incidents [1]. According to an analysis of malpractice claims by Saber Tehrani et al. [2], diagnostic errors account for 28% of all medical errors and are most likely to lead to permanent disability or death. Most problems arise from history-taking, examination, or failing to order diagnostic tests for further work-up [3].

Other major sources of patient harm are prescribing errors and adverse drug events [4, 5]. The reported incidence of prescription errors among inpatients is about 6% of overall prescriptions, and the incidence of adverse drug events is 19% of admissions; one-third of these adverse drug events are considered preventable [5, 6].

Evidence-based practices to prevent patient harm include preventing infections, implementing measures to reduce medication errors, monitoring patient safety problems and establishing patient safety practices targeted at diagnostic errors, and using checklists [4, 7–9]. In particular, patient chart review promotes patient safety and is a major tool in the recognition of patient hazards such as diagnostic errors, prescription errors, and adverse drug events [10].

Patient charts have traditionally been on paper; however, most hospitals have now adopted electronic health records [11, 12]. With respect to adverse drug events, electronic charts prevent a major source of medication errors: unintelligible abbreviations [4]. Apart from better legibility, electronic health records offer other features that promote patient safety. Evidence-based practices to prevent diagnostic errors include technology-based alerting systems and computerized clinical decision support systems [13–15].

However, there is currently no evidence that electronic patient charts actually reduce adverse drug events [4]. Palojoki et al. [16] even found significantly more patient safety incidents in a fully digitalized environment, and most were attributed to difficulties in human–computer interactions.

One reason for this phenomenon may be that a huge proportion of chart review and prescribing in hospitals is performed by novice physicians [5, 17]. In surveys, advanced medical students have reported a feeling of unpreparedness to manage patients safely [18–21]. According to a cross-sectional study by Pearce et al. [22], the majority of Foundation Year 1 doctors in the UK performed ward rounds alone and unsupervised, and only 7% reported that they felt prepared for this task.

Most patient hazards do not arise from single drugs or diseases, but rather from the context [17, 23]. Practice and experience are required to develop the expertise to manage patients with multiple morbidities [17, 23]. However, studies have shown that with practice and training, advanced students can acquire the skills necessary to identify patient hazards during chart review and to safely and context-independently prescribe medications [24–27]. Significant effects on the detection of patient hazards during chart review were shown even after reviewing a single chart [24].

An integrated adult learning model outlines the following steps in the formation of expertise [28]:

  • Dissonance: the learner is confronted with gaps in knowledge or skill
  • Refinement and organization: the learner gathers new information and restructures it with the existing body of knowledge
  • Feedback: the newly learned information is articulated or tested in new situations
  • Consolidation: the learner reflects on the whole learning cycle; e.g., while applying the knowledge in a different context

The bottleneck for the crucial refinement and organization phase is cognitive load [29, 30]. Cognitive load comprises the intrinsic cognitive load, which reflects the inherent complexity of the task at hand; the extraneous cognitive load, which comprises external factors complicating the task; and the germane load, which is determined by the efficacy of the learner’s processing [29–31].

Reviewing patient charts is a very complex task that is seldom practiced spontaneously during internships [32–34]. Students usually perform very poorly during simulated chart reviews and ward rounds [24, 34, 35]. Various skills and competencies have been proposed for implementation in medical schools to promote patient safety, including patient safety curricula and simulation training [4, 36–38]. In chart reviews, many of these skills must be applied simultaneously; therefore, knowledge about the patient’s diagnoses, comorbidities, diagnostic and therapeutic procedures, and treatment protocols, as well as pharmaceutical knowledge, must be integrated with patient safety skills revolving around the avoidance of diagnostic errors and safe prescribing [36]. Chart review thus has a very high intrinsic cognitive load that may limit the refinement and organization phase during expertise-building [28, 30]. Case complexity increases the cognitive load, which is reduced by growing knowledge and more advanced illness scripts (a more efficient organization of knowledge with mental “shortcuts” between relevant items of information) [39–43].

However, when handling the electronic health record poses an additional challenge, this may represent an extraneous cognitive load [30]. The increase in cognitive load would further enhance the difficulty of the already challenging task of chart review [29–31, 44]. This might counteract the advantages of electronic charts. Moreover, the increased cognitive load might further interfere with expertise-building [29–31, 44].

In the present study, we investigated whether fifth-year medical students identify more or fewer patient hazards when a standardized patient case is presented in an electronic chart as opposed to a paper chart, and whether the number of recognized patient hazards changes when students review a paper chart after reviewing an electronic chart or vice versa. We hypothesized that the advantages and disadvantages of electronic charts offset each other in comparison with paper charts and therefore tested for equivalence.

Methods

Study design

In this randomized prospective trial, we used two standardized patient cases (diabetes mellitus and upper gastrointestinal bleeding) with 12 standardized patient hazards each. All fifth-year medical students (n = 135) were randomized into two equally sized groups by drawing lots. One group was assigned to review a paper chart first and an electronic chart second; the other group reviewed an electronic chart first and a paper chart second. Both groups were further divided into equally sized subgroups so that one half reviewed the diabetes mellitus case first and the upper gastrointestinal bleeding case second, and the other half reviewed the cases in reverse order (Fig 1).

Fig 1. Randomization of study participants.

E, electronic chart; P, paper chart; DM, diabetes mellitus case; UGI, upper gastrointestinal bleeding case.

One group reviewed a standardized patient case on a paper chart first, then reviewed the other case on an electronic chart. The other group reviewed a standardized case on an electronic chart first, then reviewed the other case on a paper chart. Each group was divided into two equally sized subgroups so that each assessment case served as the first or second case for half of the group, respectively. In total, 78.5% of the students voluntarily submitted their notes for evaluation.

Participants and setting

The training and assessment were integrated in the fifth-year practical internal medicine class. Our university has a conventional 6-year curriculum. The fifth year is the last year at the university, during which the students attend classes with formal lectures. The sixth and final year comprises internships at university or teaching hospitals. Thus, the students had already completed their theoretical training in pharmacology and internal medicine. While the course was mandatory, the pseudonymized notes were only evaluated if the students had volunteered them and after the students had provided written consent. The participants were blinded to the study question.

On the first day of the course, all students attended a 45-minute lecture on patient hazards. During this lecture, all of the hazard patterns represented in the standardized charts were discussed extensively. All students then attended a 120-minute standardized instruction session on how to operate the electronic health record. This session was provided by a trained instructor according to the teaching manual for medical personnel. The training comprised all of the elements suggested by Goveia et al. [45], including classroom training, computer-based training, and feedback.

Next, the students were given questionnaires on their demographic data. The students reviewed the first standardized chart within 30 minutes, ordering diagnostic tests and prescribing medication as they saw fit. We then briefed the students about the expected patient hazards. The second standardized chart was presented on the second day, followed by a briefing on the second case.

Electronic chart

We used the electronic health record used in our university hospital (MEONA version 77.551_e; Meona GmbH, Freiburg, Germany). It contains a section in which the diagnoses of the patient are recorded; a page with the patient’s history and physical examination findings; a page with the actual chart including vital signs, medications, and planned diagnostic or therapeutic procedures (chart); a page with laboratory results; and a section on test results such as imaging or endoscopy.

The electronic health record contains highlighted fields in which allergies, general warnings, and infections can be recorded. While prescribing, a patient safety module alerts the prescriber to a known allergy. Information about the drugs, including indications, contraindications, and dosage recommendations, can be accessed via the electronic health record.

Paper chart

For the paper chart, we used the system that was employed by our university hospital prior to implementation of the electronic health record and that is still used in many German hospitals. It comprises a sheet summarizing the diagnoses, history, physical examination findings, and test results (i.e., imaging, endoscopy); the main chart, which contains vital signs, medications, and diagnostic tests and is used by both nursing staff and doctors; and a separate file for laboratory results.

The students were given paper-based prescription aids listing drug indications, contraindications, and dosage recommendations when reviewing the paper patient charts.

Assessment

For the assessment, two fictional patient cases were constructed, each with 12 standardized common patient hazards:

  • One indicated medication is missing
  • One medication is not indicated
  • One medication has the incorrect dosage
  • One risk situation is present for an unauthorized medication
  • One medication has adverse effects
  • One medication is contraindicated
  • One incidental actionable diagnostic finding is present
  • One diagnostic test for the main problem is missing
  • The monitoring for the main problem is incomplete
  • One infectious complication is present
  • Diet/fluid management is incorrect
  • The documentation is incomplete

Both cases were presented either as a paper chart or as an electronic chart containing identical information. The cases were developed by two physicians and reviewed by two other physicians to ensure face validity. An example of an assessment case is shown in Table 1.

Table 1. Example of an assessment case with standardized patient hazards.

The participants marked their charts with pseudonyms comprising the first letter of the mother’s given name, the first letter of the place of birth, the second letter of the participant’s own given name, the first letter of the month of birth, and the day of birth.
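The pseudonym rule above can be expressed as a short function; the names and dates in the example are invented for illustration only.

```python
def pseudonym(mother_name: str, birthplace: str, own_name: str,
              birth_month: str, birth_day: int) -> str:
    """Build a study pseudonym from the five components described above."""
    return (mother_name[0]        # first letter of the mother's given name
            + birthplace[0]       # first letter of the place of birth
            + own_name[1]         # second letter of the participant's own given name
            + birth_month[0]      # first letter of the month of birth
            + f"{birth_day:02d}"  # day of birth, zero-padded
            ).upper()

# Invented example participant:
print(pseudonym("Maria", "Stuttgart", "Jonas", "April", 7))  # → MSOA07
```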

After completion of the assessment, the paper charts were transferred into an electronic chart, and both the electronic and paper charts were printed so that they were indistinguishable to the raters. The charts were sorted alphabetically according to the pseudonyms.

Two blinded raters assessed the charts independently according to predefined rating criteria using a checklist. One rater rated them from the top down and the other rated them from the bottom up to minimize observer drift. Whenever one patient hazard was addressed in the student’s prescription, the raters awarded 1 point irrespective of whether the reaction was adequate. During the rating process, both raters noted when predefined rating criteria were unclear for a specific chart; the raters then discussed and redefined the rating criteria and rerated the disputable charts independently.

Statistical analysis

Statistical analyses were performed using R [1], and graphics were drawn using Prism Version 7.0e (GraphPad Software, San Diego, CA, USA). We assumed a Gaussian distribution because of the large sample size and numeric rating scores. Data are described as median and 25%–75% quantiles (Q25–Q75) and presented as box plots (Tukey whiskers).

To assess the inter-rater reliability, we calculated the intraclass correlation coefficient (ICC) using R [1] and the irr package [2]. We assessed the total checklist score for consistency using a two-way mixed-effects model (ICC type “3,1” according to Shrout and Fleiss [46]).
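The consistency ICC can be computed directly from the rating matrix. The snippet below is an illustrative sketch, not the irr code used for the analysis; it assumes a (subjects × raters) score matrix and implements the single-rater consistency ICC(3,1) of Shrout and Fleiss from the usual two-way ANOVA decomposition.

```python
import numpy as np

def icc_3_1(scores: np.ndarray) -> float:
    """ICC(3,1): two-way mixed model, consistency, single rater.

    scores: (n_subjects, k_raters) matrix of ratings.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)  # per-subject means
    col_means = scores.mean(axis=0)  # per-rater means

    ss_rows = k * ((row_means - grand) ** 2).sum()  # between-subject SS
    ss_cols = n * ((col_means - grand) ** 2).sum()  # between-rater SS
    ss_total = ((scores - grand) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols         # residual SS

    ms_rows = ss_rows / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# Invented example: two raters in near-perfect agreement -> ICC close to 1
ratings = np.array([[4, 4], [5, 5], [2, 3], [6, 6], [3, 3]], dtype=float)
print(round(icc_3_1(ratings), 3))  # → 0.952
```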

We planned to pool data from the first and second case that the students reviewed and thus assumed them to be independent because each participant had a change in presentation (paper vs. electronic) as well as disease (diabetes mellitus vs. upper gastrointestinal bleeding). We therefore performed an effect screening using least squares and the following potential influences: time (first case vs. second case), presentation (paper vs. electronic), and disease (diabetes mellitus vs. upper gastrointestinal bleeding).

Our intention was to test for statistical equivalence instead of statistical differences. We considered a score difference of 2 out of 12 recognized hazards to be clinically equivalent (i.e., a clinically insignificant difference). We performed a retrospective power analysis and tested for equivalence using the two one-sided tests (TOST) procedure [1].
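The TOST logic can be illustrated with a short, self-contained sketch. This is not the study’s analysis script: the hazard counts below are invented, a normal approximation stands in for the t distribution to keep the example dependency-free, and only the ±2-point equivalence margin follows the text above.

```python
from math import sqrt
from statistics import NormalDist, stdev

def tost_paired(x, y, bound=2.0, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of paired scores.

    Equivalence is concluded when both one-sided null hypotheses
    (mean difference <= -bound, mean difference >= +bound) are rejected,
    i.e., when the larger of the two one-sided p-values is below alpha.
    Uses a normal approximation to the t distribution for simplicity.
    """
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    se = stdev(d) / sqrt(n)              # standard error of the mean difference
    z_lower = (mean_d + bound) / se      # against H0: mean_d <= -bound
    z_upper = (mean_d - bound) / se      # against H0: mean_d >= +bound
    p_lower = 1.0 - NormalDist().cdf(z_lower)
    p_upper = NormalDist().cdf(z_upper)
    p = max(p_lower, p_upper)
    return p, p < alpha

# Invented paired hazard counts (electronic vs. paper chart):
electronic = [5, 4, 6, 5, 3, 5, 4, 6, 5, 5]
paper      = [5, 5, 5, 4, 4, 5, 5, 6, 4, 5]
p, equivalent = tost_paired(electronic, paper)
print(equivalent)  # → True (the two media are statistically equivalent here)
```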

Ethics statement

The institutional review board (Ethik-Kommission an der Medizinischen Fakultät der Eberhard-Karls-Universität und am Universitätsklinikum Tübingen) approved this study (decision number 2602016BO2). The need for detailed review by the board was waived since no patients were involved and study participation for the students was voluntary and pseudonymized. We informed the students at the beginning of the study about the voluntary nature of their participation and obtained written informed consent nonetheless. The students could decline to provide consent at any given time without giving a reason and without disadvantages for the course.

Results

Demographic data

Of all 135 fifth-year medical students, 106 (78.5%) voluntarily submitted their notes for evaluation. These students’ demographic data are presented in Table 2.

Table 2. Demographic data of students who submitted their notes on the standardized patient charts for evaluation.

Power analysis and inter-rater reliability

The retrospective power analysis for equivalence testing resulted in a power of 0.999.

The inter-rater reliability assessed using the ICC was 0.997 (0.996–0.998) for the upper gastrointestinal bleeding case and 0.988 (0.982–0.992) for the diabetes mellitus case, indicating excellent inter-rater reliability for both charts [3].

Screening for effects

We screened time (first case vs. second case), presentation (paper vs. electronic), and disease (diabetes mellitus vs. upper gastrointestinal bleeding) with respect to their influence on the number of recognized hazards. None of the screened effects had a significant impact on the number of recognized hazards. We therefore pooled the data from the first and second case presentations (i.e., electronic chart with diabetes mellitus as the first case and electronic chart with diabetes mellitus as the second case).

Effect of presentation as electronic or paper chart on recognized patient hazards

The number of recognized patient hazards for electronic charts versus paper charts is shown in Fig 2. For diabetes mellitus (Fig 2A), the participants recognized a median of 4.0 hazards (Q25–Q75: 2.0–5.5) in the electronic chart and a median of 5.0 hazards (Q25–Q75: 3.0–6.75) in the paper chart (equivalence testing, p = 0.005, indicating statistically equivalent groups). When we presented the upper gastrointestinal bleeding case to the participants, they recognized a median of 5.0 hazards (Q25–Q75: 4.0–6.0) in the electronic chart and a median of 5.0 hazards (Q25–Q75: 3.0–6.0) in the paper chart (equivalence testing, p < 0.001, indicating statistically equivalent groups).

Fig 2. Number of recognized patient hazards.

(a) Diabetes mellitus case. (b) Upper gastrointestinal bleeding case. Box plots (Tukey whiskers) for presentation as electronic chart or paper chart. *p < 0.05 for statistical equivalence.

In Fig 3, we display the types of patient hazards recognized by chart.

Fig 3. Type of recognized patient hazard by case and chart.

F1: One diagnostic test for the main problem is missing, F2: One incidental actionable diagnostic finding is present, F3: One medication is contraindicated, F4: One medication has the incorrect dosage, F5: One indicated medication is missing, F6: One medication has adverse effects, F7: One medication is not indicated, F8: One infectious complication is present, F9: The monitoring for the main problem is incomplete, F10: Diet/fluid management is incorrect, F11: The documentation is incomplete, F12: One risk situation is present for an unauthorized medication.

Transition between electronic chart and paper chart

A total of 77 students handed in both cases for evaluation. For these students, we tested whether an improvement had occurred between the first and second case.

Overall, we detected no improvement when the students reviewed the second case presented in a different chart, either electronic or paper [median: 5.0 (Q25–Q75: 3.0–6.0) in the first case vs. median: 5.0 (Q25–Q75: 3.0–6.0) in the second case, p < 0.001, test for equivalence] (Fig 4).

Fig 4. Overall recognition of patient hazards when presented in different charts (electronic or paper).

There was no measurable improvement in the second case despite an extensive briefing after the first case. *p < 0.05 for statistical equivalence.

When transitioning from a paper chart to an electronic chart, the students recognized a median of 5.0 hazards (Q25–Q75: 3.25–6.0) versus 5.0 hazards (Q25–Q75: 3.5–6.0) (p < 0.001 for statistical equivalence). When transitioning from an electronic chart to a paper chart, the students recognized a median of 4.0 hazards (Q25–Q75: 3.0–5.0) versus 5.0 hazards (Q25–Q75: 3.0–6.0) (p < 0.001 for statistical equivalence) (Fig 5).

Fig 5. Numbers of patient hazards recognized by fifth-year medical students.

(a) Transitioning from a paper chart to another case presented in an electronic chart. (b) Transitioning from an electronic chart to another case presented in a paper chart. *p < 0.05 for statistical equivalence.

Discussion

In this study, we investigated whether electronic patient charts facilitate the recognition of patient hazards during chart review by advanced medical students. We found that the students did not recognize more patient hazards when the data were presented in an electronic chart as opposed to a paper chart. Moreover, we detected no improvement when the students reviewed another case presented in the other medium, despite the fact that the students had received a lecture in advance on which patient hazards to expect and an extensive briefing after each case. In our previous study with a similar methodology, the students were presented three different case scenarios on a paper chart and identified significantly more patient hazards with every additional case presented [24]. The number of recognized patient hazards was comparable to the baseline finding in our previous study; we attribute the low scores to the fact that chart review is only seldom practiced during clerkships and that, for most of the students, this was the first chart review they had ever performed [24, 34].

Despite the advantages that electronic health records confer in comparison with paper charts, such as legibility and electronic alert flags or decision-making tools [4], Palojoki et al. [16] actually found more patient safety incidents in a fully electronic environment than in historical controls. In Japanese hospitals, productivity decreased with the implementation of electronic health records [47]. This may be partially explained by usability and navigation issues [48, 49]. According to a study by Kaipio et al. [50], the usability of electronic health records was rated poorly in Finland and did not improve from 2010 to 2014. A study by Clarke et al. [51] showed no relevant difference in task completion between expert and novice users of electronic health records, indicating that usability problems may persist even after longer exposure to the electronic health record. However, another study showed improvements in the use of electronic health records with prolonged exposure [52].

If usability and navigation issues with the electronic health record had been the only problem, we would have expected to detect an improvement in the recognition of patient hazards in the transition from the electronic chart to the paper chart and stagnation in the transition from the paper chart to the electronic chart. However, we found no improvement in either direction despite an extensive briefing between the two cases. There are indications that chart review on paper is partially perceived as a different task than chart review with an electronic health record. In their qualitative study, Borycki et al. [53] found that nursing students sought different information when presented a case on a paper chart versus on a hybrid paper/electronic chart. Additionally, physicians documented different physical examination findings in a paper-based record than in an electronic health record [54]. Skills such as safe prescribing and the diagnostic process can be partially taught in a context-independent manner [55, 56]. This requires the transfer of concepts, which is usually very difficult and requires several examples; the closer the cases are, the easier the transfer is achieved [57–60]. Thus, when a transfer between media is required in addition to the transfer between contexts, this might increase the cognitive load, impair transfer, and thus impair expertise formation. This might explain why there was no measurable improvement between the two cases, in contrast to our previous study, in which all cases were presented in the same paper-based format. In our study, there was a difference in the recognized patient hazards: the students identified more adverse drug effects in the paper chart than in the electronic health record in both cases. However, the difficulty of the cases is highly context-dependent, so more cases and scenarios would have to be analyzed before drawing a conclusion.

Our study has several limitations. First, it was a small monocentric study investigating the performance of only one cohort of medical students. Second, we did not directly measure the cognitive load but instead relied on the number of recognized patient hazards.

Because the students handed in their notes for evaluation on a voluntary basis, we cannot exclude sampling bias despite the fact that the majority of the cohort participated. However, we would expect a distortion of the data toward better performance under the assumption that the more confident students would hand in their notes.

Because all students were briefed about which patient hazards were to be expected, we do not assume that our findings were attributable to a lack of knowledge, although we cannot exclude this possibility. Another alternative explanation might be a ceiling effect; however, because the students identified a median of only 4 to 5 out of 12 patient hazards, we consider this unlikely.

Although the students were trained for 120 minutes on the handling of the electronic patient records, including documentation and prescribing exercises, usability problems might have led to underperformance when reviewing the electronic chart.

Future studies should assess whether our findings can be replicated with a larger number of cases, in different contexts, and with different electronic health records, and should determine how many cases are needed to improve the recognition of patient hazards when presented with an electronic health record. When developing or purchasing an electronic health record, decision-makers should pay attention to usability in order to facilitate the recognition of patient hazards. When designing a curriculum on patient hazards and chart review, they should bear in mind that this difficult skill has to be practiced, especially when an electronic health record is used.

Conclusions

Electronic charts do not seem to facilitate advanced medical students’ identification of patient hazards compared with paper charts and may interfere with expertise building.

Supporting information

S1 Dataset. Original dataset for data availability.


Acknowledgments

We thank Angela Morben, DVM, ELS, from Edanz Group, for editing a draft of this manuscript. We acknowledge support by the Deutsche Forschungsgemeinschaft and the Open Access Publishing Fund of the University of Tübingen.

References

  1. Panesar SS, deSilva D, Carson-Stevens A, Cresswell KM, Salvilla SA, Slight SP, et al. How safe is primary care? A systematic review. BMJ Qual Saf. 2016;25(7): 544–553. pmid:26715764
  2. Saber Tehrani AS, Lee H, Mathews SC, Shore A, Makary MA, Pronovost PJ, et al. 25-Year summary of US malpractice claims for diagnostic errors 1986–2010: an analysis from the National Practitioner Data Bank. BMJ Qual Saf. 2013;22(8): 672–680. pmid:23610443
  3. Singh H, Giardina TD, Meyer AN, Forjuoh SN, Reis MD, Thomas EJ. Types and origins of diagnostic errors in primary care settings. JAMA Intern Med. 2013;173(6): 418–425. pmid:23440149
  4. Shekelle PG, Wachter RM, Pronovost PJ, Schoelles K, McDonald KM, Dy SM, et al. Making health care safer II: an updated critical analysis of the evidence for patient safety practices. Evid Rep Technol Assess (Full Rep). 2013(211): 1–945. pmid:24423049
  5. Lewis PJ, Dornan T, Taylor D, Tully MP, Wass V, Ashcroft DM. Prevalence, incidence and nature of prescribing errors in hospital inpatients: a systematic review. Drug Saf. 2009;32(5): 379–389. pmid:19419233
  6. Laatikainen O, Miettunen J, Sneck S, Lehtiniemi H, Tenhunen O, Turpeinen M. The prevalence of medication-related adverse events in inpatients: a systematic review and meta-analysis. Eur J Clin Pharmacol. 2017;73(12): 1539–1549. pmid:28871436
  7. Morello RT, Lowthian JA, Barker AL, McGinnes R, Dunt D, Brand C. Strategies for improving patient safety culture in hospitals: a systematic review. BMJ Qual Saf. 2013;22(1): 11–18. pmid:22849965
  8. Snowdon DA, Leggat SG, Taylor NF. Does clinical supervision of healthcare professionals improve effectiveness of care and patient experience? A systematic review. BMC Health Serv Res. 2017;17(1): 786. pmid:29183314
  9. Boyd J, Wu G, Stelfox H. The impact of checklists on inpatient safety outcomes: a systematic review of randomized controlled trials. J Hosp Med. 2017;12(8): 675–682. pmid:28786436
  10. Madden C, Lydon S, Curran C, Murphy AW, O'Connor P. Potential value of patient record review to assess and improve patient safety in general practice: a systematic review. Eur J Gen Pract. 2018;24(1): 192–201. pmid:30112925
  11. Adler-Milstein J, DesRoches CM, Kralovec P, Foster G, Worzala C, Charles D, et al. Electronic health record adoption in US hospitals: progress continues, but challenges persist. Health Aff (Millwood). 2015;34(12): 2174–2180. pmid:26561387
  12. Adler-Milstein J, Holmgren AJ, Kralovec P, Worzala C, Searcy T, Patel V. Electronic health record adoption in US hospitals: the emergence of a digital “advanced use” divide. J Am Med Inform Assoc. 2017;24(6): 1142–1148. pmid:29016973
  13. McDonald KM, Matesic B, Contopoulos-Ioannidis DG, Lonhart J, Schmidt E, Pineda N, et al. Patient safety strategies targeted at diagnostic errors: a systematic review. Ann Intern Med. 2013;158(5 Pt 2): 381–389. pmid:23460094
  14. Marasinghe KM. Computerised clinical decision support systems to improve medication safety in long-term care homes: a systematic review. BMJ Open. 2015;5(5): e006539. pmid:25967986
  15. Kane-Gill SL. Innovations in medication safety: services and technologies to enhance the understanding and prevention of adverse drug reactions. Pharmacotherapy. 2018;38(8): 782–784. pmid:30033608
  16. Palojoki S, Mäkelä M, Lehtonen L, Saranto K. An analysis of electronic health record-related patient safety incidents. Health Informatics J. 2017;23(2): 134–145. pmid:26951568
  17. Lim AG, North N, Shaw J. Beginners in prescribing practice: experiences and perceptions of nurses and doctors. J Clin Nurs. 2018;27(5–6): 1103–1112. pmid:29076584
  18. Schmitz K, Lenssen R, Rosentreter M, Gross D, Eisert A. Wide cleft between theory and practice: medical students’ perception of their education in patient and medication safety. Pharmazie. 2015;70(5): 351–354. pmid:26062307
  19. Heaton A, Webb DJ, Maxwell SR. Undergraduate preparation for prescribing: the views of 2413 UK medical students and recent graduates. Br J Clin Pharmacol. 2008;66(1): 128–134. pmid:18492128
  20. Geoghegan SE, Clarke E, Byrne D, Power D, Moneley D, Strawbridge J, et al. Preparedness of newly qualified doctors in Ireland for prescribing in clinical practice. Br J Clin Pharmacol. 2017;83(8): 1826–1834. pmid:28244609
  21. 21. Kennedy MB, Haq I, Ferns G, Williams SE, Okorie M. The role of undergraduate teaching, learning and a national prescribing safety assessment in preparation for practical prescribing: UK medical students’ perspective. Br J Clin Pharmacol. 2019. [Epub ahead of print] pmid:31288298
  22. 22. Pearce J, Govan S, Harlinska A, Tremain R, Gajebasia S, Redman M. WORKFORCE: Newly graduated doctors’ experiences of conducting medical ward rounds alone: a regional cross-sectional study. Future Healthc J. 2019;6(1): 47–51. pmid:31098586
  23. 23. Panagioti M, Stokes J, Esmail A, Coventry P, Cheraghi-Sohi S, Alam R, et al. Multimorbidity and patient safety incidents in primary care: a systematic review and meta-analysis. PLoS One. 2015;10(8): e0135947. pmid:26317435
  24. 24. Holderried F, Heine D, Wagner R, Mahling M, Fenik Y, Herrmann-Werner A, et al. Problem-based training improves recognition of patient hazards by advanced medical students during chart review: a randomized controlled crossover study. PLoS One. 2014;9(2): e89198. pmid:24586591
  25. 25. Celebi N. Reducing common prescription errors—a modular lecture. GMS Z Med Ausbild. 2011;28(1): Doc10.
  26. 26. Karpa KD, Hom LL, Huffman P, Lehman EB, Chinchilli VM, Haidet P, et al. Medication safety curriculum: enhancing skills and changing behaviors. BMC Med Educ. 2015;15: 234. pmid:26711130
  27. 27. Smith SD, Henn P, Gaffney R, Hynes H, McAdoo J, Bradley C. A study of innovative patient safety education. Clin Teach. 2012;9(1): 37–40. pmid:22225891
  28. 28. Taylor DC, Hamdy H. Adult learning theories: implications for learning and teaching in medical education: AMEE Guide No. 83. Med Teach. 2013;35(11): e1561–e1572. pmid:24004029
  29. 29. Sewell JL, Maggio LA, Ten Cate O, van Gog T, Young JQ, O'Sullivan PS. Cognitive load theory for training health professionals in the workplace: a BEME review of studies among diverse professions: BEME Guide No. 53. Med Teach. 2019;41(3): 256–270. pmid:30328761
  30. 30. Young JQ, Van Merrienboer J, Durning S, Ten Cate O. Cognitive Load Theory: implications for medical education: AMEE Guide No. 86. Med Teach. 2014;36(5): 371–384. pmid:24593808
  31. 31. Rana J, Burgin S. Teaching & Learning Tips 2: cognitive load theory. Int J Dermatol. 2017;56(12): 1438–1441. pmid:29130491
  32. 32. Remmen R, Derese A, Scherpbier A, Denekens J, Hermann I, van der Vleuten C, et al. Can medical schools rely on clerkships to train students in basic clinical skills? Med Educ. 1999;33(8): 600–605. pmid:10447847
  33. 33. Celebi N, Tsouraki R, Engel C, Holderried F, Riessen R, Weyrich P. Does doctors’ workload impact supervision and ward activities of final-year students? A prospective study. BMC Med Educ. 2012;12: 24. pmid:22540897
  34. 34. Celebi N, Wagner R, Weyrich P, Heine D, Fenik Y, Holderried F, et al. Clerkships do not improve recognition of patient hazards by advanced medical students during chart review. Med Teach. 2012;34(12): 1087.
  35. 35. Nikendei C, Kraus B, Schrauth M, Briem S, Jünger J. Ward rounds: how prepared are future doctors? Med Teach. 2008;30(1): 88–91. pmid:18278658
  36. 36. Moran KM, Harris IB, Valenta AL. Competencies for patient safety and quality improvement: a synthesis of recommendations in influential position papers. Jt Comm J Qual Patient Saf. 2016;42(4): 162–169. pmid:27025576
  37. 37. Myung SJ, Shin JS, Kim JH, Roh H, Kim Y, Kim J, et al. The patient safety curriculum for undergraduate medical students as a first step toward improving patient safety. J Surg Educ. 2012;69(5): 659–664. pmid:22910166
  38. 38. McLellan L, Tully MP, Dornan T. How could undergraduate education prepare new graduates to be safer prescribers? Br J Clin Pharmacol. 2012;74(4): 605–613. pmid:22420765
  39. 39. Young JQ, van Dijk SM, O'Sullivan PS, Custers EJ, Irby DM, Ten Cate O. Influence of learner knowledge and case complexity on handover accuracy and cognitive load: results from a simulation study. Med Educ. 2016;50(9): 969–978. pmid:27562896
  40. 40. Schuwirth LW, van der Vleuten CP. General overview of the theories used in assessment: AMEE Guide No. 57. Med Teach. 2011;33(10): 783–797. pmid:21942477
  41. 41. Schmidt HG, Norman GR, Boshuizen HP. A cognitive perspective on medical expertise: theory and implication. Acad Med. 1990;65(10): 611–621. pmid:2261032
  42. 42. Charlin B, Tardif J, Boshuizen HP. Scripts and medical diagnostic knowledge: theory and applications for clinical reasoning instruction and research. Acad Med. 2000;75(2): 182–190. pmid:10693854
  43. 43. Haji FA, Cheung JJ, Woods N, Regehr G, de Ribaupierre S, Dubrowski A. Thrive or overload? The effect of task complexity on novices’ simulation-based learning. Med Educ. 2016;50(9): 955–968. pmid:27562895
  44. 44. Haji FA, Rojas D, Childs R, de Ribaupierre S, Dubrowski A. Measuring cognitive load: performance, mental effort and simulation task complexity. Med Educ. 2015;49(8): 815–827. pmid:26152493
  45. 45. Goveia J, Van Stiphout F, Cheung Z, Kamta B, Keijsers C, Valk G, et al. Educational interventions to improve the meaningful use of electronic health records: a review of the literature: BEME Guide No. 29. Med Teach. 2013;35(11): e1551–e1560. pmid:23848402
  46. 46. Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979;86(2): 420–428. pmid:18839484
  47. 47. Kaneko K, Onozuka D, Shibuta H, Hagihara A. Impact of electronic medical records (EMRs) on hospital productivity in Japan. Int J Med Inform. 2018;118: 36–43. pmid:30153919
  48. 48. Roman LC, Ancker JS, Johnson SB, Senathirajah Y. Navigation in the electronic health record: a review of the safety and usability literature. J Biomed Inform. 2017;67: 69–79. pmid:28088527
  49. 49. Zahabi M, Kaber DB, Swangnetr M. Usability and safety in electronic medical records interface design: a review of recent literature and guideline formulation. Hum Factors. 2015;57(5): 805–834. pmid:25850118
  50. 50. Kaipio J, Lääveri T, Hyppönen H, Vainiomäki S, Reponen J, Kushniruk A, et al. Usability problems do not heal by themselves: national survey on physicians’ experiences with EHRs in Finland. Int J Med Inform. 2017;97: 266–281. pmid:27919385
  51. 51. Clarke MA, Belden JL, Kim MS. Determining differences in user performance between expert and novice primary care doctors when using an electronic health record (EHR). J Eval Clin Pract. 2014;20(6): 1153–1161. pmid:25470668
  52. 52. Clarke MA, Belden JL, Kim MS. How does learnability of primary care resident physicians increase after seven months of using an electronic health record? A longitudinal study. JMIR Hum Factors. 2016;3(1): e9. pmid:27025237
  53. 53. Borycki EM, Lemieux-Charles L, Nagle L, Eysenbach G. Novice nurse information needs in paper and hybrid electronic-paper environments: a qualitative analysis. Stud Health Technol Inform. 2009;150:913–7. pmid:19745445
  54. 54. Yadav S, Kazanji N, K C N, Paudel S, Falatko J, Shoichet S, et al. Comparison of accuracy of physical examination findings in initial progress notes between paper charts and a newly implemented electronic health record. J Am Med Inform Assoc. 2017;24(1):140–4. pmid:27357831
  55. 55. Brennan N, Mattick K. A systematic review of educational interventions to change behaviour of prescribers in hospital settings, with a particular emphasis on new prescribers. Br J Clin Pharmacol. 2013;75(2):359–72. pmid:22831632
  56. 56. Kamarudin G, Penm J, Chaar B, Moles R. Educational interventions to improve prescribing competency: a systematic review. BMJ Open. 2013;3(8):e003291. pmid:23996821
  57. 57. Norman G, Dore K, Krebs J, Neville AJ. The power of the plural: effect of conceptual analogies on successful transfer. Acad Med. 2007;82(10 Suppl):S16–8.
  58. 58. Pan SC, Rickard TC. Transfer of test-enhanced learning: Meta-analytic review and synthesis. Psychol Bull. 2018;144(7):710–56. pmid:29733621
  59. 59. Diemers AD, van de Wiel MW, Scherpbier AJ, Baarveld F, Dolmans DH. Diagnostic reasoning and underlying knowledge of students with preclinical patient contacts in PBL. Med Educ. 2015;49(12):1229–38. pmid:26611188
  60. 60. Thistlethwaite JE, Davies D, Ekeocha S, Kidd JM, MacDougall C, Matthews P, et al. The effectiveness of case-based learning in health professional education. A BEME systematic review: BEME Guide No. 23. Med Teach. 2012;34(6):e421–44. pmid:22578051