
Readiness to prescribe: Using educational design to untie the Gordian Knot

  • Ciara Lee ,

    Contributed equally to this work with: Ciara Lee, Richard McCrory, Mary P. Tully, Angela Carrington, Rosie Donnelly, Tim Dornan

    Roles Conceptualization, Data curation, Formal analysis, Writing – original draft, Writing – review & editing

    Current address: Department of General Practice and Rural Health, University of Otago, Dunedin, NZ

    Affiliation Centre for Medical Education, Queen’s University Belfast, Belfast, United Kingdom

  • Richard McCrory ,

    Contributed equally to this work with: Ciara Lee, Richard McCrory, Mary P. Tully, Angela Carrington, Rosie Donnelly, Tim Dornan

    Roles Formal analysis, Writing – review & editing

    Affiliation Centre for Medical Education, Queen’s University Belfast, Belfast, United Kingdom

  • Mary P. Tully ,

    Contributed equally to this work with: Ciara Lee, Richard McCrory, Mary P. Tully, Angela Carrington, Rosie Donnelly, Tim Dornan

    Roles Conceptualization, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Division of Pharmacy and Optometry, University of Manchester, Manchester, United Kingdom

  • Angela Carrington ,

    Contributed equally to this work with: Ciara Lee, Richard McCrory, Mary P. Tully, Angela Carrington, Rosie Donnelly, Tim Dornan

    Roles Investigation, Writing – review & editing

    Affiliation Belfast Health and Social Care Trust, Belfast, United Kingdom

  • Rosie Donnelly ,

    Contributed equally to this work with: Ciara Lee, Richard McCrory, Mary P. Tully, Angela Carrington, Rosie Donnelly, Tim Dornan

    Roles Investigation, Writing – original draft

    Affiliation South-Eastern Health and Social Care Trust, Belfast, United Kingdom

  • Tim Dornan

    Contributed equally to this work with: Ciara Lee, Richard McCrory, Mary P. Tully, Angela Carrington, Rosie Donnelly, Tim Dornan

    Roles Conceptualization, Data curation, Formal analysis, Writing – original draft, Writing – review & editing

    Affiliation Centre for Medical Education, Queen’s University Belfast, Belfast, United Kingdom


Abstract
Junior residents routinely prescribe medications for hospitalised patients with only arm's-length supervision, which compromises patient safety. A cardinal example is insulin prescribing, which is commonplace, routinely delegated to very junior doctors, difficult, potentially very dangerous, and getting no better. Our aim was to operationalise the concept of ‘readiness to prescribe’ by validating an instrument to quality-improve residents’ workplace prescribing education.


Guided by theories of behaviour change, implementation, and error, and by empirical evidence, we developed and refined a mixed-methods, 24-item evaluation instrument. We analysed numerical responses from Foundation Trainees (junior residents) in Northern Ireland, UK, using principal axis factoring, and conducted a framework analysis of participants’ free-text responses.


255 trainees participated (54% women, 46% men); 80% were in the second foundation year. The analysis converged on a 4-factor solution explaining 57% of the variance. Participants rated their capability to prescribe higher (79%) than their capability to learn to prescribe (69%; p<0.001), and rated the support to their prescribing education lower still (43%; p<0.001). The findings were similar in men and women, in first- and second-year trainees, and in different hospitals. Free-text responses described an unreflective type of learning from experience in which participants tended to 'get by' when faced with complex problems.


Operationalising readiness to prescribe as a duality, comprising residents’ capability and the fitness of their educational environments, demonstrated room for improvement in both. We offer the instrument to help clinical educators improve the two in tandem.

Introduction
From the moment they qualify, doctors carry out safety-critical tasks, which can compromise patient safety.[1] Prescribing exemplifies this. Foundation trainees (FTs: junior residents) write 70% of UK hospital prescriptions. The prevalence of errors in these prescriptions is 8% in first-year and 10% in second-year FTs.[1] Patients are not the only ones harmed: FTs are anxious about the possibility of making errors and can be ‘second victims’ when they make one.[2,3] Prescribing insulin poses a particularly great challenge because it is an everyday task whose error rate is even higher than for prescriptions in general.[4] These errors really do harm patients: hypoglycaemia and/or hyperglycaemia occur on four out of seven hospital days, and one in 25 patients develops ketoacidosis after admission.[4] Given that patients with diabetes occupy one in six United Kingdom (UK) hospital beds, this is a huge problem.[4] The propensity of new doctors to make costly errors, and the toll of those errors on both them and their patients, make learning to prescribe, particularly insulin, an important target for quality improvement.

We should not be surprised that these errors are made, because FTs consistently report feeling inadequately prepared to prescribe,[3,5–20] most particularly to prescribe insulin.[3] The education provided by medical schools contributes to FTs’ unpreparedness. Curricula and assessments focus mainly on acquiring knowledge and skills ‘off-the-job’,[14] which does not fully emulate the ‘on-the-job’ complexity of prescribing that leads to errors.[1,21] This type of education led, in one study, to more graduates rating themselves ready to write prescriptions than to prescribe safely.[22] Induction programmes and assistantships make the transition from off-the-job learning in medical school to on-the-job learning after qualification less stressful,[2,3] particularly when students are supported by pharmacists,[19] but evidence that assistantships improve prescribing safety is lacking.[16]

The net result is that new graduates undergo an abrupt transition from medical students, highly supervised and, in many countries, protected from workplace pressures, to FTs performing a key safety-critical task whilst fully exposed to workplace pressures under sometimes limited supervision. Confidence rises quickly so that, within just 8 months of qualification, 60% of FTs report feeling confident to prescribe insulin independently.[23] Their error rate, cited above, shows that their confidence is not always matched by their competence. FTs have been reported to use guesswork rather than knowledge and skills learned in medical school to guide prescribing and do not always check the accuracy of their prescriptions.[15] This conforms to a wry definition of clinical experience as ‘making the same mistakes with increasing confidence over an impressive number of years’.[24]

Current literature, in sum, shows limitations in the ability of undergraduate education to prepare new graduates to prescribe and to lay down patterns of safe practice when education shifts to practice-based learning. It shows a dearth of pedagogic research and development to guide learning during this all-important phase in FTs’ professional development. In order to advance the field, researchers have developed two instruments purporting to measure how well learners are prepared to prescribe: one for FTs;[8] and one for medical students.[20] One was psychometrically validated [20] but the other was not.[8] Both evaluate what students were taught but not what they learned. They do not have clear theoretical underpinnings and treat preparedness as a final outcome rather than a process that continues throughout doctors’ working lives. This creates a pressing need to theorise and operationalise readiness to prescribe.[16]

Scholars tell us that members of all trades and professions are readied to work by engaging in work activities, observing attentively, gaining supervised experience of the physical and social settings where work is done and, ultimately, being productive members of workforces.[25,26] They learn by participating in the talk of practice.[27] Learning is not a one-way process [19,28] because, as Lave and Wenger argued,[29] learners’ participation changes communities of practice as well as the reverse. These theoretical insights define readiness for practice as an attribute that cannot be fully assured by off-the-job education alone. It seems timely to examine how prescribing education can benefit from a conceptualisation, which acknowledges the intimate relationship between individual readiness and the readiness of work environments to support on-the-job learning.

Regehr urged education researchers to ‘represent complexity well’.[30] He advocated responding to the ‘imperative of understanding’ as opposed to the ‘imperative of proof’.[30] Shedding light on complex educational systems requires novel methodologies, of which design-based research (DBR) is one.[31] ‘Designs’ are programmatic approaches to complex problems, which researchers apply iteratively, evaluating their effects, progressively refining the designs, and thereby understanding and gaining traction on the problems. DBR harnesses the ‘observer effect’ according to which observing a phenomenon changes that phenomenon. It does not force dichotomies between observation and intervention, or between theory and practice. We chose insulin therapy as a cardinal example of unreadiness to prescribe and launched the first iteration of a design-based research programme to address it. We framed the research question: How can readiness to prescribe be conceptualised and operationalised? To answer the question, we designed, implemented, and evaluated a mixed methods investigative instrument.

Methods
Research ethics approval

The Joint Research Ethics Committee of the School of Medicine, Dentistry, and Biomedical Sciences of Queen’s University Belfast approved the project (approval no 17.33v3). The Committee judged that the project had limited potential to cause harm and did not require written consent because no personal details were recorded. The questionnaire had a cover page, which explained participants’ right not to complete it but invited them to give implicit consent by doing so.

Theoretical orientation

Since the goal of our programme of DBR was to understand and improve FTs’ clinical behaviour, we chose a leading behaviour change theory to guide the design of our investigative instrument: Michie and colleagues’ Capability-Opportunity-Motivation-Behaviour (COM-B) theory.[32] According to this, learners adopt desired behaviour when motivated to do so. There are two types of motivation: reflective (conscious) and automatic motivation (habit). Learners’ motivation is influenced by their psychological capability and the social opportunities their practice environments provide. Learners’ increasing capability exercises positive feedback on their learning environments by helping them get the best out of their fellow workers and available educational resources. COM-B is compatible with our theoretical orientation towards complexity because it regards behaviour change as an open system in which conditions and processes feed back on one another through a web of actions and interactions. Since patient safety depends on this, we regarded it as axiomatic that capability should mean prescribing when psychologically capable to do so independently and calling for supervisory help when not. Since receiving help on earlier occasions makes trainees more psychologically capable to prescribe independently on subsequent occasions, readiness is a shift towards increasingly independent behaviour rather than an endpoint.

Because our goal was to operationalise the concept of readiness, we drew insights from another theoretical framework that explains complex relationships between contexts and processes: the Consolidated Framework for Implementation Research (CFIR).[33] This provides empirically grounded constructs that predict the success of interventions and complement COM-B: reflective motivation in COM-B, for example, corresponds to self-efficacy in CFIR. We transcribed relevant constructs from COM-B and CFIR into an Excel (Microsoft, Redmond, USA) spreadsheet and cross-mapped them. In addition, we triangulated our interpretation of these theories against Reason’s theory of error causation [34] and empirical research into the causes of FTs’ prescribing errors.[21,35] The members of our team, which comprised a medicines governance pharmacist, a diabetes specialist pharmacist, a junior doctor, a senior doctor, and a lay member (administrator), reflexively clustered the items and progressively reduced them to a comprehensive set without obvious redundancy. In summary, we conceptualised readiness as a multi-dimensional construct representing learners’ capability and motivation to prescribe within the constraints and opportunities provided by the contexts in which they learn to practise, and/or to recruit additional resources, in order to assure patient safety and learn by doing so.

Setting
The project was conducted in Northern Ireland, a geographically bounded province of the UK, which has a population of 2 million people. Healthcare is provided by five Health and Social Care Trusts, in which 500 FTs are educated. The 2-year Foundation Programme is managed by a single Training Authority. Trainees complete 4–6 month hospital placements, including medicine, surgery, and a range of other secondary and tertiary care specialties. Some second-year FTs also gain experience in general practice.

Study design, recruitment, and participants

The study had a survey design and an opportunistic sampling strategy. All FTs were eligible to participate, without exclusion criteria; the goal was to recruit as many as possible from all five Trusts over a 12-month period. Despite team members assiduously attending locality teaching sessions, recruitment there was low because FTs attended teaching poorly. We therefore also recruited at regional quality improvement teaching, which is mandatory and attended by second-year FTs (FT2s) from all Trusts. A team member briefly explained the study during a drinks break or at the end of the session and non-coercively invited FTs to complete the questionnaire. She did not stand over them as they completed it, and the form required no personal details. The teaching sessions were not directly related to insulin safety. This recruitment process led, inescapably, to over-representation of FT2s in the sample. The ratio of FT1s to items (2.1:1) was too small for FT1s to be analysed as an independent sample, so we pooled all respondents for statistical analysis.

Questionnaire development
The spreadsheet containing cross-mapped constructs from the underpinning theories provided items for the questionnaire, which we refined during several further phases of piloting and revision. The questionnaire asked participants to rate their agreement with 24 short statements using anchored 7-point Likert scales. It also contained comment boxes, inviting participants to give free-text reasons for their numerical ratings. Guided by analysis of pilot data, we organised the numerical items into five clusters and placed a comment field after each of these. The instrument as used here (it has subsequently been refined in response to these findings; see Discussion) is included as S1 Fig.

Statistical analysis

Data cleaning and removal of multivariate outliers.

We excluded the responses of four participants who answered fewer than half the items. Three percent of the remaining scores were missing, which we imputed using the Expectation Maximisation algorithm in SPSS v25. This changed the mean value of only one item by 1%, which suggested imputation had not significantly distorted the findings. The Mahalanobis Distance technique at a critical alpha value of 0.001 [36] identified seven questionnaires with multivariate outliers. Those two stages of data cleaning reduced the dataset from 266 to 255 participants.
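The cleaning steps above can be sketched in Python using synthetic data. This is an illustrative sketch, not the authors' SPSS workflow: simple column-mean imputation stands in for SPSS's Expectation Maximisation algorithm, and the response matrix is randomly generated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for the 266 x 24 matrix of 7-point Likert responses.
X = rng.integers(1, 8, size=(266, 24)).astype(float)
X[rng.random(X.shape) < 0.03] = np.nan  # ~3% missing, as in the study

# Column-mean imputation as a simple stand-in for SPSS's EM algorithm.
col_means = np.nanmean(X, axis=0)
X = np.where(np.isnan(X), col_means, X)

# Squared Mahalanobis distance of each respondent from the centroid.
mu = X.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
centred = X - mu
d2 = np.einsum('ij,jk,ik->i', centred, cov_inv, centred)

# Flag multivariate outliers at critical alpha = 0.001 (chi-square, df = 24).
cutoff = stats.chi2.ppf(1 - 0.001, df=24)
outliers = d2 > cutoff
```

Respondents whose squared distance exceeds the chi-square critical value (df equal to the number of items) are the multivariate outliers that Tabachnick and Fidell's procedure [36] would remove.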

Exploratory factor analysis.

The Kaiser-Meyer-Olkin measure of sampling adequacy (0.75) and Bartlett’s test (p<0.001) suggested a factorable solution was possible for the 24 items. Since the items were theoretically linked and statistically inter-correlated (r values varying between 0.02–0.71 for bivariate correlations) analysis was by Principal Axis Factoring with oblique (Direct Oblimin) rotation. The scree plot showed an inflexion point below the fourth factor so we examined possible solutions around this factor value. For interpretive purposes, we considered an item relevant to a specific factor if the absolute value of the standardized loading was greater than 0.3, and the factor loading was at least 0.2 higher than other loadings. A four-factor solution accounted for the highest estimated variance in the sample and produced more substantial loadings for each variable than other solutions. We used Cronbach’s Alpha, inter-item and item-total correlations to examine the internal consistency of these subscales. There were three items whose alpha-if-deleted coefficients were greater than the alpha coefficient for the factor they loaded onto, which we deleted before computing the properties of a final 20-item version, whose items are shown in Table 1.
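The internal-consistency check described above can be illustrated with a short sketch. The data here are hypothetical (one latent trait driving five items), and the alpha-if-deleted screen mirrors the rule used in the study: an item is a deletion candidate when removing it raises alpha above the full-scale coefficient.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
# Synthetic subscale: five Likert-like items driven by one latent trait.
latent = rng.normal(size=(255, 1))
items = latent + rng.normal(scale=0.8, size=(255, 5))

alpha = cronbach_alpha(items)
# Alpha-if-deleted: recompute alpha leaving each item out in turn.
alpha_if_deleted = [cronbach_alpha(np.delete(items, j, axis=1))
                    for j in range(items.shape[1])]
```

With items this strongly driven by a single trait, alpha sits well above the conventional 0.7 threshold, and deleting any one item typically lowers it.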

Descriptive and correlation analysis.

The analysis had three final stages: 1) Calculating an aggregate mean of the items loading to each factor for each participant (expressed as percentage of the scale maximum), computing group grand means, and testing for differences between them using a Kruskal-Wallis test. The instrument includes several negatively-worded items, whose valence we reversed by subtracting the median and quartile values from 100. 2) Examining the relationship between those factors and year of training (FY1 or FY2) and sex. 3) Selecting the four hospitals with over 30 participants attached to them and analysing relationships between hospital and factor scores.
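Stages 1 and 2 above can be sketched as follows. The factor scores are hypothetical, and the percentage-of-scale-maximum transform shown is one plausible reading of that phrase for a 1–7 Likert scale, not the authors' exact formula.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical per-participant aggregate factor scores on the 1-7 scale.
capability = rng.uniform(5.0, 7.0, 255)   # e.g. 'capability to prescribe'
support = rng.uniform(2.0, 4.0, 255)      # e.g. 'support to education'

def to_pct(x):
    # One plausible 'percentage of scale maximum' for a 1-7 Likert scale.
    return (x - 1) / 6 * 100

cap_pct, sup_pct = to_pct(capability), to_pct(support)

# Reverse the valence of a negatively worded scale by subtracting from 100.
tensions_pct = 100 - to_pct(rng.uniform(2.0, 4.0, 255))

# Kruskal-Wallis test for a difference between factor-score distributions.
h_stat, p_value = stats.kruskal(cap_pct, sup_pct)
```

The Kruskal-Wallis test is non-parametric, so it compares the factor distributions by ranks rather than assuming the percentage scores are normally distributed.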

Qualitative analysis

One hundred and seventy-eight participants gave free-text responses to at least one item. Having entered these into a spreadsheet, the first author and senior author read them, jointly devised, and then applied a simple coding framework informed by our theorisation of readiness to prescribe. This included attributes of: FTs as individuals; their job; other individuals with whom they worked or were supervised; the culture in which they worked; and the tools that were available to support their work. Responses could code to more than one of those categories. The first author wrote a detailed precis of the contents of the table. She and the senior author then condensed the data across categories and domains.

Results
Two hundred and fifty-five FTs participated (54% women and 46% men; 80% FT2s and 20% FT1s). Participants worked in thirteen hospitals and several general practices, representing all five Trusts.

Principal axis factoring and quantitative comparisons (Table 1)

The analysis converged in 11 iterations on a 4-factor solution, explaining 57% of the variance in the data. Participants rated their ability to prescribe higher (79%) than their ability to learn to prescribe (69%; p<0.001). They rated the support to their prescribing education rather low (43%; p<0.001 compared with their capability to prescribe or their capability to learn).

The one negatively worded item—Tensions—was also low (33%), suggesting that tensions with doctors or other health professionals were not a main influence on participants’ prescribing education. We found no significant differences between the factor scales when comparing male and female participants, FT1s and FT2s, and hospitals.

The qualitative findings are summarised in Table 2.

What made FTs more capable

People and practice communities.

Supportive learning environments increased capability. Whilst the support of senior and specialist doctors was important, the expertise and supportive behaviour of diabetes specialist nurses (DSNs) was particularly valued. Their support was, however, least available out of hours when it was most needed and not always available even within working hours. Ward pharmacists were more a safety net than someone to consult with prior to prescribing. Participants sometimes valued patients’ involvement in shared decision-making, although confused and very unwell patients could not do this. Feedback was most helpful when it was constructive rather than critical, face to face, and directly related to individual prescriptions.

Experience.
Many participants said experience would make them more capable though they did not specify how it would do so.

Teaching.
Comments about teaching were equally unclear other than that participants needed help to develop a good working knowledge of different insulin types.

Tools.
Well-designed prescription charts and clear documentation of patients’ usual doses of insulin and changes to management plans increased capability. Whilst local protocols and guidelines increased capability, these were not always available when needed.

What made FTs less capable

Difficult clinical problems.

Clinical complexity made FTs less capable. There was an obvious contradiction between participants’ stated belief that experience would make them more capable, and their statements that difficult situations made them less capable because they were not well supported in managing these.

Being busy and under pressure.

Working in busy learning environments with heavy workloads, being pressured to prescribe quickly, frequent distractions, and insufficient time to engage with patients reduced participants’ capability. Being expected to prescribe away from patients’ bedsides made this worse and relationships with nurses became strained, particularly when nurses feared that prescribed doses would make patients hypoglycaemic. Poor documentation of patients’ insulin doses and management plans further reduced participants’ capability.

Unhelpful criticism.

Unconstructive or frankly destructive criticism reduced participants’ learning. Participants were unsure how to respond to this and were left with doubts about their own clinical judgement. They found it unhelpful when other prescribers changed their prescriptions without explaining why.

Copying what others had done.

Participants resolved their uncertainty about what to prescribe by copying earlier prescriptions written by peers. They were aware that this might perpetuate poor practice but, in the absence of expert supervision to hand, had nothing better to guide them.

What was missing

Feedback.
Participants would have appreciated constructive feedback but rarely received this: Feedback was least available for out-of-hours work, which left participants wondering if they were doing ‘the right thing’.

Other factors.

Praise was missing. Heavy workloads and time pressures prevented participants knowing what happened to patients they had prescribed for. Encouragement to reflect on prescribing was rare and usually only followed adverse events. Even when educational resources were available, time constraints limited their use. Participants might have made greater use of patients’ expertise. They chose only to discuss insulin prescriptions with ‘well informed’, ‘competent’ patients, ‘who normally look after their own regimen and are confident in doing so’. Some comments verged on dismissing patients’ roles in their own care, rather than having conversations with patients from which they might have learned: ‘I ask sensible patient what they want prescribed’; ‘Many patients don't understand their insulin’.

Discussion
The design of our investigative instrument was guided by conceptualising readiness to prescribe as an interaction between FTs and their learning environments, which leads them to prescribe when capable and recruit additional resources when not capable. The responses of 255 FTs provided validity evidence for the conceptualisation and preliminary evidence about participants’ prescribing education. They rated their readiness to prescribe higher than their readiness to learn to prescribe. They identified shortcomings in their learning environments–notably, a lack of support to critical reflection. Participants’ free text responses described an unreflective type of learning from experience in which they uncritically copied what others had done before and learned to 'get by' when faced with complex problems unsupported. Workload pressures, for example being presented with several prescription charts away from the bedside and being expected to prescribe quickly without assessing patients, coupled with pressure not to make patients hypoglycaemic, may have encouraged unreflective behaviour.

The findings are important, given that the first two years of practice play a crucial role in doctors' prescribing education. They show that our novel instrument meets several validity criteria.[37] Its items were derived from robust conceptual frameworks and bear a logical relationship to the domain being measured. The response process left relatively few numerical items missing, which could be compensated for by imputation. The internal structure had acceptable reliability. We have not yet shown a relationship between our measure and other variables and cannot yet say what impact it will have, though using best available theory to guide its design increases the likelihood that it will have consequential validity. The inclusion of free-text items also contributes to the consequential validity of the instrument by identifying ways of improving learning environments, as well as measuring their quality. The quantitative findings identified possibilities for improvement, the most important of which is to foster a positive educational culture that values good prescribing, encourages constructive feedback and learning, and promotes greater collaboration with patients and fellow professionals.

Limitations
One limitation was the relative under-representation of FT1s, which prevented us exploring differences between them and FT2s. Our earlier research, however, suggests that the causes of error are very similar in FT1s and FT2s.[1] The lack of difference between hospitals may suggest the instrument is insensitive. An alternative explanation is the relative homogeneity of the research setting, within a single UK region served by a single deanery and single medical school. These may not, however, represent foundation trainees more widely. Another limitation was using relatively fragmentary qualitative data, rather than in-depth analysis of interviews or focus groups.

Implications
The main implication is for healthcare quality improvement. The instrument addresses an important problem, has been rigorously validated in the contexts where it will be applied, and is therefore fit for wider implementation and evaluation. It has already identified ways of improving foundation trainees' practice and learning to provide safe and effective insulin therapy, which could have significant impact on patient safety. Since measuring the status quo, of itself, tends to change the status quo, we suggest the instrument should be used to audit FTs’ readiness to prescribe insulin in further DBR cycles. To broaden the applicability of this research, we are now evaluating a 20-item version of the instrument that is not specific to insulin and can evaluate the readiness of nurses and pharmacists, as well as doctors, to prescribe.

Conclusions
We conclude, pending further research, that there is value in conceptualising readiness to prescribe as a complex construct, which comprises the attributes of individuals, features of their learning environments, and interactions between the two. We have provided validity evidence for a set of numerical items that explore interrelationships between these factors, and free-text items that provide information to guide improvement efforts. This research suggests that the Gordian knot of prescribing error might be untied by paying at least as much attention to the social and material environments in which FTs learn to prescribe as to training them individually in prescribing skills. Our findings show room for improvement in the prescribing cultures in which FTs learn and a need to encourage a more reflective approach to foundation education.

Supporting information

Acknowledgments
We thank the participants for their time and thoughtful answers.

References
  1. Dornan T, Ashcroft D, Heathfield H, Lewis P, Miles J, Taylor D, et al. An in depth investigation into causes of prescribing errors by foundation trainees in relation to their medical education. EQUIP study. London: General Medical Council; 2009.
  2. Berridge EJ, Freeth D, Sharpe J, Roberts CM. Bridging the gap: Supporting the transition from medical student to practising doctor—A two-week preparation programme after graduation. Med Teach. 2007;29: 119–127. pmid:17701621
  3. Van Hamel C, Jenner LE. Prepared for practice? A national survey of UK foundation doctors and their supervisors. Med Teach. 2015;37: 181–188. pmid:25155154
  4. NaDIA advisory group. National Diabetes Inpatient Audit England and Wales. 2018. Available:
  5. Clack GB. Medical graduates evaluate the effectiveness of their education. Med Educ. 1994;28: 418–431. pmid:7845261
  6. Jones A, McArdle PJ, O’Neill PA. How well prepared are graduates for the role of pre-registration house officer? A comparison of the perceptions of new graduates and educational supervisors. Med Educ. 2001;35: 578–584. pmid:11380861
  7. Jones A, McArdle PJ, O’Neill PA. Perceptions of how well graduates are prepared for the role of preregistration house officer: a comparison of outcomes from a traditional and an integrated PBL curriculum. Med Educ. 2002;36: 16–25. pmid:11849520
  8. Han WH, Maxwell SR. Are medical students adequately trained to prescribe at the point of graduation? Scott Med J. 2006;51: 27–32.
  9. Heaton A, Webb DJ, Maxwell SRJ. Undergraduate preparation for prescribing: The views of 2413 UK medical students and recent graduates. Br J Clin Pharmacol. 2008;66: 128–134. pmid:18492128
  10. Illing J, Morrow G, Kergon C, Burford B, Spencer J, Peile E, et al. How prepared are medical graduates to begin practice? A comparison of three diverse UK medical schools. Rep to Educ Comm Gen Med Counc. 2008. Available:
  11. Tallentire VR, Smith SE, Skinner J, Cameron HS. Understanding the behaviour of newly qualified doctors in acute care contexts. Med Educ. 2011;45: 995–1005. pmid:21916939
  12. Tallentire VR, Smith SE, Skinner J, Cameron HS. The preparedness of UK graduates in acute care: A systematic literature review. Postgrad Med J. 2012;88: 365–371. pmid:22167809
  13. Illing JC, Morrow GM, Rothwell nee Kergon CR, Burford BC, Baldauf BK, Davies CL, et al. Perceptions of UK medical graduates’ preparedness for practice: a multi-centre qualitative study reflecting the importance of learning on the job. BMC Med Educ. 2013;13: 34. pmid:23446055
  14. Burford B, Whittle V, Vance GHS. The relationship between medical student learning opportunities and preparedness for practice: a questionnaire study. BMC Med Educ. 2014;14: 223. pmid:25331443
  15. Kellett J, Papageorgiou A, Cavenagh P, Salter C, Miles S, Leinster SJ. The preparedness of newly qualified doctors—Views of Foundation doctors and supervisors. Med Teach. 2015;37: 949–954. pmid:25308805
  16. Monrouxe LV, Grundy L, Mann M, John Z, Panagoulas E, Bullock A, et al. How prepared are UK medical graduates for practice? A rapid review of the literature 2009–2014. BMJ Open. 2017;7: e013656. pmid:28087554
  17. Prince KJAH, Boshuizen HPA, van der Vleuten CPM, Scherpbier AJJA. Students’ opinions about their preparation for clinical practice. Med Educ. 2005;39: 704–712. pmid:15960791
  18. Prince CJAH. Problem-based learning as a preparation for professional practice. Maastricht: Universitaire Pers Maastricht; 2006.
  19. Noble C, Billett S. Learning to prescribe through co-working: junior doctors, pharmacists and consultants. Med Educ. 2017;51: 442–451. pmid:28164385
  20. Lai PSM, Sim SM, Chua SS, Tan CH, Ng CJ, Achike FI, et al. Development and validation of an instrument to assess the prescribing readiness of medical students in Malaysia. BMC Med Educ. 2015;15: 153. pmid:26391883
  21. Lewis PJ, Ashcroft DM, Dornan T, Taylor D, Wass V, Tully MP. Exploring the causes of junior doctors’ prescribing mistakes: a qualitative study. Br J Clin Pharmacol. 2014;78: 310–319. pmid:24517271
  22. Bleakley A, Brennan N. Does undergraduate curriculum design make a difference to readiness to practice as a junior doctor? Med Teach. 2011;33: 459–467. pmid:21609175
  23. Tobaiqy M, McLay J, Ross S. Foundation year 1 doctors and clinical pharmacology and therapeutics teaching. A retrospective view in light of experience. Br J Clin Pharmacol. 2007;64: 363–372. pmid:17506779
  24. O’Donnell M. A sceptic’s medical dictionary. London: BMJ Publishing Group; 1997.
  25. Eraut M. Informal learning in the workplace. Stud Contin Educ. 2004;26: 247–273.
  26. Billett S. Mimetic learning at work: Learning in the circumstances of practice. Dordrecht: Springer; 2014.
  27. Eppich WJ, Dornan T, Rethans J-J, Teunissen P. “Learning the Lingo”: A Grounded Theory Study of Telephone Talk in Clinical Education. Acad Med. 2019; Epub ahead of print. pmid:30893065
  28. Billett S. Relational Interdependence Between Social and Individual Agency in Work and Working Life. Mind Cult Act. 2006;13: 53–69.
  29. Lave J, Wenger E. Situated learning: Legitimate peripheral participation. Cambridge: Cambridge University Press; 1991.
  30. Regehr G. It’s NOT rocket science: rethinking our metaphors for research in health professions education. Med Educ. 2010;44: 31–39. pmid:20078754
  31. Dolmans DHJM, Tigelaar D. Building bridges between theory and practice in medical education using a design-based research approach: AMEE Guide No. 60. Med Teach. 2012;34: 1–10. pmid:22250671
  32. Michie S, van Stralen MM, West R, Grimshaw J, Shirran L, Thomas R, et al. The behaviour change wheel: A new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6: 42. pmid:21513547
  33. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4: 40–55.
  34. Reason J. Human error. Cambridge: Cambridge University Press; 1990.
  35. Tully MP, Ashcroft DM, Dornan T, Lewis PJ, Taylor D, Wass V. The causes of and factors associated with prescribing errors in hospital inpatients: A systematic review. Drug Saf. 2009;32: 819–836. pmid:19722726
  36. Tabachnick B, Fidell L. Using Multivariate Statistics. 5th ed. New York: Allyn and Bacon; 2007.
  37. Downing S. Validity: on the meaningful interpretation of assessment data. Med Educ. 2003;37: 830–837. pmid:14506816