Abstract
This study aims to compare the effectiveness of Hybrid and Pure problem-based learning (PBL) in teaching clinical reasoning skills to medical students. The study sample consisted of 99 medical students participating in a clerkship rotation at the Department of General Medicine, Chiba University Hospital. They were randomly assigned to a Hybrid PBL group (intervention group, n = 52) or a Pure PBL group (control group, n = 47). The quantitative outcomes were students’ perceived competence in PBL, satisfaction with sessions, and self-evaluation of competency in clinical reasoning. The qualitative component consisted of a content analysis of the benefits of learning clinical reasoning through Hybrid PBL. There was no significant difference between the intervention and control groups in the five items of students’ perceived competence or in satisfaction with sessions. In a two-way repeated-measures analysis of variance, self-evaluation of competency in clinical reasoning improved significantly in the intervention group for "recalling appropriate differential diagnosis from the patient’s chief complaint" (F(1,97) = 5.295, p = 0.024) and "practicing the appropriate clinical reasoning process" (F(1,97) = 4.016, p = 0.038). In multiple comparisons, the scores for "recalling appropriate history, physical examination, and tests on clinical hypothesis generation" (F(1,97) = 6.796, p = 0.011), "verbalizing and reflecting appropriately on own mistakes" (F(1,97) = 4.352, p = 0.040), "selecting keywords from the whole aspect of the patient" (F(1,97) = 5.607, p = 0.020), and "examining the patient while visualizing his/her daily life" (F(1,97) = 7.120, p = 0.009) were significantly higher in the control group. In the content analysis, 13 advantage categories of Hybrid PBL were extracted.
Among the subcategories, "acquisition of knowledge" was the most frequent, followed by "leading the discussion," "smooth discussion," "getting feedback," "timely feedback," and "supporting the clinical reasoning process." Hybrid PBL can help students acquire practical knowledge and deepen their understanding of clinical reasoning, whereas Pure PBL can improve several important skills such as verbalizing and reflecting on one’s own errors and selecting appropriate keywords from the whole aspect of the patient.
Citation: Ishizuka K, Shikino K, Tamura H, Yokokawa D, Yanagita Y, Uchida S, et al. (2023) Hybrid PBL and Pure PBL: Which one is more effective in developing clinical reasoning skills for general medicine clerkship?—A mixed-method study. PLoS ONE 18(1): e0279554. https://doi.org/10.1371/journal.pone.0279554
Editor: Somayeh Delavari, Iran University of Medical Sciences, ISLAMIC REPUBLIC OF IRAN
Received: July 17, 2022; Accepted: December 9, 2022; Published: January 23, 2023
Copyright: © 2023 Ishizuka et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting Information files.
Funding: The authors received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Clinical reasoning is a core competency that all healthcare professionals must develop [1,2]. However, it can be challenging for beginner students because of their inadequate knowledge, poor data collection, and inappropriate approaches to information processing [1]. As such, active feedback and coaching by teachers should be tailored to individual stages of learning [3,4].
Problem-based learning (PBL) has been introduced into graduate medical education as a learner-centered educational strategy to develop clinical reasoning skills [5–8], where small groups (usually 5 to 8 people) study independently under the supervision of a tutor [5,6]. In addition, PBL using simulated patients and video materials enables free conception and direct non-verbal communication through visual and auditory information [8]. Therefore, learners can easily visualize real patients and develop the comprehensive approach required in actual clinical situations [8]. However, PBL faculty have often witnessed medical students getting stuck on a problem, which stalled the discussion [9]. In these situations, some studies have argued that a session with a faculty member can effectively solve the shortcomings of PBL [10,11].
Hybrid PBL, in which PBL is supplemented by lectures according to the student’s knowledge, may be a more effective and efficient alternative to PBL alone (Pure PBL) [10,11]. While the usefulness of Hybrid PBL for clinical practice in the field of neurology has been reported, there is no study to date comparing Hybrid PBL and Pure PBL in clinical reasoning education during a general medicine clerkship [10].
The purpose of this study is thus to compare the effectiveness of Hybrid PBL and Pure PBL in teaching clinical reasoning skills to medical students.
Methods
Ethics statement
This research was performed in accordance with the Declaration of Helsinki and approved by the Ethics Review Committee of the Chiba University Graduate School of Medicine (Chiba, Japan). The procedures for informed consent were explained to the medical students. Students who did not consent were allowed to take the conventional Pure PBL sessions instead. Even though the researchers also taught in the class, it was made clear to the medical students that this study would not affect their grade evaluations. This study is registered in the University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR) (UMIN000045861).
Trial design
Hybrid PBL was conducted in the intervention group, and Pure PBL in the control group. The quantitative data in this study comprised students’ perceived competence in PBL and satisfaction with sessions after the educational intervention, and self-evaluation of competency in clinical reasoning before and after the educational intervention, in both groups. In the clinical clerkship, participants rotated in groups assigned by the university before this study, and these groups were equally allocated to the intervention or control arm. To improve reporting quality, we conducted a cluster randomized controlled trial based on the CONSORT 2010 statement (S1 Fig) [12]. To integrate quantitative and qualitative assessments, we designed a mixed-method sequential explanatory study [13–15]. In the intervention group, we qualitatively evaluated the advantages of learning clinical reasoning via Hybrid PBL using focus group interviews (FGIs) after PBL completion, to examine the cognitive aspects of the medical students.
Participants
This study was conducted at a single facility in the Department of General Medicine, Chiba University Hospital in Japan. The study included 109 fifth-year medical students at the Chiba University School of Medicine who participated in a clinical clerkship at our department from January 2020 to October 2021. This study was embedded into a clinical clerkship rotation in the department; thus, the participants were not sampled randomly. In the clinical clerkship, 5–6 medical students per group rotated through the department every two weeks. A total of 20 groups rotated during the study period, and each group was exposed to PBL in one case. All subjects had already received lectures and simulation training in basic and clinical medicine by the fourth year and had passed computer-based testing and objective structured clinical examinations before their clinical clerkship. Additionally, all students had experienced Pure PBL, since it is integrated into Chiba University’s curriculum from the first to the fourth year of medical school. Students who were unable to participate in one of the conferences for any reason were excluded from the study. We had to suspend this study because of the COVID-19 pandemic, so we analyzed only the data gathered up to that point.
Intervention
In the control group (Pure PBL group), students learned independently under the supervision of a tutor, without any intervention from a faculty member [16,17]. In the intervention group (Hybrid PBL group), the tutor was allowed to help the target students understand clinical discussions beyond their own medical knowledge, to help them solve problems, and to answer "learner-centered" questions whenever the students were stuck or overlooked the key learning issues in a scenario [18,19].
Tutors were randomly selected from 12 faculty members in our department who were PGY (postgraduate year)-4 or above (median PGY-7.5 [6–24]), with one faculty member assigned per group. Before the PBL, each teacher was provided a lecture on PBL and their role as tutors as well as some handouts. All instructions were standardized and given immediately before the PBL. Each tutor was skilled enough to explain the process of clinical reasoning to the target students.
PBL was divided into two sessions per case, with each session lasting approximately two hours (S1 Table). In the first session, history and physical examination findings were presented. In the second session, examination findings, imaging findings, and treatment plans were presented. Each group was equally assigned to one of five cases in the study groups. The five cases were Fitz-Hugh-Curtis syndrome, panic disorder, subacute combined degeneration of the spinal cord, primary amyloidosis, and sleep apnea syndrome. They present challenges in pattern recognition and can help examine analytical and diagnostic reasoning skills. In Japan, the fourth version of the national core curriculum for undergraduate medical education in 2016 introduced a new list of possible diagnoses for 37 common signs, symptoms, and pathophysiology that ought to be learned as part of the six-year undergraduate curriculum [20]. Regarding these common signs, students must acquire the competence to anticipate a set of differential diagnoses from the earliest phase of the diagnostic process, gather confirming and refuting information according to an initial hypothesis, select and perform the relevant history taking and physical examination, and interpret the findings to confirm or deny the initial hypothesis [20,21]. The five cases were selected from those with any one of the 37 common signs that appeared on the Japanese National Medical Examination questions. A patient-simulated video and paper-based scenario assignment were prepared for each case [8]. The patient-simulated videos were produced based on the theme of case investigations. In addition to medical interview scenes in the examining rooms, they also showed the symptoms displayed in the patient’s daily life. It was shown clearly to the students on a display screen. In addition, physical findings, examination findings, imaging findings, and treatment plans were distributed as paper materials.
Outcome measures
1) Students’ perceived competence in PBL and satisfaction with sessions.
The students’ perceived competence in PBL and satisfaction with sessions after educational intervention were compared between the intervention and control groups.
After the PBL, in addition to the five items of students’ perceived competence in PBL at the university (1. development of knowledge structure for use in clinical contexts, 2. development of an effective clinical reasoning process, 3. development of effective self-directed learning skills, 4. provision of encouragement and motivation for learning, and 5. development of team skills), item 6, satisfaction with sessions, was also surveyed (S2 Table). The five items of perceived competence in PBL were determined by the faculty members of the Medical Education Laboratory of Chiba University based on the report of Barrows et al [5].
The data were collected using a self-administered questionnaire with 7-point Likert scales, anchored as follows: 1 = very poor to 7 = very good for students’ perceived competence in PBL, and 1 = not very satisfied to 7 = very satisfied for satisfaction with sessions.
2) Self-evaluation of competency in clinical reasoning.
Before and after the educational intervention, we investigated the self-evaluation of competency in clinical reasoning using the following eight items: 7. recalling appropriate history, physical examination, and tests on clinical hypothesis generation, 8. recalling appropriate differential diagnosis from the patient’s chief complaint, 9. verbalizing points that fit/don’t fit the recalled differential diagnosis appropriately, 10. verbalizing and reflecting appropriately on own mistakes, 11. selecting keywords from the whole aspect of the patient, 12. examining the patient while visualizing his/her daily life, 13. considering biological, psychological, and social perspectives, and 14. practicing the appropriate clinical reasoning process (S2 Table). The self-evaluation of competency in clinical reasoning investigated in this study was based on the report of Cooper et al. and was determined through discussions among the faculty members of our department [3].
The data were collected using a self-administered questionnaire with a 7-point Likert scale, anchored from 1 = not at all confident to 7 = very confident for self-evaluation of competency in clinical reasoning.
Sample size
Because this study also served as an educational program for a clinical clerkship in our department, medical students who rotated at the beginning of the study and all medical students who would rotate thereafter in the same year were included. For the quantitative data, the sample size required for a two-tailed t-test of the difference between the means of the two groups was calculated to be 128 students in total (64 per group), assuming a significance level of 0.05, a power of 0.8, and an effect size of 0.5. For a Mann-Whitney U test at the same significance level, power, and effect size, the required sample size was 67 per group, for a total of 134. However, we had to suspend this study because of the COVID-19 pandemic and analyzed only the data gathered up to that point. Therefore, a total of 109 students across 20 groups were considered.
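The reported figure of 64 students per group can be reproduced from the standard two-sample power formula. The sketch below is illustrative only, not the authors’ actual calculation (they likely used dedicated software); it applies the normal-approximation formula with Guenther’s small-sample correction for a two-tailed two-sample t-test, under the stated assumptions (α = 0.05, power = 0.8, Cohen’s d = 0.5).

```python
from math import ceil
from statistics import NormalDist

def t_test_sample_size(d: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Per-group n for a two-tailed two-sample t-test with effect size d.

    Uses the normal approximation plus Guenther's (1981) correction term
    z_a**2 / 4, which matches the exact t-test result to within one subject.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.960 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # ~0.842 for power = 0.80
    n = 2 * (z_a + z_b) ** 2 / d ** 2 + z_a ** 2 / 4
    return ceil(n)

print(t_test_sample_size(0.5))  # 64 per group, i.e. 128 in total
```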
Randomization
The 20 eligible groups were allocated to the intervention or control groups in an equal number of groups. The groups were allocated through cluster randomization using Microsoft Excel (Microsoft Co., Redmond, WA, USA). The allocation was not blinded to the medical students or the faculty.
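The cluster randomization was performed in Microsoft Excel; a minimal equivalent sketch in Python (hypothetical, for illustration only) shuffles the 20 group identifiers and splits them evenly between the two arms:

```python
import random

def cluster_randomize(group_ids, seed=None):
    """Randomly split clusters (student groups) into two equal-sized arms."""
    ids = list(group_ids)
    random.Random(seed).shuffle(ids)  # seed allows a reproducible allocation
    half = len(ids) // 2
    return ids[:half], ids[half:]     # (intervention arm, control arm)

# 20 clerkship groups -> 10 intervention, 10 control
intervention, control = cluster_randomize(range(1, 21), seed=0)
print(len(intervention), len(control))  # 10 10
```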
Statistical method
All statistical analyses were performed using SPSS Statistics for Windows 26.0 (IBM Co., Armonk, NY, USA) with a significance level of less than 5%. Sex and age were analyzed using the chi-square test and Mann-Whitney U test, respectively. The results of the questionnaires on students’ perceived competence in PBL and satisfaction with sessions after the educational intervention were compared between the intervention and control groups using an unpaired t-test (Fig 1). Regarding self-evaluation of competency in clinical reasoning, we conducted an analysis of covariance (ANCOVA) between the intervention and control groups, with the score before the educational intervention as a covariate and the method of PBL as a fixed factor. In addition, to examine whether self-evaluation of competency in clinical reasoning changed between the intervention and control groups, we conducted a two-way repeated-measures analysis of variance (ANOVA), with the time of implementation as the within-subjects factor and the method of PBL as the between-subjects factor.
Focus group interview
Qualitative evaluation was conducted following the quantitative evaluation to assess the acquisition of higher-order intellectual skills, and the results were integrated as a mixed-method sequential explanatory study [13–15]. Because all of the target students had experienced Pure PBL by their fourth year but not Hybrid PBL, the medical students in the intervention group were selected as the sampling group for our qualitative survey. We qualitatively evaluated the advantages of learning clinical reasoning via Hybrid PBL using focus group interviews (FGIs) after PBL completion to examine the cognitive aspects of the medical students, which are thought to influence the learning effectiveness of clinical reasoning by PBL. After obtaining informed consent from medical students in the intervention group, we conducted each FGI face-to-face immediately after completion of the PBL; each lasted about 10 minutes to minimize participants’ fatigue. In the FGI, the advantages of Hybrid PBL in improving clinical reasoning skills were investigated. The aims of the study were reviewed at the beginning of each focus group. The interview content was discussed among the interviewers (HT, KS), and an interview guide was developed (S3 Table). Medical students were asked the following open-ended questions: "What are the advantages of Hybrid PBL? Why do you think so?" The responses of the focus groups were recorded and transcribed verbatim. One faculty member administered the FGI to a group of 5–6 medical students immediately after they filled out the post-PBL questionnaires. The FGI sample comprised 52 students (10 groups), covering all intervention groups. The FGI interviewers were trained facilitators from the faculty (KI, KS, YH, YY, JK, SU, YY, DY, ES, NH, SY, TT, KN) who were in charge of PBL. They had experience in higher education and had previously conducted educational research. The FGI interviewers did not share their personal attitudes and behaviors.
A team debrief was held after each focus group session, and field notes were documented. There were no repeat interviews. Participants were not asked for feedback nor to review transcripts.
For the qualitative research, content analysis was used to analyze the themes (Table 1) [22,23]. A preliminary analytic template, aligned with the focus group guide, was developed as a starting point for analysis. Two researchers (KI, HT) independently read and performed the initial coding of all focus group transcripts. Thereafter, to ensure the quality of the research, researcher triangulation was conducted by the two researchers (KI, HT), who discussed, identified, and agreed on the coding of the descriptors. Following the coding, similar codes were grouped into categories and subcategories, which were regularly discussed and reviewed by KS (who had experience in qualitative research) to ensure the credibility of the findings. Cohen’s kappa coefficient was used to evaluate the inter-rater reliability between the two researchers (0.8–1.0 = almost perfect, 0.6–0.8 = substantial, 0.4–0.6 = moderate, 0.2–0.4 = fair) [24]. The consolidated criteria for reporting qualitative research (COREQ) checklist was used to report our findings [25].
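Cohen’s kappa for two coders can be computed directly from the paired code assignments: observed agreement minus chance agreement, scaled by the maximum possible improvement over chance. The sketch below is generic (not the authors’ tooling) and assumes nominal categories and a shared set of coded items.

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa for two raters labelling the same items (nominal codes)."""
    assert len(coder1) == len(coder2), "raters must label the same items"
    n = len(coder1)
    # Observed agreement: proportion of items both raters coded identically.
    p_observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Chance agreement: summed product of each rater's marginal proportions.
    freq1, freq2 = Counter(coder1), Counter(coder2)
    p_expected = sum(freq1[c] * freq2[c] for c in freq1) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Two coders labelling six transcript segments with codes A/B:
print(cohens_kappa(list("AABBAB"), list("AABAAB")))  # ~0.667 (substantial)
```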
To examine the effects of Hybrid PBL on the cognitive processes of medical students, the analytic categories were set according to the six cognitive process dimensions of the revised Bloom’s taxonomy [26]. After open coding, similar codes were classified into subcategories and categories. We analyzed the concepts in each of the six cognitive process dimensions in the revised Bloom’s taxonomy and calculated the number of analysis units for each concept [26]. We also grouped similar codes as themes and checked which dimension of the cognitive process dimension they correspond to.
Mixed methods research
To integrate the quantitative and qualitative assessments, a mixed methods study was conducted with an explanatory sequential design [13–15]. This study design capitalizes on the respective strengths of quantitative and qualitative approaches while minimizing their respective shortcomings. Furthermore, it allows the researchers to understand the experimental results better while incorporating the medical students’ perspectives. This approach follows the advice of the National Institutes of Health, which advocates mixed-method research “to improve the quality and scientific power of data” and to better address the complexity of issues in health science education today [22,27].
Results
Participant characteristics
Consent was obtained from all 109 participating students. Ninety-nine students (90.8%) were included in the quantitative evaluation; the ten students who missed at least one PBL session were excluded. Fifty-two participants (10 groups) were assigned to the intervention group and 47 participants (10 groups) to the control group. Comparison between the two groups showed no significant difference in age or sex (Table 2).
1) Students’ perceived competence in PBL and satisfaction with sessions.
Students’ perceived competence in PBL after the educational intervention and satisfaction with sessions did not differ significantly between the two groups (Table 3).
2) Self-evaluation of competency in clinical reasoning.
Table 4 shows the baseline self-evaluation of competency in clinical reasoning before the educational intervention in the intervention and control groups. Before the PBL, the scores for item 7. recalling appropriate history, physical examination, and tests on clinical hypothesis generation (3.04 ± 1.27 vs. 2.42 ± 0.92, p = 0.006), item 8. recalling appropriate differential diagnosis from the patient’s chief complaint (3.06 ± 1.28 vs. 2.54 ± 0.96, p = 0.022), item 9. verbalizing points that fit/don’t fit the recalled differential diagnosis appropriately (3.15 ± 1.32 vs. 2.65 ± 1.03, p = 0.039), and item 14. practicing the appropriate clinical reasoning process (2.91 ± 1.25 vs. 2.31 ± 0.96, p = 0.008) were significantly higher in the control group. There was no significant difference between the intervention and control groups in the following items: 10. verbalizing and reflecting appropriately on own mistakes, 11. selecting keywords from the whole aspect of the patient, 12. examining the patient while visualizing his/her daily life, and 13. considering biological, psychological, and social perspectives.
In ANCOVA of the self-evaluation of competency in clinical reasoning after the educational intervention, there was no significant difference between the intervention and the control groups in all items (Table 5).
In two-way ANOVA of self-evaluation of competency in clinical reasoning, a significant interaction effect was shown in item 8. recalling appropriate differential diagnosis from the patient’s chief complaint (F(1,97) = 5.295, p = 0.024) and item 14. practicing the appropriate clinical reasoning process (F(1,97) = 4.016, p = 0.038) (Table 6). There was no significant interaction effect between the intervention and the control groups in items 7. recalling appropriate history, physical examination, and tests on clinical hypothesis generation, 9. verbalizing points that fit/don’t fit the recalled differential diagnosis appropriately, 10. verbalizing and reflecting appropriately on own mistakes, 11. selecting keywords from the whole aspect of the patient, 12. examining the patient while visualizing his/her daily life, and 13. considering biological, psychological, and social perspectives (Table 6). According to multiple comparisons, the control group showed significantly better results in item 7. recalling appropriate history, physical examination, and tests on clinical hypothesis generation (F(1,97) = 6.796, p = 0.011), item 10. verbalizing and reflecting appropriately on own mistakes (F(1,97) = 4.352, p = 0.040), item 11. selecting keywords from the whole aspect of the patient (F(1,97) = 5.607, p = 0.020), and item 12. examining the patient while visualizing his/her daily life (F(1,97) = 7.120, p = 0.009) (Table 6). In addition, the intervention and control groups showed significant improvement in self-evaluation of competency in clinical reasoning after the implementation of PBL (Table 6).
Content analysis
All 52 medical students (94.5%) in the intervention group consented to participate in the FGI immediately after the completion of the PBL and were included in the questionnaire analysis. After analyzing the records of the 10 focus groups, we confirmed that we had reached thematic saturation. Table 7 shows the categories, subcategories, number of codes, and representative quotations in the present data. Regarding the advantages of the intervention group, we identified 13 categories and 27 subcategories that correspond to four stages of the revised Bloom’s taxonomy (remember, understand, apply, and analyze) [26]. The most frequent cognitive domain in the revised Bloom’s taxonomy was "understand." Among the subcategories of the content analysis, "acquisition of knowledge" was the most frequent, followed by "leading the discussion," "smooth discussion," "getting feedback," "timely feedback," and "supporting the clinical reasoning process." Inter-rater reliability was almost perfect (Cohen’s kappa = 0.81).
Discussion
This study suggests that Hybrid PBL and Pure PBL each have their own advantages in helping medical students acquire clinical reasoning skills. There was no significant difference between the intervention and control groups in the five items of students’ perceived competence in PBL or in satisfaction with sessions. In the ANCOVA of self-evaluation of competency in clinical reasoning after the educational intervention, there was no significant difference between the intervention and control groups in any item. Meanwhile, the two-way ANOVA of self-evaluation of competency in clinical reasoning before and after the educational intervention showed that the intervention group significantly improved in items 8. recalling appropriate differential diagnosis from the patient’s chief complaint and 14. practicing the appropriate clinical reasoning process. However, according to multiple comparisons, the control group still scored significantly higher than the intervention group in items 7. recalling appropriate history, physical examination, and tests on clinical hypothesis generation, 10. verbalizing and reflecting appropriately on own mistakes, 11. selecting keywords from the whole aspect of the patient, and 12. examining the patient while visualizing his/her daily life. The difference in results between the ANCOVA and the two-way ANOVA was likely attributable to the small number of participants and to baseline differences between the groups.
There was no significant difference between the intervention and control groups in the five items of students’ perceived competence in PBL or in satisfaction with sessions. This suggests that the intervention group achieved the original goals of PBL reported by Barrows et al. to the same extent as the control group [5].
The two-way ANOVA of self-evaluation of competency in clinical reasoning before and after the educational intervention showed that the intervention group significantly improved in items 8. recalling appropriate differential diagnosis from the patient’s chief complaint and 14. practicing the appropriate clinical reasoning process. Furthermore, among the subcategories of the content analysis, "acquisition of knowledge" was the most frequent, followed by "leading the discussion," "smooth discussion," "getting feedback," "timely feedback," and "supporting the clinical reasoning process." This indicates that acquiring practical knowledge and deepening understanding through timely and effective feedback may be an advantage of Hybrid PBL. The reported disadvantages of Pure PBL are that it is ineffective for medical students with low self-learning ability; it increases the variability of learning outcomes among medical students; effective discussion is difficult when medical students do not understand the concept of clinical reasoning; it does not allow students to learn knowledge systematically; it does not provide deep and practical knowledge; and students tend to miss important topics [10]. The qualitative analysis in this study extracted categories and codes addressing these shortcomings as advantages of Hybrid PBL. The integration of quantitative and qualitative data thus points to an advantage of Hybrid PBL: it enhances the acquisition of practical knowledge and understanding in the process of clinical reasoning through timely and effective feedback from the tutors.
Based on the two-way ANOVA of self-evaluation of competency in clinical reasoning, the control group scored higher than the intervention group in items 7. recalling appropriate history, physical examination, and tests on clinical hypothesis generation, 10. verbalizing and reflecting appropriately on own mistakes, 11. selecting keywords from the whole aspect of the patient, and 12. examining the patient while visualizing his/her daily life. PBL not only enhances problem-solving skills in clinical situations but also encourages medical students to learn independently [5]. The advantage of Pure PBL is that medical students can independently have the time and process to think carefully, integrate and interpret the information based on the collected information, and verbalize and reflect on the errors in the process. Therefore, Pure PBL can help improve several important skills in clinical reasoning, such as verbalizing and reflecting on the process of one’s own errors and selecting the appropriate keywords from the whole aspect.
Limitations
The limitations of this study are as follows:
First, the differences in results between the ANCOVA and the two-way ANOVA in self-evaluation of competency in clinical reasoning were likely attributable to the smaller number of participants and to baseline differences. Sampling bias may have occurred between the two groups. In addition, an alpha error may have occurred in the two-way ANOVA because the data did not follow a normal distribution. Owing to the coronavirus disease 2019 pandemic, this study was suspended and then resumed. The results should be accepted cautiously, as this is a single study with relatively few participants.
Second, this study was conducted using scenario tasks with patient-simulated video and paper-based scenarios, not actual patients. It is necessary to verify whether the advantages of Hybrid PBL are similar to those of Pure PBL for real patients.
Third, the effectiveness of clinical reasoning education by Hybrid PBL may vary depending on the teaching skills of the teachers. To control for this, the instruction and training of teachers were designed to minimize the influence of their instructional skills.
Fourth, this study was conducted on fifth-year medical students at a single institution. Therefore, the results of this study may not be generalizable or transferable beyond the specific population from which the sample was drawn. Further validation is needed to determine whether the results can be applied to residents and general physicians.
Fifth, since we only held the focus groups among the intervention group, it could have had a separate effect on students’ competence in clinical reasoning via reflection or simply increased interaction with the tutors. Also, this might have introduced bias as a co-intervention in the intervention group.
Sixth, the allocation to the two groups could not be blinded to the participants and the faculty members, and subjective bias may have affected the results. Furthermore, the assignment of these groups by the university before the start of this study might also have been biased, which may have affected the study results.
Conclusion
This study suggests that Hybrid PBL and Pure PBL have their own advantages in teaching medical students to acquire clinical reasoning skills. Hybrid PBL can better help acquire practical knowledge and deepen understanding in the process of clinical reasoning through timely and effective feedback from the tutors, whereas Pure PBL is more useful for improving several important skills in the process of clinical reasoning, such as verbalizing and reflecting on the process of one’s own errors and selecting appropriate keywords from the whole aspect.
Supporting information
S2 Table. Questionnaire on fifth-year medical students’ perceived competence in PBL, satisfaction with sessions, and self-evaluation of competency in clinical reasoning regarding Hybrid and Pure PBL, Chiba University Hospital (N = 99).
https://doi.org/10.1371/journal.pone.0279554.s003
(PDF)
References
- 1. Kassirer JP. Teaching clinical reasoning: case-based and coached. Acad Med. 2010; 85: 1118–24. pmid:20603909
- 2. Groves M, O’Rourke P, Alexander H. Clinical reasoning: the relative contribution of identification, interpretation and hypothesis errors to misdiagnosis. Med Teach. 2003; 25: 621–5. pmid:15369910
- 3. Cooper N, Bartlett M, Gay S, Hammond A, Lillicrap M, Matthan J, et al; UK Clinical Reasoning in Medical Education (CReME) consensus statement group. Consensus statement on the content of clinical reasoning curricula in undergraduate medical education. Med Teach. 2021; 43: 152–9.
- 4. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004; 79(10 Suppl): S70–81. pmid:15383395
- 5. Barrows HS. A taxonomy of problem-based learning methods. Med Educ. 1986; 20: 481–6. pmid:3796328
- 6. Neufeld VR, Barrows HS. The "McMaster Philosophy": an approach to medical education. J Med Educ. 1974; 49: 1040–50. pmid:4444006
- 7. Davis MH. AMEE Medical Education Guide No. 15: Problem-based learning: a practical guide. Med Teach. 1999; 21: 130–40. pmid:21275726
- 8. Ikegami A, Ohira Y, Uehara T, Noda K, Suzuki S, Shikino K, et al. Problem-based learning using patient-simulated videos showing daily life for a comprehensive clinical approach. Int J Med Educ. 2017; 8: 70–6. pmid:28245193
- 9. Hmelo-Silver CE. Problem-based learning: what and how do students learn? Educ Psychol Rev. 2004; 16: 235–66.
- 10. Yang LH, Jiang LY, Xu B, Liu SQ, Liang YR, Ye JH, et al. Evaluating team-based, lecture-based, and hybrid learning methods for neurology clerkship in China: a method-comparison study. BMC Med Educ. 2014; 14: 98. pmid:24884854
- 11. Jiménez-Saiz R, Rosace D. Is hybrid-PBL advancing teaching in biomedicine? A systematic review. BMC Med Educ. 2019; 19: 226. pmid:31234856
- 12. Schulz KF, Altman DG, Moher D; CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010; 340: c332.
- 13. Malterud K. The art and science of clinical knowledge: evidence beyond measures and numbers. Lancet. 2001; 358: 397–400. pmid:11502338
- 14. Côté L, Turgeon J. Appraising qualitative research articles in medicine and medical education. Med Teach. 2005; 27: 71–5. pmid:16147774
- 15. Barbour RS. The case for combining qualitative and quantitative approaches in health services research. J Health Serv Res Policy. 1999; 4: 39–43. pmid:10345565
- 16. Barrows HS, Tamblyn RM. Problem-based learning: an approach to medical education. New York, NY: Springer Publishing Company; 1980.
- 17. Savery JR. Overview of problem-based learning: definitions and distinctions. Interdiscip J Problem-based Learning. 2006; 1: 3.
- 18. Maudsley G. Roles and responsibilities of the problem based learning tutor in the undergraduate medical curriculum. BMJ. 1999; 318: 657–61. pmid:10066213
- 19. Woods DR. Problem-based learning: how to gain the most from PBL. Waterdown, ON: Donald R. Woods; 1994.
- 20. Medical Education Model Core Curriculum Coordination Committee, Medical Education Model Core Curriculum Expert Research Committee. Model Core Curriculum for Medical Education, AY 2016 Revision. Published 2016. Available from: https://www.mext.go.jp/component/a_menu/education/detail/__icsFiles/afieldfile/2018/06/18/1325989_30.pdf.
- 21. Urushibara-Miyachi Y, Kikukawa M, Ikusaka M, Otaki J, Nishigori H. Lists of potential diagnoses that final-year medical students need to consider: a modified Delphi study. BMC Med Educ. 2021; 21: 234. pmid:33892708
- 22. Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. 3rd ed. Los Angeles, CA: Sage Publications; 2017.
- 23. Wesley JJ. Qualitative document analysis in political science. Paper presented at the T2PP Workshop, Vrije Universiteit Amsterdam; 2010.
- 24. Landis JR, Koch GG. An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers. Biometrics. 1977; 33: 363–74. pmid:884196
- 25. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007; 19: 349–57. pmid:17872937
- 26. Anderson LW, Krathwohl DR, Airasian P, Cruikshank KA, Mayer RE, Pintrich PR, et al. A taxonomy for learning, teaching and assessing: a revision of Bloom’s taxonomy. New York, NY: Longman; 2001.
- 27. Dowding D. Review of the book Best practices for mixed methods research in the health sciences, by Creswell JW, Klassen AC, Plano Clark VL, Smith KC. Qual Soc Work. 2013; 12: 541–5.