
Crowdsource authoring as a tool for enhancing the quality of competency assessments in healthcare professions

  • Che-Wei Lin,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Writing – original draft

    Affiliation Department of Education and Humanities in Medicine, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan

  • Daniel L. Clinciu,

    Roles Formal analysis, Writing – original draft, Writing – review & editing

    Affiliation Graduate Institute of Biomedical Science, China Medical University, Taichung, Taiwan

  • Daniel Salcedo,

    Roles Data curation, Visualization, Writing – review & editing

    Affiliation Taipei Municipal Wanfang Hospital, Taipei, Taiwan

  • Chih-Wei Huang,

    Roles Data curation, Investigation, Resources, Supervision

    Affiliation International Center for Health Information Technology, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan

  • Enoch Yi No Kang,

    Roles Investigation, Methodology, Software

    Affiliations Department of Education, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan, Evidence-Based Medicine Center, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan, Institute of Health Policy and Management, College of Public Health, National Taiwan University, Taipei, Taiwan

  • Yu-Chuan (Jack) Li

    Roles Formal analysis, Project administration, Supervision

    jaak88@gmail.com

    Affiliations Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan, International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei, Taiwan, Research Center of Big Data and Meta-analysis, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan, Department of Dermatology, Wan Fang Hospital, Taipei, Taiwan, TMU Research Center of Cancer Translational Medicine, Taipei Medical University, Taipei, Taiwan

Abstract

The current Objective Structured Clinical Examination (OSCE) is complex and costly, and it is difficult to provide high-quality assessments with it. This pilot study employed a focus group and a debugging stage to develop and test the Crowdsource Authoring Assessment Tool (CAAT), a system for creating and sharing assessment checklists that can be edited and customized to match specific users' needs and to yield higher-quality checklists. International experts in competency assessment (n = 50) were asked to 1) participate in and experience the CAAT system by editing their own checklist, 2) edit a urinary catheterization checklist using the CAAT, and 3) complete a Technology Acceptance Model (TAM) questionnaire consisting of 14 items covering its four domains. The study was conducted between October 2018 and May 2019. The average time for developing a new checklist using the CAAT was 65.76 minutes, whereas the traditional method required 167.90 minutes. The CAAT system enabled quicker checklist creation and editing regardless of the participants' experience and native language. Participants also expressed that the CAAT enhanced checklist development, with 96% of them willing to recommend the tool to others. As this study shows, the crowdsource authoring tool reduced development time to roughly one third of that required by the traditional method. In addition, it allows collaborators to work on a simple platform that promotes contributions to checklist creation, editing, and rating.

1. Introduction

During the past five decades, the focus of medical education has shifted dramatically towards competency-based medical education (CBME) [1]. Thus, new assessment methodologies were needed to measure competency accurately and without bias [2]. The Objective Structured Clinical Examination (OSCE) was developed in the 1970s as a replacement for the Clinical Examination, which was a required practical competency exam for graduating physicians in the United Kingdom [3]. In the OSCE, a rater uses a checklist or rating scale to assess student performance in a simulated environment with a standardized patient or manikin. Checklists are popular and easy to use in the OSCE. Although the OSCE has been in place for several decades, it has attracted significant criticism for its limited reliability in measuring candidates' clinical competence and for the lack of standardization of cases and examiners. Later developments to improve the validity and reliability of the OSCE included the design of scenario and assessment-tool checklists, increasingly used as interactive digital files [4]. The basic procedure for developing OSCE checklists includes three main stages for developing assessment tools to measure clinical skills [5]. These stages constitute the traditional method of checklist development, which is still used by numerous institutions around the world: 1) Preliminary List of Measurable Steps: the OSCE station authors develop an initial list of all key and measurable steps in the clinical skill to be evaluated in a particular station; 2) Specialist Assessment: experts with particular skills assess the OSCE station's draft checklists and provide feedback and suggestions for improvement; 3) Field Testing: a mock examination under realistic OSCE conditions.

Developing a new OSCE station checklist is complex and time-consuming; it requires significant resources and the participation of experts, mock examiners, and candidates [6–9]. The electronic checklist system, a recent development worldwide, can eliminate missing data and decrease post-assessment workloads. However, it only replaces the paper checklist with an electronic one and is therefore of little help to faculty designing the checklist. Thus, we set out to create an easier and more efficient system to support faculty in checklist design. The literature shows that the checklist design process is complex and time-consuming, most medical schools administer an OSCE, and the clinical skills most frequently assessed in the OSCE are largely the same topics; experts could therefore collaborate to design and improve the checklists. The quality of future medical education will rely on increased collaboration between institutions providing medical programs [10–12]. These inter-institutional collaborations could enhance the quality of medicine through the joint development of educational programs and activities that combine the resources and capabilities of different medical schools [13].

One example is MedEdPORTAL, a system that tracks social media coverage and geographic download information and recognizes author contributions by displaying publication metrics on its website. Anyone can contribute and upload resources; however, only relevant, peer-reviewed submissions are accepted and saved into the system. The built-in credit system displays specific metrics such as the number of views, the number of downloads, and the conversion rate from views to downloads. The system also tracks social media coverage and displays an "attention score" calculated from these metrics. The system tracks the impact of the shared resource and thereby indirectly acts as a reward for the authors, since the metrics are an indicator of quality and relevance.

One of the best-known examples of successful crowd authoring and crowdsourcing, not only in the medical field but across all fields, is Wikipedia, an online encyclopedia that is completely crowd-authored; anyone with knowledge of a particular subject can contribute.

Contributions are added to the database after a peer-review process to ensure accuracy. By December 2020, Wikipedia contained more than 6 million crowdsourced articles and had become one of the largest repositories of online information in the world [14,15]. Crowdsourcing draws on a less specific, more public group, and its advantages may include improved cost, speed, quality, flexibility, scalability, or diversity. Therefore, this study aimed to build the Crowdsource Authoring Assessment Tool (CAAT), which can generate an OSCE checklist more easily and efficiently. Many studies have identified the benefits of crowdsourcing as a way to gather collective wisdom effectively. However, crowdsourcing has some issues that need to be addressed; the most frequently discussed are confidentiality, misdirection, and intellectual property rights (IPR). Crowdsourcing updates or adds to the information available online by soliciting contributions from the general public to accomplish a task. Because all the information is public, the lack of confidentiality is an inherent limitation of crowdsourcing. A crowdsourced task is done by masses of people, and one of the risks is misdirection: because of a lack of knowledge or because of bias, the crowd might take the project in the wrong direction. Also, since all the content comes from the crowd, IPR has always been debated, and there is concern about theft of collective wisdom since the information is publicly available [16]. Once the CAAT was built, it had to be tested to demonstrate its usefulness and efficiency and to understand how to eliminate or resolve the issues mentioned above.

2. Methods

2.1 CAAT system development

The online OSCE checklist generator (CAAT) conceptualized in this study was developed in two phases. The first phase, which began in October 2018, established the frameworks for procedure-based OSCE checklists and the online platform. Fig 1 shows the front page of the CAAT (beta 1.0) system displaying details of the editable evaluation criteria in a procedure-based OSCE checklist (an example of urinary catheterization). The second phase, aimed at detecting and fixing bugs (e.g., software and program problems), was performed in December 2018. The data collection for the study phase occurred between February and May 2019.

Fig 1. The CAAT (beta 1.0) front page of the system displaying details of editable evaluation criteria in procedure-based OSCE checklists (an example of urinary catheterization).

https://doi.org/10.1371/journal.pone.0278571.g001

In Phases I and II, the focus group experts had to meet the following criteria:

  1. Be medical doctors or educational scholars.
  2. Medical doctors must have at least three years of experience participating in the national OSCE certification or developing OSCE checklists.
  3. Educational scholars must have at least three years of experience handling medical education projects or have published three medical education studies in SCI journals.

2.1.1 Phase I: Development of the framework for the CAAT.

Three assessment experts joined the focus group to examine and adapt the OSCE principles for drafting a CAAT framework. All were physicians (three males, no females) in the TMU health system who had been involved with the OSCE for more than five years. During the drafting process, they followed an ontology/mind-map approach to identify the elements of the OSCE checklist and establish their relationships. Because clinical skills are completed in a step-by-step manner, the time sequence is an important element of the OSCE. Thus, the first layer of the CAAT presents the checklist with its assessment criteria and their definitions; users can edit all necessary information to produce an individualized checklist. The second layer is a print preview of the checklist, with the option to edit the layout accordingly; the checklist can also be exported as a Microsoft Word file. The invited experts then organized the framework in the first focus group.
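To make this two-layer concept concrete, the sketch below models a procedure-based checklist as an ordered list of assessment criteria (first layer) and exports a printable draft as a Word file (second layer). It is a minimal illustration only: the class and field names are hypothetical, and the export uses the python-docx package as one possible implementation, not the CAAT's actual code.

```python
# Minimal sketch of the two-layer CAAT checklist concept (hypothetical names).
# Layer 1: an ordered, editable list of assessment criteria with definitions.
# Layer 2: a print preview / Word export of the edited checklist.
from dataclasses import dataclass, field
from typing import List

from docx import Document  # pip install python-docx


@dataclass
class Criterion:
    step: int          # position in the time sequence of the clinical skill
    text: str          # observable behaviour to be checked
    definition: str    # definition shown to the rater


@dataclass
class Checklist:
    title: str
    criteria: List[Criterion] = field(default_factory=list)

    def add(self, text: str, definition: str = "") -> None:
        self.criteria.append(Criterion(len(self.criteria) + 1, text, definition))

    def export_docx(self, path: str) -> None:
        """Second layer: dump the checklist into a Word file for printing."""
        doc = Document()
        doc.add_heading(self.title, level=1)
        for c in self.criteria:
            doc.add_paragraph(f"{c.step}. {c.text} - {c.definition}")
        doc.save(path)


# Example: a (shortened) urinary catheterization checklist.
checklist = Checklist("Urinary catheterization OSCE checklist")
checklist.add("Performs hand hygiene", "Washes hands or uses alcohol rub before starting")
checklist.add("Prepares sterile field", "Opens catheter kit without contaminating contents")
checklist.export_docx("urinary_catheterization_checklist.docx")
```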

During the focus group meeting, the experts presented the primary concept and framework of the CAAT and guided the group in developing an OSCE checklist. Based on this exchange of experience and verification, the researchers further modified and enhanced the elements used in developing the OSCE checklist. The researchers then combined their knowledge and skills to extract the concepts and elements for developing an OSCE checklist. This cycle continued until no new information or modifications were produced. The researchers then passed their confirmed findings to the engineer, who implemented the CAAT's IT framework. Fig 2 shows the front page of the CAAT system displaying the choices of procedure and check items (an example of building a sterile system).

Fig 2. The CAAT front page of the system displaying choices of procedure and check items (an example of building a sterile system).

https://doi.org/10.1371/journal.pone.0278571.g002

2.1.2 Phase II: CAAT problem detection and fixing.

In Phase II, we invited five experts to review the system. Four were clinical physicians (all male) and one was a nursing faculty member (female). All had more than five years of experience designing checklists for Taiwan's high-stakes medical student OSCE; the nursing faculty member, who was involved with the national nurse practitioner OSCE in Taiwan, also participated in the Phase II development. After the engineer built the first version of the CAAT (alpha 1.0), the researchers examined the system's problems together with the invited experts. A second focus group was then convened to identify and fix existing problems. The same expert panel criteria were used as in Phase I: 1) medical doctors, 2) educational scholars, and 3) their required qualifications.

Five experts tested the Phase II CAAT within a limited period (one month), using the prototype to investigate whether its functions and flow were appropriate. In this phase, the researchers addressed the gap between the IT design and users' needs and identified some system problems. The engineer redesigned and adjusted the CAAT according to the panels' recommendations, and the researchers retested the modified CAAT (alpha 1.1) with various medical faculty members. This adjustment cycle continued until no further suggestions were given and no problems or issues were encountered, at which point the CAAT (beta 1.0, Fig 2) was approved for the study-phase testing and survey.

After this development, the CAAT (beta 1.0) system was complete. Within the system, an expert can upload a checklist as a template; other checklist designers can then edit the existing template into their own checklist. The system provides a simple editing function and can also collect contributions and feedback from other crowdsource authors.
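The crowdsource workflow described above can be pictured as a simple template-and-revision model: a template is shared, cloned and edited locally, and feedback flows back to the shared template. The sketch below is purely illustrative of that flow; all names are hypothetical and it is not taken from the CAAT source.

```python
# Illustrative sketch of the crowdsource template workflow (hypothetical names):
# an expert uploads a template, another author clones and edits it, and
# feedback is collected against the shared template.
import copy
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Template:
    name: str
    items: List[str]
    feedback: List[str] = field(default_factory=list)


class TemplateBank:
    def __init__(self) -> None:
        self._templates: Dict[str, Template] = {}

    def upload(self, template: Template) -> None:
        """An expert shares a checklist as a reusable template."""
        self._templates[template.name] = template

    def clone(self, name: str) -> Template:
        """Another author takes a private, editable copy of the template."""
        return copy.deepcopy(self._templates[name])

    def add_feedback(self, name: str, comment: str) -> None:
        """Crowdsource authors return suggestions to the template owner."""
        self._templates[name].feedback.append(comment)


bank = TemplateBank()
bank.upload(Template("urinary_catheterization", ["Hand hygiene", "Prepare sterile field"]))

my_checklist = bank.clone("urinary_catheterization")
my_checklist.items.append("Explain procedure to the patient")   # local customization
bank.add_feedback("urinary_catheterization", "Consider adding a patient-explanation step")
```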

2.2 CAAT study phase

2.2.1 Consent and ethics statement.

Participants gave informed consent and were told in detail how their participation could help improve collaboration and checklist creation and save time and resources in the healthcare professions. This study was approved by the Ethics Committee Review Board at Taipei Medical University Hospital (IRB: TMU-JIRBN201603019).

2.2.2 The CAAT (beta version) for user test and survey.

To further verify that the CAAT (beta 1.0) can benefit Taiwan and healthcare systems worldwide, a user test and survey for OSCE checklist development were implemented. The initial sample-size quota for this survey was 50 experts, based on a G*Power calculation (n = 43).
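For reference, a sample size in this range can be reproduced with a standard a priori power calculation. The sketch below uses statsmodels; because the parameters actually entered into G*Power are not reported in the text, the effect size, alpha, and power shown are assumptions for illustration only.

```python
# A priori sample-size estimate for a one-sample / paired t-test.
# The effect size, alpha, and power are assumptions for illustration;
# the original G*Power settings are not reported in the paper.
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(
    effect_size=0.5,        # assumed medium effect (Cohen's d)
    alpha=0.05,             # two-sided significance level
    power=0.90,             # desired statistical power
    alternative="two-sided",
)
print(round(n))  # approximately 44, close to the reported n = 43
```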

The expert participants in this study also had to meet the following criteria:

  1. Must be medical doctors or educational scholars.
  2. Medical doctors must have at least three years of experience participating in developing OSCE checklists.
  3. Educational scholars must have at least three years of experience handling medical education projects.

The CAAT user experience survey was administered through Google Docs and included background information, the CAAT acceptance scale, and the CAAT impact. To obtain unbiased feedback, the survey was anonymous (see appendix). The CAAT acceptance scale was adapted from a validated scale [17] and consisted of 14 items reflecting the four domains of the technology acceptance model (TAM): perceived usefulness, perceived user-friendliness, attractiveness, and intention to use. The usefulness domain comprises items 1 to 4, user-friendliness items 5 to 8, attractiveness items 9 to 12, and behavioral intention items 13 and 14. Aside from the attractiveness domain, which used a 7-point bipolar scale, the other three domains used a 7-point Likert scale.
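As a concrete illustration of how the 14 items map onto the four TAM domains, the short sketch below computes a mean score per domain from one respondent's answers. The item numbering follows the description above, while the response values are invented.

```python
# Map the 14 TAM items to their domains and average each domain's score.
# Items 1-8 and 13-14 use a 7-point Likert scale; items 9-12 use a
# 7-point bipolar (semantic differential) scale. Responses below are invented.
DOMAINS = {
    "usefulness":           range(1, 5),    # items 1-4
    "user_friendliness":    range(5, 9),    # items 5-8
    "attractiveness":       range(9, 13),   # items 9-12
    "behavioral_intention": range(13, 15),  # items 13-14
}

responses = dict(zip(range(1, 15), [6, 6, 6, 5, 6, 6, 6, 6, 6, 5, 6, 6, 6, 5]))

domain_means = {
    name: sum(responses[i] for i in items) / len(items)
    for name, items in DOMAINS.items()
}
print(domain_means)
# {'usefulness': 5.75, 'user_friendliness': 6.0, 'attractiveness': 5.75, 'behavioral_intention': 5.5}
```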

2.3. Statistical analysis

This study used descriptive statistics (percentages and means with standard deviations, SDs) for background information and inferential statistics for testing the study's hypotheses. Percentages were reported for nationality, gender, and occupation. Means with SDs were reported for experience in clinical skills teaching, in using checklists for educational purposes, and in checklist development, as well as for the average time needed to develop a checklist.

For inferential statistics, an independent-sample t-test, Pearson correlation, one-sample t-test, and dependent-sample t-test were conducted according to the hypotheses. Pearson correlation was used to analyze the correlation between user acceptance of the CAAT and experience in clinical skills teaching (hypothesis 1), and between acceptance and experience with checklists used for educational purposes and in checklist development (hypothesis 2). A one-sample t-test was used to test the acceptance of the CAAT among the experts (hypothesis 3); since the questionnaire was based on a 7-point scale, the test value for the one-sample t-test was set at 5. The dependent-sample t-test assessed the difference in the time needed to develop a checklist between the CAAT and the traditional OSCE method (hypothesis 4). A p-value below 0.05 was considered statistically significant.
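The sketch below reproduces this analysis pipeline with SciPy: a Pearson correlation between experience and acceptance, a one-sample t-test against the test value of 5, and a paired (dependent-sample) t-test on development times. The data shown are invented placeholders, not the study data.

```python
# Inferential tests used in the study, illustrated with made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypotheses 1-2: Pearson correlation between experience (years) and CAAT acceptance.
experience = rng.uniform(1, 15, size=50)
acceptance = 5 + 0.05 * experience + rng.normal(0, 0.8, size=50)
r, p_corr = stats.pearsonr(experience, acceptance)

# Hypothesis 3: one-sample t-test of acceptance scores against the test value 5
# (the questionnaire uses a 7-point scale, so 4 is neutral).
t_one, p_one = stats.ttest_1samp(acceptance, popmean=5)

# Hypothesis 4: dependent-sample (paired) t-test on checklist development time,
# CAAT versus the traditional method, for the same experts.
time_caat = rng.normal(66, 30, size=50)
time_traditional = rng.normal(168, 90, size=50)
t_paired, p_paired = stats.ttest_rel(time_caat, time_traditional)

print(f"r = {r:.2f} (p = {p_corr:.3f})")
print(f"one-sample t = {t_one:.2f} (p = {p_one:.3f})")
print(f"paired t = {t_paired:.2f} (p = {p_paired:.3f})")
```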

3. Results

Phases I and II were the development phases of the CAAT, and the CAAT study phase investigated its acceptance and provided proof of concept; all three are described in the Methods.

Phase I drafted the framework of the system. It produced the first layer of the CAAT, which presents a checklist with its assessment criteria and their definitions; users can edit all necessary information to produce an individualized checklist. The second layer is a print preview of the checklist, with the option to edit the layout accordingly; the checklist can also be exported as a Microsoft Word file.

In Phase II, the researchers addressed the gap between the IT design and users' needs and identified some system problems, and the CAAT (beta 1.0, Fig 2) was approved for the study-phase testing and survey.

The study phase data are as follows:

Of the 60 surveys collected, 50 were included in this study; 10 were removed due to incomplete data. The 50 experts were from China (n = 19), Hong Kong (n = 1), Japan (n = 1), Malaysia (n = 4), Singapore (n = 1), South Korea (n = 2), Taiwan (n = 10), Thailand (n = 3), and the USA (n = 9). Thirty-four (68%) were male and 16 (32%) were female, with the following professions: physicians (n = 41, 82%) and nine others, comprising administrators (n = 2), bioengineers (n = 1), operating room directors (n = 1), educators (n = 2), nurses (n = 2), and occupational therapists (n = 1), as shown in Table 1. The average experience was 8 years in teaching, 6 years in using checklists for educational purposes, and 4.5 years in developing checklists.

Table 1. Demographics and characteristics of participants.

https://doi.org/10.1371/journal.pone.0278571.t001

3.1. Overall results of the CAAT acceptance survey

The information-system user-acceptance scale was verified, and the four domains showed the following reliability: Cronbach's alpha was .90 for usefulness, .87 for user-friendliness, .86 for attractiveness, and .87 for behavioral intention (Table 2).
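Cronbach's alpha for each domain can be computed directly from the item responses. The sketch below implements the standard formula for a single domain, using an invented response matrix rather than the study data.

```python
# Cronbach's alpha for one domain (rows = respondents, columns = items).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Standard formula: alpha = k/(k-1) * (1 - sum(item variances) / total variance)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented 7-point responses for a 4-item domain (e.g., usefulness, items 1-4).
responses = np.array([
    [6, 6, 5, 6],
    [7, 6, 6, 7],
    [5, 5, 5, 6],
    [6, 7, 6, 6],
    [4, 5, 4, 5],
])
print(round(cronbach_alpha(responses), 2))
```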

3.2. Participants’ perception of CAAT

The following feedback was obtained from the experts participating in CAAT development and testing (Tables 2 and 3): (i) I can quickly decide the items I want to use for checking students’ performance in urinary catheter insertion (M = 5.94, SD = 0.98); (ii) I can easily decide the items I want to use for checking students’ performance in urinary catheter insertion (M = 6.00, SD = 0.86); (iii) I can develop an accurate checklist to assess students’ performance in urinary catheter insertion (M = 5.92, SD = 0.94); (iv) the CAAT provides me with information that allows me to develop an effective checklist for urinary catheter insertion (M = 5.84, SD = 1.17); (v) the interaction with CAAT is clear and understandable (M = 6.06, SD = 0.89); (vi) the interaction with CAAT does not increase the workload when creating a checklist (M = 6.22, SD = 0.91); (vii) I find CAAT easy to use (M = 6.16, SD = 0.74); (viii) it is easy to get CAAT to do what I need (M = 6.02, SD = 0.98); (ix) I feel CAAT is enjoyable/awful (M = 6.02, SD = 0.89); (x) I feel CAAT is exciting/dull (M = 5.82, SD = 1.08); (xi) I feel CAAT is pleasant/unpleasant (M = 5.80, SD = 0.97); (xii) I feel CAAT is interesting/boring (M = 5.76, SD = 1.00); (xiii) I intend to revisit CAAT in the future (M = 6.08, SD = 1.01); and (xiv) I will use CAAT the next time I need to generate an OSCE checklist (M = 5.88, SD = 1.22).

Table 3. Difference in developing a checklist between the CAAT and traditional method.

https://doi.org/10.1371/journal.pone.0278571.t003

The results of the one-sample t-test with a test value of 5 showed that the experts agreed that the CAAT is a useful, easy, and interesting system for generating a checklist for educational purposes (Table 2). More specifically, the results showed that: (i) a user can quickly decide which items to use for checking students’ performance in urinary catheter insertion (MD = 0.94, t = 6.80, p < .001); (ii) can easily decide which items to use for checking students’ performance in urinary catheter insertion (MD = 1.00, t = 8.25, p < .001); (iii) can develop an accurate checklist to assess students’ performance in urinary catheter insertion (MD = 0.92, t = 6.89, p < .001); (iv) the CAAT provides information in ways that support developing an effective checklist for urinary catheter insertion (MD = 0.84, t = 5.09, p < .001); (v) interaction with the CAAT is efficient and responsive (MD = 1.06, t = 8.42, p < .001); (vi) the CAAT does not increase workload when creating a checklist (MD = 1.22, t = 9.48, p < .001); (vii) the CAAT is easy to use (MD = 1.16, t = 11.12, p < .001); (viii) the CAAT is adjusted to the user’s needs (MD = 1.02, t = 7.37, p < .001); (ix) the degree of liking the CAAT (MD = 1.02, t = 8.09, p < .001); (x) the CAAT is exciting to use (MD = 0.82, t = 5.36, p < .001); (xi) the CAAT’s appeal to the user (MD = 0.80, t = 5.84, p < .001); (xii) how interesting the CAAT is (MD = 0.76, t = 5.37, p < .001); (xiii) users intend to use the CAAT in the future (MD = 1.08, t = 7.58, p < .001); and (xiv) users will likely use the CAAT for generating their next OSCE checklist (MD = 0.88, t = 5.09, p < .001).

Overall, the participating experts from around the world indicated that the CAAT is a useful (MD = 0.93, t = 8.20, p < .001), easy-to-use (MD = 1.12, t = 10.79, p < .001), and enjoyable (MD = 0.85, t = 6.61, p < .001) platform for generating clinical skill checklists. Moreover, they expressed keen interest in using the CAAT in the future (MD = 0.98, t = 6.86, p < .001).

3.3. Effectiveness of the CAAT

Overall, the experts took an average of 65.76 minutes to develop a new checklist using the CAAT, whereas they spent 167.90 minutes building a new checklist traditionally (Table 3). The dependent-sample t-test revealed that the CAAT saved about 102 minutes in generating a new checklist compared with the traditional method (t = -2.36, p < .05).

In stratified analyses, this study explored the effectiveness of the CAAT separately among native and non-native English speakers, and separately among junior and senior experts in checklist development. The native English speakers generated a new checklist in about 25.20 minutes using the CAAT and in 54.67 minutes traditionally; the mean difference between the two approaches was -29.47 minutes (t = -2.98, p < .05). The non-native speakers generated a new checklist using the CAAT in about 83.14 minutes and in 216.43 minutes traditionally; the mean difference was -133.29 minutes (t = -2.18, p < .05). When junior experts were separated from senior experts, the junior experts generated a new checklist with the CAAT in about 41.12 minutes and with the traditional method in 96.60 minutes; the mean difference was -55.48 minutes (t = -3.76, p = .001). Among the senior experts, a new checklist took about 90.40 minutes with the CAAT and 239.20 minutes with the traditional method; the mean difference was -148.80 minutes, with marginal significance (t = -1.75, p = .093). Participants from around the world perceived that the CAAT could change current practices of checklist development (item 12: MD = 0.76, t = 5.07, p < .001; Table 2).
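These stratified comparisons repeat the same paired t-test within each subgroup. A minimal pandas sketch of that stratification is shown below, again with invented placeholder data rather than the study data.

```python
# Paired t-test on development time, repeated within each stratum
# (native vs. non-native English speakers); data are invented placeholders.
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "native_speaker": [True] * 15 + [False] * 35,
    "time_caat": np.r_[rng.normal(25, 10, 15), rng.normal(83, 40, 35)],
    "time_traditional": np.r_[rng.normal(55, 25, 15), rng.normal(216, 120, 35)],
})

for group, sub in df.groupby("native_speaker"):
    t, p = ttest_rel(sub["time_caat"], sub["time_traditional"])
    diff = (sub["time_caat"] - sub["time_traditional"]).mean()
    print(f"native={group}: mean difference {diff:.1f} min, t = {t:.2f}, p = {p:.3f}")
```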

4. Discussion

This pilot study developed and tested the Crowdsource Authoring Assessment Tool (CAAT), a new software and collaboration system for generating checklists to assess learners' clinical skill performance faster and more efficiently. Analysis of data collected from 50 recognized international experts in the field of competency assessment revealed that the CAAT can significantly reduce the time and improve the efficiency and ease of generating a checklist to assess a learner's clinical skills (Table 3).

The participating experts took an average of 65.76 minutes when using the CAAT, whereas they spent 167.90 minutes using the traditional method currently employed worldwide [13]. According to the study's results, the average time saved by using the CAAT is 102.14 minutes (Table 3), showing that the CAAT system substantially reduces the time needed for checklist design. The template function guides the checklist designer to the basic and important assessment criteria of a particular skill, and the editing function produces an individualized checklist for particular skills. The CAAT therefore provides an efficient and innovative way to design checklists.

In addition, Cronbach's alpha revealed high reliability for usefulness (.90), user-friendliness (.87), attractiveness (.86), and behavioral intention (.87), all of which are important features of the CAAT (Table 2). To analyze these features in an unbiased and comprehensive manner, experts from around the world were invited to participate, including both native (30%) and non-native English speakers (70%). In addition, the experts participating in this study comprised a variety of professionals, from physicians to bioengineers (Table 1). A diverse group of experts from around the world is important for better assessing the efficiency of the tool in terms of its usefulness and ease of use, including from a non-native English perspective [10,11].

The study provided specific items for participants to use to determine whether they could create a checklist for assessing a test taker's performance (e.g., insertion of a urinary catheter) faster and more accurately. In the CAAT acceptance survey, the value 4 is the midpoint of the 7-point scale; thus, a score above the test value of 5 indicates a positive review, and the CAAT items assessed by the experts received a positive perception overall (Table 2).

The CAAT adapted some of Wikipedia's mechanisms while also verifying each user's identity. A user's identity as an instructor must be confirmed before access to the system is granted, and the user is also required to sign a non-disclosure agreement (NDA). These mechanisms can overcome some of the shortcomings of crowdsourcing. Because the content on the CAAT consists entirely of assessment checklists, it is important to ensure that students have no access to it. During the onboarding process, the user's competence in checklist design is established, reducing the possibility of editing errors. Further, checklist templates are available on the CAAT; templates are created and provided by skilled experts, and users can edit a template to address the specific needs of their institutions. The system allows users to edit within a guided range to reduce errors due to bias or lack of knowledge. When experts are solicited to provide templates, they need to sign an intellectual property rights (IPR) release to the system. At login, users are reminded that their edits are logged by the system and that the IPR belongs to the CAAT. The NDA ensures that the checklists will not leak. These mechanisms help the CAAT enjoy the benefits of crowdsourcing while avoiding its downsides [18].

A potential limitation of crowdsourcing is the development of localized assessment checklists that might not be compatible with other settings. Some settings may have specific procedures or equipment for a particular skill that are not generalizable. Even though our design already provides an editing function, it is difficult to create checklists that fit all settings well. In the crowdsourcing methodology, the quality of the template is a critical issue. Users modify the template to create a checklist that fits their particular settings; if the quality of the template is inadequate, users may spend a significant amount of time editing and improving the content. It is therefore necessary to implement mechanisms to ensure that the content of a template meets certain quality metrics. Currently, we rely on experts to review the uploaded checklist templates. In the future, Artificial Intelligence (AI) based on Natural Language Processing (NLP) may help determine whether templates meet quality standards and suggest modifications to improve checklist generalizability. In future developments, we will encourage OSCE experts to upload more checklist templates and will provide the checklist bank to the scholars who design checklists. The more faculty members use and edit the checklists, the more accurate and efficient the templates will become. Moreover, extended use of AI in the future may help to review the templates.

Another limitation of our study is its small sample size. We only invited experts who had participated in assessment training courses at four popular simulation centers worldwide. Moreover, the experts needed to spend an additional 30 minutes gaining hands-on experience with the CAAT prior to the survey. Even with additional announcements and promotion of the CAAT, the response rate of experts interested in participating in this study was low. The language barrier could be another limitation, as two-thirds of the participants were from non-English-speaking countries. The CAAT is still in its development stage, and it was mostly restricted to OSCE checklists; in the future, it can be expanded to point-of-care competency assessment and to self- and patient-based assessments.

Supporting information

S1 Fig. The CAAT version 1.0 editing window.

https://doi.org/10.1371/journal.pone.0278571.s001

(DOCX)

S2 Fig. The CAAT display of a newly added assessment item.

https://doi.org/10.1371/journal.pone.0278571.s002

(DOCX)

S3 Fig. The displaying of a completed assessment on the CAAT system.

https://doi.org/10.1371/journal.pone.0278571.s003

(DOCX)

S4 Fig. The CAAT interface displaying a deleted assessment criteria.

https://doi.org/10.1371/journal.pone.0278571.s004

(DOCX)

S5 Fig. CAAT’s Information system framework.

https://doi.org/10.1371/journal.pone.0278571.s005

(DOCX)

S1 Table. Correlations between experience and the CAAT acceptance.

https://doi.org/10.1371/journal.pone.0278571.s006

(DOCX)

S2 Table. Differences in the CAAT acceptance between senior and junior experts.

https://doi.org/10.1371/journal.pone.0278571.s007

(DOCX)

Acknowledgments

The authors would like to thank all the participants and all those involved in helping out with the crowdsourcing tool development and assessment.

References

  1. Harden R., Stevenson M., Downie W. W., & Wilson G. (1975). Assessment of clinical competence using objective structured examination. Br Med J, 1(5955), 447–451. pmid:1115966
  2. Howley L. D. (2004). Performance assessment in medical education: where we’ve been and where we’re going. Evaluation & the Health Professions, 27(3), 285–303.
  3. Wass V., Van der Vleuten C., Shatzer J., & Jones R. (2001). Assessment of clinical competence. The Lancet, 357(9260), 945–949. pmid:11289364
  4. Hochlehnert A., Schultz J.-H., Möltner A., Tımbıl S., Brass K., & Jünger J. (2015). Electronic acquisition of OSCE performance using tablets. GMS Zeitschrift für Medizinische Ausbildung, 32(4). pmid:26483854
  5. O’Connor H. M., & McGraw R. C. (1997). Clinical skills training: developing objective assessment instruments. Medical Education, 31(5), 359–363. pmid:9488858
  6. Norcini J. J. (2003). Setting standards on educational tests. Medical Education, 37(5), 464–469. pmid:12709190
  7. Norman G., Van der Vleuten C., & De Graaff E. (1991). Pitfalls in the pursuit of objectivity: issues of validity, efficiency and acceptability. Medical Education, 25(2), 119–126. pmid:2023553
  8. Daniels V. J., & Pugh D. (2018). Twelve tips for developing an OSCE that measures what you want. Medical Teacher, 40(12), 1208–1213. pmid:29069965
  9. LaRochelle J., Durning S. J., Boulet J. R., van der Vleuten C., van Merrienboer J., & Donkers J. (2016). Beyond standard checklist assessment: Question sequence may impact student performance. Perspectives on Medical Education, 5(2), 95–102. pmid:27056080
  10. Davis D. (2018). The medical school without walls: Reflections on the future of medical education. Medical Teacher, 40(10), 1004–1009. pmid:30259766
  11. Harden R. M. (2018). Ten key features of the future medical school—not an impossible dream. Medical Teacher, 40(10), 1010–1015. pmid:30326759
  12. Rourke J. (2018). What does the future hold? No one knows for sure…. Medical Teacher, 40(10), 980–981. pmid:30444164
  13. Ellaway R. H., Albright S., Smothers V., Cameron T., & Willett T. (2014). Curriculum inventory: Modeling, sharing and comparing medical education programs. Medical Teacher, 36(3), 208–215. pmid:24559305
  14. Wikipedia. https://en.wikipedia.org/wiki/Wikipedia:Size_comparisons#:~:text=Currently%2C%20the%20English%20Wikipedia%20alone,million%20articles%20in%2030%209%20languages.
  15. Ranard B. L., Ha Y. P., Meisel Z. F., Asch D. A., Hill S. S., Becker L. B., et al. (2014). Crowdsourcing—harnessing the masses to advance health and medicine, a systematic review. Journal of General Internal Medicine, 29(1), 187–203.
  16. Gao H., Barbier G., & Goolsby R. (2011). Harnessing the crowdsourcing power of social media for disaster relief. IEEE Intelligent Systems, 26(3).
  17. van der Heijden H. (2004). User acceptance of hedonic information systems. MIS Quarterly, 28(4), 695–704.
  18. Doan A., Ramakrishnan R., & Halevy A. Y. (2011). Crowdsourcing systems on the World-Wide Web. Communications of the ACM, 54(4), 86–96.