
The comprehensive researcher development framework (CRDF): Core learning outcomes for research training

  • Janet L. Branchaw ,

    Roles Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    branchaw@wisc.edu

    Affiliations Department of Kinesiology, University of Wisconsin - Madison, Madison, Wisconsin, United States of America, Wisconsin Institute of Science Education and Community Engagement, University of Wisconsin - Madison, Madison, Wisconsin, United States of America

  • Amanda R. Butz,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Wisconsin Institute of Science Education and Community Engagement, University of Wisconsin - Madison, Madison, Wisconsin, United States of America

  • Joseph C. Ayoob

    Roles Formal analysis, Investigation, Validation, Visualization, Writing – review & editing

    Affiliation Department of Computational and Systems Biology, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America

Abstract

Becoming a researcher involves the iterative development of deep disciplinary knowledge, specific technical skills, and psychosocial attitudes, behaviors, and beliefs. Consequently, training researchers is resource- and time-intensive. In addition, expectations can be opaque because the traditional apprenticeship model used in research training is idiosyncratic, defined by norms and traditions that vary across disciplines. To align and make research training expectations more transparent, we developed the Comprehensive Researcher Development Framework (CRDF) by extracting and analyzing learning outcomes from 56 previously published evidence-based frameworks from across disciplines. The individual frameworks each addressed a limited range of training stages (e.g., undergraduate only), focused on a subset of learning outcomes (e.g., technical skills), and/or included a single or narrow subset of disciplines (e.g., biomedical sciences). The CRDF derived from these frameworks includes 79 core learning outcomes nested under 8 areas of researcher development that are supported by evidence of content validity collected from experts in the research community. The CRDF builds consensus across disciplines and addresses undergraduate through postdoctoral career stages to define a coherent continuum of research learning outcomes that can be used to monitor and study researcher development. The CRDF does not replace existing discipline-based or training stage specific frameworks but rather can link and coordinate their use. The CRDF can be used by research training program directors to design new or refine existing research training programs, track individual research mentee development over time, and demystify the research training process for mentors and mentees. The CRDF can also be used by scholars studying researcher development to link data on core learning outcomes across research training programs, stages, and disciplines.

Introduction

Attracting motivated students with high potential from diverse backgrounds to research careers and providing them with rigorous, yet supportive research training experiences [1,2] is key to building a strong and innovative research workforce. However, training individuals to become researchers is complex and takes time. It involves the development of deep disciplinary knowledge, specific technical skills, and psychosocial attitudes, behaviors, and beliefs that promote integration and belonging in disciplinary research communities [3–6]. The diverse ways of knowing [7], research methodologies, and types of research projects across disciplines, coupled with the apprenticeship model of training used in many disciplines, produce research training journeys that are unique to each student. Consequently, designing and assessing the effectiveness of varied research training pathways can be challenging.

Though approaches and methods for conducting research are always evolving, and learning to do research is a lifelong process, formal research training in most disciplines begins during undergraduate education, is the primary focus of graduate education, and may be extended with postdoctoral training depending on the discipline. Ideally, formal research training across training stages forms a continuum that builds increasingly sophisticated disciplinary knowledge, perspectives, expertise, professional responsibilities, and relationships that are needed to successfully design and conduct rigorous research. However, consensus across disciplines and training stages about research learning outcomes is limited. Without defined common core learning outcomes, it is difficult to coordinate research training across programs, training stages, and between mentors, which can lead to contradicting or ill-defined expectations for mentees.

Inconsistencies across training programs and unclear expectations pose challenges for mentees. These challenges sometimes result in talented mentees abandoning a seemingly uncertain research career path for more well-defined, familiar, or lucrative opportunities outside of research [8–14]. This can be particularly important for mentees with limited research backgrounds, for whom the research culture is unfamiliar and persistence along a research career path is uncertain. Systematizing research training and clarifying expectations is key to retaining the talented, high-potential students we need to build the research workforce [3,4].

To clarify, align, and study research training, scholars (including two of the authors, [15]) have conducted studies to identify and understand how researchers develop and have published researcher development frameworks and/or assessments based on their findings (S1 Appendix). Conceptual frameworks are structures that describe “the factors and/or variables involved in (a) study and their relationships to one another” [16]. They can be used to guide training programs, mentors, and mentees in selecting and evaluating the impact of training activities, as well as to assess and monitor mentee development as a researcher over time. The individual frameworks in published studies, however, typically span a limited range of training stages (e.g., undergraduate only), focus on a subset of learning outcomes (e.g., technical skills), and/or include a single or narrow subset of disciplines (e.g., biomedical sciences). Consequently, building a coherent continuum of research training or studying researcher development across training stages and/or disciplines is challenging.

We leveraged the prior work done on the discipline and training stage specific frameworks to demonstrate consensus across disciplines and develop the Comprehensive Researcher Development Framework (CRDF). We analyzed 56 individual frameworks to identify the common knowledge, skills, and psychosocial attitudes, behaviors, and beliefs that researchers develop from the undergraduate through postdoctoral training stages. Through multiple phases of input from the research community, we defined and confirmed the importance of 79 core learning outcomes and organized them into 8 areas of researcher development (Table 1).

Table 1. Comprehensive Researcher Development Framework (CRDF).

https://doi.org/10.1371/journal.pone.0332587.t001

Importantly, the CRDF is not meant to replace existing disciplinary or training stage specific frameworks but rather provide a comprehensive benchmark against which these frameworks can be compared and through which they can be linked. The CRDF can be used to study researcher development and research training programs across disciplines and training stages, to guide training program development when specific frameworks do not exist, or as a template from which to build new frameworks for specific disciplines or training stages. The CRDF can also be shared with mentees to make the expectations of research training explicit and empower them to take responsibility for their research training experience. Likewise, it can be shared with mentors to help them articulate expectations and assess their mentees’ progress. Tools based on the CRDF to support these uses are provided in the S2a and S2b Appendix.

Methods

An overview of the process used to develop the CRDF is presented in Fig 1. All human subjects research presented in this article was approved by the University of Wisconsin - Madison's Institutional Review Board, protocol #2024-0876.

Fig 1. Comprehensive Researcher Development Framework Development Process.

https://doi.org/10.1371/journal.pone.0332587.g001

  1. Synthesize research

We followed the five-stage process of research synthesis outlined by Cooper [17]: Problem Formulation; Literature Search; Data Evaluation; Analysis & Interpretation; and Presentation of Results. The first four stages are described in the Methods section and the final stage in the Results section.

Problem formulation

We sought to answer the question: What common sets of knowledge, skills, and psychosocial attitudes, behaviors, and beliefs do researchers-in-training develop across different disciplines and training stages (i.e., undergraduate through postdoctoral)?

Literature search

We conducted a comprehensive review of the literature published in the last quarter century (2000–2024) to identify research development frameworks across disciplines from undergraduate to postdoctoral career stages that were supported with evidence of validity. Prior to conducting the literature search, key terms to inform the search were defined:

  • Research knowledge: specific disciplinary knowledge; knowledge about the process of investigating or exploring the unknown needed to conduct disciplinary research and/or to advance the discipline; knowledge of the structures and resources that support research in the discipline (e.g., peer review practices, funding mechanisms).
  • Research skills: sets of skills that trainees develop while engaging in research that advance their development as researchers.
  • Research psychosocial attitudes, behaviors, and beliefs: disciplinary cultural norms; ways of networking and engaging in interpersonal interactions; development of identity as a researcher.
  • Researcher development frameworks: evidence-based organization of concepts or ideas around researcher development; research skill assessment.

The scope of the literature search included identifying undergraduate, graduate, and postdoctoral frameworks in the physical, life, or social sciences, arts/humanities, or in multiple disciplines. Searches were conducted on Web of Science and Ebsco Academic Search for articles published from 2000 to 2024 using keywords. In the following list of keywords, the brackets indicate different iterations of the same search term, the quotation marks were used to search for exact text, and the asterisk allowed searching for similar terms: [undergraduate, graduate, postdoctoral] research “development framework” or framework; [undergraduate, graduate, postdoctoral] “research* competencies”; “research competency” assessment; “research learning” assessment; “research framework” assessment.

In addition, we reviewed professional organization websites for the Council on Undergraduate Research, Council of Graduate Schools, and the National Postdoctoral Association for guidance on key skills that mentees may be expected to learn at each training stage. Finally, frameworks known to us or referenced in articles found in the initial search were incorporated.

Data evaluation

The articles identified in the literature search were evaluated to determine if they met 3 inclusion criteria: 1) they must define specific areas of knowledge, skill, or psychosocial development; 2) they must address undergraduate, graduate, and/or postdoctoral research training; and 3) they must be published with at least 1 source of validity evidence as defined by the Standards for Educational and Psychological Testing [18], i.e., evidence based on test content, response processes, internal structure, or relations to other variables or criteria. Only articles meeting all three criteria were included in the study.

Analysis & interpretation

The articles that met the inclusion criteria were reviewed, and individual knowledge, skill, and psychosocial elements of researcher development were identified and extracted. Three researchers, a discipline-based educational researcher trained in neurophysiology research methods (JB), an educational psychologist and academic motivation researcher trained in quantitative and qualitative social science research methods (AB), and a director of training and mentoring programs in computational and systems biology and discipline-based educational researcher trained in cellular and developmental neuroscience and molecular genetics (JA), independently analyzed the extracted elements and proposed themes to code the elements through an open coding process [19]. Each researcher organized the elements into similar groups and assigned themes to their groups. The researchers met to compare and discuss the individually proposed group themes and agreed on an initial set of themes and definitions to begin coding. The researchers met to compare, discuss, and revise the themes throughout the coding process. Elements representing general headings or non-research skills (e.g., teaching skills) in the published frameworks were removed from the list of elements.

  2. Draft and iteratively revise core learning outcomes

One researcher (JB) compared, contrasted, and grouped the elements coded under each of the final themes to draft an initial set of core learning outcomes that represented specific knowledge, skills, and psychosocial attitudes, behaviors, and beliefs. The other two researchers (AB and JA) reviewed the initial drafts, and the team met to discuss their feedback and make revisions. To ensure that all elements extracted from the original source frameworks were represented, JB and AB mapped the original elements to the resulting draft core learning outcomes. Iterative revision of the learning outcomes based on feedback from the research community continued throughout the development process.

  3. National survey to collect feedback and evidence of content validity

The draft core learning outcomes were organized into 5 broad categories for use in an online survey to gather feedback from professional researchers (postdoctoral scholars and faculty/staff) across the nation. The broad categories on the survey were: Thinking and Communicating about Research, Researcher Self-Beliefs and Attitudes, Research Career Readiness, Relationships in the Research Environment, and Conducting Research. The S4 Appendix tracks all the learning outcome revisions from the initial to the final versions. Although the survey was designed to capture data on all 79 learning outcomes, a survey error prevented 4 learning outcomes related to career pathways from being rated for importance; importance results were therefore available for only 75 learning outcomes.

The survey was sent to individuals in the researchers’ networks, as well as several local, regional, and national groups (see Table in the S5 Appendix for a full listing of groups contacted). Survey respondents were asked to report the level of research mentee(s) with whom they work, their disciplinary area of expertise, their current role, and their institution. To avoid survey fatigue, each respondent was randomly asked one of two questions: 1) How important is achieving each learning outcome to becoming a mature, independent scholar in your discipline? (scale: not relevant to research professionals in my discipline, not important, slightly important, moderately important, extremely important); or 2) At what stage of training in your discipline do research scholars make the greatest gains toward each learning outcome? (scale: not emphasized in training, undergraduate/post baccalaureate, graduate/professional, postdoctoral). Once a respondent answered the first question for each draft learning outcome, they were given the option to answer the other question for each draft learning outcome. After answering one (or both) questions, respondents were invited to give feedback on the draft learning outcomes and to submit any learning outcomes important to their discipline that were missing from the list.

  4. Define framework categories to organize learning outcomes

Research community members were invited to participate in 2 rounds of card sorting exercises to organize the 78 core learning outcomes into categories for the framework. Card sorting is used to understand how people group and categorize information [20]. Card sorting exercise participants were recruited through email listservs for graduate training program leaders and postdoctoral scholars at a single research university in the Midwest. The message invited them to sign up and asked them to share the invitation with others in their networks who might be interested in participating.

Open card sorting was conducted in groups that included faculty, research staff, and postdoctoral scholars from multiple disciplines. Participants worked collaboratively to open-sort 78 cards, each with a core learning outcome, into categories defined by the group. First, participants discussed the meaning of each learning outcome, then iteratively organized them into groups, and then named their final groups. Important points of discussion and feedback on the learning outcomes were documented by researchers in real time and used for subsequent revision of the learning outcomes. The categories defined by each group and the cards assigned to each category were entered into a spreadsheet developed to analyze card sort data [21]. Similar categories offered by different groups were merged into standardized categories prior to analysis, and the percent agreement among all groups was examined to determine whether the standardized categories accurately represented each group of learning outcomes. Based on this analysis of the combined data set, an initial set of researcher development categories was derived.
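The percent-agreement calculation described above can be sketched as follows; the card identifiers, category names, and data structures are hypothetical illustrations, not taken from the study's analysis spreadsheet.

```python
from collections import Counter

def percent_agreement(group_sorts, card):
    """For one learning-outcome card, return the share of groups that
    placed it in each standardized category.

    group_sorts: list of dicts, one per group, mapping card id -> category.
    """
    placements = Counter(s[card] for s in group_sorts if card in s)
    total = sum(placements.values())
    return {category: n / total for category, n in placements.items()}

# Three hypothetical groups sorting one hypothetical card:
group_sorts = [
    {"LO-1.01": "Practical Research Skills"},
    {"LO-1.01": "Practical Research Skills"},
    {"LO-1.01": "Research Thinking and Reasoning Skills"},
]
agreement = percent_agreement(group_sorts, "LO-1.01")
# Two of three groups agree on "Practical Research Skills" (about 67%).
```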

Using the categories from the open card sorting exercise, faculty, research staff, postdoctoral scholars, graduate students, and undergraduate students from across the country participated in a second, closed card sorting exercise (i.e., all categories were predetermined). Participants either joined in-person group sessions on the same research university campus as the open card sort exercise or sorted individually online, at multiple campuses across the country, using an online card sorting tool [22]. All were asked to sort the 78 learning outcome cards into the categories generated by the open card sorting exercise. Participants were required to select one primary category for each learning outcome, though many outcomes could reasonably be assigned to more than one category; participants were able to note other categories to which they considered assigning a particular learning outcome. Data from the groups were weighted by the number of participants and combined with the individual card sort data. The combined data were analyzed for patterns to determine under which primary category each learning outcome should nest.

  5. Back map source framework elements to learning outcomes and categories to confirm coverage

To ensure that all the elements extracted from the original source frameworks were represented in the final core learning outcomes, the researchers back mapped each extracted element to one or more final learning outcomes. The elements were divided into three groups, and each researcher mapped one group. Then, a second researcher reviewed the mapping and either confirmed it or flagged it for further discussion. Flagged elements were reviewed and discussed by all researchers until they agreed on the learning outcome(s) to which each should be mapped.

Results

  1. Synthesize research

The literature search yielded over 13,000 results. Articles were flagged for further review if they addressed undergraduate, graduate, or postdoctoral training stages and related to research trainee development. All but 123 articles were excluded from further review because they described empirical research rather than researcher learning outcomes or researcher development frameworks.

Of the 123 articles reviewed, 56 met the criteria for inclusion: 34 were identified in either Ebsco or Web of Science, 14 were referenced in other articles, 1 was a professional society framework, and 7 were known to the authors. Thirty-four percent of the articles were published by researchers outside of the United States. The sources, validity evidence, and the discipline(s) and career stage(s) of these 56 frameworks [5,15,23–76] were documented (S1 Appendix). Figs 2 and 3 show summaries of the career stage(s) and disciplines addressed in the articles.

Fig 2. Career stages addressed in included framework articles (N = 56).

Note that some articles addressed more than one career stage.

https://doi.org/10.1371/journal.pone.0332587.g002

Fig 3. Research disciplines addressed in included framework articles (N = 56).

Note that some articles addressed more than one discipline.

https://doi.org/10.1371/journal.pone.0332587.g003

The three researchers reviewed the 56 frameworks and identified and extracted 1,434 elements. Removal of general headings and non-research skill elements left 1,343 elements to be coded. Each researcher individually reviewed the elements and proposed themes to use in coding. Through discussion, they agreed on an initial shared set of 44 themes to begin coding. Over three rounds of coding, the themes and discrepancies in code assignments were discussed, and revisions to the themes and code assignments were made. A final set of 48 themes was used, and consensus [19] was reached on the theme codes for all 1,343 elements (45% agreement in the first round, 73% in the second round, and 100% in the third round). A full list of the elements and code themes is available in the S3 Appendix.

  2. Draft and iteratively revise core learning outcomes

One researcher (JB) reviewed the elements coded in each theme and drafted 78 core learning outcomes. These were reviewed by the other two researchers (AB and JA) and the three researchers met to discuss and revise the draft core learning outcomes to generate a revised initial set of 79 core learning outcomes (S4 Appendix).

  3. National survey to collect feedback and evidence of content validity

To collect feedback on the draft learning outcomes and evidence of content validity (i.e., validity evidence based on the relationship between the content and the construct that it is intended to measure, as determined by experts [18]), a national online survey of researchers was conducted, asking about the importance of each learning outcome and the career stage at which it is emphasized. Overall, 169 individuals responded to the survey. A summary of the characteristics of the survey respondents is presented in Table 2.

Table 2. Characteristics of National Survey Respondents (N = 169).

https://doi.org/10.1371/journal.pone.0332587.t002

Learning outcome importance

Of our 169 survey respondents, 123 rated how important each learning outcome was to researcher development in their discipline (Fig 4). The overwhelming majority of respondents categorized most of the learning outcomes as moderately or extremely important for the development of a researcher in their field. This evidence of content validity from experts supports that the core learning outcomes are relevant to research training across disciplines.

Fig 4. Percentage of 75 learning outcomes for which 50% or more of respondents selected each level of importance in response to the question “How important is achieving each learning outcome in your discipline?” (N = 123).

https://doi.org/10.1371/journal.pone.0332587.g004

Though there is some variability in level of importance across disciplines, several learning outcomes were consistently rated as extremely important. Table 3 shows the learning outcomes that 80% or more of respondents agreed were extremely important, with perseverance most frequently rated as extremely important.

Table 3. Learning Outcomes Rated as Extremely Important by 80% or More of Respondents.

https://doi.org/10.1371/journal.pone.0332587.t003

Overall, 50 of the 75 learning outcomes (66.67%) were rated as extremely important by over 50% of the survey respondents. For the remaining 25 (33.33%), there was no majority rating, nor were there discernible rating patterns based on learning outcome content. We observed some variation in the level of importance by discipline: respondents from different disciplines agreed on the specific level of importance for 29 (38.67%) of the 75 learning outcomes. When the ratings of moderately important and extremely important were combined, agreement increased to 65 (86.67%) of the learning outcomes.

Learning outcome training stage

We also received input from 112 survey respondents who rated the career stage at which researchers in training make the greatest gains toward each of the 79 learning outcomes in their discipline (Fig 5). Overall, 51 (64.56%) of the learning outcomes were reported as emphasized during graduate education by over 50% of the survey respondents, 7 (8.86%) during undergraduate education, and 6 (7.59%) during postdoctoral training. There was no majority rating for the remaining 15 (18.99%) learning outcomes, which were reported as addressed during various career stages. When examined within each discipline, trends in the various sciences reflected the overall trend, but in the Arts & Humanities the career stage at which learning outcomes were addressed shifted toward the earlier undergraduate stage. However, given the small number of respondents in the Arts & Humanities, it is not possible to draw conclusions about these differences.

Fig 5. Percentage of 79 learning outcomes for which 50% or more of respondents selected the career stage at which greatest gains were made. (N = 112).

https://doi.org/10.1371/journal.pone.0332587.g005

Notably, the data show that 7 learning outcomes (3.09, 3.11, 3.12, 4.04, 4.06, 6.10, and 8.06) were rated as not emphasized in training (Fig 5) yet were rated as moderately or extremely important (Fig 4) by most respondents. This suggests that research training program directors should examine whether their programs address these important learning outcomes and, if not, integrate new learning activities or experiences that do.

In addition to rating the learning outcomes for importance and career stage, survey respondents were asked to provide feedback on the learning outcomes. They were asked to comment on whether the language used was appropriate for their discipline or if any learning outcomes important for their discipline were missing. Based on this feedback and ongoing review by the research team, 21 learning outcomes were revised, 9 were merged, 5 were added, and 1 was deleted. The total number of learning outcomes at the end of this stage of development was 78. See the S4 Appendix for details about the learning outcome revisions made from the initial drafts to the final versions.

  4. Define framework categories to organize learning outcomes

We solicited input from those in the research community engaged in research training (practitioners and trainees) as well as those who study research training (educational researchers) to organize the learning outcomes into categories (areas of researcher development) that would be useful to the community. Two separate phases of card sorting activities were implemented. In the first phase, expert research practitioners with deep knowledge of research training (faculty, research staff, and postdoctoral scholars) from across disciplines were recruited to work in groups to open sort the learning outcomes into categories representing areas of researcher development. The preliminary researcher development categories generated across these groups were analyzed and a consensus list developed. In the second phase, professional research practitioners as well as undergraduate and graduate research students used the categories generated from the open sorting exercise to assign the learning outcomes to the categories in a closed sorting exercise. The characteristics of the card sorting exercise participants are summarized in Table 4.

Table 4. First and Second Round Card Sort Participant Characteristics.

https://doi.org/10.1371/journal.pone.0332587.t004

Open card sort to define researcher development categories

The recruited research professionals worked in nine separate groups of two to four members each to sort the 78 learning outcomes into categories. Each group discussed and named its categories independently. The groups generated a total of 93 category titles (Table 5), which were analyzed to identify commonalities. There was overlap, but groups chose to define their categories at different levels of detail; for example, a single category generated by one group could align with multiple categories generated by another group. Comparisons of the learning outcomes each group assigned to its categories were used to understand how categories across different groups were related and to identify consensus categories. Based on this analysis, the research team defined nine integrated preliminary categories (Table 5). Results from the individual groups are available in the S6 Appendix.

Table 5. Evolution of Area of Researcher Development Categories.

https://doi.org/10.1371/journal.pone.0332587.t005

Faculty, research staff, postdoctoral scholars, graduate students, and undergraduate students participated in the closed card sorting exercise to provide feedback and assign the 78 learning outcomes to the 9 integrated preliminary categories generated by the open card sorting exercise (Table 5). Some participants engaged as part of in-person groups, while others engaged individually with an online card sorting tool. Nine in-person groups of two to four participants each were held, and 46 individuals from across the country participated online. Data from the groups were weighted by the number of individuals in the group and combined with the online data from individuals. For example, if four individuals participated in a focus group, the results of their card sort were given four times the weight of a card sort completed by an individual online.
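The weighting scheme described above amounts to counting each group's sort once per group member; a minimal sketch follows, with hypothetical card and category names.

```python
from collections import Counter

def weighted_category_counts(group_sorts, individual_sorts, card):
    """Combine group and individual card-sort data for one card,
    weighting each group's sort by its number of participants.

    group_sorts: list of (n_participants, {card id -> category}) tuples
    individual_sorts: list of {card id -> category} dicts (weight 1 each)
    """
    counts = Counter()
    for n_participants, sort in group_sorts:
        if card in sort:
            counts[sort[card]] += n_participants
    for sort in individual_sorts:
        if card in sort:
            counts[sort[card]] += 1
    return counts

# A four-person group's sort counts four times as much as one online individual's:
counts = weighted_category_counts(
    group_sorts=[(4, {"LO-2.03": "Conducting Research"})],
    individual_sorts=[
        {"LO-2.03": "Conducting Research"},
        {"LO-2.03": "Interpersonal Research Skills"},
    ],
    card="LO-2.03",
)
# counts: "Conducting Research" = 5, "Interpersonal Research Skills" = 1
```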

A map of the closed card sorting results is presented in Table 6. Overall, closed card sorting participants reported that it was sometimes challenging to assign a learning outcome to just one area. Consensus across groups for each learning outcome shown in Table 6 was considered strong if agreement was 75% or greater (dark blue), moderate if agreement was at least 50% but less than 75% (light blue), and weak if agreement was less than 50% (no shading). Assignment of learning outcomes between the “Research Thinking and Reasoning Skills” and “Practical Research Skills” categories showed significant overlap. Therefore, these two categories were combined into one, “Practical and Cognitive Research Skills,” and the data from the two original categories were combined for analysis.
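The agreement bands described above can be expressed as a simple threshold function; assigning the 75% and 50% boundary values to the stronger band is an assumption here, one reasonable reading of the reported ranges.

```python
def agreement_strength(agreement):
    """Classify a percent-agreement value (0.0-1.0) into the consensus
    bands reported for the closed card sort.

    Boundary values (0.75, 0.50) are assigned to the stronger band,
    which is an assumed convention, not stated in the source.
    """
    if agreement >= 0.75:
        return "strong"    # dark blue shading
    if agreement >= 0.50:
        return "moderate"  # light blue shading
    return "weak"          # no shading

# e.g., 80% agreement is strong, 60% is moderate, 40% is weak
```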

After the combined “Practical and Cognitive Research Skills” category was created, analysis of the raw data showed strong agreement for 8 (36%) of the 22 learning outcomes and moderate agreement for 13 (59%). These were assigned to the given category, and no further modifications were made to them. Learning outcomes in any category that showed weak agreement were reviewed by the research team. Those that had been misinterpreted, based on observations during in-person card sorting sessions or comments shared by online participants, were revised to clarify their meaning. Those that generated equally valid but different interpretations across groups or individuals were divided into two separate learning outcomes. Development and refinement of the core learning outcomes are detailed in S4 Appendix.

In addition to combining the two areas of researcher development, the names of three other areas of researcher development were modified based on the closed card sort participant feedback. “Interpersonal Skills as a Researcher” was modified to “Interpersonal Research Skills” to clarify that this category includes learning outcomes focused on a researcher’s ability to interact with other researchers. “Personal Attributes as a Researcher” was modified to “Researcher Self-Beliefs and Attitudes” based on feedback received that the word “attribute” suggested that these were unchangeable traits, rather than psychosocial skills that individuals can develop. Finally, “Knowledge and Skills to Manage Research Projects and Teams” was modified to “Knowledge and Skills to Administer and Manage Research Projects and Teams” to clarify that this category included learning outcomes needed for administration of research. The last column in Table 5 shows the final categories. The final CRDF with the categories and their nested core learning outcomes is in Table 1.

  5. Back Map Source Framework Elements to Learning Outcomes and Categories to Confirm Coverage

Researchers back mapped the elements derived from the original source frameworks to the final CRDF core learning outcomes and areas of researcher development (categories) to confirm that all were covered in the final CRDF. Table 7 reports the percentage of CRDF core learning outcomes in each area of researcher development that are addressed in the original source frameworks. These percentages were calculated by mapping each source element to one or more core learning outcomes. The detailed map of each element to its core learning outcome(s) is in S7 Appendix.
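The coverage percentages in Table 7 can be computed from the back map as the share of each area's core learning outcomes that at least one source element maps to. The sketch below assumes that representation; the function name, outcome identifiers, and area label are illustrative, not taken from the authors' materials.

```python
def coverage_by_area(mapped_outcomes, outcomes_by_area):
    """Percent of each area's core learning outcomes covered by a framework.

    mapped_outcomes: outcome IDs that the framework's elements map to
    outcomes_by_area: {area name: list of that area's outcome IDs}
    """
    covered = set(mapped_outcomes)
    return {
        area: round(100 * len(covered & set(outcomes)) / len(outcomes))
        for area, outcomes in outcomes_by_area.items()
    }

# Hypothetical example: a framework's elements map to 2 of the 4 outcomes
# in one area, giving 50% coverage for that area.
areas = {"Research Communication Skills": ["LO1", "LO2", "LO3", "LO4"]}
coverage_by_area(["LO1", "LO3"], areas)
# -> {"Research Communication Skills": 50}
```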

Table 7. Original Framework Elements Mapped to Areas of Researcher Development.

https://doi.org/10.1371/journal.pone.0332587.t007

Discussion

To our knowledge, the CRDF presented here is the first published framework that synthesizes common outcomes across multiple disciplines and addresses undergraduate through postdoctoral training career stages. The CRDF will play an important role in standardizing programmatic, assessment, and evaluation efforts, as well as in demystifying the researcher development process for mentees and mentors. Though feedback and evidence of validity were gathered from the research community in the United States, 34% of the source frameworks were published by scholars outside the United States, suggesting that the CRDF will be applicable globally. Below we detail the needs that this new framework will address and how it will benefit the research community at large.

The CRDF will identify focus areas and potential gaps in existing frameworks

Currently available frameworks tend to address only a subset of the areas of researcher development and to be discipline- and/or career-stage specific. Focus areas and gaps in the 56 frameworks used to develop the new framework are revealed in Table 7, where the elements of these source frameworks are mapped to the eight areas of researcher development in the CRDF, and in S7 Appendix, where they are mapped in greater detail to the 79 individual learning outcomes. For example, the map in Table 7 reveals that several frameworks focus on Research Communication Skills, while far fewer address developing Knowledge and Skills to Pursue a Research or Research-Related Career. Research training program directors can compare any framework they are using to design and/or evaluate their training programs against the CRDF to identify focus areas and gaps they may need to address.

The CRDF will align the training and performance expectations of multiple stakeholders engaged in research training

The learning outcomes in the CRDF represent a relative consensus across disciplines, promoting a shared understanding among the multiple stakeholders involved in developing researchers and the research workforce (e.g., mentees, mentors, training programs, disciplinary communities, funders) of the knowledge, skills, and psychosocial attitudes, behaviors, and beliefs that should be developed through research training. The core learning outcomes provide the means to track mentee development across training career stages and disciplines, as well as the structure needed to coordinate training across institutions. Training programs that use the CRDF to design their programs and assess the development of their mentees will be better positioned to collaborate and to support mentees transitioning from one program to the next, thereby contributing to national efforts to develop the research workforce. Those in STEM will also be better positioned to address the recommendations outlined in two recent National Academies of Sciences, Engineering, and Medicine (NASEM) reports on undergraduate [3] and graduate [4] research training, which were referenced in developing the CRDF.

The CRDF will level the playing field for mentees by making the expectations in research training transparent

The lack of structure in research learning experiences, especially apprentice-style experiences, has created an irregular and hidden curriculum that disadvantages students with limited research backgrounds (e.g., first-generation college students). Novice research mentees often lack the knowledge and social capital needed to navigate the research environment successfully and are consequently more likely to struggle to meet expectations and to handle unanticipated challenges throughout their training journey [77–80]. The core learning outcomes in the CRDF clarify what mentees should be learning during formal research training and can therefore empower them to meet expectations and successfully navigate the research training environment. The core outcomes also build mentees’ agency to take responsibility for designing their training journey by allowing them to self-assess their progress and to self-advocate for learning experiences that will support achievement of the learning outcomes. The CRDF can be shared with research mentees by including it in a student handbook or by using the Researcher Development Plan tool provided in S2a and S2b Appendices, which can be implemented in conjunction with Individual Development Plans and Mentor-Mentee Compacts.

The CRDF will support the development of common metrics to measure and understand how researchers develop across training programs

Core learning outcomes synthesized in the CRDF provide the basis for developing measurement tools such as learning assessments, rubrics, and interview/focus group protocols that can be used in program evaluation and in basic research on research training and researcher development. Evaluators can use the data generated by common metrics to provide targeted feedback to research training program directors and mentors about the efficacy of the research learning experiences they provide, guiding the continuous improvement of specific learning experiences for mentees and of research training programs [81]. Researchers can use these tools to investigate researcher development and research learning experiences across multiple training sites, potentially discovering causal relationships and generalizable knowledge about researcher development that can be applied broadly. Common metrics used across multiple sites allow investigation of the mechanisms by which research training environments, specific learning experiences, and individual mentee characteristics contribute to or inhibit researcher development. Even when common metrics or frameworks are not used across programs, the CRDF may allow researchers and practitioners to align outcomes from different studies to investigate the impact of research training programs [3,82–84].

Limitations

We were able to gather only limited input on the CRDF from the arts and humanities research community. Therefore, it is difficult to draw conclusions about the use of the framework in this community or about the disciplinary differences we observed between this community and the sciences. Nonetheless, because arts and humanities frameworks were used to construct the CRDF and we are confident in the feedback we did receive, we believe the CRDF can be used in these disciplines. Further testing in this community is needed to verify its applicability or to highlight areas of the framework that require revision for use in the arts and humanities.

The majority, though not all, of the content validity data reported here came from one large research university. Likewise, there was a lack of racial/ethnic diversity in the national survey respondent pool and among the card sorting activity participants for whom we have this information. Therefore, as the framework is implemented across institutions, gaps resulting from the limited diversity of the researchers who provided input may emerge, and modifications may be needed to make the framework more universally applicable.

Conclusion

We were able to define and document consensus across disciplines on core learning outcomes that articulate the knowledge, skills, and psychosocial attitudes, behaviors, and beliefs needed to become a researcher. The resulting comprehensive framework, the CRDF, transcends disciplines and formal training career stages. Adoption and adaptation of the CRDF will not only support individual research mentees and the improvement of individual research training programs but also facilitate coordination of research workforce development across programs and training stages. S2a and S2b Appendices include two tools to facilitate CRDF use: 1) a tool for research training program directors to map their current program activities and assessments to identify potential gaps; and 2) a tool to complement individual development plans for mentees to use with their mentors and thesis committees in planning and mapping their development as researchers over time.

Future directions

Our research team is committed to developing evaluation and assessment rubrics and instruments based on the CRDF. We welcome collaborators on this work and invite those interested to contact us.

Supporting information

S1 Appendix. Research Development Frameworks and Assessments included in Literature Review.

https://doi.org/10.1371/journal.pone.0332587.s001

(PDF)

S2a Appendix. Research Training Program Development and Mapping Tool.

https://doi.org/10.1371/journal.pone.0332587.s002

(XLSX)

S2b Appendix. Researcher Development Plan (RDP) Tool.

https://doi.org/10.1371/journal.pone.0332587.s003

(XLSX)

S3 Appendix. Code Themes, Definitions and Assignments.

https://doi.org/10.1371/journal.pone.0332587.s004

(PDF)

S4 Appendix. Evolution of Learning Outcomes for CRDF.

https://doi.org/10.1371/journal.pone.0332587.s005

(PDF)

S5 Appendix. Dissemination Contacts for National Survey.

https://doi.org/10.1371/journal.pone.0332587.s006

(PDF)

Acknowledgments

We thank undergraduate student Shefali Bhatt for her help with the literature search and our colleagues Drs. Amber Smith, Melissa McDaniels, Christine Pfund, and Fatima Sancheznieto for their feedback on a preliminary draft of this manuscript.

References

  1. Sanford N. Where Colleges Fail: A study of the student as a person. San Francisco: Jossey-Bass; 1967.
  2. Sanford N. Self and Society. Routledge; 2017.
  3. Gentile J, Brenner K, Stephens A, editors. Undergraduate Research Experiences for STEM Students: Successes, Challenges, and Opportunities. Washington, D.C.: National Academies Press; 2017.
  4. Leshner A, Scherer L, editors. Graduate STEM Education for the 21st Century. Washington, D.C.: National Academies Press; 2018.
  5. NPA Core Competencies. [cited 25 Apr 2025]. Available: https://www.nationalpostdoc.org/page/CoreCompetencies
  6. Research Training Framework for Doctoral Students. 30 Oct 2014 [cited 25 Jun 2025]. Available: https://www.ukri.org/publications/research-training-framework-for-doctoral-students/
  7. Shulman LS. Ways of seeing, ways of knowing: ways of teaching, ways of learning about teaching. Journal of Curriculum Studies. 1991;23(5):393–5.
  8. Nicole F, Deboer J. A Systematized Literature Review of the Factors that Predict the Retention of Racially Minoritized Students in STEM Graduate Degree Programs. 2020. Available: https://peer.asee.org/a-systematized-literature-review-of-the-factors-that-predict-the-retention-of-racially-minoritized-students-in-stem-graduate-degree-programs
  9. Young SN, Vanwye WR, Schafer MA, Robertson TA, Poore AV. Factors Affecting PhD Student Success. Int J Exerc Sci. 2019;12(1):34–45. pmid:30761191
  10. Sverdlik A, Hall NC, McAlpine L, Hubbard K. The PhD Experience: A Review of the Factors Influencing Doctoral Students’ Completion, Achievement, and Well-Being. IJDS. 2018;13:361–88.
  11. Mbonyiryivuze A, Dorimana A, Nyirahabimana P, Nsabayezu E. Challenges Affecting Women PhD Candidates for Completion of Doctoral Educations: A Synthesis of the Literature. African Journal of Educational Studies in Mathematics and Sciences. 2023;19:123–34.
  12. Muchaku S, Mwale M, Magaiza G, Tjale MM. No doctoral studies without hurdles: A review on pathways to prevent dropouts. IJER. 2024;6:1–12.
  13. Rigler KL, Bowlin LK, Sweat K, Watts S, Throne R. Agency, Socialization, and Support: A Critical Review of Doctoral Student Attrition. Online Submission. 2017. Available: https://eric.ed.gov/?id=ED580853
  14. Layton RL, Brandt PD, Freeman AM, Harrell JR, Hall JD, Sinche M. Diversity Exiting the Academy: Influential Factors for the Career Choice of Well-Represented and Underrepresented Minority Scientists. CBE Life Sci Educ. 2016;15(3):ar41. pmid:27587854
  15. Butz AR, Branchaw JL. Entering Research Learning Assessment (ERLA): Validity Evidence for an Instrument to Measure Undergraduate and Graduate Research Trainee Development. CBE Life Sci Educ. 2020;19(2):ar18. pmid:32412837
  16. Luft JA, Jeong S, Idsardi R, Gardner G. Literature Reviews, Theoretical Frameworks, and Conceptual Frameworks: An Introduction for New Biology Education Researchers. CBE Life Sci Educ. 2022;21(3):rm33. pmid:35759629
  17. Cooper HM. Synthesizing research: A guide for literature reviews. Sage; 1998.
  18. American Educational Research Association, American Psychological Association, National Council on Measurement in Education, editors. Standards for Educational and Psychological Testing. Washington, D.C.: American Educational Research Association; 2014.
  19. Creswell JW. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 3rd ed. Thousand Oaks, CA: Sage; 2009.
  20. Spencer D. Card Sorting: Designing Usable Categories. Rosenfeld; 2009. Available: https://rosenfeldmedia.com/books/card-sorting/
  21. Spencer D. Card sorting analysis spreadsheet | Maadmob. 2007. Available: https://maadmob.com.au/resources/card_sort_analysis_spreadsheet
  22. UX Testing & Research Tools | Proven By Users. [cited 22 Apr 2025]. Available: https://provenbyusers.com/
  23. Verderame MF, Freedman VH, Kozlowski LM, McCormack WT. Competency-based assessment for the training of PhD students and early-career scientists. Elife. 2018;7:e34801. pmid:29848440
  24. Willison J, O’Regan K, Kuhn SK. Researcher Skill Development Framework (US English Edition). 2018.
  25. Pelaez N, Anderson T, Gardner S, Yin Y, Abraham J, Bartlett E, et al. The Basic Competencies of Biological Experimentation: Concept-Skill Statements. PIBERG Instructional Innovation Materials. 2016. Available: https://docs.lib.purdue.edu/pibergiim/4
  26. Clemmons AW, Timbrook J, Herron JC, Crowe AJ. BioSkills Guide: Development and National Validation of a Tool for Interpreting the Vision and Change Core Competencies. CBE Life Sci Educ. 2020;19(4):ar53. pmid:33001766
  27. Cui Q, Harshman J. Qualitative Investigation to Identify the Knowledge and Skills That U.S.-Trained Doctoral Chemists Require in Typical Chemistry Positions. J Chem Educ. 2020;97(5):1247–55.
  28. Singer J, Weiler D, Zimmerman B, Fox S, Ambos E. Assessment in Undergraduate Research. Earth Sciences Faculty Publications. 2022.
  29. Bray R, Boon S. Towards a framework for research career development. International Journal for Researcher Development. 2011;2(2):99–116.
  30. Ahmadi M, Sheikhtaheri A, Tahmasbi F, Eslami Jahromi M, Rangraz Jeddi F. A competency framework for Ph.D. programs in health information management. Int J Med Inform. 2022;168:104906. pmid:36332521
  31. Charumbira MY, Berner K, Louw QA. Research competencies for undergraduate rehabilitation students: A scoping review. AJHPE. 2021;13(1):52.
  32. Drotar D, Cortina S, Crosby LE, Hommel KA, Modi AC, Pai ALH. Competency-based postdoctoral research training for clinical psychologists: An example and implications. Training and Education in Professional Psychology. 2015;9(2):92–8.
  33. Willison J, O’Regan K. Commonly known, commonly not known, totally unknown: a framework for students becoming researchers. Higher Education Research & Development. 2007;26(4):393–409.
  34. Duru P, Örsal Ö. Development of the Scientific Research Competency Scale for nurses. J Res Nurs. 2021;26(7):684–700. pmid:35669152
  35. Gess C, Geiger C, Ziegler M. Social-Scientific Research Competency. European Journal of Psychological Assessment. 2019;35(5):737–50.
  36. Harsh J, Esteb JJ, Maltese AV. Evaluating the development of chemistry undergraduate researchers’ scientific thinking skills using performance-data: first findings from the performance assessment of undergraduate research (PURE) instrument. Chem Educ Res Pract. 2017;18(3):472–85.
  37. Hayes-Harb R, St. Andre M, Shannahan M. Assessment of Undergraduate Research Learning Outcomes: Poster Presentations as Artifacts. SPUR. 2020;3(4):55–61.
  38. Kariyana I, Sonn RA, Marongwe N. Objectivity of the subjective quality: Convergence on competencies expected of doctoral graduates. Cogent Education. 2017;4(1):1390827.
  39. Lindsay H, Floyd A. Experiences of using the researching professional development framework. SGPE. 2019;10(1):54–68.
  40. Miller L, Brushett S, Ayn C, Furlotte K, Jackson L, MacQuarrie M, et al. Developing a Competency Framework for Population Health Graduate Students Through Student and Faculty Collaboration. Pedagogy in Health Promotion. 2019;7(3):280–8.
  41. Nowell L, Dhingra S, Kenny N, Jacobsen M, Pexman P. Professional learning and development framework for postdoctoral scholars. SGPE. 2021;12(3):353–70.
  42. Qiu C, Feng X, Reinhardt JD, Li J. Development and psychometric testing of the Research Competency Scale for Nursing Students: An instrument design study. Nurse Educ Today. 2019;79:198–203. pmid:31154266
  43. Wilson Sayres MA, Hauser C, Sierk M, Robic S, Rosenwald AG, Smith TM, et al. Bioinformatics core competencies for undergraduate life sciences education. PLoS One. 2018;13(6):e0196878. pmid:29870542
  44. Senekal JS, Munnik E, Frantz JM. A systematic review of doctoral graduate attributes: Domains and definitions. Front Educ. 2022;7.
  45. Steen K, Vornhagen J, Weinberg ZY, Boulanger-Bertolus J, Rao A, Gardner ME, et al. A structured professional development curriculum for postdoctoral fellows leads to recognized knowledge growth. PLoS One. 2021;16(11):e0260212. pmid:34807941
  46. Stiers W, Barisa M, Stucky K, Pawlowski C, Van Tubbergen M, Turner AP, et al. Guidelines for competency development and measurement in rehabilitation psychology postdoctoral training. Rehabil Psychol. 2015;60(2):111–22. pmid:25496436
  47. Talley NB. Are you doing it backward? Improving information literacy instruction using the AALL principles and standards for legal research competency, taxonomies, and backward design. Law Libr J. 2014;106:47–68.
  48. Böttcher F, Thiel F. Evaluating research-oriented teaching: a new instrument to assess university students’ research competences. High Educ. 2017;75(1):91–110.
  49. Ipanaqué-Zapata M, Figueroa-Quiñones J, Bazalar-Palacios J, Arhuis-Inca W, Quiñones-Negrete M, Villarreal-Zegarra D. Research skills for university students’ thesis in E-learning: Scale development and validation in Peru. Heliyon. 2023;9(3):e13770. pmid:36851971
  50. Maltese A, Harsh J, Jung E. Evaluating Undergraduate Research Experiences—Development of a Self-Report Tool. Education Sciences. 2017;7(4):87.
  51. Singer J, Zimmerman B. Evaluating a summer undergraduate research program: measuring student outcomes and program impact. Counc Undergrad Res Q. 2012;32:40–7.
  52. Kiley M, Wisker G. Threshold concepts in research education and evidence of threshold crossing. Higher Education Research & Development. 2009;28(4):431–41.
  53. Feldon DF, Maher MA, Hurst M, Timmerman B. Faculty Mentors’, Graduate Students’, and Performance-Based Assessments of Students’ Research Skill Development. American Educational Research Journal. 2015;52(2):334–70.
  54. Feldon DF, Litson K, Jeong S, Blaney JM, Kang J, Miller C, et al. Postdocs’ lab engagement predicts trajectories of PhD students’ skill development. Proc Natl Acad Sci U S A. 2019;116(42):20910–6. pmid:31570599
  55. Swank JM, Lambie GW. Development of the Research Competencies Scale. Measurement and Evaluation in Counseling and Development. 2016;49(2):91–108.
  56. Carnethon MR, Neubauer LC, Greenland P. Competency-Based Postdoctoral Education. Circulation. 2019;139(3):310–2. pmid:30640541
  57. Lambie GW, Hayes BG, Griffith C, Limberg D, Mullen PR. An Exploratory Investigation of the Research Self-Efficacy, Interest in Research, and Research Knowledge of Ph.D. in Education Students. Innov High Educ. 2013;39(2):139–53.
  58. Mekolichick J. Mapping the Impacts of Undergraduate Research, Scholarship, and Creative Inquiry Experiences to the NACE Career Readiness Competencies. NACE Journal. 2021;82:34–40. Available: https://ebiztest.naceweb.org/career-readiness/competencies/mapping-the-impacts-of-undergraduate-research-scholarship-and-creative-inquiry-experiences-to-the-nace-career-readiness-competencies/
  59. Patra S, Khan AM. Development and implementation of a competency-based module for teaching research methodology to medical undergraduates. J Educ Health Promot. 2019;8:164. pmid:31544129
  60. Brown AM, Lewis SN, Bevan DR. Development of a structured undergraduate research experience: Framework and implications. Biochem Mol Biol Educ. 2016;44(5):463–74. pmid:27124101
  61. Meijers AWM, Borghuis VAJ, Mutsaers EJPJ, van Overveld CWAM, Perrenet JC. Criteria for Academic Bachelor’s and Master’s Curricula. 2e, gew. dr. ed. Eindhoven: Technische Universiteit Eindhoven; 2005.
  62. Feldon DF, Rates C, Sun C. Doctoral conceptual thresholds in cellular and molecular biology. International Journal of Science Education. 2017;39(18):2574–93.
  63. Brownell SE, Kloser MJ. Toward a conceptual framework for measuring the effectiveness of course-based undergraduate research experiences in undergraduate biology. Studies in Higher Education. 2015;40(3):525–44.
  64. Elder S, Wittman H, Giang A. Building sustainability research competencies through scaffolded pathways for undergraduate research experience. Elem Sci Anth. 2023;11(1).
  65. Burke LE, Schlenk EA, Sereika SM, Cohen SM, Happ MB, Dorman JS. Developing research competence to support evidence-based practice. J Prof Nurs. 2005;21(6):358–63. pmid:16311231
  66. Dewey JD, Montrosse BE, Schröter DC, Sullins CD, Mattox JR II. Evaluator Competencies. American Journal of Evaluation. 2008;29(3):268–87.
  67. Enders F. Evaluating mastery of biostatistics for medical researchers: need for a new assessment tool. Clin Transl Sci. 2011;4(6):448–54. pmid:22212227
  68. France CR, Masters KS, Belar CD, Kerns RD, Klonoff EA, Larkin KT, et al. Application of the competency model to clinical health psychology. Professional Psychology: Research and Practice. 2008;39(6):573–80.
  69. Hodgson JL, Pelzer JM, Inzana KD. Beyond NAVMEC: competency-based veterinary education and assessment of the professional competencies. J Vet Med Educ. 2013;40(2):102–18. pmid:23709107
  70. Kamen C, Veilleux JC, Bangen KJ, VanderVeen JW, Klonoff EA. Climbing the stairway to competency: Trainee perspectives on competency development. Training and Education in Professional Psychology. 2010;4(4):227–34.
  71. Kulikowski CA, Shortliffe EH, Currie LM, Elkin PL, Hunter LE, Johnson TR, et al. AMIA Board white paper: definition of biomedical informatics and specification of core competencies for graduate education in the discipline. J Am Med Inform Assoc. 2012;19(6):931–8. pmid:22683918
  72. Larson EL, Landers TF, Begg MD. Building interdisciplinary research models: a didactic course to prepare interdisciplinary scholars and faculty. Clin Transl Sci. 2011;4(1):38–41. pmid:21348954
  73. Madan-Swain A, Hankins SL, Gilliam MB, Ross K, Reynolds N, Milby J, et al. Applying the cube model to pediatric psychology: development of research competency skills at the doctoral level. J Pediatr Psychol. 2012;37(2):136–48. pmid:22108224
  74. American Library Association. Information Literacy Competency Standards for Higher Education. Jan 2000. Available: https://alair.ala.org/items/294803b6-2521-4a96-a044-96976239e3fb
  75. Musial JL, Rubinfeld IS, Parker AO, Reickert CA, Adams SA, Rao S, et al. Developing a scoring rubric for resident research presentations: a pilot study. J Surg Res. 2007;142(2):304–7. pmid:17719066
  76. Poloyac SM, Empey KM, Rohan LC, Skledar SJ, Empey PE, Nolin TD, et al. Core competencies for research training in the clinical pharmaceutical sciences. Am J Pharm Educ. 2011;75(2):27. pmid:21519417
  77. Bauer KW, Bennett JS. Alumni Perceptions Used to Assess Undergraduate Research Experience. The Journal of Higher Education. 2003;74(2):210–30.
  78. Hurtado S, Eagan MK, Cabrera NL, Lin MH, Park J, Lopez M. Training Future Scientists: Predicting First-year Minority Student Participation in Health Science Research. Res High Educ. 2008;49(2):126–52. pmid:23503996
  79. Mau WCJ. Characteristics of US students that pursued a STEM major and factors that predicted their persistence in degree completion. Universal Journal of Educational Research. 2016;4:1495–500.
  80. Carver S, Sickle JV, Holcomb JP, Quinn C, Jackson DK, Resnick AH, et al. Operation STEM: increasing success and improving retention among first-generation and underrepresented minority students in STEM. Journal of STEM Education: Innovations and Research. 2017;18.
  81. Denecke D, Kent J, McCarthy MT. Articulating Learning Outcomes in Doctoral Education. Washington, D.C.: Council of Graduate Schools; 2017. Available: https://cgsnet.org/wp-content/uploads/2022/01/ArticulatingLearningOutcomesinDoctoralEducationWeb-2.pdf
  82. Crowe M, Brakke D. Assessing the impact of undergraduate-research experiences on students: An overview of current literature. Council on Undergraduate Research Quarterly. 2008;28. Available: https://cdn.serc.carleton.edu/files/NAGTWorkshops/undergraduate_research/cur_publication_summer_2008.pdf
  83. Linn MC, Palmer E, Baranger A, Gerard E, Stone E. Education. Undergraduate research experiences: impacts and opportunities. Science. 2015;347(6222):1261757. pmid:25657254
  84. Haeger H, Banks JE, Smith C, Armstrong-Land M. What We Know and What We Need to Know about Undergraduate Research. SPUR. 2020;3(4):62–9.