Abstract
The underrepresentation of women and minority students in STEMM graduate programs remains a significant challenge, compounded by biases in traditional admissions processes and barriers to effective mentoring and retention. This study develops and validates the Quinn Miller Competency Assessment (QMCA), a tool designed to assess emotional and social intelligence (ESI) competencies crucial for STEMM graduate student success. The QMCA was created through an iterative process involving literature review, expert consultations, and empirical studies. It evaluates five key competencies: self-awareness, self-control, adaptability, achievement orientation, and teamwork. The tool’s validity and reliability were tested using exploratory and confirmatory factor analyses on diverse samples of STEMM graduate students and applicants. Results demonstrated strong construct validity, reliability, and invariance across gender and race/ethnicity, supporting the QMCA’s use in both admissions and student development contexts, in conjunction with other measures. By enabling a more holistic evaluation of applicants’ competencies through an assessment that fairly and consistently evaluates individuals across demographic groups, the QMCA aims to improve access and retention for underrepresented groups in STEMM fields, fostering a more inclusive and diverse scientific community. Future research will specifically explore how the QMCA can enhance holistic evaluation processes and contribute to student retention and success efforts. Additionally, we will test its applicability across broader disciplines.
Citation: Quinn JF, Cola PA, Cooper M, Miller C, on behalf of University of South Florida Muma College of Business (2025) Development of a survey tool to assess emotional and social behavioral competencies of science technology engineering math and medicine (STEMM) graduate students. PLoS One 20(9): e0328308. https://doi.org/10.1371/journal.pone.0328308
Editor: Andres Acero, Tecnologico de Monterrey, MEXICO
Received: January 10, 2025; Accepted: June 29, 2025; Published: September 2, 2025
Copyright: © 2025 Quinn et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data for this study are publicly available from the OSF repository (https://osf.io/j9k7q).
Funding: This work was supported initially by NSF grant 1633275 (CM) and completed under NSF grant 1834516 (CM). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. There was no additional external funding received for this study.
Competing interests: The authors have declared that no competing interests exist.
Introduction
The access and retention of US women and underrepresented minorities in science, technology, engineering, mathematics and medicine (STEMM) PhD programs continue to be problematic in higher education [1,2]. It is well documented that the traditional admissions process in graduate education selects against women and underrepresented minorities (URMs = African American, Hispanic American, and Native American) [1,3]. Biases and barriers can also manifest after admission, resulting in disparities in access to effective mentoring, support systems, retention and completion [3,4]. Addressing these issues is not only critical for individual graduate student success, but also for the overall diversity and vitality of academia. When institutions neglect to foster an inclusive environment, they risk driving away talented individuals, thereby depleting the pool of diverse approaches, perspectives, and voices necessary for cutting edge research and development [3,5]. To address these issues, the National Science Foundation INCLUDES Alliance: IGEN (Inclusive Graduate Education Network; https://igenetwork.org/) is a cross-sector collaboration aimed at improving access and retention for underrepresented racial and ethnic minority (UREM) students in STEM PhD programs, particularly in the physical sciences. With participation from over 30 organizations, IGEN promotes inclusive, evidence-based practices for graduate admissions and mentoring. To enhance access, IGEN helped establish Bridge-to-the-Doctorate programs to help students overlooked by traditional admissions processes, while also training faculty on holistic admissions practices [6]. To improve retention, IGEN provides evidence-based support structures for graduate students, such as equity-minded mentoring workshops and emotional intelligence training for faculty, staff, and students.
A key strategy in advancing IGEN’s mission is the development of assessment tools that impartially evaluate students’ emotional and social intelligence (ESI) competencies, which are critical for success in STEM. Research indicates that higher levels of emotional intelligence are associated with better academic performance, psychological well-being, and resilience among graduate students. A study with university students found that emotional intelligence positively correlates with self-efficacy, motivation, and resilience, which are factors that enhance both psychological well-being and academic achievement. These effects were particularly pronounced among postgraduate students, highlighting the importance of emotional intelligence in advanced academic settings [7].
From their experience in building Fisk-Vanderbilt’s renowned Bridge program [8], Stassun et al. argue that one key to promoting diversity in physics graduate education is to identify and then cultivate “unrealized or unrecognized potential” held by individuals often overlooked by the typical admissions and recruitment system. ESI competencies serve as indicators of such unrealized potential, as prior research suggests that such non-cognitive factors play a role in academic success and persistence in graduate education. As the world and work become more collaborative and interconnected, ESI competencies rise in importance as foundational abilities critical for personal, professional, and societal success, ensuring individuals can effectively navigate complex social networks and diverse environments.
Measures of ESI may be well suited to identifying unrealized potential in individuals historically marginalized in STEMM, as ESI measures have been demonstrated to be invariant across race and gender [9–11]. Although reliable, valid, and systematic assessments of emotional intelligence constructs may be critical in this regard, ESI research has historically neglected graduate education in STEMM fields [12], and systematic assessments of ESI competencies tailored to STEMM programs have been lacking, limiting the ability of programs to integrate these attributes into admissions and student success efforts.
This study introduces the Quinn Miller Competency Assessment (QMCA), a newly developed tool designed to assess five key ESI competencies (i.e., self-awareness, self-control, adaptability, achievement orientation, and teamwork) in STEMM graduate students and graduate applicants. While ESI competencies have been linked to student outcomes in broader educational settings, the primary objective of this research is to validate the QMCA as a reliable and psychometrically sound assessment tool, ensuring that it effectively and impartially measures these competencies for future use in both admissions and mentoring contexts for STEMM program success. We anticipate that future research will explore how the QMCA can enhance holistic evaluation processes and contribute to student retention and success efforts.
Background and literature review
ESI has been a subject of sustained and growing research interest across cultures, sectors, and disciplines over the last few decades [13], with emotional and social competencies becoming increasingly important in the post-COVID-19 era [14]. Prior research supports the importance of ESI in performance [15,16] and well-being [17,18]. While studies in ESI are numerous, the bulk of the research has been in traditional business environments, not in areas such as the natural sciences, and even less of a focus has been within higher education. However, success in higher education demands not only cognitive capabilities but also emotional resilience and the ability to adeptly manage social interactions. ESI are crucial as they shape students’ experiences, educator-student dynamics, and collaborative research endeavors. Natural sciences can be particularly challenging due to the intrinsic complexity of content and research demands. Emotional competencies refer to the ability to recognize, understand, and manage one’s own emotions and the emotions of others. These competencies are crucial for effective self-regulation, resilience, and interpersonal interactions. Key emotional and social competencies include a framework that comprises self-awareness, self-management, social awareness, and relationship management.
Relevance in STEMM graduate education
The inclusion of ESI in STEMM graduate education is particularly important due to the collaborative and interdisciplinary nature of scientific research. Success in STEMM fields often requires not only technical expertise but also the ability to work effectively in teams, communicate complex ideas, and navigate interpersonal dynamics [19].
By assessing and developing ESI, educational institutions can better prepare STEMM graduate students for the multifaceted challenges of their future careers. This holistic approach to education can enhance not only individual student outcomes but also the overall diversity and innovation within the scientific community [2]. Below we briefly review how assessing ESI supports student admissions and recruitment as well as retention and student success. Next, we more closely examine three categories of ESI in relation to STEMM graduate student success: self-awareness, self-management and relationship management. Finally, we explore how ESI assessments can improve access and retention for women and underserved minorities in STEMM programs.
Admission and recruitment
ESI competencies are widely recognized as critical factors influencing personal and professional success. Research in organizational behavior and psychology suggests that competencies such as self-awareness, adaptability, and teamwork contribute to leadership effectiveness, problem-solving, and resilience [20]. In higher education, particularly in professional fields such as medicine and law, non-cognitive assessments have previously been explored as a supplement to traditional academic metrics [21].
In the STEMM context, emotional and social competencies likely play a particularly important role given the highly collaborative nature of scientific research and the persistence and adaptability required to complete challenging graduate programs. Prior studies indicate that social support networks, mentorship quality, and self-regulation skills are associated with student retention in STEMM PhD programs [2,22]. However, research explicitly linking emotional intelligence competencies to graduate student success in STEMM remains limited. The QMCA scale developed in this study is intended as a first step toward addressing this gap by providing a validated instrument for assessing ESI competencies in STEMM graduate students.
While some studies suggest that holistic admissions, including assessments of ESI competencies, may help identify students with strong potential for success beyond cognitive metrics alone, the QMCA’s specific impact on admissions outcomes or minority student retention has yet to be tested. Future studies should examine whether incorporating the QMCA into graduate admissions or mentoring programs influences the selection and success of underrepresented students in STEMM.
Retention and student success
Once a student is admitted, challenges to success do not end. The literature discusses the challenges faced by minority students, including feelings of isolation and lack of belonging [22]. These authors argued that social skills, such as networking and building relationships with peers and mentors, are essential for creating a supportive learning environment, which in turn aids retention. Tinto posits in his theory of student departure that a lack of social integration is a key factor leading to student dropout [23]. Minority students often struggle with cultural and social integration into predominantly white institutions, which can lead to feelings of isolation and alienation. Others further elaborate that these students frequently face difficulties in assimilating into the college environment, which can impede their academic and social integration [24].
Coaching and mentoring are widely accepted methods for supporting student retention and success. Coaching and mentoring, though often used interchangeably, are distinct concepts with unique characteristics and objectives. According to Grant, coaching is a collaborative, solution-focused, result-oriented, and systematic process in which the coach facilitates the enhancement of work performance, life experience, self-directed learning, and personal growth of the coachee [25]. Mentoring is a more long-term relationship where a more experienced individual (the mentor) provides guidance, advice, and support to a less experienced person (the mentee). Kram in her seminal work “Mentoring at Work” describes mentoring as a relationship that involves career and psychosocial support [26]. The mentor’s role is not only to guide in professional matters but also to assist in the personal growth of the mentee. Both coaching and mentoring may support student retention and success by focusing on the development of ESI competencies, with coaching following a prescribed plan for supporting the individual in their own development, while mentors can offer role modeling, as well as advice and the sharing of experiences. In turn, improved ESI competencies can enhance mentoring relationships that support student success.
Emotional and social competencies
In terms of emotional intelligence, there is evidence that the competencies of self-awareness (knowing one’s emotional states, preferences, resources, and intuitions) and self-management (managing one’s internal states, impulses, and resources) may be of importance for graduate students in STEMM [27]. In particular, emotional skills such as resilience and adaptability are essential when facing research hurdles. Nearly a decade ago, researchers found that emotional competencies directly influence a student’s research motivation and persistence [28]. There are two broad categories of emotional intelligence: self-awareness and self-management.
Self-awareness broadly refers to an individual’s ability to introspect and recognize their own thoughts, emotions, strengths, weaknesses, and values [29]. In particular, awareness of one’s emotions and their effects serves as a foundation for other aspects of emotional and social intelligence. By recognizing emotional triggers and patterns, individuals gain better control over their reactions, fostering self-control and the ability to adapt to new circumstances. Research shows that leaders with higher self-awareness can decipher their emotional responses and understand how their emotions impact their behavior, making them better equipped to navigate complex organizational landscapes [30]. Self-awareness fosters improved interpersonal relationships through the preemptive management of potential emotional conflicts [31], which enhances collaboration and overall teamwork. Research has even indicated that developing emotional intelligence, which includes self-awareness, can positively impact students’ achievement in STEM disciplines [32].
While self-awareness serves as a foundation, self-management is the capability that builds on this awareness, allowing individuals to regulate their emotional reactions [31,33]. Self-management refers to an individual’s capacity to actively regulate their emotions, behaviors, and commitments to achieve desired goals.
Evaluating self-management skills such as adaptability, self-control, and achievement orientation can aid in identifying candidates who possess the necessary discipline and perseverance required for success in demanding graduate programs and PhD-level roles in the STEMM workforce. Adaptability, or being able to adjust to new conditions, is essential for those involved in research, as is self-control (being able to respond with thoughtful behavior instead of reacting without thinking), while achievement orientation keeps an individual motivated to keep moving forward. Assessing self-management for terminal degree programs can provide valuable insights into an applicant’s ability to effectively manage themselves [34], including their time, resources, and personal well-being. Self-management skills support the individual in navigating and excelling in demanding and self-directed terminal degree programs.
Scientific endeavors often involve long hours of experimentation, data collection, and analysis. These processes can be tedious and sometimes yield unexpected or unwanted results. Self-control enables scientists to maintain focus, avoid premature conclusions, and remain patient throughout their investigations; scientific misconduct can arguably be traced back to deficits in self-control. Furthermore, it helps in persevering through the inevitable setbacks and obstacles that arise during research. Adaptability allows scientists to be flexible in their approaches, adjusting their methodologies or hypotheses in light of new data or technologies. An achievement orientation ensures that scientists are motivated to excel, set high standards for their work, and are committed to making a meaningful impact in their field. It drives them to push the boundaries of what’s known, take calculated risks, and persistently seek answers.
Relationship management
Relationship management refers to the ability to build and maintain positive relationships with others, including effective communication, collaboration, and conflict resolution. The ability to work collaboratively allows individuals to pool their collective knowledge, skills, and resources, often leading to enhanced outcomes and innovations. Recent research has referred to this as a relational capacity, which is the ability to develop deep and meaningful bi-directional relationships that allow for improved performance [35]. Every role has an element of teamwork. This is particularly true for research in natural sciences, which often involves students and faculty working together in highly interdependent lab environments.
Teamwork in particular may be key to successful STEMM graduate study. A key social competency, teamwork involves effective communication, understanding and capitalizing on diverse viewpoints, and leveraging the collective strengths of individuals. Scholars have emphasized that students and researchers with such social competencies can foster a collaborative environment, ensuring research efficacy [36]. In addition, these social skills allow students to effectively communicate with educators, enhancing the quality of their interactions, which in turn can significantly affect learning outcomes [37]. Being an effective team member is no longer just a commendable skill but a crucial competency. Working in teams provides a support system for members that is crucial during challenging phases of a project, helps foster resilience, and aids collective problem-solving [38].
Use of ESI assessments in student selection and development
While the body of work devoted to studying the validity of non-cognitive constructs in selection is enormous in organizational studies, studies specifically in higher education and particularly STEMM PhD programs are few. However, non-cognitive assessments have been used successfully in dental school admissions [27,39,40] and were shown to have validity beyond cognitive measures for long term performance of medical students. Lievens et al. showed that the correlation of non-cognitive competencies with performance in medical school [41] and long-term success [21] increased with time, while the correlation with cognitive tests decreased with time. Similar results from a study of lawyering effectiveness have been used to suggest that non-cognitive competencies be used to transform law school admissions [38]. In addition to being meaningful predictors of performance, measures of emotional intelligence are useful tools to increase access to underrepresented groups in STEMM because they do not show major group differences based on gender or race [9–11]. Indeed, research suggests that non-cognitive assessments in areas of emotional intelligence, self-efficacy, outcome expectations and interest can specifically support access and retention of underrepresented minorities in STEMM PhD programs [42,43].
In addition to enhancing the selection process, ESI assessments may also be used to support graduate student development and retention. Competency-based ESI assessments focus on assessing behaviors as opposed to individual traits, and as such, have a long history of use supporting individual development. By assessing themselves and reviewing the results of others’ evaluations of their competencies, individuals gain insights into their strengths and areas for improvement, fostering deeper self-awareness and facilitating personal growth [44]. Thus, assessments may be used in mentoring and support programs to help STEMM graduate students further develop these core ESI competencies. Using the assessment to support ESI development can in turn foster increased retention. One study underscored that students with refined emotional competencies can better handle academic pressures, leading to improved well-being and retention rates [45]. Members of underrepresented groups may encounter increased challenges and stress during college and thus would particularly benefit from tools that support the development of their emotional and social competencies.
Need for new ESI assessment
Although we initially sought to employ an existing assessment of ESI, we found that none were designed for purposes of both selection and development in a STEMM education context. Those intended to support personal growth, such as the Emotional and Social Competency Inventory (ESCI) [46], are designed for holistic development and are widely used in organizations; however, the ESCI has not been developed and tested with our target population, nor does it focus on the select few competencies deemed to be most important for success in higher education. In addition, as prediction is not a primary goal of developmental assessments, they are often used without controlling for method bias in self-ratings. When assessments are used in selection processes, social desirability significantly impacts self-report ratings [47], making controls for this bias more crucial. As our aim was to also use the assessment to help students develop their competencies, we adopted a behavioral approach to social emotional intelligence, focusing on assessing behavioral competencies that can be developed. This ruled out use of trait measures of emotional intelligence [48]. This led us to create our own assessment tool, the Quinn Miller Competency Assessment (QMCA). This strategic shift allowed us to address the gap in selection-focused assessments while recognizing the critical importance of incorporating a discrete set of competencies valued by the STEMM community. After many rounds of refinement, the QMCA includes a set of ESI competencies selected with the intention that the tool also support self-development.
Materials and methods
In a series of studies, we identify a set of ESI competencies related to STEMM student success and develop the QMCA tool to assess the competencies. Establishing scale validity is essential prior to scale use, as it ensures that the data collected is accurate and meaningful, and that survey items consistently measure targeted concepts. We used an iterative process of empirical and conceptual analysis to inform the development of the QMCA. Stage 1 informed selection of specific ESI constructs and item creation. At Stage 2 we conducted two studies to reduce the number of ESI constructs and inform item development (Studies A & B). At Stage 3, we confirmed the measurement model and assessed construct validity and invariance (Study C). Table 1 includes demographic information for each sample. To address concerns about socially desirable responding in evaluation of self-report responses, a particular concern for measures used in selection processes, the research team concurrently tested the QMCA with various social desirability controls during stages 2 and 3.
The QMCA, like other measures of behavioral competencies, provides individual scores by competency; it is not a single-score assessment that provides an overall measure of emotional and social intelligence. While emotional and social competencies are broadly deemed to be interconnected, measuring competencies individually enables insights into how an individual may excel in one competency but require development in another, providing pathways for targeted developmental opportunities. The final scale together with instructions on how to score the scale are included in S1 Appendix.
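To make per-competency scoring concrete, a minimal sketch in Python follows. The item labels and three-item groupings below are placeholders, not the actual QMCA items; the real items and scoring instructions are given in S1 Appendix. Each competency is scored as the mean of its own items, and no overall ESI total is computed.

```python
import pandas as pd

# Hypothetical item labels and groupings for illustration only;
# see S1 Appendix for the actual QMCA items and scoring instructions.
COMPETENCY_ITEMS = {
    "self_awareness":          ["sa1", "sa2", "sa3"],
    "self_control":            ["sc1", "sc2", "sc3"],
    "adaptability":            ["ad1", "ad2", "ad3"],
    "achievement_orientation": ["ao1", "ao2", "ao3"],
    "teamwork":                ["tw1", "tw2", "tw3"],
}

def score_by_competency(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one mean score per competency; no overall ESI total is produced."""
    return pd.DataFrame({
        comp: responses[items].mean(axis=1)
        for comp, items in COMPETENCY_ITEMS.items()
    })
```

Because each competency yields its own score, a respondent's profile can show strength in, say, teamwork alongside a development need in self-control, which is what makes targeted development planning possible.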
This study was submitted to the Rochester Institute of Technology Institutional Review Board and approved as human subject research. All data collection and subsequent analyses were performed in accordance with relevant guidelines and regulations. Informed consent was provided within the survey used for data collection and was accepted by all participants within the sample. All identifying information, if provided, was removed prior to analysis.
Competency Selection & Measure Development. (Stage 1). We began the process by conducting a literature review and engaging a representative panel of experts, primarily composed of faculty and research scientists in physics and astronomy, to pinpoint the non-cognitive (emotional and social) competencies most pertinent to success in a STEMM PhD program. Two members of the research team then selected a final set of ESI competency constructs and created initial items. We then recruited a focus group of STEMM PhD program directors to assess the content validity of the ESI competencies and iterated on item development.
Construct and Item Reduction and Refinement (Stage 2). At Stage 2, we assessed performance of the selected ESI competency measures in the target population of students engaged in or applying to STEMM graduate school programs and iterated on item development. In Study A, we assessed the factor structure of the ESI constructs in a sample of 467 applicants for the 2019–2020 academic year to nine physics and/or astronomy PhD programs at private and public institutions geographically spread across the US, as well as programs affiliated with the Southeastern Compact for Inclusive Student Transitions in Engineering and Physical Sciences (SCI-STEPS), and student fellows associated with the American Physical Society’s Bridge Program. Students were matriculated at 275 colleges and universities around the world. Based on our empirical findings and additional theoretical review, we revised the set of ESI competencies and their measures and tested them in Winter 2020 in a sample of 364 STEMM PhD students at a large public research institution in the southeast (Study B). Students in Study B represented a wider range of STEMM disciplines (Physics: 24%, Engineering: 20.3%, Chemistry: 17.8%, Computer Science: 13.8%, Mathematics: 12.7%, Earth Sciences: 10.5%). In Study B, we re-tested factor structure, reduced redundancy, and improved item-factor alignment. Scale revision at Stage 2 was also informed by iterative reviews of construct definitions for the included social and emotional intelligence [50] and Grit [51] competencies and measures of closely related competencies such as academic achievement motivation [51] and career-related adaptability [52].
Scale Validation (Stage 3). In Study C, we tested the final QMCA in a representative sample of 330 applicants to 2022−23 STEMM graduate programs, our focal population, to establish construct validity and reliability of each factor measure and to conduct preliminary tests of invariance across sex and race/ethnicity. The final QMCA is presented in S1 Appendix.
Results and discussion
Stage 1. Competency Selection & Measure development
Faculty and researchers at the 2017 Joint Graduate Education and Bridge Program Conference of the American Physical Society and the American Astronomical Society’s Women in Astronomy IV Conference (2017) were asked to identify non-cognitive competencies valuable for STEMM graduate studies. The two groups agreed that the same clusters of non-cognitive competencies are important for research success: self-management (e.g., Optimism, Trustworthiness, Achievement Orientation, Conscientiousness, Adaptability, and Initiative) and self-awareness (e.g., Accurate Self-Assessment, Self-Confidence). The initial version of the QMCA was constructed based on these results, adding two competencies identified through literature review (i.e., professionalism and grit). Two members of the research team developed items for each measure based on a behavioral competency perspective. The literature was reviewed for behavior-based emotional and social competencies, with reference to the item structure employed in the Emotional and Social Competency Inventory (ESCI), which is a well-validated instrument used in many countries. Once items were created based upon the literature and understanding of psychometric properties, expert/user focus groups and interviews were used to assess the competency constructs and their items. Following this stage, both expert and non-expert review continued via content validity surveys. While the final competencies selected for inclusion were based on extensive field testing with physicists and astronomers, the assessment was created in a manner that is generalizable so that it may be utilized throughout STEMM, or at least the physical sciences.
Stage 1 resulted in the development of a survey that measured seven ESI competencies: Self-awareness (6 items), Self-control (5 items), Professionalism (7 items), Teamwork (7 items), Achievement Orientation (7 items), Adaptability (6 items), and Grit (8 items), plus a 4-item measure commonly used to control for Social Desirability bias in student populations [49]. Except for the Grit measure [53], all items were newly generated. All items were rated on a five-point Likert scale (1 = Never to 5 = Consistently). To enable us to screen items for clarity, participants were also given an option to select “I don’t know”.
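Screening the “I don’t know” option described above amounts to tallying how often that response appears per item (to flag unclear items) and per respondent (to flag disengaged responders). A minimal sketch, assuming responses are coded numerically with a sentinel value for “I don’t know” (the code 9 here is an assumption, not the study’s actual coding):

```python
import pandas as pd

DONT_KNOW = 9  # assumed sentinel code for "I don't know" responses

def dont_know_summary(responses: pd.DataFrame):
    """Count "I don't know" selections per item and per respondent."""
    mask = responses.eq(DONT_KNOW)
    per_item = mask.sum(axis=0)        # flags items that may be unclear
    per_respondent = mask.sum(axis=1)  # flags careless or uncertain responders
    return per_item, per_respondent
```

In Study A this kind of tally showed only 6 of 50 items drawing a single “I don’t know” response each, which is why no items were flagged as problematic.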
Stage 2. Scale Reduction and Revision
Study A. Our initial screening for item clarity found no problematic items. Out of the 50 items, only 6 received a single “I don’t know” response, with two coming from the same respondent. Next, we used exploratory factor analysis (EFA) to assess factor structure and construct redundancy. An initial 13-factor solution was suggested by principal components analysis with Promax rotation based on eigenvalues (> 1.05) and the scree plot. We further explored factor structure by constraining the model to 8 factors (reflecting the 7 ESI competencies and social desirability) using maximum likelihood extraction with Promax rotation. After an iterative process involving removal of items that failed to load or that significantly cross-loaded, 20 items, including all the social desirability items, were removed from the scale. The remaining 30 items revealed an adequate 7-factor model (KMO = .816, Bartlett’s Test of Sphericity p < .05), which explained 38.8% of the variance. However, item communalities were low (< .20) and factor scales failed to demonstrate adequate reliability (α’s < .70).
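Two of the diagnostics reported here, Bartlett's test of sphericity and Cronbach's alpha, have closed forms and can be sketched directly. The snippet below is an illustration in numpy, not the authors' analysis code (which presumably used a standard statistics package); the data matrix is respondents × items.

```python
import numpy as np

def bartlett_sphericity(X: np.ndarray):
    """Bartlett's test that the item correlation matrix is an identity matrix.

    chi2 = -(n - 1 - (2p + 5)/6) * ln|R|, with df = p(p - 1)/2.
    A significant chi2 indicates the data are factorable.
    """
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    dof = p * (p - 1) / 2
    return chi2, dof

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency of a factor scale (respondents x items).

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Against the thresholds used above, a scale would be flagged for revision when its alpha falls below .70, as happened with the initial 30-item solution.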
We revised the scale based on the conceptual objectives of each factor measure and our empirical results. Professionalism, Grit, and Social Desirability performed poorly, with items either failing to load or with more than half the items cross-loading. Professionalism was excluded because only those items related to timeliness loaded on an independent factor. EFA results also indicated redundancy between Grit and Achievement Orientation; however, items from the “perseverance of effort” dimension of Grit loaded independently. Thus, we replaced Grit with Perseverance, defined as persisting in a course of action and finishing what one starts [50], measuring it with 6 items from the original Grit scale [51]. The academically focused social desirability scale performed poorly in this sample, possibly due to its focus on math engagement and performance, so we used a more generic social desirability measure [52] in the next data collection. Finally, we reviewed and revised items that cross-loaded to increase their correspondence to the construct definition of their referent factor.
Beyond the conceptual review, we assessed potential methodological threats to the scale. The initial QMCA mixed negatively and positively worded items to reduce response bias; however, recent research has shown that this approach can be ineffective and that including negatively worded items can instead threaten the validity and reliability of scales [53,54]. Indeed, in our sample, nearly half of the negatively worded items failed to load properly, and removing them increased the explained variance from 38.8% to 46.4%. Based on this, we positively worded all items in the next scale iteration. The revised QMCA assessed six emotional and social competencies: self-awareness (5 items), self-control (5 items), teamwork (5 items), achievement orientation (5 items), adaptability (5 items), and perseverance (6 items [52]), and included a 5-item social desirability measure [55]. All scales were rated on a 5-point Likert scale (1 = Never to 5 = Consistently).
Study B. Principal components analysis yielded a 7-factor model based on eigenvalues (1.12) and the scree plot when the QMCA was tested with the social desirability control. However, perseverance split across two factors and the self-control items failed to load on a single factor. The social desirability scale again failed to load on a unique factor and was omitted from further analysis. We then used EFA with Maximum Likelihood extraction and Promax rotation, constraining the model to 6 factors, to explore the factor structure. EFAs revealed that 3 of the 6 Perseverance items loaded onto the Achievement Orientation factor (β range of cross-loaded items: .51 to .83). We removed perseverance from the QMCA because it failed to load on a unique factor. This decision was supported by the relatively small reduction in variance explained when Perseverance was omitted (6-factor model = 43.55%, 5-factor model = 41.99%).
The 5-factor EFA with Maximum Likelihood extraction and Promax rotation performed well, with all but 2 items loading on their intended constructs (β range: .41 to .80) without significant cross-loadings (β range of cross-loaded items: −.26 to .27; see Table 2 for all loadings and cross-loadings in the final full 5-factor model). After removing one self-control item that failed to load on any factor, the resulting model (24 items) performed well (KMO = .874, Bartlett’s Test of Sphericity p < .001), with each factor measure demonstrating adequate reliability (α range: .71−.78) and the model explaining 42.84% of the variance in the sample; however, communalities continued to be predominantly low (20 items < .50; 4 items between .50 and .60). Based on these results, we reviewed and revised item text to increase convergence around each target factor definition. This produced a final 25-item tool assessing 5 competencies: self-awareness, self-control, adaptability, achievement orientation, and teamwork. The full text of the final measures is in S1 Appendix, together with instructions on calculating factor scores or using average values, calculating the self-deception variable, and using the method controls.
Study B (N = 354). Standardized loadings for the QMCA factors were estimated using exploratory factor analysis with a model constrained to 5 factors, using Maximum Likelihood estimation with Promax rotation.
As the initially selected social desirability measures failed to isolate method bias in STEMM students in Studies A and B, we conducted a comprehensive literature review to identify a new approach. Given that the QMCA requires a significant amount of introspection and is intended for use in student selection and development, we follow Paulhus’ [55,56] recommendation and control for two types of social desirability bias: impression management and self-deception. Impression management involves consciously altering responses to create a more favorable impression [57], which can be more prevalent when assessments are used in competitive selection processes [58]. Self-deception refers to positively biased responses that respondents believe to be accurate [56], a potential issue when assessing intangible traits like emotional and social competencies. We adapted an existing self-promotion measure to assess impression management. To assess self-deception, we created a new measure targeting two factors that lead to inaccurate self-evaluation of social-emotional competencies: (1) underestimating the impact of emotions on behavior and (2) overestimating one’s ability to assess one’s own skills or predict responses in various situations [52]. As self-deception involves denying flaws or exaggerating positive traits to an unrealistic extent, we structured the measure as the sum of a participant’s extreme responses on five items. The full text of the social desirability measures is included with the QMCA scale in S1 Appendix.
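As an illustration of the extreme-response scoring described above, the snippet below counts endpoint responses on a 5-point scale. The exact scoring rule is given in S1 Appendix, so this should be read as one plausible operationalization rather than the published algorithm:

```python
# Hypothetical scoring sketch: the paper defines the self-deception variable as the
# sum of a participant's extreme responses on five items (full rule in S1 Appendix).
# One plausible reading, counting endpoint responses on the 1-5 scale, is shown here.
def self_deception_score(responses, low=1, high=5):
    """Count extreme (endpoint) responses among a participant's item ratings."""
    return sum(1 for r in responses if r in (low, high))

print(self_deception_score([5, 5, 4, 1, 3]))  # → 3
```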
Assessment tool evaluation: Discriminant and convergent validity and invariance (Stage 3)
Discriminant and convergent validity testing.
In Study C we used confirmatory factor analysis (CFA) in Mplus to assess the performance of the final version of the QMCA in measuring the 5 ESI competencies deemed key to STEMM student performance (self-awareness, self-control, adaptability, achievement orientation, and teamwork; 25 items total). We controlled for social desirability with a 5-item impression management factor and a self-deception variable (aggregate score of 5 items). The model fit the data well. Although the chi-square was significant, χ²(414) = 785.706, p < .001, comparative fit measures were good and absolute measures demonstrated close fit (CFI = .92, RMSEA = .05, p-close = .26) based on recommended standards (CFI > .90, RMSEA < .06, p-close > .05 [59,60]).
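The cited fit standards (CFI > .90, RMSEA < .06, p-close > .05) can be expressed as a simple decision rule. This helper is an illustrative sketch, not part of the Mplus analysis:

```python
def close_fit(cfi, rmsea, p_close, cfi_min=0.90, rmsea_max=0.06, p_close_min=0.05):
    """Apply the recommended fit cutoffs: CFI > .90, RMSEA < .06, p-close > .05."""
    return cfi > cfi_min and rmsea < rmsea_max and p_close > p_close_min

# Study C values reported in the text
print(close_fit(cfi=0.92, rmsea=0.05, p_close=0.26))  # → True
```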
Convergent and discriminant validity of the 5 ESI competency measures was supported in the STEMM student sample. Measures of each dimension demonstrated high reliability (Cronbach’s α: .74−.89; composite reliability: .75−.86). Table 3 reports all factor loadings. Factor loadings ranged from .50 to .85, exceeding the recommended threshold for convergent validity (loadings > .4). Average variance extracted (AVE) was high for the adaptability, achievement orientation, and teamwork competencies (AVE range: .55−.56) and marginal for the self-awareness and self-control competencies (AVE range: .38−.42). There was strong support for discriminant validity across the five dimensions, with only moderate factor correlations (r range: .49−.67; see Table 4 for all factor correlations), and the AVE exceeded the maximum shared variance (MSV) for each factor (AVE − MSV range: .12−.30). Table 5 reports the results of all convergent and discriminant validity tests.
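These validity statistics follow standard formulas: AVE is the mean squared standardized loading, composite reliability is (Σλ)² / ((Σλ)² + Σ(1 − λ²)), and MSV is the largest squared factor correlation. The sketch below, using made-up loadings rather than the paper’s values, shows the computation and the Fornell-Larcker check (AVE > MSV):

```python
import numpy as np

def validity_stats(loadings, factor_corrs):
    """AVE, composite reliability (CR), and maximum shared variance (MSV) per factor.

    loadings: factor name -> standardized item loadings
    factor_corrs: factor name -> correlations with the other factors
    """
    stats = {}
    for name, lam in loadings.items():
        lam = np.asarray(lam, dtype=float)
        ave = float(np.mean(lam ** 2))                  # average variance extracted
        err = float(np.sum(1.0 - lam ** 2))             # summed item error variances
        cr = lam.sum() ** 2 / (lam.sum() ** 2 + err)    # composite reliability
        msv = max(r ** 2 for r in factor_corrs[name])   # maximum shared variance
        stats[name] = {"AVE": ave, "CR": float(cr), "MSV": msv,
                       "discriminant_ok": bool(ave > msv)}  # Fornell-Larcker check
    return stats

# Toy example with assumed loadings and correlations (not the paper's values)
out = validity_stats({"teamwork": [0.7, 0.75, 0.8, 0.72, 0.68]},
                     {"teamwork": [0.49, 0.67]})
print(out["teamwork"]["discriminant_ok"])  # → True
```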
Self-report measurement bias assessment
We used the CFA marker technique [61–63] to assess the effect of impression management bias on the QMCA measures (Table 6). This method was not applied to the self-deception measure, as it was formulated as a composite variable capturing extreme responses. The CFA marker technique evaluates changes in model fit across different models to detect method bias. In the baseline model (Model B), impression management loadings were set based on the original measurement model, with the method factor uncorrelated with the substantive factors. In the constrained model (Model C), items for the substantive factors were allowed to load on impression management with equal loadings, while in the unconstrained model (Model U), these loadings were freely estimated. Factor correlation effects were tested in Model R, which built on the best-fitting model (C or U) by constraining correlations between the substantive factors to their values in Model B.
Results confirmed that participants’ assessments of their competencies were biased by impression management, as indicated by a significant decline in model fit from Model B to Model C. Model fit improved significantly when method effects were freely estimated (Model U), suggesting that impression management effects varied across items. However, decomposed reliability demonstrated that all factors remained reliable after accounting for this bias (composite reliability range: .64−.75; see Table 5). Additionally, model comparisons showed that controlling for impression management did not alter factor correlations. These results suggest that the convergent and discriminant validity of the QMCA scale is robust after accounting for impression management effects on participant self-reports.
Measurement invariance
Research suggests that non-cognitive assessments do not demonstrate differences based on gender or race [9–11,64]. To assess whether the QMCA meets this criterion, we tested multigroup invariance across gender and race/ethnicity using the Vandenberg and Lance confirmatory factor analysis method [65]. For gender, we compared individuals identifying as male with those identifying as female, representing 98.2% of the original sample (N = 324). For race/ethnicity, due to small subgroup sizes, we compared individuals identifying as white with those identifying as a race or ethnicity other than white, representing 97.9% of the original sample (N = 323). Models were estimated in Mplus using maximum likelihood with Satorra-Bentler corrections (MLM), which is robust to non-normality.
Configural models showed good fit, indicating that the factor structure and the pattern of loadings are equivalent across groups (Gender: CFI = .92, RMSEA = .05, p-close = .25; Race/Ethnicity: CFI = .92, RMSEA = .05, p-close = .42). Based on a nonsignificant difference in the Satorra-Bentler chi-square statistic for the nested models and a decline in CFI of less than .01 [66], metric invariance was supported, demonstrating consistent relationships between items and constructs across groups (Gender: scaled ∆S-Bχ² [25] = 19.26, p = .79, ∆CFI = −.002; Race/Ethnicity: scaled ∆S-Bχ² = 25.98, p = .41, ∆CFI = −.001). The QMCA achieved full scalar invariance for gender (scaled ∆S-Bχ² [51] = 65.57, p = .07, ∆CFI = −.005) and partial scalar invariance for race/ethnicity with one item in the teamwork factor allowed to vary (scaled ∆S-Bχ² [50] = 63.82, p = .08, ∆CFI = −.005), indicating that item intercepts, and thus the scaling of item responses, were predominantly equivalent across groups. Recent work suggests that relaxing the strict assumption that all items are equivalent, such as through partial invariance, supports meaningful and valid group comparisons [67,68].
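The invariance decision rule applied to each nested comparison, a nonsignificant scaled χ² difference together with a CFI decline below .01 [66], can be made explicit in a few lines. This is an illustrative sketch, not the authors’ analysis code:

```python
def invariance_supported(delta_chi2_p, delta_cfi, alpha=0.05, cfi_tol=0.01):
    """Invariance holds if the scaled chi-square difference is nonsignificant
    and the CFI declines by less than .01 across the nested models."""
    return delta_chi2_p > alpha and abs(delta_cfi) < cfi_tol

# Metric invariance across gender as reported: p = .79, change in CFI = -.002
print(invariance_supported(delta_chi2_p=0.79, delta_cfi=-0.002))  # → True
```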
We then tested latent mean differences by freely estimating means in one group. For gender, no significant differences were found for adaptability (∆Mean = −.01, p = .97), achievement orientation (∆Mean = .18, p = .16) or teamwork (∆Mean = .11, p = .39). Women reported higher self-awareness (Cohen’s d = .38, p < .05), while men reported higher self-control (Cohen’s d = −.36, p < .01). For race/ethnicity, no significant differences emerged for any factors (Self-awareness: ∆Mean = −.06, p = .66; Self-control: ∆Mean = .07, p = .58; Adaptability: ∆Mean = −.13, p = .29; Achievement Orientation: ∆Mean = −.04, p = .73; Teamwork: ∆Mean = .05, p = .69). Table 7 sets forth model fit and model fit difference tests for all models.
Our preliminary invariance testing suggests that the QMCA provides an impartial tool for assessing competencies across groups. The scale demonstrated invariance at the configural, metric, and scalar levels for gender and race/ethnicity. While our sample showed differences in self-awareness and self-control across gender, these had small effect sizes. Future research should explore whether those differences persist in other samples and whether they influence substantive relationships [69] with outcomes such as school retention and success. A key limitation of our analysis is that it relies solely on self-report responses. Since the QMCA can also be used in 360-degree rating processes that incorporate external evaluations, future studies should assess invariance in that context. Additionally, sample size constraints limited our ability to examine invariance and latent mean differences across more specific gender and racial/ethnic subgroups, which may also be explored in future work.
Discussion
While holistic admissions practices hold great promise for helping to diversify graduate education, the reality of the typical admissions process is that diversity, if considered at all, is often a parameter applied after the applicant pool has been filtered by factors that have disparate impact on women and minority applicants [1,2]. The QMCA measured competencies equivalently across race and gender for all measures, demonstrated no differences in competency ratings across race, and showed only small gender differences in two factors, self-awareness and self-control. Thus, using the QMCA as a complement to cognitive assessments early in the admissions process may help avoid excluding from the selection process members of underrepresented groups who have the potential to excel in graduate STEMM studies. Research suggests that, unlike standardized testing, noncognitive assessments show no score gap [70]. While data on the correlation of ESI and academic success for minority groups are limited at this point, one study with Black/African American students showed a positive correlation between academic achievement and the ability to recognize, use, and manage emotions. This suggests that ESI skills may help mitigate the effects of systemic barriers and enhance academic outcomes for underrepresented groups [71].
For diversity-friendly practices to be adopted by university staff and faculty, they must be easy to use, like GPA and GRE scores. The QMCA serves this need for ESI competencies.
In 2023, the QMCA was augmented to enable collection of 360-degree rater data to complement students’ self-report data. This enables the QMCA to provide a holistic picture of how participants view their own behavior, as well as others’ perceptions of their enacted behaviors. The 360-degree assessment, also known as multi-source feedback, is a process in which individuals receive confidential, anonymous feedback from the people who work with them. In organizational contexts, this typically includes the employee’s manager, peers, and direct reports. However, the 360 method has also been used extensively with student groups, where students seek feedback from colleagues, professors, friends, and family. A variety of feedback from multiple sources provides a well-rounded view of an individual’s performance, behaviors, and interactions within the organization [72]. Providing a validated, easy-to-use tool that produces a simple report of self- or both self- and 360-degree ratings will make it easier for institutions to incorporate the QMCA as part of the admissions process. It is argued that this comprehensive approach facilitates a more nuanced understanding of behaviors, skills, and areas for improvement, thereby enhancing the validity and reliability of research findings [73]. In addition to quantitative evaluations, the report may also include verbatim comments from the raters that help reviewers and participants understand the context behind the scores, aligning with the intent of holistic admissions processes.
The QMCA may also have applications beyond admissions, particularly in student development and mentoring. Prior research suggests that feedback on EI and SI competencies can support student self-awareness and professional growth. By incorporating the QMCA into coaching and mentoring programs for students, institutions may be able to provide targeted developmental opportunities. However, further research is needed to determine whether using the QMCA in these contexts leads to measurable improvements in student persistence and success rates.
The verbatim feedback included in the QMCA reports may be particularly helpful in development to enable students to understand the context behind their scores. Implementing these assessments in admissions and student support programs can help promote a more equitable and inclusive educational environment, leading to increased access and improved retention rates for underrepresented groups. The validation of the QMCA represents a step toward integrating structured assessments of EI and SI into STEMM graduate education. While the QMCA provides a reliable and valid measure of key EI and SI competencies, its broader implications for admissions, retention, and student success remain open questions for future research. Further studies should explore how QMCA results correlate with academic and professional outcomes and whether its use in graduate admissions and mentoring programs contributes to more inclusive and supportive STEMM environments.
Through extensive field testing and data analysis, five factors emerged within our model (self-awareness, self-control, adaptability, achievement orientation, and teamwork). The QMCA is weighted toward assessing emotional competencies related to understanding and managing oneself, with a single social competency (teamwork) included to address relationship management skills. Future studies could expand the QMCA to include more social competencies related to developing and sustaining healthy interpersonal relationships. For example, empathy and cultural awareness can make for better teachers and working environments by enhancing inclusivity.
Thus, users can triangulate results of the QMCA by employing both self- and other-ratings to assess behavioral competencies related to emotional intelligence. We acknowledge that certain ESI competencies, such as self-awareness and self-management, may be less transparent to third parties. As these competencies may be best measured using self-reports, we have also developed method bias controls designed to target the specific self-report biases of impression management and self-deception that are common concerns when assessing ESI. Our validation study suggests that the method factors improve the accuracy of the self-assessments, but future research is needed to confirm that the method factors are controlling bias and not accounting for another substantive trait. Another way to triangulate results would be to supplement the Likert-scale questions with situational judgement and scenario-based questions that can yield a more comprehensive understanding of ESI competency levels. Combining multiple question forms would enable assessment not only of the competencies individuals display through their actions and behavior, but also of their grasp of emotions and how effectively they apply this understanding.
Medical education also stands to gain from such assessments. There is a growing body of research on emotional and social competencies in healthcare related domains, including medical education, practice and training, nursing, and performance improvement interventions [46]. However, there remains a need for further investigation. By assessing how students approach situations, even if they lack certain skills, context-based scenarios and situational judgement questions can provide insights into areas where students may need further training or development. In healthcare, these assessments offer a comprehensive view of student capabilities, enabling trainers to deliver targeted feedback and effectively guide skill development.
We developed the QMCA to create a resource to identify and support potential graduate student applicants from underrepresented groups who may be overlooked by traditional admissions methods. Thus, the QMCA focuses on ESI competencies, which do not systematically vary across demographic groups [9–11,64] as traditional measures of cognitive intelligence, such as standardized tests, do. Although the competencies were selected to reduce systematic differentiation due to majority and minority status, an important next step is to assess measurement invariance of the QMCA across a wider range of groups. Invariance testing with small sample sizes yields highly unreliable results; accordingly, we could not test invariance across such subgroups in our validation study (Study C). Future studies should assess the QMCA on samples with sufficiently large and balanced group membership in categories such as gender, race and ethnicity, nationality, and first-generation college student status, to ensure that systematic group differences do not significantly affect measurement of the competencies.
Finally, one of the fundamental criticisms of EI is the lack of consensus on what it measures. Different researchers have proposed various models, leading to confusion and difficulty in studying EI as a coherent concept. For instance, Salovey and Mayer, who originally coined the term, define EI in terms of the ability to monitor one’s own and others’ feelings and emotions, to discriminate among them, and to use this information to guide one’s thinking and actions [74]. In contrast, Goleman expanded the concept to include a wide range of skills and traits, such as self-regulation, motivation, and social skills [33]. This discrepancy in definitions complicates the measurement and application of EI [74]. Some argue that much of the research on EI lacks rigorous experimental design, making it difficult to establish causal relationships between EI and outcomes like job performance, academic success, or leadership effectiveness [75]. However, applied research continues, and while these concerns remain, new measures such as the QMCA are needed to sustain rigorous research on emotional intelligence.
Implications for curriculum and pedagogy
Emotional and social competency training included in academic programs prepares students for both academic challenges and future scientific careers. Programs like the SELECT MD Program at the Morsani College of Medicine at the University of South Florida have integrated a comprehensive focus on emotional and social competencies to augment the traditional core undergraduate medical (MD) curriculum [76]. Programs like SELECT provide a roadmap for schools that may wish to include a dedicated focus on the development of emotional and social competencies.
Barriers and considerations
The integration of emotional and social competencies within the natural sciences faces hurdles. As pointed out by Bennett and Gadlin [77], the traditional emphasis on cognitive skills in the natural sciences sometimes overshadows the importance of emotional and social aspects. The term “emotional intelligence” has met with skepticism, despite its growing popularity over the past decades. To overcome this, institutions need to actively engage faculty and students, highlighting the critical role of emotional intelligence in personal and professional success and citing the numerous studies that support the impact of ESI. Research shows that higher emotional intelligence is linked to better academic performance, improved mental health, and enhanced leadership abilities [20,74,78]. This effort is essential for gaining the necessary buy-in and fully integrating emotional intelligence into educational programs.
The number of studies in the field remains small, and further research is needed, as differences (or the lack thereof) can depend on the measurement model of emotional intelligence used, as well as on the specific populations and contexts studied. Additional studies with larger samples of URM students and more women are needed.
Some critics have highlighted potential cultural bias in EI assessments and training programs. They argue that what is considered emotionally intelligent behavior can vary widely across cultures, and therefore standardized EI measures may not be universally applicable or fair. However, within a professional context, the behaviors measured by the QMCA and similar behavioral competency assessments reflect agreed-upon norms that are necessary for individual and organizational success.
When using emotional and social competencies as criteria, it is important to recognize that other individual differences may affect results. For example, neurodiversity, cultural background, and other life experiences may influence an individual’s emotional and social intelligence profile. Similarly, certain mental health conditions might influence emotional regulation and social functioning, affecting assessment results. Therefore, it is crucial to interpret assessments in context and to allow for holistic evaluation of individuals. While cognitive ability is a strong predictor of job performance, it is not sufficient on its own. Other factors, such as personality traits and emotional intelligence, also play critical roles in predicting individual and organizational outcomes [79]. The QMCA is not intended as a standalone measure, but rather one used in conjunction with other measures to provide a more holistic understanding than personality and cognitive measures can alone. A follow-up study utilizing longitudinal data on student performance is recommended.
The QMCA should have broad applicability outside the natural sciences, yet to this point the sample data have been collected from that specific population. Further study applying this tool to other roles and industries, with data collection in a diverse population, is needed. However, similar assessments of behavioral competencies, such as the ESCI, have undergone wide data collection and validation across roles, industries, and geographic areas [80].
References
- 1. Posselt JR. Toward Inclusive Excellence in Graduate Education: Constructing Merit and Diversity in PhD Admissions. Am J Educ. 2014;120(4):481–514.
- 2. Posselt JR. Inside graduate admissions: Merit, diversity, and faculty gatekeeping. Harvard University Press. 2016.
- 3. Posselt JR, Hernandez TE, Cochran GL, Miller CW. Metrics first, diversity later? making the short list and getting admitted to physics phd programs. J Women Minor Scien Eng. 2019;25(4):283–306.
- 4. Lovitts BE. Leaving the ivory tower: The causes and consequences of departure from doctoral study. Rowman & Littlefield. 2001.
- 5. Miller CW, Zwickl BM, Posselt JR, Silvestrini RT, Hodapp T. Typical physics Ph.D. admissions criteria limit access to underrepresented groups but fail to predict doctoral completion. Sci Adv. 2019;5(1):eaat7550. pmid:30746441
- 6. Roberts SF, Pyfrom E, Hoffman JA, Pai C, Reagan EK, Light AE. Review of Racially Equitable Admissions Practices in STEM Doctoral Programs. Educ Sci. 2021;11(6):270.
- 7. Shengyao Y, Xuefen L, Jenatabadi HS, Samsudin N, Chunchun K, Ishak Z. Emotional intelligence impact on academic achievement and psychological well-being among university students: the mediating role of positive psychological characteristics. BMC Psychol. 2024;12(1):389. pmid:38997786
- 8. Stassun KG, Burger A, Lange SE. The Fisk-Vanderbilt Masters-to-PhD Bridge Program: A Model for Broadening Participation of Underrepresented Groups in the Physical Sciences through Effective Partnerships with Minority-Serving Institutions. J Geoscience Educ. 2010;58(3):135–44.
- 9. Feingold A. Gender differences in personality: a meta-analysis. Psychol Bull. 1994;116(3):429–56. pmid:7809307
- 10. MacCann C, Joseph DL, Newman DA, Roberts RD. Emotional intelligence is a second-stratum factor of intelligence: evidence from hierarchical and bifactor models. Emotion. 2014;14(2):358–74. pmid:24341786
- 11. Oswald FL, Hough LM. Personality and its assessment in organizations: Theoretical and empirical developments. 2011.
- 12. Miller CW. Status. 2015.
- 13. Boyatzis R, McKee A. Resonant leadership: Renewing yourself and connecting with others through mindfulness, hope and compassion. Harvard Business Press. 2005.
- 14. White BAA, Cola PA, Boyatzis RE, Quinn JF. Editorial: Emotionally intelligent leadership in medicine. Front Psychol. 2022;13:999184. pmid:36148135
- 15. Côté S, Miners CTH. Emotional intelligence, cognitive intelligence, and job performance. Administrative Science Quarterly. 2006;51(1):1–28.
- 16. Wong C-S, Law KS. The effects of leader and follower emotional intelligence on performance and attitude: An exploratory study. In: Leadership Perspectives. Routledge. 2017. p. 97–128. https://doi.org/10.4324/9781315250601-10
- 17. Austin EJ, Saklofske DH, Egan V. Personality, well-being and health correlates of trait emotional intelligence. Personality and Individual Differences. 2005;38(3):547–58.
- 18. Mavroveli S, Petrides KV, Rieffe C, Bakker F. Trait emotional intelligence, psychological well‐being and peer‐rated social competence in adolescence. British J of Dev Psycho. 2007;25(2):263–75.
- 19. National Research Council. Education for life and work: Developing transferable knowledge and skills in the 21st century. Washington, DC: National Academies. 2013.
- 20. Cherniss C, Goleman D. The emotionally intelligent workplace: How to select for, measure, and improve emotional intelligence in individuals, groups, and organizations. San Francisco: Jossey-Bass. 2001.
- 21. Lievens F, Sackett PR. The validity of interpersonal skills assessment via situational judgment tests for predicting academic success and job performance. J Appl Psychol. 2012;97(2):460–8. pmid:21967295
- 22. Palmer RT, Maramba DC, Dancy Ii TE. A qualitative investigation of factors promoting the retention and persistence of students of color in STEM. J Negro Educ. 2011;80(4):491–504.
- 23. Tinto V. Limits of Theory and Practice in Student Attrition. J Higher Educ. 1982;53(6):687–700.
- 24. Rendón LI, Jalomo RE, Nora A. Theoretical considerations in the study of minority student retention. In: Rethinking the departure puzzle: New theory and research on college student retention. 2000. p. 127–56.
- 25. Grant AM. The efficacy of coaching. In: The Wiley-Blackwell handbook of the psychology of coaching and mentoring. 2013. p. 15–39.
- 26. Kathy K. Improving the mentoring process. Training and Development Journal. 1985;39(4):40.
- 27. Victoroff KZ, Boyatzis RE. What is the relationship between emotional intelligence and dental student clinical performance? J Dent Educ. 2013;77(4):416–26. pmid:23576587
- 28. Mega C, Ronconi L, De Beni R. What makes a good student? How emotions, self-regulated learning, and motivation contribute to academic achievement. J Educational Psychol. 2014;106(1):121–31.
- 29. Sutton A. Measuring the Effects of Self-Awareness: Construction of the Self-Awareness Outcomes Questionnaire. Eur J Psychol. 2016;12(4):645–58. pmid:27872672
- 30. Goleman D, Boyatzis R. Emotional intelligence has 12 elements. Which do you need to work on? Harvard Business Review. 2017;84(2):1–5.
- 31. Bradberry T, Greaves J. Emotional Intelligence 2.0. TalentSmart. 2009.
- 32. Oyebola Olusola Ayeni, Onyebuchi Nneamaka Chisom, Nancy Mohd Al Hamad, Blessing Osawaru, Ololade Elizabeth Adewusi. Enhancing STEM education through emotional intelligence and counseling techniques. World J Adv Res Rev. 2024;21(2):903–16.
- 33. Goleman D. Emotional intelligence: Why it can matter more than IQ. 1995.
- 34.
Boyatzis RE. The competent manager: A model for effective performance. John Wiley & Sons. 1991.
- 35. Cola PA, Wang Y. Discovering Factors that Influence Physician Scientist Success in Academic Medical Centers. Qual Health Res. 2022;32(10):1433–46. pmid:35737579
- 36. Lattuca LR, Voigt LJ, Fath KQ. Does Interdisciplinarity Promote Learning? Theoretical Support and Researchable Questions. Rev High Educ. 2004;28(1):23–48.
- 37. Hattie J. Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge. 2008.
- 38. Shultz MM, Zedeck S. Admission to Law School: New Measures. Educational Psychologist. 2012;47(1):51–65.
- 39. Buyse T, Lievens F. Situational judgment tests as a new tool for dental student selection. J Dent Educ. 2011;75(6):743–9. pmid:21642519
- 40. Wilson MB, Sedlacek WE, Lowery BL. An approach to using noncognitive variables in dental school admissions. J Dent Educ. 2014;78(4):567–74. pmid:24706686
- 41. Lievens F, Ones DS, Dilchert S. Personality scale validities increase throughout medical school. J Appl Psychol. 2009;94(6):1514–35. pmid:19916659
- 42. Aryee M. College students’ persistence and degree completion in science, technology, engineering, and mathematics (STEM): The role of non-cognitive attributes of self-efficacy, outcome expectations, and interest. Seton Hall University. 2017.
- 43. Libbrecht N, Lievens F, Carette B, Côté S. Emotional intelligence predicts success in medical school. Emotion. 2014;14(1):64–73. pmid:24219393
- 44. Stout RJ, Driskell JE, Salas E, editors. Los Angeles, CA: SAGE Publications. 1997.
- 45. Tsaousis I, Nikolaou I. Exploring the relationship of emotional intelligence with physical and psychological health functioning. Stress and Health. 2005;21(2):77–86.
- 46. Boyatzis RE, Goleman D, Rhee K. Clustering competence in emotional intelligence: Insights from the Emotional Competence Inventory (ECI). Handbook of emotional intelligence. 2000.
- 47. Paulhus DL. Socially desirable responding on self-reports. Encyclopedia of personality and individual differences. 2017.
- 48. Waugh RF. Creating a scale to measure motivation to achieve academically: linking attitudes and behaviours using Rasch measurement. Br J Educ Psychol. 2002;72(Pt 1):65–86. pmid:11916465
- 49. Duckworth AL, Quinn PD. Development and validation of the short grit scale (grit-s). J Pers Assess. 2009;91(2):166–74. pmid:19205937
- 50. Peterson C, Seligman MEP. Character strengths and virtues: A handbook and classification. Oxford University Press. 2004.
- 51. Duckworth AL, Peterson C, Matthews MD, Kelly DR. Grit: perseverance and passion for long-term goals. J Pers Soc Psychol. 2007;92(6):1087–101. pmid:17547490
- 52. Strahan R, Gerbasi KC. Short, homogeneous versions of the Marlowe-Crowne Social Desirability Scale. J Clin Psychol. 1972;28(2):191–3.
- 53. DiStefano C, Motl RW. Further Investigating Method Effects Associated With Negatively Worded Items on Self-Report Surveys. Structural Equation Modeling: A Multidisciplinary J. 2006;13(3):440–64.
- 54. van Sonderen E, Sanderman R, Coyne JC. Ineffectiveness of reverse wording of questionnaire items: let’s learn from cows in the rain. PLoS One. 2013;8(7):e68967. pmid:23935915
- 55. Paulhus DL, Reid DB. Enhancement and denial in socially desirable responding. J Pers Soc Psychol. 1991;60(2):307–17.
- 56. Paulhus DL. Self-deception and impression management in test responses. Personality assessment via questionnaires: Current issues in theory and measurement. Springer. 1986. p. 143–65.
- 57. Bolino MC, Turnley WH. Measuring Impression Management in Organizations: A Scale Development Based on the Jones and Pittman Taxonomy. Organizational Res Methods. 1999;2(2):187–206.
- 58. Tracey TJG. A note on socially desirable responding. J Couns Psychol. 2016;63(2):224–32. pmid:26689626
- 59. Dunning D, Heath C, Suls JM. Flawed Self-Assessment: Implications for Health, Education, and the Workplace. Psychol Sci Public Interest. 2004;5(3):69–106. pmid:26158995
- 60. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary J. 1999;6(1):1–55.
- 61. Kenny DA, Kaniskan B, McCoach DB. The Performance of RMSEA in Models With Small Degrees of Freedom. Sociological Methods & Research. 2014;44(3):486–507.
- 62. Williams LJ, Hartman N, Cavazotte F. Method Variance and Marker Variables: A Review and Comprehensive CFA Marker Technique. Organizational Research Methods. 2010;13(3):477–514.
- 63. Williams LJ, O’Boyle EH. Ideal, nonideal, and no-marker variables: The confirmatory factor analysis (CFA) marker technique works when it matters. J Appl Psychol. 2015;100(5):1579–602. pmid:25822071
- 64. Foldes HJ, Duehr EE, Ones DS. Group differences in personality: Meta-analyses comparing five U.S. racial groups. Personnel Psychol. 2008;61(3):579–616.
- 65. Vandenberg RJ, Lance CE. A Review and Synthesis of the Measurement Invariance Literature: Suggestions, Practices, and Recommendations for Organizational Research. Organizational Res Methods. 2000;3(1):4–70.
- 66. Satorra A. Scaled and adjusted restricted tests in multi-sample analysis of moment structures. Innovations in multivariate statistical analysis: A Festschrift for Heinz Neudecker. Springer. 2000. p. 233–47.
- 67. Cheung GW, Rensvold RB. Evaluating Goodness-of-Fit Indexes for Testing Measurement Invariance. Structural Equation Modeling: A Multidisciplinary J. 2002;9(2):233–55.
- 68. Leitgöb H, Seddig D, Asparouhov T, Behr D, Davidov E, De Roover K, et al. Measurement invariance in the social sciences: Historical development, methodological challenges, state of the art, and future perspectives. Soc Sci Res. 2023;110:102805. pmid:36796989
- 69. Pokropek A, Davidov E, Schmidt P. A Monte Carlo Simulation Study to Assess The Appropriateness of Traditional and Newer Approaches to Test for Measurement Invariance. Structural Equation Modeling: A Multidisciplinary J. 2019;26(5):724–44.
- 70. Sackett PR, Schmitt N, Ellingson JE, Kabin MB. High-stakes testing in employment, credentialing, and higher education. Prospects in a post-affirmative-action world. Am Psychol. 2001;56(4):302–18. pmid:11330228
- 71. Kim HN. Empathy and academic achievement among Black/African American college students in an HBCU. J Educational Res Practice. 2024;14(1):20.
- 72. Bracken DW, Timmreck CW, Church AH. The handbook of multisource feedback. John Wiley & Sons. 2001.
- 73. Fleenor JW, Smither JW, Atwater LE, Braddy PW, Sturm RE. Self–other rating agreement in leadership: A review. The Leadership Quarterly. 2010;21(6):1005–34.
- 74. Mayer JD, Roberts RD, Barsade SG. Human abilities: emotional intelligence. Annu Rev Psychol. 2008;59:507–36. pmid:17937602
- 75. Antonakis J, Ashkanasy NM, Dasborough MT. Does leadership need emotional intelligence?. The Leadership Quarterly. 2009;20(2):247–61.
- 76. Quinn JF, DeWaay D, Zwygart KK, Smith AB. Entrustable Professional Activities in Medical Education: SELECT Leadership, Values-Based Patient-Centered Care, Health Systems, and Scholarly Excellence. JRMC. 2024;7(3).
- 77. Bennett LM, Gadlin H. Collaboration and team science: from theory to practice. J Investig Med. 2012;60(5):768–75.
- 78. Durlak JA, Weissberg RP, Dymnicki AB, Taylor RD, Schellinger KB. The impact of enhancing students’ social and emotional learning: a meta-analysis of school-based universal interventions. Child Dev. 2011;82(1):405–32. pmid:21291449
- 79. Schmidt FL, Hunter JE. The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin. 1998;124(2):262–74.
- 80. Boyatzis RE, Gaskin J, Wei H. Emotional and social intelligence and behavior. Handbook of intelligence: Evolutionary theory, historical perspective, and current concepts. Springer. 2014. p. 243–62.