Take me where I want to go: Institutional prestige, advisor sponsorship, and academic career placement preferences

Placement in a prestigious research institution is generally considered the optimal outcome for STEM (science, technology, engineering, and mathematics) PhD recipients. Yet some doctoral recipients are not interested in intensive research careers and instead seek alternative careers, both outside and within academe (for example, teaching positions at liberal arts colleges). Recent attention to non-academic pathways has expanded our understanding of alternative PhD careers. However, career preferences and placements are also nuanced along the academic pathway. Existing research on academic careers (mostly research-centric) has found that certain factors have a significant impact on both the prestige of the institutional placement and the salary of PhD recipients. We understand less, however, about how career preferences and related placements function outside of the top academic research institutions. Our work builds on prior studies of academic career placement to explore the impact that the prestige of the PhD-granting institution, advisor involvement, and cultural capital have on the extent to which STEM PhDs are placed in their preferred academic institution types. What determines whether an individual with a preference for research-oriented institutions works at a Research Extensive university? Or whether an individual with a preference for teaching works at a Liberal Arts college? Using survey data from a nationally representative sample of faculty in biology, biochemistry, civil engineering, and mathematics at four Carnegie Classified institution types (Research Extensive, Research Intensive, Master's I & II, and Liberal Arts Colleges), we examine the relative weight of different individual and institutional characteristics on institutional type placement.
We find that doctoral institutional prestige plays a significant role in matching individuals with their preferred institutional type, but that advisor involvement only affects those with a preference for research-oriented institutions. Gender effects are also observed, particularly in the role of the advisor in affecting preferred career placement.

Interestingly, overall centrality shows higher correlations than discipline-specific centrality even with discipline-specific measures of prestige, pointing to possible halo effects.
We estimated all our models using alternative prestige measures, and the results are highly consistent, nearly identical in many cases, and in line with the results presented in Table 1 of the main paper. Table S1.2 contains the results of our logistic regression models regarding mismatch. As Table S1.3 shows, there are some minor differences in the level of significance for Master's institutions among those with teaching preferences, but the overall takeaway is the same. It is also important to note that using any of these alternative measures of prestige would reduce our sample by at least nearly 400 observations.
It is important to note that, for the purposes of checking the consistency of our measures, we use the 1995 NRC Rankings rather than the most recent ones, because the most recent rankings notably do not include any survey questions regarding program prestige, relying instead on regression models based on quantitative data. As a result, they are less a measure of prestige and more a measure of program quality based on quantitative criteria. The recent NRC rankings therefore do not present a single quantitative score of prestige as their 1995 counterpart did, but instead a range of rankings based on the confidence intervals of the underlying regressions, making them impossible to reduce to a single value. As Baldi [5] and others have discussed, academic prestige is very "sticky": despite the time gap, the 1995 NRC rankings and the current US News rankings (which were measured between 2014 and 2016, depending on discipline) are correlated above the 0.843 level.
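The stickiness of prestige across ranking systems can be quantified with simple correlations between two sets of program scores. The sketch below computes Pearson and Spearman correlations on invented scores, purely for illustration (these are not the actual NRC or US News values):

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical prestige scores for the same seven programs under two
# ranking systems (illustrative values, not the real NRC/US News data).
nrc_1995 = [4.8, 4.5, 4.1, 3.6, 3.2, 2.9, 2.4]
usn_2016 = [4.9, 4.3, 4.2, 3.4, 3.3, 2.7, 2.5]

# Pearson measures linear agreement of the scores themselves;
# Spearman measures agreement of the rank orderings.
r, _ = pearsonr(nrc_1995, usn_2016)
rho, _ = spearmanr(nrc_1995, usn_2016)
print(round(r, 3), round(rho, 3))
```

In this toy example the two rankings order the programs identically, so the Spearman correlation is 1 even though the raw scores differ slightly.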
Finally, Table S1.4 contains the centrality scores, both overall and discipline-specific, based on our sample.
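Centrality scores of this kind are typically computed on a PhD-exchange (hiring) network among institutions. As one illustration, the sketch below applies PageRank, a common network prestige measure, to a small invented hiring network; the institutions, edges, and the choice of PageRank are assumptions for illustration, not necessarily the measure underlying Table S1.4:

```python
import networkx as nx

# Hypothetical directed hiring network (invented for illustration):
# an edge u -> v records that a PhD from institution u was hired
# into a faculty position at institution v.
G = nx.DiGraph([
    ("A", "B"), ("A", "C"), ("A", "D"),
    ("B", "C"), ("B", "D"),
    ("C", "D"),
    ("D", "A"),
])

# PageRank on the reversed graph credits institutions whose graduates
# are hired, giving more weight to hires at well-placed institutions.
scores = nx.pagerank(G.reverse())
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

In this toy network, institution "A" places graduates at every other institution and so comes out on top, mirroring the intuition that placement power drives centrality-based prestige.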
A few words of caution are necessary when looking at any measure of prestige or centrality. First, while the public at large tends to focus on the ordinal aspects of prestige measures, it is substantially more useful to focus on the quantitative measures themselves. Debates over ordinal position (e.g., whether an institution should be ranked 2nd or 5th) generally revolve around microscopic differences in whatever measure of prestige is being used, and these can never be resolved satisfactorily in a statistical way. It is much more productive to focus on the magnitude of the differences: it may be neither possible nor realistic to resolve who is number 1 or number 2 in the rankings, but most rankings will agree on which institutions are, say, in the top 20 versus which fall below rank 50. Second, our own measures may contain a result or two that seems anomalous. It is not our intention to provide a definitive ranking mechanism, and all prestige and centrality measures have their limitations. The NRC last collected information on prestige in 1995. US News and World Report does not report results for programs with a prestige measure lower than 2, and does not include all disciplines (biochemistry, for example, is grouped with biophysics and structural biology as a subspecialty of biology and is not ranked in the same way as the other disciplines in our sample). The QS World University Rankings distinguish neither between graduate and undergraduate education nor between disciplines within universities. Our measures have the major benefit of maximizing our number of data points but, as with any survey data, are subject to a number of limitations. Most important to note here, once more, is that regardless of which measure is used, all our results are consistent across all models.