Field size as a predictor of “excellence.” The selection of subject fields in Germany’s Excellence Initiative

  • Thomas Heinze ,

    Roles Conceptualization, Data curation, Investigation, Methodology, Project administration, Resources, Supervision, Writing – original draft

    theinze@uni-wuppertal.de

    Affiliation Institute of Sociology, University of Wuppertal, Wuppertal, Germany

  • Isabel Maria Habicht,

    Roles Data curation, Formal analysis, Software, Writing – review & editing

    Affiliation Institute of Sociology, University of Wuppertal, Wuppertal, Germany

  • Paul Eberhardt,

    Roles Data curation, Formal analysis, Writing – review & editing

    Affiliation Institute of Sociology, University of Wuppertal, Wuppertal, Germany

  • Dirk Tunger

    Roles Data curation, Writing – review & editing

    Affiliation Project Management, Research Center Jülich, Jülich, Germany

Abstract

We investigate the selection of subject fields in Germany’s “excellence initiative,” a two-phase funding scheme administered by the German Research Foundation (DFG) from 2005 to 2017 to increase international competitiveness of scientific research at German universities. While most empirical studies have examined the “excellence initiative’s” effects at the university level (“elite universities”), we focus on subject fields within universities. Based on both descriptive and logistic regression analyses, we find that the “excellence initiative” reveals a stable social order of public universities based on organizational size, that field selection is biased toward those fields with many professors and considerable grant funding, and that funding success in the second phase largely follows decisions from the first phase. We discuss these results and suggest avenues for future research.

Introduction

In 2005, the German Research Foundation (DFG) established the so-called “excellence initiative” (referred to as the “initiative” throughout this paper), which ran until 2017. The purpose of the “initiative” was to increase the international competitiveness of scientific research at German universities. More specifically, the DFG argued that “the initiative aims to equally promote the development of peak performers and the broad enhancement of the quality of Germany’s higher education and research landscape” [1]. It involved two funding phases, the first covering 2006–2011 and the second 2012–2017. The goals of the initiative remained identical throughout both phases, as highlighted by the DFG, which emphasized that “the sole decisive criterion for the evaluation was the scientific excellence of the submitted proposals” [2].

During our observation period (2005–2017), there were a total of 102 universities in Germany with permission to award doctoral degrees, of which 82 were state-run and 20 were private. We focus on those public universities that offered a broad range of subject fields and for which comprehensive statistical data were available (n = 68), comprising 17 technical universities (TUs, S1 Appendix) and 51 non-technical universities (NTUs, S2 Appendix; see also Section 3.2).

Previous analyses on the “initiative” have mainly examined variables at the university level (“elite universities”), whereas here we focus on the subject level, i.e., academic fields within universities (“university subjects”). In this paper, we describe our investigation of the selection of subject fields within the “excellence initiative,” addressing three interrelated questions. First, which institutional factors mattered most in the selection of subject fields, i.e., why were some fields (in some universities) selected but others (either in the same university or other universities) were not? Second, did the influence of these institutional factors change between the first and second funding phases? Third, to what extent did funding success in the first phase influence funding success in the second phase?

Answering these three research questions is important for the future allocation of research funds and for institutional reforms in the German university system. To tackle them, we compiled a comprehensive and innovative dataset, primarily centered on information at the level of subject fields. This dataset incorporates data from the official series of the Federal Statistical Office (StBA) and funding data from the DFG, which we converted into a machine-readable format; in addition, bibliometric information was collected and assessed as part of robustness checks (see S7 and S11 Appendices). By conducting our analysis at the field level, we aim to fill a long-standing research gap (see also Section 3). We then analyzed these data using descriptive statistics, including chi-square tests, and, most importantly, logistic regression analyses for each funding phase to identify the factors influencing a subject field’s likelihood of receiving “initiative” funding (Section 4).

Examining the field level is appropriate for two methodological reasons as well. First, most of the funding for the “initiative” was channeled into subject fields (3292.2 million Euros, or 71%) via grants for either Graduate Schools (GS) or Excellence Clusters (ECs), and less funding went to Institutional Strategies (IS) at the university level (1347.1 million Euros, or 29%) [3]. Thus, the “initiative” primarily funded subject fields rather than entire universities. Second, despite public sector reforms in the 2000s [4–6], German universities have limited strategic planning capabilities compared with higher education institutions in other countries, most notably in the United States [7–11]. Several studies have shown that where planning capabilities have been established at the university level via effective structural reforms, as in the Netherlands in the 1980s and 1990s, universities have achieved consistently high scores in international comparisons of research impact and prestige [12,13]. In contrast, where such capabilities are less developed, as in German public universities, empirical analyses are better focused on the field level [14].

The “initiative” is part of a larger European effort to support global competitiveness of universities via Centers of Excellence (CoEs). Among the earliest such initiatives were those in Finland (1995), Norway (2001), and Ireland (2003) [15,16], and by the mid-2010s, many European countries had established programs, although with considerable variation in goals and funding levels [17,18]. A review of funding instruments indicates that the size of research grants has increased in most developed countries, and CoE schemes have contributed to this trend [19].

Empirical studies on the “initiative” (from the mid-2000s and early 2010s) have shown that funding for the “initiative” was channeled into the largest rather than the most productive universities [20,21], leading to an increased concentration at larger universities in the system [22,23]. In addition, some critics argued that universities with high shares of staff in the natural and engineering sciences and with many DFG peer reviewers among their scientific staff received above-average amounts of funding [24–26]. More recent studies (from the mid-2010s and early 2020s) have examined the consequences of the “initiative” for productivity, efficiency, and scientific impact, mostly at the university level [27–36]. Here, we draw on the early studies to formulate hypotheses about selection factors for the “initiative” (Section 2) and discuss empirical results with reference to more recent findings (Section 5).

Our analysis yields three empirical insights. First, we find that the “initiative” has revealed a stable sorting of public universities into three size classes: small (without “initiative” funding), medium (one or two funding lines), and large (all three funding lines). Our analysis suggests that this sorting already existed in the 1990s and that the “initiative’s” selection procedures have reproduced this size-based order. Second, we show that the “initiative’s” selection in the first phase was biased toward large subject fields with considerable grant funding. Third, subject fields that received “initiative” support in the first phase were likely to get follow-up support in the second phase, pointing to a high level of path dependence at the field level.

The remainder of this paper is organized as follows: First, we introduce hypotheses based on the available literature (Section 2). We then present the methodology of the paper, including variables and data sources (Section 3). The empirical results follow and include both descriptive statistics and logistic regressions on success in the “initiative’s” two phases (Section 4). Finally, we discuss our findings in light of the recent CoE literature (Section 5).

Literature review and hypotheses

The “initiative” has attracted considerable attention from academic researchers and political commentators, particularly in the years after it started [37–41], but also more recently [27–36]. Most studies have looked into the “initiative’s” consequences for the public university system, with an almost exclusive focus either on the university level (“elite universities”) or the level of the entire higher education system (“university system”). Some noteworthy results emerged from bibliometric studies; for example, an examination of highly-cited publications (top 10% most cited) found that “the vast majority of universities with Excellence funding held leading or average positions before the funding began. The German university system was already differentiated into stronger and weaker research universities prior to the Excellence Initiative” [31, p. 2234]. In addition, among those universities that published highly-cited research, very few moved upward or downward in the ranking, pointing to a stable sorting within the German university system [30,32]. Other bibliometric studies reported an increase in productivity but a decrease in scientific impact [36], and an overall loss of efficiency in teaching and research for excellence-funded universities [34,42]. Even more interesting are results that contradict earlier warnings about an increased concentration of grant funding due to the “initiative”: While both critics and the DFG’s leadership envisaged an increased functional differentiation between research universities and teaching colleges [23,43], such effects have not taken place. Universities without excellence funding managed to obtain funding from sources other than the DFG (most notably ministries, and thus other public monies), thereby both buffering the “initiative’s” effect for individual universities and increasing the level of grant funding in the university system as a whole [27,35].

While most studies on the “initiative” have examined its consequences at the organizational or system level (or both), very few analyses have looked into how its selection procedure functioned at the subject field level. Yet the early and mostly critical literature on the “initiative” [20–23,25] is highly informative in that it focused on two arguments, formulated at the “elite universities” and “university system” levels, both of which can be applied to the subject field level as well.

The first argument relates to what Robert K. Merton called the “Matthew Effect” [44], a social mechanism that increases inequality (“to those who have, more shall be given”). In the university system, the inequality is between resource-poor and resource-rich public universities, and in brief, the argument is that universities with a large scientific workforce and a high number of research grants are the main beneficiaries of the “initiative” [20,21,23,25,45]. In contrast, funding is not channeled to the most productive universities, as measured by publications relative to the number of scientific staff [20,21]. In consequence, absolute size dominates relative performance, contradicting the principle of meritocracy that the “initiative” purports to follow. Based on this argument, which has been discussed almost exclusively at the university level (“elite universities”) and the higher education system level (“university system”), we formulate two hypotheses for the subject/academic field level:

H1: The probability that a subject field (within a university) will receive excellence funding increases with the number of its professors.

H2: The probability that a subject field (within a university) will receive excellence funding increases with the amount of grant funding at the field and university levels.

Empirical findings on H1 and H2 are important because a growing body of literature suggests that breakthrough science is associated with small research entities, whereas large teams typically develop and exploit existing scientific programs [46–48]. In addition, evidence from the Nordic countries suggests that large grants given to Centers of Excellence (CoE) did not have the highest impact when they were awarded to already highly ranked research groups but rather when they were “awarded to groups not yet performing at the highest level” [49]. Similarly, a large-scale bibliometric study from Canada shows “that in terms of both the quantity of papers produced and their scientific impact, the concentration of research funding in the hands of the so-called ‘elite’ of researchers generally produces diminishing marginal returns” [50]. These findings are corroborated by a recent review that finds, as a general result, “diminishing returns to grant size, measured for example in terms of number of publications, citation impact and number of highly cited papers” [19].

A second argument centers on the social construction of scientific (and institutional) prestige. In a more general sense, prestige implies that the production and maintenance of status goods are marked more by notions of quality and refinement than by considerations of utility [51,52]. In a more specific sense, the early critical commentators argued that rather than simply identifying and awarding support for truly excellent research, the “initiative” was involved in an “act of consecration,” a procedure that bestows scientific prestige on universities despite apparent inefficiencies and low relative performance. This “consecration” was typically performed by professors from the resource-rich universities (partly from abroad, but also from Germany) and followed the logic of supporting those who had already accumulated considerable resources, thus deepening the consequences of the Matthew Effect [20,26]. Based on this claim, we probe whether selection decisions in the first funding phase consecrated those chosen to receive the funding and in this way helped them obtain funding in the second phase as well. So far, the scientific prestige argument has been predicated on findings at the university level, but it can be extended to the level of academic fields [53,54]. Hence, we formulate the following hypothesis:

H3: The probability that a subject field (within a university) will receive excellence funding in the second phase increases when it has been designated as excellent in the first phase.

Methodology

3.1 Variables

Our dependent variable (DV) is binary and measures whether or not a university subject received “initiative” funding (via the GS funding line, the EC funding line, or both) between 2006 and 2011 (first phase) and/or between 2012 and 2017 (second phase) (Table 1). A peculiarity of the second phase is that it was extended into 2019, before the start of the “Excellence Strategy,” the “initiative’s” successor; therefore, all funded university subjects (and universities funded via the IS funding line) received additional funding before the “Excellence Strategy” became operational in late 2019.

Table 1. List of dependent, independent and control variables.

https://doi.org/10.1371/journal.pone.0300828.t001

Our DV differs from the early literature [20,21,25] and some of the later efficiency-oriented literature [34,36,42], which centered on the amount of grant funding, highly-cited publications, and citations, each per professor or scientific staff member (a proxy for research efficiency). We use a binary DV for two reasons. First, the DFG does not report the amount of funding per funding line and subject field within a given university. Therefore, a cardinal ranking based on “initiative” funding at the field level within universities is simply not available. Second, we are interested in estimating the chances of subject fields within universities (“university subjects”) to receive “initiative” funding based on explanatory variables (see below). For this purpose, information on whether or not (1/0) a subject field in a given university received “initiative” funding is sufficient for the estimation procedure (non-linear probabilistic regression with likelihood-based standard errors, here: logit regression).
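
To make this setup concrete, the following minimal sketch (not the authors’ code) illustrates such a logit estimation in Python with statsmodels; the file name and column names (e.g., funded_phase1, professors, grants_field) are hypothetical stand-ins for the variables in Table 1.

```python
# Minimal sketch of the logit setup described above; file and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("university_subjects.csv")  # one row per university subject

# DV: 1 if the university subject received "initiative" funding in the first phase, else 0.
# EVs/CV: professors (EV1), field-level grant funding (EV2),
# university-level DFG grant funding (EV4), students (CV1).
model = smf.logit(
    "funded_phase1 ~ professors + grants_field + dfg_grants_university + students",
    data=df,
).fit()

print(model.summary())       # coefficients on the log-odds scale
print(np.exp(model.params))  # odds ratios
```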

Our explanatory variables (EVs) are measured at the level of university subjects (field level) and include the number of professors (EV1), the amount of all grant funding in million Euros (EV2), and, for the second phase, a binary variable measuring whether a university subject was funded in the first phase (EV3). Another EV is measured at the university level: the amount of DFG grant funding in million Euros (EV4). It should be noted that the “initiative’s” focus is on research, not teaching. Therefore, all EVs are meant to capture various aspects of the research dimension (Table 1). Professors (EV1) are the most senior scientists recruited for doing research (and teaching), and grant funding (EV2, EV4) measures the volume of externally funded research activities guided (and carried out) by professors. In addition, EV3 records the continuity of “initiative” funding across the first and second phases.

In addition, we included one control variable (CV), the number of students (CV1), to control for the size of a subject field’s student population (Table 1). While the EVs are theoretically anchored (Matthew Effect, prestige consecration), the CV is introduced to reduce measurement error. Initially, the number of non-professorial scientific staff, the amount of basic funding in million Euros, the number of Web of Science (WoS) publications, and the number of WoS citations were also included in our list of CVs. However, because these four variables were highly correlated with EV1 and with each other, we excluded them from further analyses to avoid multicollinearity (S7 Appendix). As a robustness check, we also conducted a regression analysis using the number of citations as a control variable (to account for subject field differences in scientific visibility) instead of DFG grant funding, because the two variables are highly correlated; this change did not substantially alter the main results (S11 Appendix).
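
For illustration, the sketch below shows one common way to run such a collinearity screen (pairwise correlations plus variance inflation factors); it is not the authors’ exact procedure, and the column names are hypothetical.

```python
# Sketch of a collinearity screen for candidate control variables; names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("university_subjects.csv")
candidates = df[["professors", "nonprof_staff", "basic_funding", "wos_pubs", "wos_cites"]].dropna()

print(candidates.corr())  # pairwise Pearson correlations

X = sm.add_constant(candidates)
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=candidates.columns,
)
print(vif)  # values far above ~10 indicate problematic multicollinearity
```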

Regarding H1, EV1 was used to operationalize the absolute size of the (senior) scientific staff. With respect to H2, EV2 and EV4 were used to capture the absolute amounts of grant funding at the subject field level (EV2) and the university level (EV4). With regard to H3, we used EV3 to examine the effect of prior funding success on the subsequent probability of receiving “initiative” funding. All EVs and the CV were measured in the five years preceding the first phase (beginning in 2006) and the second phase (beginning in 2012). We therefore computed mean values for EV1–EV4 and CV1 for the years 2001–2005 and for the years 2007–2011. As a robustness check, we also computed mean values for the three years preceding each phase (2003–2005 and 2009–2011). We found no major differences between the two time windows, so our interpretation builds on the five-year windows.
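
A sketch of this pre-phase averaging, assuming a long-format panel with one row per university subject and year (all names hypothetical):

```python
# Sketch of the five-year pre-phase averaging; panel layout and column names are hypothetical.
import pandas as pd

panel = pd.read_csv("panel.csv")  # columns: university, field, year, professors, grants_field, ...
cols = ["professors", "grants_field", "dfg_grants_university", "students"]

def phase_means(panel, years):
    """Mean of EV1-EV4 and CV1 per university subject over the given pre-phase years."""
    window = panel[panel["year"].isin(years)]
    return window.groupby(["university", "field"], as_index=False)[cols].mean()

pre_phase1 = phase_means(panel, range(2001, 2006))  # means over 2001-2005
pre_phase2 = phase_means(panel, range(2007, 2012))  # means over 2007-2011
```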

Given the binary measurement of our DV, we estimated logistic regression models to assess the effects of the EVs on the odds of receiving “initiative” funding [55,56]. To ensure that our models fit appropriately, we calculated the goodness-of-fit test developed by Hosmer and Lemeshow [57] (S10 Appendix).
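
Statsmodels offers no built-in Hosmer–Lemeshow test, so a decile-based version of the test can be sketched as follows (an illustration, not the authors’ implementation):

```python
# Sketch of a Hosmer-Lemeshow-type goodness-of-fit test on deciles of predicted probability.
import numpy as np
import pandas as pd
from scipy.stats import chi2

def hosmer_lemeshow(y_true, y_prob, groups=10):
    d = pd.DataFrame({"y": np.asarray(y_true), "p": np.asarray(y_prob)})
    d["g"] = pd.qcut(d["p"], groups, labels=False, duplicates="drop")
    obs = d.groupby("g")["y"].sum()   # observed events per group
    n = d.groupby("g")["y"].count()   # group sizes
    exp = d.groupby("g")["p"].sum()   # expected events per group
    stat = (((obs - exp) ** 2) / (exp * (1 - exp / n))).sum()
    dof = d["g"].nunique() - 2
    return stat, chi2.sf(stat, dof)   # a large p-value indicates no evidence of misfit

# Usage with a fitted logit model (see the sketch above):
# stat, p = hosmer_lemeshow(df["funded_phase1"], model.predict())
```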

In addition to the results presented below (Section 4), we calculated two-level regression models with subject fields as the first-level unit and universities as the second-level unit. As mentioned above (Section 1), most empirical studies have focused on variables at the university level (“elite universities”), so we added two time-invariant binary variables to the existing EVs and CV at the university level: one that measures whether universities were founded after 1945 (“young” universities) and one that captures the geographical location (“West Germany” versus “East Germany”). With these additional models, we controlled for variance at the university level [58] (S9 Appendix).
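
As an illustration of such a two-level specification, the sketch below uses statsmodels’ Bayesian mixed GLM to add a university-level random intercept; this is one possible implementation under hypothetical column names, not necessarily the software the authors used.

```python
# Sketch of a two-level logistic model with a random intercept for universities.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("university_subjects.csv")  # hypothetical, as above
vc = {"university": "0 + C(university)"}     # variance component: one intercept per university

m = BinomialBayesMixedGLM.from_formula(
    "funded_phase1 ~ professors + grants_field + dfg_grants_university"
    " + students + founded_after_1945 + west_germany",
    vc, df,
)
result = m.fit_vb()  # variational Bayes estimation
print(result.summary())
```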

3.2 Units of Analysis

This paper focuses on public universities in Germany. According to the Federal Statistical Office (StBA) in 2015, of 102 universities with permission to award doctoral degrees, 82 are state-run and 20 are privately run. Private universities play only a minor role due to their low share of all enrolled students, the very limited range of disciplinary fields offered, and the low level of research activities [4]. They are therefore not considered further here. In addition, we excluded several specialized public higher education institutions that are not suitable for comparison with universities that offer a broad range of fields; some other universities could not be analyzed due to considerable gaps in the data, especially staffing and funding data (for more details, see [11]). Of the remaining universities, 17 were classified as “technical” (TUs, S1 Appendix), either because they are members of the TU9 association of the nine leading technical universities in Germany or because they bear “Technische Universität” or “Technische Hochschule” in their name; the remaining 51 universities were classified as “non-technical” (NTUs, S2 Appendix).

Data on staff, finances, and student numbers for these 68 public universities (1995–2018) were obtained directly from the StBA (S3 Appendix) and correspond to the published data from the reports in Serial 11: Education and Culture, Sections 4.1 (Students at Universities), 4.4 (Personnel at Universities), and 4.5 (Finances at Universities). For students, the StBA records target degrees grouped into aggregated higher-level categories; of these, we use university degrees, bachelor’s and master’s degrees, and teaching examinations (S4 Appendix). Medicine was excluded because separating hospital units from their affiliated university departments was not possible for all years.

Publication and citation data for the 68 universities were obtained from the Competence Centre for Bibliometrics, using the WoS. From this source, cleaned address data are available at the university level but not at a more detailed level of aggregation, which means publications cannot be directly allocated to institutes, academic departments, or faculties. Instead, we used a field classification to break down universities into their subject fields. Following [11], we use the Archambault classification [59], which assigns each journal to one subject category and provides a good match with the StBA field classification (S5 Appendix). In this way, we connected WoS publication and citation data with the staff, finance, and student data provided by the StBA.
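
The linkage can be sketched as two concordance merges (journal → Archambault field → StBA field) followed by a join onto the StBA panel; all table and column names below are hypothetical.

```python
# Sketch of the journal -> Archambault -> StBA linkage; table and column names are hypothetical.
import pandas as pd

pubs = pd.read_csv("wos_records.csv")            # pub_id, university, journal, year, ...
journal_map = pd.read_csv("archambault.csv")     # journal, archambault_field
concordance = pd.read_csv("s5_concordance.csv")  # archambault_field, stba_field (S5 Appendix)

pubs = pubs.merge(journal_map, on="journal").merge(concordance, on="archambault_field")
field_counts = (pubs.groupby(["university", "stba_field", "year"])
                    .size().rename("publications").reset_index())

panel = pd.read_csv("panel.csv")  # StBA staff, finance, and student data
panel = panel.merge(field_counts, left_on=["university", "field", "year"],
                    right_on=["university", "stba_field", "year"], how="left")
```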

For our purposes, each WoS publication was counted once, even if several co-authors from a single university were listed. The procedure was different for publications with co-authors from different institutions; in this case, the WoS publication was counted for each institution (whole count). We considered all publications in journals covered by the WoS with the participation of at least one German university that belonged to the document type “Article,” “Review,” or “Letter.” The same types of documents were considered for measuring citations. We counted all citations received by a university subject per year, regardless of the publication years of the cited publications. With this counting method, the change in the citations obtained for a subject field in the respective year could easily be observed.
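
The counting rules described above (document types, one count per publication and university) can be sketched as follows, again with hypothetical column names:

```python
# Sketch of the whole-count rules for publications; column names are hypothetical.
import pandas as pd

pubs = pd.read_csv("wos_records.csv")  # pub_id, university, stba_field, year, doc_type

pubs = pubs[pubs["doc_type"].isin(["Article", "Review", "Letter"])]

# One count per publication and university, even if several co-authors share that university;
# a publication with co-authors at k different institutions contributes one count to each of them.
whole_counts = (pubs.drop_duplicates(["pub_id", "university", "stba_field"])
                    .groupby(["university", "stba_field", "year"])
                    .size().rename("publications").reset_index())
```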

The validity of bibliometric analyses depends on the rate of coverage for the respective subject field. To estimate the coverage for German publications by subject fields, we analyzed the proportion of their cited references that were in turn included in the WoS. This method is called “internal” WoS field coverage [60] and provides a proxy for how well the WoS reflects scholarly activity of an academic field. Our analysis documented that the internal coverage in many fields is insufficient for bibliometric analyses. Following a common standard [60], we applied a cut-off value of 50% for cited references. This cut-off yielded the following subject fields: Biology, Chemistry, Physics and Astronomy, Psychology, Agricultural Sciences, Food and Beverages Technology, Mechanical Engineering/Process Engineering, Geosciences [excluding Geography], Electrical Engineering, Forestry/Timber Management, Economics, and Mathematics (S5 Appendix, see also Fig 1 in [11]). For these 12 subject fields, we obtained and analyzed publication and citation data.
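
The internal-coverage estimate amounts to the share of cited references per field that are themselves WoS-indexed, with the 50% cut-off applied afterwards; a sketch with hypothetical names:

```python
# Sketch of "internal" WoS coverage per field with the 50% cut-off; names are hypothetical.
import pandas as pd

refs = pd.read_csv("cited_references.csv")  # columns: stba_field, ref_id, ref_in_wos (0/1)

coverage = refs.groupby("stba_field")["ref_in_wos"].mean()
well_covered = coverage[coverage >= 0.5].sort_values(ascending=False)
print(well_covered)  # the fields retained for the bibliometric analyses
```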

Furthermore, we retrieved data from reports on the distribution of “initiative” funding [3,61,62]. We began with detailed descriptions of Graduate Schools (GS) and Excellence Clusters (EC) from the second funding phase. In these cases, the official spokesperson for each project is named, allowing for a clear assignment to a particular subject field. Additionally, the descriptions often include the affiliation of the project to one or several subject fields. Based on the official spokesperson and the GS or EC descriptions, we assigned them to subject fields using the StBA classification. Each GS or EC was assigned to at least one, and up to three, such subject fields.

The assignment of funded projects from the first phase required more work. Of 83 funded GS or ECs, 70 were continued in the second phase, so that only 12 were not yet assigned to a subject field. The DFG assigns each GS or EC to one disciplinary domain: humanities and social sciences, life sciences, natural sciences, or engineering [61]. We differentiated these into subject fields, either directly through the name (e.g., “Bonn Graduate School of Economics” was assigned to economics) or if an assignment was not possible, by conducting searches of other (internet) sources for official spokespersons, publications, or public events that allowed an assignment to up to three disciplines. This methodology produced S6 Appendix, which lists all three lines of “initiative” funding at our 68 universities. We dispensed with a verbatim catalogue, which would have been untenably extensive.

All data were processed at the level of universities and their respective subject fields. Here, we use the term “university subject” to denote subject fields within universities. With 68 universities and 56 subject fields, we arrive at 68 × 56 = 3,808 possible observations with respect to staff, funding, and student data, of which, for example, 2,829 were realized for grant funding. Note, first, that not all 56 subject fields are present in all 68 universities throughout the period 2001–2018; hence our dataset contains fewer observations than the maximum of 3,808. In fact, the maximum number of observations in our dataset is n = 2,829, and thus the average number of subject fields per university is 41.6. Second, not all subject fields were present during the two phases: the first phase contains n = 2,388 valid observations, while the second phase contains n = 2,396. For the sake of robustness, however, we set the missing entries in the relevant years to zero (instead of dropping these observations), so that our calculations are based on the maximum sample of n = 2,829 subject fields in both periods. Calculating with the smaller samples (n = 2,388 and n = 2,396) or the maximum sample leads to almost the same results (S12 Appendix, Tables 12a and 12b). Regarding the bibliometric analyses, the number of fields is 12, so that we arrive at 68 × 12 = 816 possible observations, of which, for example, 624 were realized for publications.
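
A sketch of the observation structure and the zero-filling robustness choice described above (hypothetical names):

```python
# Sketch of the observation structure; file and column names are hypothetical.
import pandas as pd

observed = pd.read_csv("university_subjects.csv")  # realized university subjects (n = 2,829)

n_universities = observed["university"].nunique()  # 68 public universities
n_fields = observed["field"].nunique()             # 56 StBA subject fields
print(n_universities * n_fields)                   # 3,808 possible observations

# Robustness choice: entries missing in the relevant phase years are set to zero
# instead of being dropped, so the models run on the maximum sample of 2,829 subjects.
analysis = observed.fillna({"professors": 0, "grants_field": 0,
                            "dfg_grants_university": 0, "students": 0})
```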

Analysis and Discussion of Results

4.1 Descriptive Results

A first set of descriptive results provides contextual information at the university level for H1 and H2. Starting with an overview of changes across all universities from 1995 to 2018, there were considerable differences in average growth across the EVs and the CV during the observation period (Table 2, see “mean values”). The average number of professors (EV1) grew by 8% (from 254.7 to 276.0), the amount of DFG grant funding at the university level (EV4) by 215% (from 10.4 million to 32.9 million Euros), and total grant funding at the subject field level (EV2) by 150% (from 23.8 million to 59.5 million Euros); the number of WoS citations skyrocketed by 886% (from 4,812 to 47,440) and the number of students (CV1) grew by 26% (from 15,902 to 19,997). These results suggest that German professors not only taught more students in 2018 than in 1995 but also that their research activities, as reflected in external grant funding, increased over time.

Table 2. Variables by “excellence initiative” funding status.

https://doi.org/10.1371/journal.pone.0300828.t002

We probed whether universities without any “initiative” funding differed in their growth pattern compared with universities with some or full “initiative” funding. For this purpose, we mapped all EVs and CVs onto three university categories, beginning in 1995 and with the university system as reference category (Table 2). We note that the three university categories were devised using information on both the first and second phases of the “initiative”: (1) “non-funded universities” were those with no funding in both the first and second phases, (2) “universities with at least one funding line” included all universities that received either GS or EC in the first and/or second phase(s), and (3) “universities with all three funding lines” included universities that received all three lines of funding either in the first and/or second phase(s).

Based on this tabulation, we observed a very stable social order within the German university system that appeared to be based on organizational size: small, medium, and large universities (Table 2). This means that the number of funding lines received by a university increases with the size of the university, which is reflected in the number of scientific staff. It is important to emphasize that this size-based order already existed in the 1990s: There was almost no change between 1995, 2000, and 2005 – the years before the “initiative” started to distribute funding; in addition, there was very little change between 2010, 2015, and 2018.

The size-based social order of public universities in Germany means that universities with no “initiative” funding represent, on average, the smallest entities, followed by medium-sized institutions that received support from at least one line of funding (GS or EC) and the largest ones that received support from all three lines of funding (GS, EC, IS). To illustrate the changes relative to the mean values in each year, we calculated factor scores to show whether universities with different funding lines were below or above average in terms of staff, grants, citations, and students (Table 2, see “factor values”). Regarding professorial staff, non-funded universities were about 70% of the average size in 1995; regarding total grant funding, they reached about 50% of the mean; and regarding citations, they had about a third of the mean number of citations. Universities with at least one funding line were larger and had greater scientific visibility: Their professoriate was 20% above the average, their total grant funding was 30% above average, and their citations were 40% above the mean. The “excellence” universities were the largest institutions: Their professoriate was 40% and their total grant funding 80% above the mean, and they had about twice as many citations as the average university. The magnitude of these ratios has remained robust over time.
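
The “factor values” in Table 2 correspond to each group’s mean divided by the all-universities mean of the same year; a sketch of this calculation (hypothetical layout and names):

```python
# Sketch of factor values: group mean divided by the system-wide mean, per year.
import pandas as pd

table2 = pd.read_csv("table2_means.csv")  # columns: year, group, professors, grants, citations, students
cols = ["professors", "grants", "citations", "students"]

system_mean = table2[table2["group"] == "all"][["year"] + cols]
merged = table2.merge(system_mean, on="year", suffixes=("", "_sys"))
for c in cols:
    merged[c + "_factor"] = merged[c] / merged[c + "_sys"]
# e.g., a factor of 0.7 for professors means the group stands at 70% of the system-wide mean.
```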

The descriptive results are interesting with regard to H1 and H2 in that large universities with above-average amounts of external grant funding in the years prior to the “initiative” are those that received most “excellence” funding. As shown below, these findings are supported by logistic regressions at the subject field level as well. In addition, the results are consistent with regard to scientific visibility. Universities with above-average numbers of citations in the years preceding the “initiative” were those that received the most “initiative” funding.

Regarding student numbers, “excellence” universities had, on average, about twice as many students as non-funded institutions in 2018, yet they were larger in relative terms 23 years earlier when they had about three times as many students (compared with non-funded universities). More specifically, “excellence” universities had about 24,005 students in 1995, compared with 26,690 in 2018 (plus 11%), whereas non-funded universities had about 9,174 students in 1995 and 13,854 in 2018 (a 51% increase). In other words, the overall growth of students between 1995 and 2018 was absorbed (both in absolute and relative terms) to a greater extent by universities with no “initiative” funding compared with those that received full funding.

A second set of descriptive results provides contextual information for H3 at both the subject field level and the university level between 2006 and 2017 (Table 3). A cross-tabulation between subject fields that received “initiative” funding in either one (GS or EC) or two lines (GS and EC) in the two phases revealed a staggering level of path dependency. We observe 2,248 subject fields in the first phase that received no funding, of which 2,215 (99%) received no funding in the second phase either. Among the 102 subject fields with one or two lines of “initiative” funding in the first phase, 91 (89%) received funding in the second phase as well. The overall “reproduction rate” at the subject field level, i.e., the diagonal of Table 3, divided by all observations, amounted to 98% (2,300/2,350). Based on a chi-square test, independence between the two funding phases was rejected at a high level of statistical significance. When interpreting Table 3, note that the “no funding” category includes subject fields in all three university categories mentioned above (Table 2). In other words, there are also subject fields at “excellence” universities that were not funded by the “initiative.”

While Table 3 provides information at the subject level, Table 4 offers funding information at the university level. We observed 35 universities with no “initiative” funding in the first phase, meaning that none of their subject fields received any “initiative” funding. Of these, 27 (77%) also received no follow-up funding in the second phase (Table 4). Among the 33 universities with at least one line of “initiative” funding in the first phase, 32 (97%) also received such funding in the second phase. The “reproduction rate,” i.e., the diagonal in Table 4, divided by all observations, amounts to 72% (49/68). Again, based on a chi-square test, independence was rejected at a high level of statistical significance.
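
A sketch of the chi-square test of independence used for Tables 3 and 4, applied to a phase-1 × phase-2 cross-tabulation (hypothetical variable names, scipy implementation):

```python
# Sketch of the chi-square independence test between the two funding phases.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("university_subjects.csv")  # hypothetical, as above
crosstab = pd.crosstab(df["lines_phase1"], df["lines_phase2"])  # 0, 1, or 2 funding lines

stat, p_value, dof, expected = chi2_contingency(crosstab)
print(stat, p_value)  # a very small p-value rejects independence between the phases

# "Reproduction rate": share of cases on the diagonal (same category in both phases),
# assuming both phases use the same categories.
reproduction_rate = np.diag(crosstab.values).sum() / crosstab.values.sum()
```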

Taken together, these results and in particular those from Table 3 provide support for H3: Subject fields (and universities) with funding in the “initiative’s” first phase were extremely likely also to receive support in the second phase, a clear indication of path dependency. As shown below, logistic regressions provide additional support for this descriptive finding.

4.2 Results from Logistic Regressions

We conducted separate logistic regression analyses for each funding phase, as outlined above. Our results, using observations from all universities and thus our largest sample, provided considerable support for H1–H3. There were some differences with regard to NTUs versus TUs on the one hand and subject fields with good bibliometric coverage on the other (see Appendix tables). In addition, we follow Breen et al.’s recommendation [56] that the absolute magnitude of EVs’ coefficients should not be compared across models, but rather their direction (positive or negative) and statistical significance (p-value).

First, and in support of H1, our analysis with all universities (Tables 5 and 6) revealed that in both funding phases, the number of professors significantly increased a subject field’s chances of being selected by the “excellence initiative” (all models). This result confirms similar findings at the university level [27]. Second, and in support of H2, the amount of external grant funding had explanatory power as well. In the first phase, both total grant funding at the subject field level (EV2) and DFG funding at the university level (EV4) significantly reduced the share of unexplained variance in the dataset (Table 5), and in the second phase, these two variables were significant (model 4; Table 6) only before EV3 was introduced (model 5; Table 6). Third, and in strong support of H3, success in the second phase of the “initiative” largely depended on success in the first phase.

Table 5. Logistic regression, first “initiative” phase (2006–2011), all universities.

https://doi.org/10.1371/journal.pone.0300828.t005

Table 6. Logistic regression, second “initiative” phase (2012–2017), all universities.

https://doi.org/10.1371/journal.pone.0300828.t006

There were some noteworthy differences between university groups. For NTUs, grant funding remained highly significant in the second phase, even when the consecration variable (EV3) was introduced, suggesting that grant funding was a more decisive factor in selecting subject fields at NTUs than at TUs (S8 Appendix, Tables 8a and 8b). In contrast, for TUs (S8 Appendix, Table 8b), and similar to the main results across all universities (Table 6), grant funding was not a significant predictor of excellence funding in the second phase once first-phase excellence funding (EV3) was taken into account. It is interesting to note that when total grant funding is entered nonlinearly (logged variables: logvar = ln(var + 1)), only fields with exceptionally high funding show significantly higher odds of receiving “initiative” funding in the second phase, regardless of whether they were funded in the first phase. However, the influence of first-phase funding remains the dominant factor (S11 Appendix, Tables 11e–g).
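
The logged specification mentioned above is simply logvar = ln(var + 1); a sketch of the corresponding robustness model, continuing the hypothetical names used in the earlier sketches:

```python
# Sketch of the logged-funding robustness model: logvar = ln(var + 1).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("university_subjects.csv")  # hypothetical, as above
df["log_grants_field"] = np.log(df["grants_field"] + 1)
df["log_dfg_grants_university"] = np.log(df["dfg_grants_university"] + 1)

model_log = smf.logit(
    "funded_phase2 ~ professors + log_grants_field + log_dfg_grants_university"
    " + students + funded_phase1",
    data=df,
).fit()
print(model_log.summary())
```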

Broadly defined academic domains also differed. While results for the natural sciences (S8 Appendix, Tables 8e–f) were quite similar to the analyses across all universities presented above, the pattern for the humanities and social sciences was simpler: grant funding (either EV2 or EV4) in the first “initiative” phase and prestige (EV3) in the second phase were the only significant factors (S8 Appendix, Tables 8g–j). Furthermore, when we examined subject fields with good WoS coverage, scientific visibility emerged as a relevant factor, in addition to grant funding (first phase), professors (second phase), and prestige (both phases) (S8 Appendix, Tables 8k–l).

We also note that adding the university level to the logistic regression analyses did not add explanatory power to our models. Neither foundation year nor geographical location had significant effects in our two-level analyses (S9 Appendix Table 9a-b).

Taken together, these findings are largely consistent with the descriptive results: The more professorial staff (EV1) and the more external grant funding (EV2) associated with a subject field, the more likely it was to obtain “initiative” funding. Also, the second phase was dominated by the first phase (EV3): Subject fields that received support in the first phase were extremely likely to receive additional “initiative” funding in the second phase as well.

Conclusion

Our results provide strong support for the early and mostly critical literature in that absolute size and (socially constructed) scientific prestige emerged as important institutional factors predicting success in the “initiative”. Although early analyses focused mainly on the university level (“elite universities”), we show that size and prestige effects can also be found at the more disaggregated level of subject fields. In accordance with H1, we found that subject fields with many professors and thus large disciplinary entities were more successful in getting “initiative” support than smaller ones. As mentioned in the literature review above, this result contrasts with a growing body of literature showing that scientific excellence seems to be not so much a feature of “big science” but rather typical of “little science,” to borrow a book title from de Solla Price [63].

In addition, our results are supportive of H2 in that subject fields flush with research cash had better chances of securing “initiative” grants. This finding points to selection procedures that were biased towards those who already had considerable grant money. As mentioned above, there is evidence that large grants given to Centers of Excellence (CoE) had the highest impact when they were awarded to lesser-performing groups [49] and that the concentration of research funding produces diminishing marginal returns if distributed to very few elite entities [50]. With regard to the German context, Gerhards [45] argues that grant funding seems to be a “fetish” in the German higher education landscape, particularly with regard to the “initiative”: Although it should be regarded simply as an input for conducting research, organizational actors in German science policy, most notably the DFG and the German Science and Humanities Council (Wissenschaftsrat) [61,64], have cultivated the notion that grant funding is equivalent to research success. Gerhards writes (our translation): “It produces a biased picture when the DFG publishes mostly absolute measures which are not weighted by staff. Yet, this biased picture has been taken up gratefully by large institutions which use it for self-staging and self-advertising” [45, pp. 44-46]. Gerhards goes on to say that using grant funding as a proxy for research success would be legitimate only if grants and publications were highly correlated. However, empirical evidence suggests otherwise, and he recommends that “a better institutionalization of bibliometric methods in Germany would align incentives with international standards and, in the long run, could improve Germany’s research performance in international comparisons” [45, p. 50]. We will return to this latter point below.

Furthermore, only between 102 (first phase) and 124 (second phase) subject fields among the 2,350 examined (4.3% and 5.3%, respectively) were funded under the “initiative’s” umbrella. In line with H3, the vast majority of subject fields with one or two lines of “initiative” funding in the first phase received funding in the second phase as well, a finding that is also strongly supported by our logistic regressions. Yet, given the low number of funded fields, the “initiative’s” potential impact on improving research capacities must have been very limited. Bonaccorsi et al. [13] showed that 16 German universities (out of 102, or 16%) with 34 subject fields produce research that scores among the top 10% most cited publications worldwide. This is a very small number compared to the Netherlands, which has 12 universities (out of 13, or 92%) and 37 subject fields producing research among the top 10% most cited publications worldwide. In other words, almost all Dutch universities host subject fields that produce publications among the top 10% most cited globally, whereas only a small share of German universities has such capabilities. It seems plausible to assume that the “initiative’s” impact would have been greater had it provided funding for a substantially larger number of subject fields. Of course, such an impact would have been possible only with a much larger investment in universities’ infrastructures [65,66].

Finally, there is the question of whether the “initiative” has had any impact so far, in particular with regard to its institutional mission of supporting “research excellence.” In our view, although the Matthew Effect in science has gained renewed attention in recent years, with several studies providing evidence for its continued relevance [67–70], available empirical studies suggest that this social mechanism was less important for the “initiative” than some had initially thought. In financial terms, the “initiative” has not increased inequality between funded and non-funded universities so far. There has been an overall upward shift to a higher funding level for all universities [35] because those without “initiative” funding managed to find other (mostly public) sponsors [27]. These results are in line with our descriptive finding that Germany’s public university system was stable between 1995 and 2018 (Section 4.1).

In terms of research impact, current evidence suggests no major changes resulting from the “initiative”. Based on bibliometric findings [31,32], the initiative’s international evaluation commission concluded that although “bibliometric investigations show an impressive qualitative performance regarding publications stemming from Excellence Clusters,” it remains “unclear to what extent new research priority areas emerged due to the support from the Excellence Initiative or whether the Excellence Initiative has instead led to a bundling of existing research capacities and hence increased visibility” [71, p. 5]. Recent bibliometric evidence suggests that universities funded through the “initiative” have shown a decreasing citation impact, whereas universities without such support have increased their citation rates [29,36].

Perhaps most importantly, early commentators believed that there would be an increasing functional differentiation between teaching and research universities. For example, Hartmann argued (our translation): “The German university system faces a permanent split between two types of universities: research universities and vocational universities. Research will be concentrated at the former, while the latter will conduct almost no research (like today the universities of applied sciences) but quickly prepare students for their job” [23]. Similarly, Winnacker, president of the DFG during implementation of the “initiative’s” first funding phase, emphasized the increasing functional differentiation (our translation): “The differences in quality between the universities are already considerable, they will grow further through the Excellence Initiative. (…) The [university] system will differentiate further. In addition to pure research universities that will follow standards of modern scientific research in their education, there will be universities that will attempt such standards in a few subject fields only, universities that will not even strive for such standards, and universities that will develop their strengths in practical orientation” [43].

However, no empirical evidence so far supports the far-reaching claim of an increased functional differentiation between teaching and research universities due to the “initiative”. Even the international evaluation commission concluded that “it is not possible to demonstrate an increased differentiation of the German university system as a whole as a consequence of the Excellence Initiative” [71, p. 5].

Our study has two limitations. First, since we do not have access to the exact amounts of the “initiative’s” funding for subject fields in universities (“university subjects”), we were not able to calculate (OLS) regression analyses using a cardinal dependent variable. Therefore, we welcome future studies with more disaggregated funding data, preferably retrieved with additional support from the DFG. Second, productive academic faculty might favor and thus self-select into large universities and research institutions because such contexts provide better access to research opportunities, collaboration, and personal career goals than smaller ones. Hence, effects at the field level (as measured in this paper) might be partially caused by unobserved individual behavior. Therefore, we welcome future studies that shed light on this issue, preferably in a cross-country and comparative fashion [72].

In summary, when we consider that the “initiative’s” selection procedures were biased towards large disciplinary entities with considerable grant money, leading to the continued support of very few subject fields, two tentative policy recommendations seem justified.

First, we reiterate Gerhards’ point that the selection of fields should be based on research performance, using bibliometric methods that follow international standards [73]. The current practice of equating grant funding with research quality has clearly not improved the research performance of German universities. Rather, we have reason to believe that the exclusive focus on grant funding has set incentives for research groups to grow, irrespective of their marginal productivity and their capabilities to conduct breakthrough research.

Second, the limited reach of support for subject fields (4% of all possible fields) raises the question of the “initiative’s” effectiveness in upscaling internationally competitive research capacities. The current practice of selecting very few subject fields in very few universities has not improved such capacities on a measurable level. To be globally competitive, German universities need much larger long-term financial support; otherwise, they will continue to trail North American and increasingly also Asian universities [13,66,74].

Our analysis covers the first and second phases of the “initiative”, yet it is quite plausible that its successor, the “excellence strategy” (which started in 2018), continues the existing structural pattern. Therefore, we encourage future studies to look into continuities between the “excellence initiative” and the “excellence strategy,” with special focus on whether there has been upward or downward mobility of university subjects (and entire universities).

Supporting information

S1 Appendix. List of technical universities (alphabetical).

https://doi.org/10.1371/journal.pone.0300828.s001

(DOCX)

S2 Appendix. List of non-technical universities (alphabetical).

https://doi.org/10.1371/journal.pone.0300828.s002

(DOCX)

S3 Appendix. Data from the Federal Statistical Office (StBA).

https://doi.org/10.1371/journal.pone.0300828.s003

(DOCX)

S4 Appendix. Aggregation scheme of StBA examination groups.

https://doi.org/10.1371/journal.pone.0300828.s004

(DOCX)

S5 Appendix. Concordance table of StBA with Archambault classification.

https://doi.org/10.1371/journal.pone.0300828.s005

(DOCX)

S6 Appendix. Assignment of Excellence Initiative funding to subject fields.

https://doi.org/10.1371/journal.pone.0300828.s006

(DOCX)

S9 Appendix. Two-level logistic regression analyses.

https://doi.org/10.1371/journal.pone.0300828.s009

(DOCX)

S12 Appendix. Logistic regression analyses with full samples.

https://doi.org/10.1371/journal.pone.0300828.s012

(DOCX)

References

  1. 1. DFG. Exzellenzinitiative des Bundes und der Länder zur Förderung von Wissenschaft und Forschung an deutschen Hochschulen. Press Release. Bonn, 3p.: 2005.
  2. 2. DFG. Erste Entscheidungen in der zweiten Phase der Exzellenzinitiative des Bundes und der Länder. Press Release. Bonn, 5p.: 2011.
  3. 3. DFG/WR. Bericht der Gemeinsamen Kommission zur Exzellenzinitiative an die Gemeinsame Wissenschaftskonferenz. Bonn2015.
  4. 4. Hüther O, Krücken G. Higher Education in Germany—Recent Developments in an International Perspective. Dordrecht: Springer; 2018.
  5. 5. Hüther O. Wandelbarkeit von Forschungsstrukturen in deutschen Universitäten. Eine Analyse der Landeshochschulgesetze. In: Heinze T, Krücken G, editors. Institutionelle Erneuerungsfähigkeit der Forschung. Heidelberg: Springer VS; 2012. p. 127-55.
  6. 6. Hüther O. Von der Kollegialität zur Hierarchie? Eine Analyse des New Managerialism in den Landeshochschulgesetzen. Wiesbaden: VS Verlag; 2010.
  7. 7. Ben-David J. The Scientist’s Role in Society. A Comparative Study. Englewood Cliffs, N.J.: Prentice-Hall; 1971.
  8. 8. Parsons T, Platt GM. The American University. Cambridge, MA: Harvard University Press; 1974.
  9. 9. Clark BR. Places of Inquiry: Research and Advanced Education in Modern Universities. Berkeley/Los Angeles: University of California Press; 1995.
  10. 10. Cole JR. The great American university: its rise to preeminence, its indispensable national role, and why it must be protected. 1st ed. New York: PublicAffairs; 2009. xii, 616 p. p.
  11. 11. Heinze T, Tunger D, Fuchs JE, Jappe A, Eberhardt P. Research and teaching profiles of public universities in Germany. A mapping of selected fields: Wuppertal: BUW; 2019.
  12. 12. Klumpp M, de Boer H, Vossensteyn H. Comparing national policies on institutional profiling in Germany and the Netherlands. Comp Educ. 2013;1(1):1–21.
  13. 13. Bonaccorsi A, Cicero T, Haddawy P, Hassan S-U. Explaining the transatlantic gap in research excellence. Scientometrics. 2016;110(1):217–41.
  14. 14. Jappe A, Heinze T. Institutional Context and Growth of New Research Fields. Comparison between State Universities in Germany and the United States. In: Heinze T, Münch R, editors. Innovation in Science and Organizational Renewal Sociological and Historical Perspectives. New York: Palgrave Macmillan; 2016. p. 142-87.
  15. 15. Langfeldt L, Borlaug SB, Aksnes DW, Benner M, et al. Excellence initiatives in Nordic research policies: Policy issue tensions and options. Oslo: NIFU, 2013.
  16. 16. Aksnes D, Benner M, Borlaug SB, Hansen HF, et al. Centres of Excellence in the Nordic countries. A comparative study of research excellence policy and excellence centre schemes in Denmark, Finland, Norway and Sweden. Oslo: NIFU, 2012.
  17. 17. Pruvot EB, Estermann T. Define Thematic Report: Funding for Excellence. Brussels: European University Association; 2014.
  18. 18. OECD. Promoting Research Excellence: New Approaches to Funding. Paris: OECD, 2014.
  19. 19. Bloch C, Kladakis A, Sørensen MP. Size matters! On the implications of increasing the size of research grants. In: Lepori B, Joenbload B, Hicks D, editors. Handbook of Public Research Funding. Cheltenham: Edward Elgar; 2023.
  20. Münch R. Die akademische Elite. Zur sozialen Konstruktion wissenschaftlicher Exzellenz. Frankfurt a.M.: Suhrkamp; 2007.
  21. Münch R. Wissenschaft im Schatten von Kartell, Monopol und Oligarchie. Die latenten Effekte der Exzellenzinitiative. Leviathan. 2006;34(4):466–86.
  22. Hartmann M. Die Exzellenzinitiative und ihre Folgen. Leviathan. 2010;38:369–87.
  23. Hartmann M. Die Exzellenzinitiative – ein Paradigmenwechsel in der deutschen Hochschulpolitik. Leviathan. 2006;34(4):447–65.
  24. Münch R, Baier C. Institutional struggles for recognition in the academic field: The case of university departments in German chemistry. Minerva. 2012;50(1):97–126.
  25. Münch R. Der Monopolmechanismus in der Wissenschaft. Auf den Schultern von Robert K. Merton. Berliner Journal für Soziologie. 2010;20(4):341–70.
  26. Münch R. Globale Eliten, lokale Autoritäten. Bildung und Wissenschaft unter dem Regime von PISA, McKinsey & Co. Frankfurt am Main: Suhrkamp; 2009.
  27. Buenstorf G, Koenig J. Interrelated funding streams in a multi-funder university system: Evidence from the German Exzellenzinitiative. Res Pol. 2020;49:103924.
  28. Cunningham J, Menter M. Transformative change in higher education: entrepreneurial universities and high-technology entrepreneurship. Ind Innov. 2021;28:343–64.
  29. Menter M, Lehmann E, Klarl T. In search of excellence: a case study of the first excellence initiative of Germany. J Bus Econ. 2018;88:1105–32.
  30. Möller T. Same objectives, different governance. How the Excellence Initiative and the Pact for Research and Innovation affect the German science system. fteval J Res Technol Policy Eval. 2018;45(1):4–8.
  31. Möller T, Schmidt M, Hornbostel S. Assessing the effects of the German Excellence Initiative with bibliometric methods. Scientometrics. 2016;109:2217–39.
  32. Hornbostel S, Möller T. Die Exzellenzinitiative und das deutsche Wissenschaftssystem. Eine bibliometrische Wirkungsanalyse. 2015.
  33. Bruckmeier K, Fischer G-B, Wigger BU. Status effects of the German Excellence Initiative. IJEF. 2017;9(3):177.
  34. Wohlrabe K, Bornmann L, Gralka S, De Moya Anegon M. Wie effizient forschen Universitäten in Deutschland, deren Zukunftskonzepte im Rahmen der Exzellenzinitiative ausgezeichnet wurden? Ein empirischer Vergleich von Input- und Output-Daten. Zeitschrift für Evaluation. 2019;18:9–27.
  35. Mergele L, Winkelmayer F. The relevance of the German Excellence Initiative for inequality in university funding. Higher Educ Pol. n.d.;35.
  36. Civera A, Lehmann E, Paleari S, Stockinger S. Higher education policy: Why hope for quality when rewarding quantity? Res Pol. 2020;49:104083.
  37. Leibfried S, Schreiterer U. Quo vadis, Exzellenzinitiative. Berlin: BBAW, Wissenschaftspolitik im Dialog; 2012.
  38. Leibfried S. Die Exzellenzinitiative. Zwischenbilanz und Perspektiven. Frankfurt a.M.: Campus; 2010.
  39. Gläser J, Weingart P. Die Exzellenzinitiative im internationalen Kontext. In: Leibfried S, editor. Die Exzellenzinitiative. Zwischenbilanz und Perspektiven. Frankfurt a.M./New York: Campus; 2010. p. 233–58.
  40. Bloch R, Keller A, Lottmann A, Würmann C. Making Excellence. Grundlagen, Praxis und Konsequenzen der Exzellenzinitiative. Bielefeld: Bertelsmann; 2008.
  41. Beaufaÿs S, Löther A. Exzellente Hazardeurinnen? Beschäftigungsbedingungen und Geschlechterungleichheit auf dem wissenschaftlichen Arbeitsmarkt. WSI Mitteilungen. 2017;69:348–55.
  42. Gawellek B, Sunder M. The German excellence initiative and efficiency change among universities, 2001–2011. Universität Leipzig, Faculty of Economics and Management Science, Working Paper No. 142; 2016.
  43. Winnacker E-L. Im Wettbewerb um neues Wissen: Exzellenz zählt. Forschung – Das Magazin der Deutschen Forschungsgemeinschaft. 2006;2(2):V–IX.
  44. Merton RK. The Matthew effect in science. The reward and communication systems of science are considered. Science. 1968;159(3810):56–63. pmid:5634379
  45. Gerhards J. Der deutsche Sonderweg in der Messung von Forschungsleistungen. Berlin: BBAW, Wissenschaftspolitik im Dialog; 2013.
  46. Wu L, Wang D, Evans JA. Large teams develop and small teams disrupt science and technology. Nature. 2019;566(7744):378–82. pmid:30760923
  47. Hemlin S, Allwood CM, Martin BR. What Is a Creative Knowledge Environment? In: Hemlin S, Allwood CM, Martin BR, editors. Creative Knowledge Environments. The Influences on Creativity in Research and Innovation. Cheltenham: Edward Elgar; 2004. p. 1–28.
  48. Heinze T, Shapira P, Rogers J, Senker J. Organizational and institutional influences on creativity in scientific research. Res Pol. 2009;38(4):610–23.
  49. Langfeldt L, Benner M, Sivertsen G, Kristiansen E, Aksnes D, Borlaug S. Excellence and growth dynamics: A comparative study of the Matthew effect. Science and Public Policy. 2015;42(5):661–75.
  50. Mongeon P, Brodeur C, Beaudry C, Larivière V. Concentration of research funding leads to decreasing marginal returns. Research Evaluation. 2016;25:396–404.
  51. Sauder M, Lynn F, Podolny J. Status: insights from organizational sociology. Annu Rev Sociol. 2012;38:267–83.
  52. Collins R. Some comparative principles of educational stratification. Harvard Educational Review. 1977;47(1):1–27.
  53. Brint S, Proctor K, Mulligan K, Rotondi M, Hanneman R. Declining academic fields in U.S. four-year colleges and universities, 1970–2006. J High Educ. 2012;83(6):582–613.
  54. Brint S, Proctor K, Hanneman RA, Mulligan K, Rotondi MB, Murphy SP. Who are the early adopters of new academic fields? Comparing four perspectives on the institutionalization of degree granting programs in US four-year colleges and universities, 1970–2005. Higher Education. 2011;61(5):563–85.
  55. Fox J. Applied Regression Analysis and Generalized Linear Models. 3rd ed. Thousand Oaks, CA: Sage; 2016.
  56. Breen R, Karlson K, Holm A. Interpreting and understanding logits, probits, and other nonlinear probability models. Annu Rev Sociol. 2018;44:39–54.
  57. Hosmer DWJ, Lemeshow SA, Sturdivant RX. Applied Logistic Regression. 3rd ed. Hoboken, NJ: Wiley; 2013.
  58. Hox J, Moerbeek M, van de Schoot R. Multilevel Analysis: Techniques and Applications. 3rd ed. New York: Routledge; 2017.
  59. Archambault E, Beauchesne OH, Caruso J. Towards a Multilingual, Comprehensive and Open Scientific Journal Ontology. Proceedings of the 13th International Conference of the International Society for Scientometrics and Informetrics. 2011. p. 66–77.
  60. Moed HF. Citation Analysis in Research Evaluation. Dordrecht: Springer; 2005.
  61. DFG/WR. Bericht der Gemeinsamen Kommission zur Exzellenzinitiative an die Gemeinsame Wissenschaftskonferenz. Bonn: 2008.
  62. DFG. Exzellenzinitiative auf einen Blick. Der Wettbewerb des Bundes und der Länder zur Stärkung der universitären Spitzenforschung. Bonn: 2013.
  63. de Solla Price DJ. Little Science, Big Science. New York: Columbia University Press; 1963.
  64. DFG/WR. Bericht der Gemeinsamen Kommission zur Exzellenzinitiative an die Gemeinsame Wissenschaftskonferenz. Bonn und Köln: Deutsche Forschungsgemeinschaft und Wissenschaftsrat; 2015.
  65. Jappe A, Heinze T. Research funding in the context of high institutional stratification. Policy scenarios for Europe based on insights from the United States. In: Lepori B, Jongbloed B, Hicks D, editors. Handbook of Public Research Funding. Cheltenham: Edward Elgar; 2023.
  66. Lepori B, Geuna A, Mira A. Scientific output scales with resources. A comparison of US and European universities. PLoS One. 2019;14(10):e0223415. pmid:31613903
  67. Bol T, de Vaan M, van de Rijt A. The Matthew effect in science funding. Proc Natl Acad Sci U S A. 2018;115(19):4887–90. pmid:29686094
  68. Nielsen M, Andersen J. Global citation inequality is on the rise. Proc Natl Acad Sci U S A. 2021;118(7).
  69. Madsen EB, Aagaard K. Concentration of Danish research funding on individual researchers and research topics: Patterns and potential drivers. Quant Sci Stud. 2020;1(3):1159–81.
  70. Ma A, Mondragón RJ, Latora V. Anatomy of funded research in science. Proc Natl Acad Sci U S A. 2015;112(48):14760–5. pmid:26504240
  71. IEKE. Internationale Expertenkommission zur Evaluation der Exzellenzinitiative. Endbericht. Berlin: Institut für Innovation und Technik; 2016.
  72. Fumasoli T, Goastellec G. Global Models, Disciplinary and Local Patterns in Academic Recruitment Processes. In: Fumasoli T, Goastellec G, Kehm BM, editors. Academic Work and Careers in Europe: Trends, Challenges, Perspectives. Heidelberg: Springer; 2014. p. 69–93.
  73. Jappe A. Professional standards in bibliometric research evaluation? A meta-evaluation of European assessment practice 2005–2019. PLoS One. 2020;15(4):e0231735. pmid:32310984
  74. Heinze T, von der Heyden M, Pithan D. Institutional environments and breakthroughs in science. Comparison of France, Germany, the United Kingdom, and the United States. PLoS One. 2020;15(9):e0239805. pmid:32997679