
Theoretical research without projects

  • Miguel Navascués ,

    Contributed equally to this work with: Miguel Navascués, Costantino Budroni

    Roles Conceptualization, Formal analysis, Investigation, Software

    miguel.navascues@oeaw.ac.at

    Affiliation Institute for Quantum Optics and Quantum Information (IQOQI) Vienna, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria

  • Costantino Budroni

    Contributed equally to this work with: Miguel Navascués, Costantino Budroni

    Roles Conceptualization, Formal analysis, Investigation

    Affiliation Institute for Quantum Optics and Quantum Information (IQOQI) Vienna, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria

Abstract

We propose a funding scheme for theoretical research that does not rely on project proposals, but on recent past scientific productivity. Given a quantitative figure of merit on the latter and the total research budget, we introduce a number of policies to decide the allocation of funds in each grant call. Under some assumptions on scientific productivity, some of such policies are shown to converge, in the limit of many grant calls, to a funding configuration that is close to the maximum total productivity of the whole scientific community. We present numerical simulations showing evidence that these schemes would also perform well in the presence of statistical noise in the scientific productivity and/or its evaluation. Finally, we prove that one of our policies cannot be cheated by individual research units. Our work must be understood as a first step towards a mathematical theory of the research activity.

1 Introduction

The introduction of performance-based funding at the end of the twentieth century, both at the level of institutions and single researchers, has had a significant impact on the current organization of the academic world. At the institutional level, research evaluation practices with the goal of distributing public funding have been introduced in several countries [1]. At the same time, universities and research institutions rely more and more on grants to pay salaries, which are obtained by single researchers or consortia via a competitive procedure, in parallel to an increase of the percentage of non-permanent positions and competition among scientists [2–5]. The goal of these policies is to improve the performance of the research system, e.g., at a national or European level. On the one hand, these policies are supposed to move funds from the “less efficient” to the “more efficient” institutions and researchers, thus optimizing the fund allocation; on the other hand, the competition among researchers is supposed to drive their productivity.

At present, most research policies rely on the unfortunate idea that there is just one way of conducting science. In fact, there are two: experiments and theory, and science cannot advance without either. Theoretical researchers propose mathematical models to predict the outcomes of future experiments; and their experimental counterparts test the validity of such models in the lab. The programs currently used by grant agencies to promote research seem to be solely conceived with experimental science in mind. Not surprisingly, they are quite unfit for grant applicants from the theoretical sciences, such as mathematicians, computer scientists and theoretical physicists.

Let us explain why. In order to apply for a grant, most funding agencies demand a research project, i.e., they require scientists to detail their research activities within a period of 2 to 5 years in the future. While this scheme reflects the way many experimentalists organize their research agenda, it is incompatible with the current practice of theoretical research. One cannot “plan” the discovery of mathematical calculus, quantum cryptography or neural networks. Quite the opposite, some of the most celebrated ideas in history have arisen during the course of an unrelated investigation (see, e.g., [6] for a few examples). It is by following these new threads that theorists sustain their scientific productivity. On the contrary, stubbornly sticking to a single research line no matter what is a sound predictor of scientific sterility.

Agencies also require theorists to lay out their “methodology”. Namely, they expect theorists to explain how they intend to prove this or that theorem. The honest answer is that they don’t know. If they did, the theorem would be proven already, and they would not be applying for funds to crack it.

For the working theorist, applying for research funds is therefore a long and unethical task. It involves concocting an elaborate fantasy where the theorist pretends to know what theorems he/she will be proving in the next few years and through which particular mental processes. This activity, not clearly correlated with the applicant’s success, takes a lot of time away from research [7]. The product of these efforts, the project proposal, has no value whatsoever for society, and yet it is kept secret on the grounds of avoiding plagiarism. This lack of transparency makes grant panels unaccountable for any decision they make.

Let us stress that the current grant system pushes theorists to lie in order to get funded. Indeed, if theorists carry out their research sensibly, there will invariably be a mismatch between the original goals of the project and the final research output. So far no major scandal has transpired because the evaluators of a grant’s final reports are researchers themselves: acknowledging that the system is flawed, they almost always award a positive assessment. This state of affairs, though, could change from one day to the next, making thousands of theoretical researchers liable to a civil suit for fraud. In this direction, Gillies documents grant rules which advocate for the punishment of researchers who do not achieve the project goals [6], e.g., forbidding them from applying again for the next two years, or reporting them to the head of their research institution.

We have reached this situation because, up to now, research policies have been based more on political fashion than on solid science. To progress beyond this point, we need an open scientific debate on research funding practices, where the scientific method is applied to the problem, i.e., with hypotheses, models, and experiments [8]. The problem of research funding can be roughly divided into two sub-problems: first, the identification of the best evaluation method and corresponding parameters, e.g., of productivity or impact, and then the problem of maximization of such parameters given the available financial resources.

Regarding the first problem, for the reasons above we believe that, at least for theoretical sciences, agencies and institutions should focus on funding people rather than projects [9]. That is, the evaluation of a researcher should be based on past scientific merits, as opposed to megalomaniac delusions.

The bulk of the present paper addresses the second problem. Namely, presuming the existence of a quantitative measure of research productivity, estimated through the analysis of recent scientific activity, we investigate practical methods to optimize the total production of a global research system.

We start by modeling the research system as a collection of agents or research units. A research unit can represent an individual scientist, a research group or a whole research institute. Each research unit possesses a “scientific productivity function” that relates how much science a given research unit can produce with the funds it holds to conduct research. We allow productivity functions to be probabilistic and time-dependent. They are also unknown, i.e., neither the research agency nor the scientists themselves can tell what they look like.

Relying on our mathematical model of the research activity, we show that there exist systematic procedures to decide the budget distribution at each grant call with the property that the total productivity of the research community will be frequently not far off its maximum value.

The simplest of such procedures is what we call the rule of three, by which the funds for research unit i after grant call k + 1 are proportional to the research output of the unit during the kth term. If the total budget for science during the (k + 1)th term is X euros, this means that

(1) x_i^(k+1) = X · s_i^(k) / ∑_{j=1}^N s_j^(k),

where s_i^(k) denotes the research output of unit i during the kth term and x_i^(k+1) its funding for the (k + 1)th term. The returns of this policy must be contrasted with those of “excellence” schemes, whereby, under equal research outcomes, researchers who were funded in the past have a greater chance of receiving further funds. Such policies can be shown to converge to configurations where the total scientific productivity is an arbitrarily small fraction of the maximum achievable by the research system. They are therefore best avoided.

We also study to what extent research policies can be cheated by dishonest research units. We conclude, for example, that hacks of the rule of three would require either influencing the evaluation stage or a coalition of research units. They are hence unlikely.

An important aspect of the current funding system is the fact that the increased competition and instability generate pressure among researchers, the so-called “publish or perish” culture, with possible negative consequences discussed in the literature, such as the focus on popular topics, short-term goals, and conservative research [6, 10, 11] and the proliferation of useless research or even dishonest practices [12–15].

The new funding framework that we advocate for is probably not a solution to the above problems, which are also closely connected to evaluation practices. Our framework, however, does not force theorists to engage in unethical practices, it is transparent and does not require the applicant to waste months of working time in writing project proposals. In addition, our mathematical analysis of scientific populations suggests that our grant schemes are relatively free from the so-called Matthew effect (i.e., “the rich get richer and the poor get poorer.”) [16].

The paper is organized as follows. In Sect. 2, we will introduce and motivate the use of a funding scheme not based on project proposals, but rather on the evaluation of recent-past productivity of single scientists or research institutions. In Sect. 3, we will discuss which mathematical properties an idealized productivity function should possess. In Sect. 4 we will define mathematically the problem of maximizing the total productivity of the system, given constraints on the total funding, and discuss possible ways of solving it, assuming the knowledge of all the parameters of the problem. In Sections 5 and 6 we will adapt the previous analysis to the more realistic case of unknown model parameters, and explain how to extract a funding policy in this situation. In particular, we will perform numerical simulations to compare the performance of the different funding schemes under noise. In Sect. 7 we will analyze the security of one of our policies against dishonest players. In Sect. 8, we will discuss possible extensions of the model to handle more complex situations, e.g., competing funding agencies. Finally, we will present our conclusions.

2 Funding policies not based on research projects

The purpose of public research agencies is to help scientists generate useful knowledge, and the problem they face in each grant call is how to make sure that their funding reaches those in a position to generate such knowledge. Most funding agencies assign grants with a competitive process based on a peer-review evaluation of research proposals. As we argued in the introduction, this approach is not suitable for theoretical sciences because theorists cannot predict their future work activity.

A more appropriate indicator of the quality of future theoretical research is the recent past performance of the grant candidate. This motivates an alternative grant scheme for theoretical sciences based on the principle that, if candidates have recently shown a remarkable scientific productivity, it is worth funding them for the next few years so that they keep doing their good work. But what does “good work” mean?

There are many ways to quantify scientific productivity, and deciding which one suits best reflects a political stance. All such approaches fall into one of two main categories, namely, those based on bibliometrics and those based on peer-review. Bibliometric data has the advantage of being easy and cheap to obtain, e.g., through online databases containing publication and citation data; hence, it is often preferred by managers and administrators. However, there has been a proliferation of bibliometric indicators of scientific productivity and impact [17, 18], often without a clear understanding of their pros and cons, from the perspective of evaluation and decision making. It has been argued that many of these indicators reflect “what can be easily counted, rather than what really counts” [18].

From the point of view of researchers, some methods may be considered fairer, such as expert assessment of the most important recent papers of the candidate. However, peer-review may be impractical in terms of cost and time, and even be partially flawed (see, e.g., [19, 20] and references therein) showing, for instance, low reliability in the evaluations [21] or low ratings for highly novel ideas [22]. Several authors have investigated whether peer-review and bibliometrics can be used together and how much they agree [23–25]. Bollen et al. go further and propose an alternative funding scheme whereby the evaluation is conducted by the whole scientific community [26, 27].

In summary, notwithstanding the growing interest in the problem of research evaluation and the important results achieved so far, we feel that there is no general agreement on what the best evaluation methods are. As a consequence, we will leave this problem open and just start from the assumption of the existence of an abstract indicator of “scientific productivity”.

Once a measurable figure of scientific productivity is established, the question is how to decide how much funding each research unit should receive. This is the problem we tackle in the rest of this paper. Curiously enough, we find that, given an agreed measure of scientific productivity, reaching an optimal allocation of research funds is not a political problem, but a mathematical one. In fact, we will show that under ideal conditions there exists a systematic procedure to decide the budget distribution at each grant call with the property that the total productivity of the research community will be frequently close to its optimal value.

At this point, it is important to remark which problems we are not addressing in our work and what possible use we can recommend or discourage. First, given the possible negative consequences of the publish-or-perish culture and the attitude towards experimenting with research policies, discussed in the previous section, we believe only a fraction of the entire research budget, e.g., at a national level, should be assigned through competitive grants. It is still an open question how much competition is desirable in academia, see, e.g., the discussion in Ref. [28] and references therein. Second, we leave open the question of at which level the evaluation and funding distribution should be applied. We will speak, generically, of “research units”. Each research unit could be a single scientist, a research group, a department, a small research institution, or a university. We will occasionally speak of “a scientist” to provide examples and motivations for our assumptions, but the results of our work are independent of this choice. Third, we would like to remark that the methods and computational tools presented in this work are intended to aid human decisions. We do not advocate for a scenario where scientists are constantly evaluated by an algorithm that decides and directly modifies their salary. Finally, it is important to remark that political decisions are sometimes disguised as technical or scientific ones, e.g., budget cuts for universities and research institutions may be justified as technical decisions for the optimal use of available financial resources. The distinction between technical and political decisions should be made as clear as possible. We hope that separating the problem of funding from that of evaluation may bring clarity to the political debate.

3 Scientific productivity functions: Definition and properties

Consider an idealized scenario where there is just one grant agency administering all public funds for research, and N “players” (using a game-theoretic terminology) or research units apply for funding in consecutive grant calls. For further simplicity, we will adopt first a simple model where each player i = 1, …, N has a time-independent productivity function gi. That is: if we award a player x euros and require it to use these funds within a time span T, then the scientific productivity of this player after time T, however we measure it, will be gi(x). Moreover, this quantity will be the same, independently of when we awarded the player the research funds. In Sect. 6, we will present a generalization to probabilistic and time-dependent productivity functions.

We have three main assumptions about the productivity functions gi, which we will further discuss and motivate in the following. Namely,

  (a) gi(0) = 0, i.e., the productivity is zero when the budget is zero;
  (b) gi is non-decreasing, i.e., if we increase the budget we should not decrease the productivity;
  (c) gi is concave, i.e., the slope of the function is not increasing.

An example of a productivity function satisfying (a)-(c) is presented in Fig 1.

Fig 1. Expected productivity gi of a research unit as a function of its budget x.

This picture illustrates the three main assumptions: the function is zero for zero budget, it is non-decreasing, and it is concave.

https://doi.org/10.1371/journal.pone.0214026.g001

Assumption (a) is quite straightforward: we do not expect a scientist without a salary to produce any science. There are indeed examples of outstanding individuals, such as Einstein, Bose or Gosset (Student), who made important theoretical contributions while working outside the academic world. However, all those individuals also had a salary, i.e., they had a monthly money input x to play with.

For assumption (b), we expect that if the player behaves rationally, the productivity function should not decrease with x. Indeed, suppose for example that g(x0) > g(x), for some x > x0, see Fig 2. Then the player awarded with x > x0 could simply spend x − x0 euros to organize a conference and use the remaining x0 euros to fund its research. Effectively, the player would then be operating according to a new non-decreasing productivity function g̃(x) = max_{0 ≤ y ≤ x} g(y), with the property g̃(x) ≥ g(x) for all x.

Fig 2. The productivity of a research unit as a function of the funding x.

As a function of x, the scientific productivity g of a rational player cannot have: (a) decreasing regions; or (b) convex regions. In both cases, using the same budget, the research unit can switch to a more favorable productivity function (dashed line) that is increasing and concave.

https://doi.org/10.1371/journal.pone.0214026.g002

Similarly, one can argue that any productivity function must be approximately concave, i.e., assumption (c). For suppose that, on the contrary, the increasing function g(x) is convex in the region [x0, x1], see Fig 2. Fix two budgets x̃0 ≤ x0 and x̃1 ≥ x1 and consider the following research strategy: if the funding x satisfies x ∉ (x̃0, x̃1), then the scientist conducts research as usual, i.e., it will produce an output g(x). If, on the contrary, x ∈ (x̃0, x̃1), then there exists a number 0 ≤ λ ≤ 1 such that x = λx̃0 + (1 − λ)x̃1. In this case, we require the scientist to spend funds at the rate corresponding to the budget x̃0 for a fraction λ of the total duration of the grant, and at the rate corresponding to x̃1 for the remaining fraction 1 − λ (so that the total expenditure is λx̃0 + (1 − λ)x̃1 = x). Assuming that, under a constant monthly salary, scientific productivity is time independent (namely, a scientist working for 2t months will produce twice as much as the same scientist working for t months under the same salary), the total productivity will be λg(x̃0) + (1 − λ)g(x̃1). As shown in Fig 2, one can choose x̃0, x̃1 such that the new effective productivity function g̃ the scientist is operating under is concave. Moreover, g̃(x) ≥ g(x) for all x ≥ 0.

The only problem with the above argument is that scientific productivity is just approximately linear with time. Indeed, one cannot expect 1000 postdocs to advance significantly a new research line if they just have one day to do so (and we are overlooking the fact that very few would accept being employed for such a short time!). Hence, if x̃0 and x̃1 are very distant and the fraction of time spent at one of them is very short, the previous scheme is not realistic.

In the following, though, we will assume for simplicity that the individual productivity is a concave function. This may not be very accurate to model the activity of a single scientist, but should be a good approximation to assess the productivity of a large group or a whole research institute. In sum, the shape of function gi(x) is expected to be approximately of the form depicted in Fig 1.

Note that, from the conditions of concavity and gi(0) = 0, it follows that

(2) x · dgi/dx(x) ≤ gi(x), for x ≥ 0.

Indeed, compute a first-order Taylor expansion of gi(0) around x. That gives us

(3) 0 = gi(0) = gi(x) − x · dgi/dx(x) + (x²/2) · d²gi/dx²(c),

for some c ∈ [0, x]. Since gi(x) is concave, its second derivative is smaller than or equal to zero [29]. It follows that the right hand side of the above equation is upper bounded by gi(x) − x · dgi/dx(x), and hence x · dgi/dx(x) ≤ gi(x). Eq (2) will be extensively used throughout the paper. If the inequality in (2) is strict for x > 0, we will say that the function gi(x) is curved at the origin. Intuitively, this means that, for any a > 0, the productivity function gi(x) is not a straight segment from x = 0 to x = a.

A family of productivity functions satisfying all these properties and rich enough to model interesting grant scenarios is the one given by “power functions” of the form g(x) = Ax^α, where A > 0, α ∈ (0, 1). This family was already considered in [30], where an attempt was made to estimate the average productivity function of a group leader. Moreover, the same power function, with an exponent smaller than (but close to) 1, was obtained by analyzing total citation counts (across 26 scientific disciplines) versus funds (Higher Education expenditure on Research & Development expressed in Purchasing Power Parity dollars) for OECD countries [31].

4 The problem of fund allocation

Let X be the total funding that the agency can award in a given grant call. The goal of the funding agency is to identify the distribution of funds that maximizes the research output, given upper and lower bounds of the form x_i^− ≤ x_i ≤ x_i^+ on each player’s budget xi. The upper bounds stem from both the unwillingness of the individual to coordinate a large research group/institution and/or the desire of the funding agency not to concentrate a large amount of research funds in the hands of a few players. The lower bounds could correspond to negotiated minimum budgets for each research institute or public servant. Through the rest of the paper, the set of constraints

(4) x_i^− ≤ x_i ≤ x_i^+, for i = 1, …, N, and ∑_{i=1}^N x_i ≤ X,

will be denoted the funding conditions. In case x_i^− = 0, x_i^+ = ∞, for i = 1, …, N, i.e., in case the only restriction on the individual budgets is that they are non-negative, we will speak of free funding conditions. In case x_i^+ < ∞, we will speak of capped funding conditions.

Ultimately, any funding agency wants to solve the optimization problem:

(5) g*(X) ≡ max ∑_{i=1}^N gi(xi),

where the maximum is taken over all budget configurations (x1, …, xN) satisfying the funding conditions (4).

Since the funding conditions define a convex set and the objective function is concave, any local maximum of (5) is also a global maximum. In other words: independently of our current budget configuration, one can always identify the direction towards the optimal productivity by exploring how the objective function grows locally. It is also easy to prove that, as a function of the total funding X, g*(X) is also concave.

For free funding conditions and fully homogeneous productivity functions, i.e., gi = g1 for i = 2, …, N, the best strategy turns out to be distributing the funding equally among the researchers, in order to exploit the greater initial gradient of their productivity functions. That is, the solution of the above problem is xi = X/N for all i, with g*(X) = N·g1(X/N). If g1(x) admits a first derivative at x = 0, for N ≫ 1, the latter quantity tends to X·dg1/dx(0).

Unfortunately, scientists can have very different productivity functions. Consider a scenario where each scientist i has a power productivity function

(6) gi(x) = Ai·x^αi, with Ai > 0 and αi ∈ (0, 1).

In Appendix A it is shown that the maximal productivity of this scientific population is given by

(7) g*(X) = ∑_{i=1}^N Ai·(Ai·αi/μ(X))^(αi/(1−αi)),

where μ(X) is computed by solving the equation

(8) ∑_{i=1}^N (Ai·αi/μ)^(1/(1−αi)) = X.
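As a minimal numerical sketch (ours, not part of the original analysis), Eqs (7) and (8) can be evaluated with a one-dimensional root search on μ. The helper below assumes free funding conditions and power productivity functions; the function name and the 200-step bisection are arbitrary choices:

```python
import numpy as np

def optimal_allocation_power(A, alpha, X):
    """Solve Eq (8) for mu by bisection, then return the optimal budgets
    x_i = (A_i*alpha_i/mu)**(1/(1-alpha_i)) and g*(X) of Eq (7),
    for power productivity functions g_i(x) = A_i * x**alpha_i."""
    A, alpha = np.asarray(A, float), np.asarray(alpha, float)

    def total_funds(mu):
        # left-hand side of Eq (8); strictly decreasing in mu
        return np.sum((A * alpha / mu) ** (1.0 / (1.0 - alpha)))

    lo, hi = 1.0, 1.0
    while total_funds(hi) > X:      # widen the bracket upwards
        hi *= 2.0
    while total_funds(lo) < X:      # and downwards
        lo /= 2.0
    for _ in range(200):            # bisection on mu
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total_funds(mid) > X else (lo, mid)
    mu = 0.5 * (lo + hi)
    x = (A * alpha / mu) ** (1.0 / (1.0 - alpha))
    return x, float(np.sum(A * x ** alpha))
```

For instance, optimal_allocation_power([2.0, 1.0], [0.3, 0.7], X=10.0) returns the optimal budget split of a two-player population together with its maximal productivity.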

Example

It is at this point instructive to try to apply this simple model to “real data”. Of course, this example is only illustrative of certain peculiar properties of the productivity functions and of the funding model. For instance, for simplicity, we will measure productivity simply by counting the number of papers, which is clearly a terrible quantifier that we do not endorse. Moreover, contrary to the method presented in Sect. 5, the current example uses the assumption of the specific productivity function of Eq (6), which is in general not necessary. Finally, we do not claim that the numerical values obtained are particularly realistic, as they are extracted from only two data points and we provide no statistical analysis. Let us first go through the details of the example and, then, discuss at the end.

Since 1996, FWF Austrian Science Fund’s START program provides the successful applicant with a funding amount between 0.8 and 1.2 million euros, to be spent in six years [32]. The eligibility requirements demand that the doctoral degree of applicants be completed no less than two and no more than eight years before the deadline for submission of applications. It is thus not unreasonable to assume that most successful candidates did not have any prior funding, other than their own postdoc salary, before receiving the START grant. We randomly selected six START awardees, all of whom work either in theoretical physics or mathematics, and estimated their individual scientific productivities by counting their number of peer-reviewed published papers in the six years prior to the year of the award and also in the six following years. That provided us with two productivity points for each candidate i. Complemented with the two corresponding funding inputs (the salary of a Senior Postdoc in Austria for six years, and the maximum START funding of 1.2 million euros), we had enough information to infer Ai, αi for each researcher.
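A two-point fit of this kind reduces to solving two equations in A and α. The snippet below is a sketch of that step with invented numbers, since the publication counts and salary figure behind Table 1 are not reproduced here:

```python
import numpy as np

def fit_power_productivity(x1, y1, x2, y2):
    """Fit g(x) = A * x**alpha through two (funding, output) points,
    as in the example: alpha from the log-log slope, then A."""
    alpha = np.log(y2 / y1) / np.log(x2 / x1)
    A = y1 / x1 ** alpha
    return A, alpha

# Invented numbers, for illustration only (not the data behind Table 1):
# 20 papers on a 0.3 M euro six-year postdoc salary, 40 papers on the
# 1.2 M euro START budget.
A, alpha = fit_power_productivity(0.3e6, 20.0, 1.2e6, 40.0)
print(A, alpha)   # here alpha = log(2)/log(4) = 0.5
```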

Just for the sake of illustration, we have adopted the number of publications as a figure of merit. Since all the considered researchers received their START grants between 2007 and 2011, one would expect them to have been raised in the culture of “publish or perish”. It is therefore reasonable to assume that most of them dedicated a substantial amount of effort to maximizing their publication number.

The parameters of the so-computed productivity function for each researcher are displayed in Table 1.

Note that all exponents α are between 0 and 1, in agreement with our assumption that productivity functions are increasing and concave.

The total amount of funds destined to these six researchers was X = 6 × 1.2 million euros. Since the FWF distributed the funds equally among the researchers, the total productivity of this population due to the START grant is ∑i gi(1.2 × 10^6) ≈ 184 publications. Using Eq (7), however, we obtain a maximal productivity of g*(X) ≈ 225. This is obtained by distributing budget X in the way shown in Fig 3.

Fig 3. Optimal distribution of START funds over the N = 6 researchers.

https://doi.org/10.1371/journal.pone.0214026.g003

If the aim of the START program is to maximize the number of publications, then the program is just operating at 100 × 184/225 ≈ 81% of its optimal yield. This percentage decreases as we increase the total funding X. Indeed, take X = 100 million euros. In that case, an equal redistribution of the budget would produce a research output of 592 publications. In contrast, the optimal allocation of such funds would give rise to g*(X) ≈ 886. The performance of the egalitarian fund allocation would thus be 100 × 592/886 ≈ 67%. Of course, the aim of this example is to provide a simple illustration of our method, rather than to criticise the egalitarian fund distribution.

What is fundamental to notice is that, despite the extreme and unrealistic simplification of the model in this example, in particular the evaluation of “productivity” as a one-dimensional parameter, the player that obtains most of the funding in Fig 3 is not necessarily “the best researcher”. In fact, by changing the total amount X available, one would change the optimal distribution of funds and, hence, the (purported) induced “ranking” among researchers. More specifically, researchers with αi ≪ 1 would receive most of the funds for low values of X, e.g., because they excel at working alone or in small groups, while researchers with αi ≈ 1 would claim the greatest portion of the science budget in the high X regime, e.g., because they excel at directing large research groups. This goes against the (possibly commonly accepted) idea that evaluation committees should choose, among a group of candidates for a grant, the one with the highest scientific productivity, irrespective of the available resources. This concludes our example.

As an alternative to estimating the function parameters, one can apply a very intuitive rule of thumb to decide fund allocation. It consists of transferring funds between the different players until their average productivity rates are as close as possible. That way, we arrive at a final distribution of funds (y1, …, yN) such that, for any i ≠ j, i, j = 1, …, N,

(9) gi(yi)/yi ≈ gj(yj)/yj.

In principle, one can achieve such a configuration by solving the optimization problem:

(10) max ∑_{i=1}^N Gi(xi), subject to the funding conditions (4),

where Gi(x) ≡ ∫_0^x gi(t)/t dt. It can be easily proven that, for each i, Gi(x) is a concave function. This means that the maximization (10), like (5), does not risk getting stuck in local maxima.

Define g♯(X) ≡ ∑_{i=1}^N gi(yi), where (y1, …, yN) is the configuration of funds maximizing (10). Since this configuration does not, in general, coincide with the one maximizing (5), we expect that g♯(X) < g*(X), i.e., this manner of allocating funds will not be optimal in general.

In this regard, consider a bipartite (N = 2) scientific population where player 1 has an almost constant productivity function, g1(x) = 1 for x > 0 (and g1(0) = 0), while player 2’s productivity function is linear, i.e., g2(x) = x. Then the optimal funding configuration consists in assigning player 1 an infinitesimal amount of funds (x1 → 0+), while giving player 2 the rest (x2 → X). The supremal (not maximal) productivity is thus g*(X) = 1 + X. On the contrary, using the above rule of thumb, it is easy to see that, for X < 1, y1 = X, y2 = 0, and so g♯(X) = 1. As X gets close to 1, the fraction g♯(X)/g*(X) tends to 1/2. Not only do we not achieve the maximal productivity, but the funding distributions in one case and the other are completely different!

Nonetheless, it is possible that, while not being optimal, g(X) is not that far off the optimal scientific productivity. In this line, we have the following result.

Theorem 1 Consider a scientific population characterized by productivity functions {gi}_{i=1}^N, subject to capped funding conditions (xi^+ < ∞ for all i). Then

(11) g♯(X) ≥ g*(X)/2.

In other words: even though the grant scheme (21) is suboptimal, its performance is, at worst, half of the optimal one. Moreover, due to the example above, the constant 1/2 cannot be improved. See Appendix B for a proof.

As shown in Appendix D.1, for free funding conditions, and provided that the slope of each function gi at x = 0 is “big enough” (in the precise sense spelled out in Appendix D.1), we have that the configuration (y1, …, yN) defined in (9) will satisfy

(12) yi = λ·gi(yi), for i = 1, …, N,

for some λ > 0.

Note that the slope condition holds for power productivity functions (since dgi/dx(0) = ∞ for all i). For populations of research units described by such functions, we can thus use Eq (12) to derive an explicit expression for g♯(X), namely:

(13) g♯(X) = ∑_{i=1}^N Ai·(λ(X)·Ai)^(αi/(1−αi)),

where λ(X) is obtained by solving the equation:

(14) ∑_{i=1}^N (λ·Ai)^(1/(1−αi)) = X.
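Eqs (13) and (14) can be evaluated in the same spirit as the sketch after Eq (8), now with the root search on λ. Again a hypothetical helper of our own, assuming free funding conditions:

```python
import numpy as np
from scipy.optimize import brentq

def rule_of_thumb_power(A, alpha, X):
    """Eqs (13)-(14): solve for lambda(X) with a root solver, then return the
    configuration y_i = (lambda*A_i)**(1/(1-alpha_i)) and g#(X)."""
    A, alpha = np.asarray(A, float), np.asarray(alpha, float)
    expo = 1.0 / (1.0 - alpha)

    def excess(lam):                 # sum_i y_i(lambda) - X, increasing in lambda
        return np.sum((lam * A) ** expo) - X

    hi = 1.0
    while excess(hi) < 0.0:          # bracket the root of Eq (14)
        hi *= 2.0
    lam = brentq(excess, 0.0, hi)
    y = (lam * A) ** expo
    return y, float(np.sum(A * y ** alpha))
```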

Applying formula (13) to the scientific population described in Table 1 of the example, with X = 6 × 1.2 million euros, we obtain g♯(X) ≈ 213 publications. This represents an efficiency of 100 × g♯(X)/g*(X) ≈ 95%. For X = 100 million euros, we obtain g♯(X) ≈ 861, with an efficiency of 100 × g♯(X)/g*(X) ≈ 97%. This allocation scheme thus seems to give a good performance when applied to real scenarios.

Table 1. Power productivity functions of a population of N = 6 scientists.

https://doi.org/10.1371/journal.pone.0214026.t001

5 Funding policies

Unfortunately, neither the funding agency nor the scientists themselves know the explicit form of their productivity functions. So how can a funding agency expect to solve problem (5)?

In the following two sections, we provide a number of automated methods to carry out this task. Under some assumptions on scientific production, some of these methods are guaranteed, on their own, to steer the productivity of a scientific population near its maximum possible value.

Nonetheless, these computational tools are intended to be used by human agents as an aid to reach a final budget decision. Note that the sole purpose of these tools is to maximize a given figure of merit, irrespective of any other considerations. We strongly doubt that the whole scientific enterprise can be reduced to an optimization problem. Thus, by removing human intervention completely from scientific policy decision-making, we risk reaching a dystopian scenario where any aspect of science other than an agreed objective function is viewed as an obstacle towards the maximization of the latter. On the other hand, we know, from the world of chess, that in some situations the best decision-makers are neither human nor artificial, but a team of both kinds of entities. We are therefore confident that funding agencies will greatly benefit from the ideas that we present next.

We will start by dividing time into terms of s years each. At the end of the kth term, the funding agency announces the (k + 1)th call, and, after a proper evaluation, distributes the funds in such a way that player i receives xi^(k+1) euros to be spent on the (k + 1)th term.

Now, let us forget for the time being that we ignore the objective function ∑i gi(xi). A very effective tool to solve maximization problems like (5) is the projected gradient method [33]. The output of this method is a sequence of feasible budgets x^(k) = (x1^(k), …, xN^(k)) satisfying property (15), namely, that their total productivity approaches that of the optimal configuration x* = (x1*, …, xN*). Each budget is obtained from the previous one by the following iterative equation:

(16) x^(k+1) = πB(x^(k) + ϵ·∇g(x^(k))),

where ϵ > 0 is a free parameter known as the learning rate and πB(z) denotes the closest vector to z (in Euclidean norm) belonging to the set B of allowed budget configurations. Computing πB can be cast as a semidefinite program [34], a type of optimization problems which we know how to solve efficiently.
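For the special case of free funding conditions, πB admits a simple sort-based implementation (the well-known Euclidean projection onto a scaled simplex). The sketch below illustrates that special case only, not the general semidefinite program mentioned above; the routine name is ours:

```python
import numpy as np

def project_free_budget(v, X):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = X}, i.e. pi_B for
    the free funding conditions, via the standard sort-based algorithm."""
    v = np.asarray(v, float)
    u = np.sort(v)[::-1]                        # components in decreasing order
    cumulative = np.cumsum(u) - X
    ranks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - cumulative / ranks > 0.0)[0][-1]
    theta = cumulative[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)
```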

Now suppose that, at the (k + 1)th call of the grant, we chose to distribute the funds according to x^(k+1) in Eq (16), and that we repeated this operation in all subsequent calls. Since ∑i gi is (approx.) concave, by Eq (15), very frequently the current budget distribution would (approx.) maximize the total scientific output of the community.

Our problem is, however, that we don’t know ∇g. Consider then the following modification of the iterative equation:

(17) x^(k+1) = πB(x^(k) + ϵ·Δ^(k)),

where we have approximated the gradient

(18) (∇g)_i = dgi/dx(xi^(k))

by the vector

(19) Δi^(k) = (si^(k) − si^(k−1)) / (xi^(k) − xi^(k−1)),

where si^(k) is the declared scientific production of player i at the end of the kth term (that should equal gi(xi^(k)), if the player is being honest, see Section 7).

One can show that the iteration scheme (17) also satisfies Eq (15), see Appendix E. Note, though, that the vector component Δi^(k) can be computed given the funding received by the candidate in the last two grant calls, xi^(k−1) and xi^(k), and the corresponding research outputs si^(k−1), si^(k). Hence this procedure can be implemented in practice. By using Eq (17) to decide the budget distribution in the (k + 1)th term, we make sure that, in the long run, research funds are distributed in an optimal way. Note that there may be situations where we lack data to compute Δi^(k), e.g.: the candidate just finished the PhD studies, or had a child-raising break. In those cases, one can replace Δi^(k) by s̃i/x̃i, where s̃i and x̃i are, respectively, the last known scientific production of the candidate and the science funds it was enjoying at the time.
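Putting Eqs (17)–(19) together, one call of the gradient scheme could look as follows. This is a sketch under free funding conditions, reusing the projection routine above; it does not yet handle the degenerate case xi^(k) = xi^(k−1), which is discussed in Section 6:

```python
import numpy as np

def gradient_scheme_step(x_prev, x_curr, s_prev, s_curr, X, eps):
    """One call of the gradient scheme, Eq (17): finite-difference gradient
    estimate of Eq (19), then projection onto the budget set.
    Reuses project_free_budget() from the sketch above."""
    x_prev, x_curr = np.asarray(x_prev, float), np.asarray(x_curr, float)
    s_prev, s_curr = np.asarray(s_prev, float), np.asarray(s_curr, float)
    delta = (s_curr - s_prev) / (x_curr - x_prev)     # Eq (19)
    return project_free_budget(x_curr + eps * delta, X)
```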

The recursive method (17), that we will in the following refer to as the gradient scheme, is an instance of a grant policy. There are others. Consider, for instance, the following one:

(20) x^(k+1) = πB(x^(k) + ϵ·r^(k)), with ri^(k) = si^(k)/xi^(k).

This is none other than the gradient method, applied to solve optimization problem (10). For ϵ small enough, the orbit will often be very close to the optimal configuration of (10). Moreover, for capped funding conditions, Theorem 1 guarantees that the corresponding total productivity will be, at least, one half of the optimal one. Note that this scheme only requires knowledge of the total scientific budget X and the immediate past performance of the players: it is a zero-order scheme, as opposed to the first-order scheme (17), that requires information of the last two grant calls. We will dub this policy the average rates scheme A.

Alternatively, one can use (for free funding conditions) the iterative method:

(21) xi^(k+1) = X·si^(k) / ∑_{j=1}^N sj^(k).

This is also a zero-order scheme, where we do not even need to know the funds awarded to each researcher at call k in order to decide the funding distribution of call k + 1.

Interestingly, if the initial distribution of funds satisfies xi^(0) > 0 for all i and all productivity functions are curved at the origin, it can be proven that this policy converges exponentially fast to the configuration of Eq (12) (see Appendix D for a proof). We will refer to this grant policy as the average rates scheme B, or, more colloquially, as the rule of three, since, given x1^(k+1) and s1^(k), any other player i = 2, …, N can use its own production si^(k) and the rule of three to compute its future funding, xi^(k+1) = x1^(k+1)·si^(k)/s1^(k).
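The rule of three itself is a one-line update. The toy loop below (with made-up power productivity functions, not those of Table 1) illustrates the convergence to the configuration of Eq (12):

```python
import numpy as np

def rule_of_three(s, X):
    """Eq (21): next-call funding proportional to the declared production."""
    s = np.asarray(s, float)
    return X * s / s.sum()

# Toy check of the convergence claim, with made-up power productivity
# functions g_i(x) = A_i * x**alpha_i (curved at the origin):
A, alpha, X = np.array([2.0, 1.0, 1.5]), np.array([0.3, 0.7, 0.5]), 3.0
x = np.array([0.5, 2.0, 0.5])                 # any strictly positive start
for _ in range(30):
    x = rule_of_three(A * x ** alpha, X)
print(x, x / (A * x ** alpha))   # the second array becomes constant: Eq (12)
```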

Finally, there is another grant policy, that we will hereby call the standard scheme, by which the funding of each researcher is proportional to the funds it received in the previous grant call and its productivity. That is:

(22) xi^(k+1) = πB(v^(k))_i, with vi^(k) = X·xi^(k)·si^(k) / ∑_{j=1}^N xj^(k)·sj^(k),

where πB denotes, as before, the projection onto the set of budgets satisfying the funding conditions. The standard scheme reflects the growing perception in the theoretical physics community that the probability of being awarded a grant for a theoretical project grows with both the productivity of the candidate and the funding obtained in the past. As evidence, consider the following extract from the Application Guidelines for Stand-Alone Projects, FWF Austria Science Fund: “Most important research projects funded in the past (no more than 5). […] For each project, please provide the following information: Project title, funding agency, project duration (from/to) and amount of funding granted”.

Both the distribution of funds and the final total productivity of this scheme depend significantly on the initial budget configuration x^(0). Indeed, consider free funding conditions, and assume that the productivity functions are all identical and equal to the power function (6). Then one can show that the standard scheme converges to a fund configuration where just the players i with maximum values of xi^(0) receive any funds whatsoever. For generic initial configurations x^(0), only one player i will satisfy this demand, in which case the asymptotic total productivity of the standard scheme will be A·X^α. This has to be compared to the optimal productivity g*(X) = N^(1−α)·A·X^α, achieved by the “egalitarian” configuration xi = X/N, for i = 1, …, N. In this example, the final configuration enforced by the standard scheme is thus maximally unfair, with just one player holding all the resources, instead of an equal distribution of funds. In addition, for large populations of scientists (N ≫ 1), the quotient A·X^α/g*(X) = N^(α−1) becomes arbitrarily small. This gives some theoretical grounds for Nobel Laureate Jeffrey C. Hall’s remarks in [35]:

I can’t help feel that some of these [scientific] ‘stars’ have not really earned their status. I wonder whether certain such anointees are ‘famous because they’re famous’. So what? Here’s what: they receive massive amounts of support for their research, absorbing funds that might be better used by others.

6 Probabilistic time-dependent productivity functions

In realistic scenarios, it is expected that a player’s productivity will not only depend on funding, but also on a number of variables which escape our control (health, lack of sleep, love affairs…). We can model the effect of these variables by postulating that productivity functions must be probabilistic. Actually, in the real world things are even more complicated: productivity functions vary with time, as researchers acquire new knowledge and skills, or their motivations waver. In these conditions formula (17) is not guaranteed to generate orbits close to an optimal productivity.

Suppose then that gi is a probabilistic function that varies with time, i.e., the productivity of player i at the end of term k is a random variable of the form gi(xi^(k), k). Then deciding which quantity to optimize in this scenario is again a political (subjective) matter. A reasonable figure of merit, that we will use from now on, is the average scientific productivity at each term k. Note that one can repeat the arguments in Section 3 to suggest that 〈gi(x, k)〉 should also be increasing and approximately concave in x. Our goal is therefore to identify a policy to decide the fund allocation at each grant call j, such that, for k ≫ 1,

(23) ∑_{i=1}^N 〈gi(xi^(k), k)〉 ≈ max ∑_{i=1}^N 〈gi(xi, k)〉,

with high probability, where the maximum on the right hand side is taken over all budget configurations satisfying the funding conditions.

This puts us in a conundrum. On one hand, a single estimate of gi(xi, k) does not allow us to assess its average value, which we need to know in order to maximize the average productivity of the whole community. On the other hand, we cannot rely on the early past history of the candidate, because the productivity function also changes with time.

One possibility is to apply the gradient scheme (17), but with a correction that guarantees that random fluctuations do not squander the optimum budget. Note that, using the grant policy (17), it could be the case that a candidate receives the same funding twice consecutively, xi^(k) = xi^(k−1), but outputs different results si^(k) ≠ si^(k−1). That would lead us to estimate an infinite gradient that would either put all the future budget in the hands of this candidate, or reduce its budget to 0 in the present grant call. In such a predicament, it is more convenient to use the corrected formula

(24) x^(k+1) = πB(x^(k) + ϵ·Δ̃^(k)),

where Δ̃^(k) is a “filtered version” of Δ^(k), defined as:

(25) Δ̃i^(k) = min{ max{Δi^(k), 0}, si^(k)/xi^(k) }.

The filter’s goal is to get rid of non-sensical estimations of the actual gradient of the objective function. Indeed, since gi(x) is increasing, it can’t be that dgi/dx(xi^(k)) < 0. Similarly, by Eq (2), dgi/dx(xi^(k)) ≤ gi(xi^(k))/xi^(k).

For g deterministic and time-independent, Δ̃^(k) = Δ^(k). The policy (24) is, in this scenario, equivalent to algorithm (17), and so its outputs will satisfy Eq (15). We leave as an open question under which conditions Eq (23) is satisfied in the probabilistic, time-dependent case. For the rest of the article, the use of formula (24) to decide the funding allocation will be dubbed the modified gradient scheme.
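Under our reading of Eqs (24)–(25), the filter is a clamp of the raw finite-difference estimate to the interval [0, si^(k)/xi^(k)]. A sketch, again for free funding conditions and reusing the projection routine of Section 5; names and the handling of the 0/0 case are our own choices:

```python
import numpy as np

def modified_gradient_step(x_prev, x_curr, s_prev, s_curr, X, eps):
    """Eq (24): gradient step with the filter of Eq (25), which clamps each raw
    finite-difference estimate to the interval [0, s_i/x_i] allowed by
    monotonicity and Eq (2). Reuses project_free_budget() from Section 5."""
    x_prev, x_curr = np.asarray(x_prev, float), np.asarray(x_curr, float)
    s_prev, s_curr = np.asarray(s_prev, float), np.asarray(s_curr, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        raw = (s_curr - s_prev) / (x_curr - x_prev)   # Eq (19); may be inf or nan
    raw = np.where(np.isnan(raw), 0.0, raw)           # 0/0: no usable information
    filtered = np.clip(raw, 0.0, s_curr / x_curr)     # Eq (25)
    return project_free_budget(x_curr + eps * filtered, X)
```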

Alternatively, we can resort to the average rates schemes (20), (21). Since these policies only take into consideration productivity data and funds from the previous grant call, one would expect them to be even more robust against the time evolution of the productivity function. Moreover, by inspection of Eqs (20), (21) it is clear that small random fluctuations in the productivity will hardly affect the resulting budget configuration after a few calls. In fact, for the average rates scheme A, it can be proven (see Appendix F) that, if the productivity functions change slowly with time, then with high probability we have relation (26), where y^(j) denotes the configuration defined in (9) for the functions gi(x) ≡ 〈gi(x, j)〉. By Theorem 1, this means that, under capped funding conditions, the average rates scheme A is guaranteed to produce on average at least half of the optimal productivity.

So far we have discussed three different grant policies. Which one shall we use in practice? To help us answer that question, we will next compare their performance in a number of numerical simulations.

In each simulation, we will consider a population of researchers with time-independent, non-deterministic productivity functions. Their average productivity functions will be given by Table 1. To model both the random fluctuations in productivity and the volatility of scientific evaluation, we assign to each scientist a measure of unpredictability 0 ≤ U ≤ 1: the actual productivity of the scientist is taken to be gi(x)(1 + ui), where ui is a random number chosen uniformly from the interval [−U, U]. In our simulations, we studied three cases of interest: U = 0 (no noise), U = 1/8 (low noise) and U = 1/2 (high noise).

Starting with a random funding distribution x^(0), we estimated the fraction of the maximal productivity achieved via different policies at each call k. The normalized average productivity is depicted in Fig 4, together with its variance. As we can see, even under low statistical noise the rule of three performs slightly better than the modified gradient scheme after a reasonable number of calls (5 or 6), and substantially better than the standard scheme. In the asymptotic limit, the performances of the average rates schemes A and B are comparable, but the latter converges faster to the optimal value.
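The compact simulation below reproduces the spirit of this comparison for the rule of three and the standard scheme. The population parameters and the noise level are illustrative choices of ours, not the data of Table 1 or the exact settings behind Fig 4; optimal_allocation_power() is the sketch given after Eq (8):

```python
import numpy as np

rng = np.random.default_rng(0)
A     = np.array([2.0, 1.2, 0.8, 1.5, 1.0, 0.6])       # illustrative values,
alpha = np.array([0.30, 0.55, 0.70, 0.45, 0.60, 0.80])  # not the Table 1 data
X, U, CALLS = 7.2, 0.5, 20                               # budget, noise, calls

def noisy_output(x):
    """Declared productivity g_i(x_i)*(1 + u_i), with u_i uniform in [-U, U]."""
    return A * x ** alpha * (1.0 + rng.uniform(-U, U, size=x.size))

g_star = optimal_allocation_power(A, alpha, X)[1]        # maximal productivity

x0 = rng.dirichlet(np.ones(A.size)) * X                  # common random start
x_rot, x_std = x0, x0
for k in range(CALLS):
    s_rot, s_std = noisy_output(x_rot), noisy_output(x_std)
    x_rot = X * s_rot / s_rot.sum()                      # rule of three, Eq (21)
    w = x_std * s_std
    x_std = X * w / w.sum()                              # standard scheme, Eq (22)
    print(k + 1,
          round(float(np.sum(A * x_rot ** alpha)) / g_star, 3),
          round(float(np.sum(A * x_std ** alpha)) / g_star, 3))
```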

Fig 4. Productivity as a function of the grant call k, for different scientific policies.

The colors green, yellow, blue and red denote, respectively, the modified gradient scheme, the average rates scheme A, the rule of three and the standard scheme. Starting from a random distribution of funds, we study the performance of different policies under increasing amounts of statistical noise. For both the modified gradient scheme and the average rates scheme A, we chose the same value of the learning rate ϵ. If each term lasts s = 4 years, then, in all cases, the rule of three would require 12 years to steer the community to a configuration where the average scientific production is greater than 90% of the maximal value.

https://doi.org/10.1371/journal.pone.0214026.g004

Which one of these policies should be chosen depends on the typical form of the productivity functions, a matter that can only be decided through experiment. If the relevant productivities exhibit almost no statistical noise, the modified gradient scheme should be preferred. On the contrary, under free funding conditions and a fair amount of noise, the rule of three seems to be the wisest choice.

7 Dishonest players

So far we have been assuming that all players are honest, namely, that they won’t try to play the system to obtain more funds than they should. This is a very naive position: ideally, we would like to have research policies which cannot be played. In this spirit, we will next study the security of the rule of three against dishonest participants.

For simplicity, we will carry the analysis under the assumption that the dishonest player, Daniel, belongs to a large scientific population where the budget distribution is almost stationary (i.e., it is close to an equilibrium). We will also assume that Daniel’s optimal productivity function is time-independent, deterministic and curved at the origin. Finally, we will suppose that, if Daniel plays honestly, in the asymptotic limit his budget will be greater than zero.

Following Eq (21), by producing an amount of science s^(k) at the end of the kth term, Daniel will receive the funds x^(k+1) = X·s^(k)/∑_j sj^(k) at the (k + 1)th call. Since the population Daniel belongs to is large and its funding distribution close to stationarity, the normalization factor will hardly vary with s^(k) and k, so we will take it constant, i.e., we will assume that x^(k+1) = λ·s^(k), for some λ > 0.

Suppose that Daniel plays honest. Then, given an initial amount of funds x^(0), he will invest them all in research, thus producing g(x^(0)) and earning x^(1) = λ·g(x^(0)) at the end of the process. Iterating, we find that Daniel’s funds will follow the orbit x^(k+1) = λ·g(x^(k)). It can be shown that, in the limit, he will be receiving x̄ after each call, with x̄ defined by the relation x̄ = λ·g(x̄). In other words,

(27) lim_{k→∞} x^(k) = x̄.
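This fixed point is easy to check numerically. A toy iteration, with an assumed power productivity function and an arbitrary value of λ:

```python
def honest_orbit(lam, A, alpha, x0, calls=50):
    """Honest funds under x^(k+1) = lam * g(x^(k)) for g(x) = A * x**alpha."""
    x = x0
    for _ in range(calls):
        x = lam * A * x ** alpha
    return x

# With lam = 0.5 and A = 2 the fixed point of Eq (27) is
# (lam*A)**(1/(1-alpha)) = 1, whatever the starting funds:
print(honest_orbit(lam=0.5, A=2.0, alpha=0.6, x0=3.0))   # ~1.0
```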

Now, we let Daniel be dishonest. How could he get more than x̄ on average? If the funding agency requires each player to spend all his/her funds by the end of the term, then the only thing Daniel can do is keep a fraction of his scientific results secret. Perhaps, by declaring a vast accumulation of scientific achievements in one go he may manage, on average, to squeeze more money out of the grant agency.

More specifically: starting with the funds x^(0) and an amount of undeclared scientific results h^(0) ≥ 0, Daniel would produce an output g(x^(0)). He could then declare, at the end of the term, that he has produced an amount of results s^(0), with s^(0) ≤ g(x^(0)) + h^(0). In turn, the grant agency would reward him with an amount x^(1) = λ·s^(0), that he would use to produce g(x^(1)) results. In the next term, he would declare s^(1), keeping h^(1) results undeclared, and so on. Daniel’s declaration strategy is summarized in Table 2.

The second column denotes the funds received at the beginning of the call. Columns 3 and 4 denote, respectively, the reported and unreported scientific production when the funds run out. Note that, for this table to represent a valid strategy, the declared amounts {s^(k)}_k and the elements of column 4 must be greater than or equal to 0. The question is whether Daniel can choose {s^(k)}_k in such a way that, on average, he will obtain more funds than acting honestly. That is, whether, for high k, x^(k) ≥ x̄ + δ, with δ > 0. In Appendix G we show this not to be the case.

Thus, in the long run, Daniel will not win anything by delaying the publication of his scientific production. On the contrary, by not publishing his results as soon as he produces them, he risks being scooped by some other player, in which case he would be losing well-deserved science funds. It is therefore in Daniel’s interest to play honest and declare all his productivity by the end of each term.

Note that this security analysis required assumptions on both Daniel’s productivity function (determinism, time-independence and curvature at the origin) and the overall behavior of the scientific population. It would be interesting to find out whether security also follows if such assumptions are dropped. Similarly, it would be interesting to see if the modified gradient scheme and the average rates scheme A are also secure under dishonest participants.

Most crucially, we have not studied the possibility that the evaluation of the scientific production of the players is not impartial. Some studies suggest that, for securing funds, how researchers build their collaboration networks matters more than what publications they produce and whether they are cited [36]. Actually, in some fields such a behavior has greatly influenced the distribution of research funds in the past [37]. At the moment we do not have a solution for this problem, other than hoping that not so many scientists engage in this practice.

8 Discussion

Here we will examine some shortcomings of our model for research funding (5) and discuss how the latter can be improved for its use in real-world scenarios. This section is much more technical than the others and can be skipped on a first reading.

8.1 Assumptions on the productivity function

In section 3, we argued that the productivity function gi(x) of each player i must be increasing and approximately concave. We did so by reasoning that any rational player who knows the shape of its productivity function can improve it piece by piece until it becomes increasing and approximately concave. The underlying assumptions are that the agent knows its productivity function, that it is interested in maximizing it and that it acts rationally. These three conditions may not be met in practice.

Consider, for instance, the second one. Suppose that the goal of the funding agency were to maximize the total number of publications, while the personal goal of player i is to maximize the quality of the said publications. Then, given more funding, player i would not use it to increase its publication number, but to hire better researchers and hence produce better papers. In such circumstances, there may not be a simple relation between xi and the productivity gi measured by the agency. In principle, gi(x) could be decreasing, or convex.

The maximization of non-concave functions is a conventional problem in artificial intelligence, where the accuracy of the output of a neural network depends non-trivially on a number of continuous parameters. There exist a number of methods to attack this problem, see [38]. Unfortunately, all of them require a reliable estimate of the gradient ∇g. Under very low statistical fluctuations and productivity functions independent of time, we can approximate ∇g by Δ^(k) as in (17). In the general case, though, it is unclear what to do when one of the components of Δ^(k) is negative or very high. Indeed, since gi may be neither increasing nor concave, we cannot assume that either Δi^(k) ≥ 0 or Eq (2) holds, and so we are not entitled to filter Δ^(k) as in Eq (25).

Another tacit assumption in (5) is that the productivity gi of a player i just depends on its funding xi (and not on the funding {xj: ji} of all the other players). This condition does not capture frequent real-world situations where two or more research institutes compete for the same gifted group leader. A more realistic model for scientific productivity would posit that there exists a global productivity function g(x1, …, xN) that does not necessarily decompose as a sum of independent productivities, i.e., g(x1, …, xN) ≠ ∑i gi(xi).

Funding agencies should therefore tackle the following optimization problem:

(28) max g(x1, …, xN), subject to the funding conditions (4).

Even under the assumption that g(x1, …, xN) is concave, deterministic and stationary, a blind application of the gradient method will soon lead to trouble. As before, the difficulty stems from estimating the gradient of g(x1, …, xN). One way to do so would be to keep the funding of all players but one constant and then compute the difference between the two productivities. For high N this is clearly impractical: even in the absence of statistical fluctuations, proximity to the optimal configuration of funds would only be achieved after O(N) grant calls.

Finally, one could question whether productivity functions exist at all. In the most general case, the productivity of a player at call k could also depend on his/her past success in securing grant funds, i.e., it could be a non-deterministic function, not only of xi^(k), but also of xi^(k−1), xi^(k−2), …. On the other hand, it is possible that the much simpler model (5) already represents an accurate description of the scientific practice. This question cannot be settled by pure mathematical reasoning, but through experimental work, e.g., via pilot research programs.

8.2 More than one funding agency

In real life there are several funding bodies at play. Depending on the goals of each funding body, there are different optimization scenarios. If these bodies use the same measure of scientific productivity and their goal is just to increase human knowledge, the best they can do is to create a common budget pool and act as if they were a single funding entity. If they fund completely different areas of research, they can use the policies above independently. If what these bodies fund is pretty much the same, and each of these bodies seeks recognition, then we enter a complicated game-theoretic problem. One can then divide the funding xi of scientist i between its sources, i.e., ∑s xi,s = xi, and credit each funding agency s with a proportional amount of the total productivity of each scientist, i.e., gi,s ≡ (xi,s/xi)·gi(xi). The goal of each funding agency s would be to maximize ∑i gi,s, disregarding the performance of all the other agencies.

First of all, as a function of xi,s, it is immediate to see that gi,s satisfies gi,s(0) = 0. One can also prove easily that it is an increasing function of xi,s, since

(29) ∂gi,s/∂xi,s = gi(xi)/xi + (xi,s/xi)·[dgi/dx(xi) − gi(xi)/xi] ≥ gi(xi)/xi + [dgi/dx(xi) − gi(xi)/xi] = dgi/dx(xi) ≥ 0.

Here the first inequality follows from Eq (2) and xi ≥ xi,s, and the last one from the monotonicity of gi.

In addition, assuming d³gi/dx³(x) ≥ 0 for all x (this is the case, e.g., for the power functions (6)), one can prove that gi,s is also concave. Indeed, note that

(30) ∂²gi,s/∂xi,s² = (2/xi²)·[xi·dgi/dx(xi) − gi(xi)] + (xi,s/xi³)·[xi²·d²gi/dx²(xi) − 2xi·dgi/dx(xi) + 2gi(xi)].

Define h(x) ≡ x²·d²gi/dx²(x) − 2x·dgi/dx(x) + 2gi(x). It can be verified that h(0) = 0 and dh/dx(x) = x²·d³gi/dx³(x) ≥ 0. Hence the term between brackets on the right hand side of Eq (30) is non-negative for xi ≥ 0. Since xi,s ≤ xi, we thus have that the right hand side of Eq (30) is upper bounded by (2/xi²)·[xi·dgi/dx(xi) − gi(xi)] + (1/xi²)·h(xi) = d²gi/dx²(xi) ≤ 0.

Since, as a function of xi,s, gi,s is concave, the problem of maximizing ∑i gi,s for fixed values of {xi,t: t ≠ s}i is a convex problem. Moreover, gi,s also vanishes at zero and is increasing, so agency s can apply the grant policy (17) to find an orbit of configurations close to the optimal value. The conditions of Theorem 1 are also met, and so (under capped funding conditions) agency s can similarly use the grant schemes (20), (21) to approximate this maximum.

All this under the assumption, of course, that the other agencies t ≠ s meanwhile keep their budget configurations fixed. If all agencies tried to maximize their total credited productivity at the same time, the system would converge towards a Nash equilibrium whose total productivity, of course, would not in general coincide with the maximum total productivity achievable. Note that these conclusions also hold when the productivity functions used by the agencies differ, i.e., when the (raw) productivity of scientist i is evaluated differently by each agency.
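To illustrate the game just described, the following sketch (Python with NumPy/SciPy; the square-root productivities and all numerical values are illustrative assumptions, not the data or policies studied in this paper) implements the proportional crediting rule gi,s(xi,s) = (xi,s/xi) gi(xi) and lets two agencies alternate best responses as a crude search for a Nash equilibrium:

import numpy as np
from scipy.optimize import minimize

# Two agencies s = 0, 1 fund the same N scientists. Scientist i's productivity is
# gi(xi) = ai * sqrt(xi) (a made-up concave choice), and agency s is credited with
# the proportional share (xi,s/xi) * gi(xi).
N = 4
a = np.array([1.0, 1.5, 0.8, 1.2])        # hypothetical prefactors
X = np.array([1.0, 1.0])                  # each agency's total budget

def credited(xs, x_other):
    total = xs + x_other                  # xi = sum of the funds from all sources
    share = np.divide(xs, total, out=np.zeros_like(xs), where=total > 0)
    return np.sum(share * a * np.sqrt(total))

def best_response(x_other, budget):
    # Maximize the agency's credited productivity (a concave problem under the
    # assumptions of this section) while spending exactly its budget.
    cons = ({"type": "eq", "fun": lambda xs: np.sum(xs) - budget},)
    res = minimize(lambda xs: -credited(xs, x_other), x0=np.full(N, budget / N),
                   bounds=[(0.0, budget)] * N, constraints=cons)
    return res.x

alloc = [np.full(N, X[s] / N) for s in range(2)]
for _ in range(30):                       # alternating best responses
    alloc[0] = best_response(alloc[1], X[0])
    alloc[1] = best_response(alloc[0], X[1])
print("agency 0:", np.round(alloc[0], 3), " agency 1:", np.round(alloc[1], 3))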

9 Conclusion

In this paper, we have proposed a family of schemes to fund theoretical research. Contrary to the rule in academic funding, these schemes do not rely on a project proposal, but on recent academic performance, as quantified by a given figure of merit. We observed that, once the figure of merit is accepted, the distribution of grant funds becomes an academic problem as opposed to a political issue.

In this regard, we proposed an algorithm to decide the allocation of funds on each grant call. Under certain idealized assumptions, the algorithm is guaranteed to drive the system, via successive grant calls, to budget distributions maximizing the total scientific productivity. We also introduced alternative schemes, based on the notion of average rates, to tackle scenarios with high statistical fluctuations in the scientific productivity or its evaluation. We explored numerically the performance of the gradient and average rate schemes on real data and compared it with the usual way funding agencies deal with theoretical project proposals.

One of the flaws of the proposed framework for research funding is that, like most others, it may discourage theorists from conducting creative or very original research. Indeed, it is a well-documented fact that creative and unusual ideas usually take time to be accepted by experts [22]. A creative grant applicant may thus receive a poor evaluation on his/her recent research, thereby depriving him/her of well-deserved funding. A reasonable policy to address this matter, proposed in [37], would be to move researchers with a very high variance in their expert evaluations to an entirely different funding program, perhaps relying on random grant schemes, see [39].

Most worryingly, our models of scientific productivity are plagued with ad hoc assumptions. In order to propose a realistic grant scheme, we need basic information regarding the regular practice of research, information that can only be acquired through experiment. What do productivity functions look like? How are they distributed among theoretical researchers? What is the volatility of expert referee scores? The answers to these questions will teach us whether the research policies presented here work better when applied at the level of individual groups or whole research institutes.

In any case, the purpose of this article is not to provide funding bodies with the ultimate grant scheme, but to contribute to the ongoing academic discussion on the problem of research funding. This problem won’t be solved by university administrators or politicians. The solution, if it exists, will be reached through the scientific method. Because whenever science comes in, reason and truth follow.

Appendix A Computation of g(X) for geometric productivity functions

Let gi(x) be of the form (6). Then the derivative of gi(x) diverges at x = 0. This implies that the solution x⋆ of problem (5) satisfies xi⋆ > 0 for all i. Under these conditions, x⋆ can be determined by demanding that any infinitesimal transfer of funds between players i and j should not increase the value of the objective function ∑i gi(xi). This implies that [gi′(xi⋆) − gj′(xj⋆)]δ ≤ 0 independently of the sign of δ. This can only be true if, for some μ > 0, (31) gi′(xi⋆) = μ, for i = 1, …, N. See also a detailed discussion of these conditions in Appendix D.1.

It follows that . The condition ∑i xi = X is thus translated to (32)

Given a value of X, solving the above equation we can determine the value of μ, whose explicit dependence on X we will denote by μ(X). Once μ(X) is known, the final total productivity is given by: (33)
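For concreteness, here is a minimal numerical sketch (Python with NumPy) under the additional assumption that the functions (6) are of the power-law form gi(x) = ai x^bi with 0 < bi < 1 (the coefficients below are made up): condition (31) is inverted explicitly, and Eq (32) is solved for μ by bisection.

import numpy as np

# Hypothetical power-law productivities gi(x) = ai * x**bi, 0 < bi < 1.
a = np.array([1.0, 2.0, 0.5])
b = np.array([0.5, 0.7, 0.6])

def x_star(mu):
    # Invert the optimality condition gi'(xi) = mu of Eq (31).
    return (mu / (a * b)) ** (1.0 / (b - 1.0))

def mu_of_X(X, lo=1e-9, hi=1e9, tol=1e-12):
    # sum_i x_star(mu) is decreasing in mu, so Eq (32) can be solved by bisection.
    for _ in range(200):
        mid = np.sqrt(lo * hi)            # bisection on a logarithmic scale
        if np.sum(x_star(mid)) > X:
            lo = mid                      # total funds too large: increase mu
        else:
            hi = mid
        if hi / lo - 1.0 < tol:
            break
    return np.sqrt(lo * hi)

X = 10.0
mu = mu_of_X(X)
x = x_star(mu)
print("budgets:", np.round(x, 4), " total productivity g(X):", round(np.sum(a * x**b), 4))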

Appendix B Proof of Theorem 1

We will first prove the theorem for free funding conditions and functions gi(x) such that fi(x) is invertible and ranges in (0, ∞).

Note that for any configuration satisfying , the quantity 1/λ in Eq (12) must belong to the interval . In fact, if x is the optimal configuration, i.e., the one satisfying for all i, for any other configuration , one would have at least two indices i and j, such that and , since . Then and .

On the other hand, . It follows that (34)

Eq (34) implies that the statement of Theorem 1 holds iff, for any feasible distribution of funds {xi}, there exists another feasible distribution of funds such that (35)

Let us see why. If Theorem 1 is true, then (36) for i = 1, …, N, and for all feasible {xj}j. Dividing by X and identifying with x, we arrive at Eq (35).

Conversely, if Eq (35) holds for x = x, then by Eq (34) we have that (37)

Assuming that {fi}i are invertible (and decreasing), we have that Eq (35) is equivalent to: (38) Summing on i and taking into account the normalization constraint we arrive at (39) Conversely, if the above condition is satisfied, then one can define (40) Then one can verify that satisfy (38) and the normalization constraint. Eq (39) is hence a reformulation of the statement of Theorem 1.

Let us rewrite Eq (39) as (41) where . Define , , and note that both {pi}i and are normalized probability distributions on the variable i = 1, …, N.

Now, it can be seen that there exists a non-negative number f0 such that (42) Observe that the second equation implies that (43)

Putting all together, we have that (44) where the last inequality follows from Eq (42).

In Appendix C, we prove that, under the assumption that gi(x) admits a second derivative, Fi(x, y) is a decreasing function of y. This means that, for , . This concludes the proof for free funding conditions and productivity functions such that fi(x) is invertible and ranges in (0, ∞).

Now, suppose that fi(x) is not invertible, or does not range in (0, ∞), and suppose also that . Then, for any δ > 0, we can always find a new concave, increasing function , with , and such that

  1. satisfies the conditions of invertibility and range.
  2. , for .
  3. for .

Indeed, it suffices to consider the function , for , with . For , one can find a concave, increasing extension of gi(x) + δ such that has an infinite slope at x = 0 and conditions 2 and 3 above are satisfied. The reader may have a look at Fig 5 to understand why this is always the case.

Fig 5. Cosmetic surgery.

We modify gi(x) + δ from to 0 such that the new function has an infinite slope at x = 0. Similarly, we modify gi(x) + δ from onwards so that the slope of is zero from onwards.

https://doi.org/10.1371/journal.pone.0214026.g005

Now consider the optimization problem (5) over the productivity functions , under free funding conditions. Since the slope of is zero from onwards, this implies that the optimal solution will satisfy for all i. It therefore coincides with the solution of (5) for capped funding conditions. Since we can choose δ > 0 at will, we can do so such that , where g(X) denotes the optimal solution of the capped problem.

Let be the solution of problem (10) for the functions , assuming free funding conditions. We know, by the previous proof, that (45)

However, in general . Now, define . Then it is evident that and . The solution of the capped problem with the productivity functions will be the result of distributing the excess funds over the players i such that . This redistribution can only increase the total productivity, and hence we have that (46)

Finally, it is easy to see that, by decreasing δ, the total productivity of the optimizer of (10) can be made arbitrarily close to the left hand side of the above equation. It follows that in the general case.

Appendix C Fi(x, y) is a decreasing function of y

One can easily check that (47) Call . Then, written in terms of gi(x), the numerator of the above equation is proportional to . Now, by Eq (2), is non-negative. Since gi(z) is also non-negative, it follows that , and so , i.e., Fi(x, y) is decreasing in y.

Appendix D Convergence for zero-order method

D.1 Conditions on the functions gi

In this section, we will discuss under which conditions the zero-order method will converge to the optimal solution. To simplify the notation, in the following we will assume that the total budget X is normalized, i.e., X = 1. This physically corresponds to a change of unit for measuring the budget, so it will not affect the solution. From a mathematical perspective, the same arguments hold for the general case of X ≠ 1. We will not consider the case of capped funds.

Our assumptions on the single productivity functions {gi}i are as follows:

  • dom gi = [0, 1],
  • gi(0) = 0,
  • gi′(x) > 0, monotonicity (exclude flat case, for uniqueness of solutions)
  • concavity (exclude linear case).

By concavity, it follows that (48) which, together with g(0) = 0 implies (49)

The problem in Eq (10), then, becomes (50) It is convenient to define the functions fi(x) ≔ gi(x)/x, for all i. We can now derive the conditions on the functions fi such that the optimal solution x for the problem (50) satisfies (51) A necessary condition for a point x to be optimal is given by the Karush-Kuhn-Tucker (KKT) conditions [40]. Moreover, since the problem is concave, with linear inequality constraints and an interior feasible point, by Slater’s condition [40], the KKT conditions are also sufficient. We can write the KKT conditions for the optimal point x. (52) The last two conditions imply that when , then μi = 0. We want to find conditions on , such that there is no solution of Eq (52) with . In this case, we can identify and obtain the condition in Eq (51).

For example, one could ask that , for all i. In fact, let us assume that f1(0) = ∞ and , then to satisfy , at least another , say must be strictly greater than zero. But then μ2 = 0 and the condition cannot be satisfied for μ1 ≥ 0. More generally, one could simply ask that fi in zero is “big enough” with respect to the other functions fj, j ≠ i. A sufficient condition to exclude the case for some i is given by (53) This corresponds to the configuration in which we assign 0 to i and an equal amount to j ≠ i, i.e., and for j ≠ i. If fi(0) is too big, then Eq (52) cannot be satisfied. In order to increase the value of some fj we would have to decrease ; however, given the condition , some other should be increased, consequently decreasing the value of fj.

Finally, let us comment on the assumptions on our productivity functions, such as Eq (53). First, notice that such conditions involve only the local behavior of the function around x = 0. As a consequence, given any “actual” productivity function g, we can modify it in a neighborhood of x = 0 to obtain such that, e.g., but for all x > ε, for some ε > 0. Applying the iterative method to gi or for each iterative step k such that will give the same results. Since ε can be chosen arbitrarily small, we can always choose a value such that the values xi ≤ ε correspond, as a fraction of the total budget X, to, e.g., 10−3 euros. This implies that the difference between gi and will be relevant only at the step k where we have to redistribute funds of the order of 10−3 euros. Thus, in practical applications, the assumption on the behavior of g in a neighborhood of x = 0 implies no loss of generality.

D.2 Proof of convergence

We have seen in the previous section that, under condition (53), the optimal solution x for the problem (50) satisfies (54) Now, notice that , due to Eq (49), hence f is monotone and the solution of Eq (54) is unique. In fact, if there were two solutions x and , then for all i.

Moreover, we have seen in Appendix B that for any normalized budget distribution , we have (55) Given the total productivity P(x) = ∑i gi(xi), the iterative method is defined as (56) with initial point x0 such that for all i.

To show that the method converges to x, we will show that, for any initial point x0 with xi^0 > 0 for all i, the sequence of intervals satisfies (57) where for I = [a, b] we define |I| ≔ b − a.

By substituting gi(xi) = xi fi(xi) in the definition of Eq (56), we have (58) xi^{k+1} = xi^k fi(xi^k)/P(x^k), which implies, by the strict monotonicity of fi, that (59) Moreover, by the definition of P(xk) and ∑i xi = 1, we have (60)

Next we want to prove a condition on the increase (and decrease) for fi in the iteration, namely (61) Let us consider first the case 0 < α < 1 with . We have (62) It is sufficient, then, to notice that (63) since gi(x) is monotonically increasing in [0, 1] (), 0 < α < 1 and . Analogously, one can prove that in the case α > 1, (64)

It remains to be proven that limk → ∞|Ik| = 0. We will argue by contradiction. Let us assume that limk → ∞ Ik = [a, b], with b − a > 0. Since {xk}k is bounded, there exists a converging subsequence with limit , with and . However, since b − a > 0, at least one of the following must be true: either P(x*) ≠ a or P(x*) ≠ b. Let us assume that P(x*) ≠ a, as the other case is identical. Then, by applying the iterative map, we obtain a new interval , in contradiction with the assumption that [a, b] was the limit.
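For illustration, a minimal sketch of the iteration of Eq (56) (Python with NumPy), assuming hypothetical power-law productivities, which satisfy the conditions of this Appendix since fi(0) = ∞; at the fixed point the rates fi(xi) = gi(xi)/xi equalize to P(x), in agreement with Eq (58):

import numpy as np

# Made-up power-law productivities gi(x) = ai * x**bi with 0 < bi < 1, X = 1.
a = np.array([1.0, 1.5, 0.7, 1.2])
b = np.array([0.5, 0.6, 0.4, 0.7])
g = lambda x: a * x ** b                  # vector of individual productivities

x = np.full(4, 0.25)                      # initial point with xi^0 > 0
for k in range(100):
    P = g(x).sum()                        # total productivity P(x^k)
    x = g(x) / P                          # xi^{k+1} = gi(xi^k)/P(x^k); sums to 1

f = g(x) / x                              # rates fi(xi) = gi(xi)/xi
print("budgets:", np.round(x, 4))
print("rates fi(xi):", np.round(f, 4), " P(x):", round(g(x).sum(), 4))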

D.3 Speed of convergence

In the following, we will show that the iterative method converges exponentially. First, we need to prove that there exists β < 1 such that the sequence {xk}k obtained via the iterative method of Eq (56) satisfies (65)

Let us define γ ≔ P(xk)^−1. We can rewrite the iterative step as and . Let us first assume with ε > 0; we will treat the other case below. We then have (66) Simplifying the expression, using also , (66) becomes (67) where we used Eqs (48), (49), respectively, for the two inequalities, and defined . Notice that such a maximum exists since is a continuous function, as it is continuous in 0, and [0, 1] is a closed and bounded interval.

The case with ε > 0 is slightly more complicated. Repeating the initial steps, we obtain (68) again using Eq (48). Now, let us drop the indices i, k to make the notation lighter and define (69) Eq (68) becomes Hx(ε) < H1x(ε). We can then verify that the derivative w.r.t. ε is strictly positive, i.e., (70) again using Eq (48), (compare also to Eq (68)). As a consequence, Hx is monotonically increasing. Notice that this implies that Hx is continuous in 0 since it is positive and limε→0 Hx(ε) ≤ limε→0 H1x(ε) = xg′(x)/g(x) < 1. Its maximal value for ε ∈ [0, 1] is given by Hx(1) = 1. However, such a value of ε cannot be reached, since by assumption (71) where a0, b0 are the endpoints of the interval , computed by evaluating all {fi}i on the first iteration point x0, with for all i.

We then obtain, for the case 1 − ε, βi ≔ max(x, ε)∈[0, 1]×[0,1−a0/b0] Hx(ε). Since, for x ≥ 0, Hx(ε) is strictly increasing in ε and equals 1 at ε = 1, it follows that βi < 1. Finally, β appearing in Eq (65) can be obtained as β ≔ maxi βi.

To complete the proof of exponential speed, we will first show that for each iterative step k, and each pair of indices i, j such that and , (72) In fact, and , hence, we can write (73) Finally, denoting by m the index associated with the minimum at step k + 1, i.e., , and M the index associated with the maximum, i.e., , we can write (74) which completes the proof of exponential convergence.

Appendix E Proof of convergence of the gradient scheme for deterministic productivity functions

In this Appendix, we will prove that funding policy (17), under deterministic, time-independent productivity functions, generates an orbit over the space of budget distributions that stays for most of the time near the optimal productivity. That is, it satisfies Eq (15).

To do so, we will follow the lines of [33]. We will assume that for ; that the diameter of the set B of valid budget distributions is D; and that , for .

First, by contractivity of projections, we have that (75)

Now, , where , for some . It follows that , where Γk is a diagonal matrix whose (negative) entries are lower bounded by −Γ. This implies, by Eq (75), that (76)

Now, let be the budget distribution that maximizes the total scientific productivity. Again, by contractivity of projections, we have that (77) By induction, we arrive at (78) Invoking the inequalities , and putting all this together, we have that (79)

By concavity of g, we have that . Putting all together, we arrive at (80)

In the limit k → ∞, the right hand side of the equation above can be approximated as , i.e., it can be made arbitrarily small by decreasing the learning rate ϵ.
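As an illustration of this Appendix, the following sketch (Python with NumPy) implements one possible reading of the projected-gradient update analysed above, namely a gradient step of size ϵ followed by a Euclidean projection onto the simplex of feasible budgets; the power-law productivities are made-up stand-ins, and this is not meant as a literal transcription of policy (17).

import numpy as np

def project_to_simplex(v, X=1.0):
    # Euclidean projection of v onto {x: xi >= 0, sum_i xi = X}
    # (standard sorting-based algorithm).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - X))[0][-1]
    theta = (css[rho] - X) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# Made-up concave productivities gi(x) = ai * x**bi and their gradient.
a = np.array([1.0, 1.5, 0.7, 1.2])
b = np.array([0.5, 0.6, 0.4, 0.7])
grad = lambda x: a * b * x ** (b - 1.0)

eps = 0.01                                # learning rate
x = np.full(4, 0.25)                      # uniform initial budget, X = 1
for k in range(500):
    # floor avoids an infinite gradient at exactly zero funding
    x = project_to_simplex(x + eps * grad(np.maximum(x, 1e-9)))

print("budgets:", np.round(x, 4), " total productivity:", round(np.sum(a * x**b), 4))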

Appendix F Convergence of the average rates scheme A for time-dependent, non-deterministic productivity functions

The proof follows the same steps as the convergence of the stochastic subgradient method, see [41]. It is also very similar to the proof in Appendix E. Call yk the feasible budget maximizing , and suppose that ‖yk − yk+1‖ ≤ δ. Call (xj)j the sequence of budgets produced by the average rates scheme A. We will prove that, for large k and a suitably chosen learning rate ϵ, with probability , .

By Taylor’s theorem, we have that , where H is the Hessian of G(x, k) evaluated at a point c within the set . This Hessian is a diagonal matrix with diagonal elements of value . By Eq (2), we have that each of them is negative. We will assume that, for i = 1, …, N, there exists a number h > 0 such that for all . This can be seen to be equivalent to requiring gi(x) to be curved at the origin. On the other hand, since yk is a maximum, we have that . This allows us to write (81) We will use this relation soon.

Similarly, we will assume that there exists γ > 0 such that γ|g(x, k) − g(y, k)| ≤ ‖xy‖ for all feasible x, y. Calling the random vector , we will also assume that . Of course, by assumption . We will denote by R the radius of the feasible region of budgets.

Now, fix the values of {gj: j = 1, …, k − 1}. Following Appendix E, we have that (82)

In turn, ‖xkyk+1‖ ≤ ‖xkyk‖ + ‖ykyk+1‖ ≤ ‖xkyk‖ + δ. It follows that ‖xkyk+12 ≤ ‖xkyk2 + δ2 + 2. Also, . Putting all together, we have that (83) with r(δ, ϵ) = 2Γϵδ + δ2 + 2 + Γ2 ϵ2.

Taking an average over the possible values of gk, we have that (84)

Now we can fix {xj: j = 1, …, k − 1} and use the same idea to get rid of the term ‖xkyk2. Iterating, we have that (85) Rearranging, we have that (86)

Taking the limit k → ∞, we have that the right hand side is bounded by . On the other hand, it can be verified that the value of ϵ that minimizes is , in which case we have that (87)

By (81), G(yj, j) − G(xj, j) can be lower bounded by h‖yj − xj‖2, and, in turn, the term ‖yj − xj‖ can be lower bounded by γ|g(yj, j) − g(xj, j)|. Putting all together, we have that (88)

Using the relation Pr(Z ≥ a) ≤ E(Z)/a, valid for any non-negative random variable Z and a > 0, we conclude that (89) (90)

Appendix G Security of the rule of three

In Section 7, we considered the possibility that Daniel, a member of a large scientific community subject to the rule of three, could win more funds by suitably choosing when to report his research achievements. The purpose of this Appendix is to prove that, in the long run, Daniel cannot expect to obtain more funds than by acting honestly.

Following Table 2, in the (k − 1)th call, Daniel’s undeclared scientific output equals . Multiplying by λ, invoking the identity and taking into account that undeclared scientific outputs are non-negative, we have that (91) where the last inequality follows from the concavity of gi.

Define sk via the relation , for k > 0, and . Then, the above equation implies (92)

Now, . In turn, by Eq (2), we have that . Here we have assumed that and that gi is curved at the origin. Putting all together, we have that (93) with .

Applying this relation to the right-hand side of (92) and rearranging, we end up with (94)

Since αi < 1, it follows, from the above formula, that the sequence (sk)k can neither keep growing indefinitely nor converge to a value greater than 0. This finishes the argument.
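A toy numerical check in the spirit of this Appendix (Python with NumPy; this is not the model of Section 7, and all numbers are made up): funds are split proportionally to the output declared at the previous call, outputs follow the concave law g(x) = sqrt(x), and the strategic player withholds half of each call's output, declaring the accumulated backlog every fifth call. One can then compare the cumulative funds obtained under the honest and the strategic behaviour.

import numpy as np

N, X, CALLS = 5, 1.0, 200

def cumulative_funds(strategic):
    funds = np.full(N, X / N)
    declared = np.sqrt(funds)                    # first declarations are honest
    backlog, total = 0.0, 0.0
    for k in range(CALLS):
        funds = X * declared / declared.sum()    # proportional split of the budget
        total += funds[0]                        # track player 0's funding
        output = np.sqrt(funds)                  # new scientific output
        declared = output.copy()
        if strategic:
            backlog += 0.5 * output[0]           # withhold half of the new output
            declared[0] = 0.5 * output[0]
            if (k + 1) % 5 == 0:                 # dump the backlog every 5 calls
                declared[0] += backlog
                backlog = 0.0
    return total

print("cumulative funds, honest   :", round(cumulative_funds(False), 4))
print("cumulative funds, strategic:", round(cumulative_funds(True), 4))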

Supporting information

S1 Table. Publication records of six anonymous theoretical physicists and mathematicians before and after they were awarded FWF’s START grant.

https://doi.org/10.1371/journal.pone.0214026.s001

(XLSX)

Acknowledgments

We acknowledge motivating discussions with C. Brukner, A. Spanu, S. Singh, Z. Wang, N. Villanueva and M. Lewenstein.

References

  1. Hicks D. Performance-based university research funding systems. Research Policy. 2012;41(2):251–261.
  2. Huisman J, de Weert E, Bartelse J. Academic Careers from a European Perspective. The Journal of Higher Education. 2002;73(1):141–160.
  3. Afonso A. Varieties of Academic Labor Markets in Europe. PS: Political Science & Politics. 2016;49(4):816–821.
  4. Kwiek M, Antonowicz D. The changing paths in academic careers in European universities: Minor steps and major milestones. In: Academic work and careers in Europe: Trends, challenges, perspectives. Springer; 2015. p. 41–68. Available from: https://link.springer.com/chapter/10.1007/978-3-319-10720-2_3.
  5. Afonso A. How Academia Resembles a Drug Gang. https://alexandreafonso.me. 2013.
  6. Gillies D. How should research be organised? College Publications; 2008.
  7. von Hippel T, von Hippel C. To Apply or Not to Apply: A Survey Analysis of Grant Writing Costs and Benefits. PLOS ONE. 2015;10(3):1–8.
  8. Azoulay P. Turn the scientific method on ourselves: how can we know whether funding models for research work? By relentlessly testing them using randomized controlled trials. Nature. 2012;484(7392):31–33.
  9. Ioannidis JPA. More time for research: Fund people not projects. Nature. 2011;477:529–531. pmid:21956312
  10. Rzhetsky A, Foster JG, Foster IT, Evans JA. Choosing experiments to accelerate collective discovery. Proceedings of the National Academy of Sciences. 2015;112(47):14569–14574.
  11. Wang J, Veugelers R, Stephan P. Bias against novelty in science: A cautionary tale for users of bibliometric indicators. Research Policy. 2017;46(8):1416–1436. https://doi.org/10.1016/j.respol.2017.06.006.
  12. Steen RG, Casadevall A, Fang FC. Why Has the Number of Scientific Retractions Increased? PLOS ONE. 2013;8(7):1–9.
  13. Necker S. Scientific misbehavior in economics. Research Policy. 2014;43(10):1747–1759. https://doi.org/10.1016/j.respol.2014.05.002.
  14. Retraction Watch. https://retractionwatch.com/.
  15. Herteliu C, Ausloos M, Ileanu B, Rotundo G, Andrei T. Quantitative and Qualitative Analysis of Editor Behavior through Potentially Coercive Citations. Publications. 2017;5(2):15.
  16. Bol T, de Vaan M, van de Rijt A. The Matthew effect in science funding. Proceedings of the National Academy of Sciences. 2018;115(19):4887–4890.
  17. Waltman L. A review of the literature on citation impact indicators. Journal of Informetrics. 2016;10(2):365–391.
  18. Abramo G. Bibliometric Evaluation of Research Performance: Where Do We Stand? Educational Studies Moscow. 2017;(1):112–127.
  19. Birukou A, Wakeling J, Bartolini C, Casati F, Marchese M, Mirylenka K, et al. Alternatives to Peer Review: Novel Approaches for Research Evaluation. Frontiers in Computational Neuroscience. 2011;5:56. pmid:22174702
  20. Marsh HW, Jayasinghe UW, Bond NW. Improving the peer-review process for grant applications: reliability, validity, bias, and generalizability. American Psychologist. 2008;63(3):160. pmid:18377106
  21. Jayasinghe UW, Marsh HW, Bond N. A multilevel cross-classified modelling approach to peer review of grant proposals: the effects of assessor and researcher attributes on assessor ratings. Journal of the Royal Statistical Society: Series A (Statistics in Society). 2003;166(3):279–300.
  22. Boudreau KJ, Guinan EC, Lakhani KR, Riedl C. Looking Across and Looking Beyond the Knowledge Frontier: Intellectual Distance, Novelty, and Resource Allocation in Science. Management Science. 2016;62(10):2765–2783. pmid:27746512
  23. Baccini A, De Nicolao G. Do they agree? Bibliometric evaluation versus informed peer review in the Italian research assessment exercise. Scientometrics. 2016;108(3):1651–1671.
  24. Traag V, Waltman L. Systematic analysis of agreement between metrics and peer review in the UK REF. ArXiv e-prints. 2018.
  25. Campbell D, Picard-Aitken M, Côté G, Caruso J, Valentim R, Edmonds S, et al. Bibliometrics as a Performance Measurement Tool for Research Evaluation: The Case of Research Funded by the National Cancer Institute of Canada. American Journal of Evaluation. 2010;31(1):66–83.
  26. Bollen J, Crandall D, Junk D, Ding Y, Börner K. From funding agencies to scientific agency. EMBO reports. 2014;15(2):131–133. pmid:24397931
  27. Bollen J. Who would you share your funding with? Nature. 2018;560:143–143. pmid:30089925
  28. Sandström U, den Besselaar PV. Funding, evaluation, and the performance of national research systems. Journal of Informetrics. 2018;12(1):365–384.
  29. Bertsekas DP, Nedić A, Ozdaglar AE. Convex Analysis and Optimization. Athena Scientific optimization and computation series. Athena Scientific; 2003. Available from: https://books.google.at/books?id=DaOFQgAACAAJ.
  30. Fortin JM, Currie DJ. Big Science vs. Little Science: How Scientific Impact Scales with Funding. PLOS ONE. 2013;8(6):1–9.
  31. Cimini G, Gabrielli A, Sylos Labini F. The Scientific Competitiveness of Nations. PLOS ONE. 2014;9(12):1–11.
  32. FWF Austrian Science Fund. START programme. 2018.
  33. Boyd S, Xiao L, Mutapcic A. Subgradient methods. Lecture notes of EE392o, Stanford University, Autumn Quarter. 2004.
  34. Vandenberghe L, Boyd S. Semidefinite Programming. SIAM Review. 1996;38(1):49–95.
  35. Hall JC. Current Biology. 2008;18:R101–R103.
  36. Ebadi A, Schiffauerova A. How to Receive More Funding for Your Research? Get Connected to the Right People! PLOS ONE. 2015;10(7):1–19.
  37. Smolin L. The Trouble With Physics: The Rise of String Theory, The Fall of a Science, and What Comes Next. Houghton Mifflin Harcourt; 2007. Available from: https://books.google.at/books?id=d6MIUlxY-qwC.
  38. Ruder S. An overview of gradient descent optimization algorithms. arXiv:1609.04747. 2016.
  39. Gillies D. Selecting applications for funding: why random choice is better than peer review. Roars Transactions, a Journal on Research Policy and Evaluation (RT). 2014;2.
  40. Boyd S, Vandenberghe L. Convex Optimization. Cambridge University Press; 2004.
  41. Duchi J. EE364b: Lecture Slides and Notes. https://web.stanford.edu/class/ee364b/lectures.html. 2018.