### I'm very confused...

#### Posted by pamedeo on 30 Jul 2008 at 03:33 GMT

I'd appreciate it if the authors could explain Figure 3 to me, especially the lower-right panel.
If a random sample of at least 10 of the 40 voters voted 5, how can the average vote of all 40 voters be 1?
Even in the statistically least likely case, in which all the other 30 voters voted 1, the average vote of the whole population would be 2: (10 x 5 + 30 x 1) / 40 = 80 / 40 = 2.
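Paolo's lower bound can be checked directly; a minimal sketch, assuming the 1-5 voting scale discussed in the thread:

```python
# Minimum possible average for 40 voters when at least 10 of them voted 5
# and all of the remaining 30 voted the lowest score on the assumed 1-5 scale.
scores = [5] * 10 + [1] * 30
average = sum(scores) / len(scores)
print(average)  # 2.0
```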
Anyhow, I'm quite skeptical that increasing the number of reviewers to 4 would improve the process much. It would also make it harder to assemble review panels large enough to stand the load.
I agree about shorter and better-written proposals.
Sadly, in general, there are more well-deserving grant proposals than funds. Therefore, instead of trying to further "rationalize" a process that is extremely subjective, it would probably be better to draw the winners at random from among all the proposals that received good scores.

Paolo

### RE: I'm very confused...

#### dakaplan replied to pamedeo on 01 Aug 2008 at 14:16 GMT

Figure 3 shows the rank order based on the scores, not the actual scores. The point of the figure is that the rank order changes erratically as reviewers are added. Moreover, the initial rank orders from the first 10 reviewers do not faithfully approximate the rank orders obtained when more than 30 reviewers are used. We conclude that there is a significant degree of arbitrary decision-making in NIH peer review.
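The instability described above can be illustrated with a toy simulation (all numbers below are hypothetical and chosen for illustration, not taken from the paper): five proposals receive noisy 1-5 scores, and the rank order is recomputed as reviewers accumulate.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible
n_proposals, n_reviewers = 5, 40

# Hypothetical underlying qualities; each reviewer's score is the quality
# plus noise, rounded and clipped to the 1-5 scale.
quality = [4.5, 4.0, 3.5, 3.0, 2.5]
scores = [[min(5, max(1, round(q + random.gauss(0, 1.2))))
           for _ in range(n_reviewers)] for q in quality]

def rank_order(k):
    """Proposal indices, best first, ranked by mean score of the first k reviewers."""
    means = [sum(s[:k]) / k for s in scores]
    return sorted(range(n_proposals), key=lambda i: -means[i])

# With few reviewers the ordering often disagrees with the full-panel ordering.
for k in (3, 10, 40):
    print(k, rank_order(k))
```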

We believe a statistically sound system of peer review can be established. Of course, it is first necessary to recognize that the current structure of peer review at NIH and other funding agencies does not rest on robust statistics. We are not in favor of a system of random drawing for making social choices. The current NIH system gives administrators incredible discretion to make decisions.

### RE: RE: I'm very confused...

#### pamedeo replied to dakaplan on 01 Aug 2008 at 19:01 GMT

Hmm... So the final ranking of the 5 movie proposals would be 1, 2, 4, 4, 4, the sums of the 40 scores being exactly the same for each of three proposals?!
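The "1, 2, 4, 4, 4" pattern corresponds to average (fractional) ranking, in which tied totals share the mean of the positions they would occupy; a minimal sketch with hypothetical score sums (the actual totals are not given in the thread):

```python
# Hypothetical sums of 40 scores for 5 proposals; the last three are tied.
totals = [190, 170, 150, 150, 150]

def average_ranks(values):
    """Rank highest-first; tied values get the mean of the positions they span."""
    ordered = sorted(values, reverse=True)
    return [sum(i + 1 for i, v in enumerate(ordered) if v == x) / ordered.count(x)
            for x in values]

print(average_ranks(totals))  # [1.0, 2.0, 4.0, 4.0, 4.0]
```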

Anyhow, I don't want to defend the NIH review approach. However, my point is that I really don't think your concept of reviewing is correct: reviewing grants means evaluating an extremely heterogeneous sample. In short, it means comparing apples with peaches and bananas. It is very different from reviewing the bids for a contract, where all the key characteristics of the proposal are defined in advance and the bidders can only enrich them with some extras.
Each person or group applying for a grant has much more freedom in what can be proposed under a given theme. Take an imaginary example with two well-written, properly documented proposals: deciding whether it is more important to fund a machine that makes tomato juice from tomatoes still on the plant, or a study of a machine that harvests grapes by cutting the vines with a laser beam, leaves a lot of room for subjectivity. The outcome would probably be biased by the reviewers' taste for grapes, wine, and tomato juice...

Yes, adding a fourth reviewer would probably help to discard the score of a bad reviewer who either misinterpreted the proposal or didn't pay enough attention to the job (although it is always nice to have an odd number of reviewers, so that a single vote can tip the balance).
The main purpose of a review panel meeting is to separate the proposals that definitely don't deserve attention from the better ones. There is also an attempt at providing an approximate global ranking of the good ones, based on criteria such as their relevance, degree of innovation, technical and environmental "soundness", and their impact on the scientific community. The review panel does not judge the proposals on their budget requests. However, after voting on each grant, comments are collected about budgeting issues: whether the money and time requested are proportionate to the proposed work. The panel can therefore recommend increasing or reducing the budget of a given grant.
Another thing that is outside the scope of the review panel is evaluating how the different grants overlap.
Administrators then have the task of putting this jigsaw puzzle together: choosing the proposals that overlap the least in scope and provide the most impact to the community, while making sure there is enough money to fund the selected ones.
Personally, I prefer allowing them this kind of discretion to having them fund three grants on three machines that produce tomato juice directly in the field while completely ignoring the grape-harvesting problem.

Paolo