Peer Review History

Original Submission
January 13, 2025
Decision Letter - Ulf Sandström, Editor

PONE-D-25-01557
Registered report protocol: Factors associated with inter-rater agreement in grant peer review
PLOS ONE

Dear Dr. Hesselberg,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Apr 28 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Ulf Sandström

Academic Editor

PLOS ONE

Journal Requirements:

1. When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections does not match.

When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section.

3. Thank you for stating the following financial disclosure: [The study is part of JOH's PhD. JOH is the Chief Program Officer at the Norwegian Foundation Dam (www.dam.no) and the PhD is funded through his salary at the foundation. The foundation has also contributed data to the study].

Please state what role the funders took in the study. If the funders had no role, please state: “The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.”

If this statement is not correct you must amend it as needed.

Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

4. Thank you for stating the following in the Competing Interests section: [JOH's and SHI's roles as the Chief Program Officer and Senior Advisor at one of the funders (Foundation Dam) in the study might be perceived as a competing interest.].

We note that one or more of the authors have an affiliation to the commercial funders of this research study. [Foundation Dam].

1. Please provide an amended Funding Statement declaring this commercial affiliation, as well as a statement regarding the Role of Funders in your study. If the funding organization did not play a role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries and/or research materials, please review your statements relating to the author contributions, and ensure you have specifically and accurately indicated the role(s) that these authors had in your study. You can update author roles in the Author Contributions section of the online submission form.

Please also include the following statement within your amended Funding Statement.

“The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the ‘author contributions’ section.”

If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement.

2. Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc.

Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: “This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If this adherence statement is not accurate and there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf.

5. We note that you have indicated that there are restrictions to data sharing for this study. For studies involving human research participant data or other sensitive data, we encourage authors to share de-identified or anonymized data. However, when data cannot be publicly shared for ethical reasons, we allow authors to make their data sets available upon request. For information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

Before we proceed with your manuscript, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., a Research Ethics Committee or Institutional Review Board, etc.). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. Please see http://www.bmj.com/content/340/bmj.c181.long for guidelines on how to de-identify and prepare clinical data for publication. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories. You also have the option of uploading the data as Supporting Information files, but we would recommend depositing data directly to a data repository if possible.

Please update your Data Availability statement in the submission form accordingly.

6. We are unable to open your Supporting Information file [S1 Script.R and S2 Script.Rmd]. Please kindly revise as necessary and re-upload.

7. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Does the manuscript provide a valid rationale for the proposed study, with clearly identified and justified research questions?

The research question outlined is expected to address a valid academic problem or topic and contribute to the base of knowledge in the field.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Is the protocol technically sound and planned in a manner that will lead to a meaningful outcome and allow testing the stated hypotheses?

The manuscript should describe the methods in sufficient detail to prevent undisclosed flexibility in the experimental procedure or analysis pipeline, including sufficient outcome-neutral conditions (e.g. necessary controls, absence of floor or ceiling effects) to test the proposed hypotheses and a statistical power analysis where applicable. As there may be aspects of the methodology and analysis which can only be refined once the work is undertaken, authors should outline potential assumptions and explicitly describe what aspects of the proposed analyses, if any, are exploratory.

Reviewer #1: Partly

Reviewer #2: Yes

**********

3. Is the methodology feasible and described in sufficient detail to allow the work to be replicable?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors described where all data underlying the findings will be made available when the study is complete?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception, at the time of publication. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above and, if applicable, provide comments about issues authors must address before this protocol can be accepted for publication. You may also include additional comments for the author, including concerns about research or publication ethics.

You may also provide optional suggestions and comments to authors that they might find helpful in planning their study.

(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: See attachment

Reviewer #2: The report protocol presents a very interesting research plan.

My only observations at this stage regard the following.

1. Novelty statements:

“there is a lack of studies with the aim of identifying how multiple factors contribute to low agreement. To our knowledge only the above-mentioned study that used data from 23,414 ratings at the Austrian Science Fund (10) has done this”

and

“To our knowledge this will be the first study on factors associated with agreement, that uses data from multiple funders”

The cited paper from Seeber et al. (2021):

- does consider multiple factors (possibly some more are relevant for this study)

- combines data from four programs from two funders: three from MSCA and one from COST

I agree instead that it is the first study with “predetermined research questions, methods and code.”

2. One additional comment regards the dependent variable: perhaps you could consider the gap in score between pairs of reviewers. This would provide a more robust test of the similarity hypothesis. Suppose we have three reviewers, two male and one female; one could test whether agreement is lower for the male-male pair than for the two male-female pairs.

3. You could also clarify which stage of the evaluation the analysis is based on, considering that at least DAM has a two-stage process (Seeber, Svege, Hesselberg 2024).
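The pairwise-gap test suggested in comment 2 above could be sketched as follows. All reviewers, genders, and scores here are invented for illustration; they are not taken from the study's data:

```python
from itertools import combinations

# For one hypothetical application, compare score gaps within
# same-gender reviewer pairs against mixed-gender pairs.
reviews = [("m", 5), ("m", 3), ("f", 4)]  # (reviewer gender, score)

same_gender_gaps, mixed_gender_gaps = [], []
for (g1, s1), (g2, s2) in combinations(reviews, 2):
    gap = abs(s1 - s2)  # absolute score gap for this reviewer pair
    if g1 == g2:
        same_gender_gaps.append(gap)
    else:
        mixed_gender_gaps.append(gap)

print(same_gender_gaps, mixed_gender_gaps)  # [2] [1, 1]
```

Aggregated over many applications, the two lists of gaps could then be compared to test whether similar reviewers agree more.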

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Steven Wooding

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachments
Attachment
Submitted filename: Protocol review.docx
Revision 1

Is the rationale for the proposed study clear and valid?

The rationale is generally clear and well explained.

The authors helpfully raise the idea of helpful and unhelpful disagreement from line 55. However, they then dispense with it on line 63, stating that ‘even in the utopian case of all disagreement being of the desirable type, there are limits to how much disagreement can be tolerated’ - this seems conceptually wrong. If all the disagreement is because of the different perspectives being brought to bear, then this is important and not information that should be thrown away, or considered the fault of the system of assessment. Their metaphor is also problematic - if the only thermometer available varies by 3°C and you need to know your body temperature, then you will use it, but you will have to take many measurements to find the true temperature. I think this section needs a little revision to take into account that there are (at least) two forms of disagreement:

1. Reviewers working from the same premise and experience and coming to different conclusions (unhelpful)

2. Reviewers with different experience/expertise who would be expected to see a different set of issues (helpful and must be managed)

The current project only examines and quantifies these two types of disagreement aggregated together - which is still a valuable contribution (it might be possible to separate these forms of disagreement using the dataset the authors have - see suggestion in next section).

Answer from authors (JOH):

Thank you for this valuable feedback. We agree that “helpful” disagreement, stemming from legitimate differences in perspective or expertise, should not necessarily be minimized. At the same time, high levels of disagreement—whatever the cause—will diminish confidence in final funding decisions and will make the application process unreliable.

We think the apparent paradox might be solved by stressing the difference between disagreement in single cases and in the decision-making process overall and have tried to make this clearer (also in the metaphor).

Unfortunately, our dataset does not include enough reviewer-level background information to separate “helpful” from “unhelpful” disagreement in a rigorous manner, and in practice, it is often difficult to draw that line. We will revise the text to clarify that our measure of disagreement inevitably encompasses both desirable and undesirable forms, and we will note the importance of future research aimed at distinguishing these two types more precisely.

Is the protocol technically sound?

The protocol is largely technically sound, I have two main concerns:

1. The level of analysis. My impression (and apologies if I’ve missed something) is that the response/dependent variable (“Disagreement”) is being calculated at the review level (hence the ij subscript); but all the explanatory variables are being calculated at the application level (i.e. across the pool of reviewers providing reviews for that application). I think the response variable and explanatory variables have to be calculated for the same entities. If “Disagreement” is being calculated at the application level by aggregation, this needs to be explained.

Answer from authors (JOH):

Thank you for pointing this out. We agree it is crucial to clarify precisely how the outcome (disagreement) and the explanatory variables align in our analysis.

We do not believe that the outcome (response variable) and all predictors must be computed at the exact same level. In multilevel or cross-classified models, it’s valid to have an outcome measured at one level (e.g., the review level) while incorporating explanatory variables that reflect higher-level or lower-level characteristics (e.g., application-, program-, or reviewer-level factors).

What matters is that each observation in the dataset has a properly aligned value for the response variable and for each explanatory variable.

If set up correctly, the modeling framework (e.g., cross-classified random effects) accounts for the fact that reviews are simultaneously nested within both reviewers and applications.
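The alignment described here can be shown with a minimal sketch: each row sits at the review level and carries the review-level outcome, predictors merged in from the application and reviewer levels, and the two grouping factors a cross-classified model needs. All identifiers and values below are hypothetical, not taken from the study's data:

```python
# Hypothetical application- and reviewer-level attributes
applications = {"A1": {"grant_size": 500_000}}
reviewers = {"R1": {"experience": 4}, "R2": {"experience": 1}}

# Review-level observations: one row per review
reviews = [
    {"app": "A1", "rev": "R1", "score": 5},
    {"app": "A1", "rev": "R2", "score": 2},
]

# Per-application mean score (here there is only one application)
mean_score = sum(r["score"] for r in reviews) / len(reviews)

# Build the analysis rows: outcome and predictors aligned per review
rows = []
for r in reviews:
    rows.append({
        "disagreement": abs(r["score"] - mean_score),        # review-level outcome
        "grant_size": applications[r["app"]]["grant_size"],  # application-level predictor
        "experience": reviewers[r["rev"]]["experience"],     # reviewer-level predictor
        "app_id": r["app"],  # grouping factor: reviews nested in applications
        "rev_id": r["rev"],  # grouping factor: reviews nested in reviewers
    })
print(rows)
```

In a cross-classified random-effects model, `app_id` and `rev_id` would supply the two crossed grouping factors, while each row keeps a properly aligned value for the response and every predictor.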

2. Multiple hypothesis testing. The authors suggest they will use p ≤ 0.05 significance testing - but they are conducting multiple tests (at least 16, one for each explanatory variable and presumably one for each of the seven “Research Areas”; and will they test the effect of time in each area?). I think this means they need to correct for testing multiple hypotheses - as with 16 tests at p ≤ 0.05 stringency they are quite likely to reject a null hypothesis simply by chance. There are approaches available to correct for multiple hypothesis testing in frequentist frameworks (although I am not very familiar with them) - an alternative would be to move to a Bayesian approach to analysis.

Answer from authors:

Thank you for pointing this out. We agree that correcting for multiple comparisons is important. We suggest using the Benjamini–Hochberg procedure, adjusting the analysis script accordingly, and adding the following to the protocol:

“We will use the Benjamini–Hochberg (BH) approach to correct for multiple comparisons because it controls the false discovery rate rather than the family-wise error rate—providing a balance between discovering genuine associations and avoiding false positives. By contrast, Bonferroni corrections often become overly conservative in settings with many tests, increasing the risk of overlooking real effects (Benjamini & Hochberg, 1995).”
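The BH procedure can be implemented in a few lines of plain Python. The p-values below are the worked example from Benjamini & Hochberg (1995), not results from this study:

```python
def benjamini_hochberg(p_values, q=0.05):
    """Return a list of booleans, True where the null hypothesis is
    rejected while controlling the false discovery rate at level q."""
    m = len(p_values)
    # Indices of p-values sorted in ascending order
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k / m) * q
    max_k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= (rank / m) * q:
            max_k = rank
    # Reject the hypotheses with the max_k smallest p-values
    reject = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= max_k:
            reject[idx] = True
    return reject

pvals = [0.0001, 0.0004, 0.0019, 0.0095, 0.0201, 0.0278, 0.0298,
         0.0344, 0.0459, 0.3240, 0.4262, 0.5719, 0.6528, 0.7590, 1.0000]
print(sum(benjamini_hochberg(pvals)))  # 4 hypotheses rejected
```

For comparison, a Bonferroni threshold of 0.05/15 ≈ 0.0033 would reject only the three smallest p-values here, which illustrates why BH is described as less conservative.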

Minor suggestions

1. Grant size is likely to be hugely variable over orders of magnitude, so it might make sense to transform this variable before statistical analysis - e.g. by using log(grant size) - this would also be true for the ‘variability’ in grant size.

Answer from authors:

Thank you for pointing this out. Yes, we suspect that the data regarding application amount will be right-skewed, and that log-transformation is a good solution if this is the case. We suggest adding the following reservation to the methods section: “The distribution of scores for each variable will be examined prior to inclusion in the analyses. Variables that deviate considerably from a normal distribution will be considered for transformation to achieve a more normal distribution before inclusion.”
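The kind of check described can be illustrated with a simple skewness comparison before and after log-transformation. The grant amounts below are hypothetical values spanning orders of magnitude:

```python
import math
import statistics

# Hypothetical grant amounts spanning orders of magnitude
amounts = [50_000, 120_000, 300_000, 750_000, 2_000_000, 10_000_000]

def skewness(xs):
    """Fisher-Pearson moment coefficient of skewness."""
    mean, sd = statistics.fmean(xs), statistics.pstdev(xs)
    return sum(((x - mean) / sd) ** 3 for x in xs) / len(xs)

log_amounts = [math.log(x) for x in amounts]
# Raw amounts are strongly right-skewed; logged amounts are near-symmetric
print(skewness(amounts), skewness(log_amounts))
```

On these illustrative values, the raw amounts show strong positive skew while the log-transformed amounts are close to symmetric, which is the pattern that would trigger the transformation described in the reservation.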

2. Is Research Area being treated as categorical, or can one grant be from multiple areas? The table at line 374 suggests there might be multiple TRUE Research areas.

Answer from authors (JOH):

Thank you for highlighting this point. Indeed, some applications in our dataset span multiple research areas—for instance, an interdisciplinary grant that combines social sciences and technology. Consequently, we code each area as a separate binary (TRUE/FALSE) variable rather than forcing each application into a single category. This means an application can have multiple “TRUE” values if it legitimately covers more than one area. We will clarify this in the Methods section to avoid confusion.

We have added sub-numbering to the variables in table 2. Each of the variables under 3 (Research area) is numbered 3a to 3g. This is also done in the analysis script.

Possible additions

1. Different types of disagreement - if information were available on the field of expertise of the reviewers (which may well not be), then it might be possible to see whether reviewers with similar expertise were more likely to agree when reviewing the same application than reviewers with differing expertise. I don’t think this is quite the same as looking to see if applications that have more reviewers of the same type have more similar review scores.

Answer from authors:

Thank you. We agree that this is an important distinction and an interesting research question. However, we do not have the information needed in our data set to perform the suggested analyses. We will, however, point out in the discussion that this is a topic for further research.

Will it effectively achieve its aims, and test the stated hypotheses?

If my two concerns about the technical soundness of the protocol are addressed it will test the stated hypotheses.

Is the methodology feasible and detailed enough to make the work replicable?

Yes

General comments

In the introduction the authors use the term ‘reliability’ - but it is not clear what is being warranted by a reliable score - I think the terminology of ‘agreement between reviewers’, which they adopt in much of the rest of the article, is clearer. As they do use ‘agreement’ in the next section, it should probably be changed in the initial section.

Answer from authors:

Thank you for raising this terminology issue. We agree that “inter-rater agreement” may be clearer and less ambiguous than “reliability” in this context. We have made revisions in the introduction to use “agreement” (and explicitly link it to the broader concept of reliability where appropriate) to maintain consistency throughout the manuscript and avoid confusion.

Line 139 - ‘Is reviewer experience of reviewing positively associated with agreement?’ is clearer

Answer from authors:

Thank you. This has been adjusted.

Line 207 - ‘Is the reviewer’s level of specialist knowledge associated with agreement?’ is clearer

Answer from authors:

Thank you. This has been corrected throughout the text.

Line 302 - I think ‘overall’ score might be better than ‘total’ score, as I don’t think things are being added together/totalled to provide the ‘total’ score.

Answer from authors:

Thank you. This has been adjusted.

Figure 1 - there are two ‘Level 2’s in the diagram

Answer from authors:

Thank you for pointing this out. We have changed this to Level 2a (application) and Level 2b (reviewer) in the protocol, the figure and the script.

Attachments
Attachment
Submitted filename: Response to Reviewer.docx
Decision Letter - Ulf Sandström, Editor

Registered report protocol: Factors associated with inter-rater agreement in grant peer review

PONE-D-25-01557R1

Dear Dr. Hesselberg,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information’ link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Ulf Sandström

Academic Editor

PLOS ONE

Formally Accepted
Acceptance Letter - Ulf Sandström, Editor

PONE-D-25-01557R1

PLOS ONE

Dear Dr. Hesselberg,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Ulf Sandström

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.