Peer Review History

Original Submission
September 6, 2023
Decision Letter - Tim Mathes, Editor
Transfer Alert

This paper was transferred from another journal. As a result, its full editorial history (including decision letters, peer reviews and author responses) may not be present.

PONE-D-23-26404
Estimating the replicability of highly cited clinical research (2004-2018)
PLOS ONE

Dear Dr. da Costa,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Mar 22 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Tim Mathes

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please be informed that funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript.

3. Please remove your figures from within your manuscript file, leaving only the individual TIFF/EPS image files, uploaded separately. These will be automatically included in the reviewers’ PDF.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

5. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Review of Estimating the replicability of highly cited clinical research (2004-2018)

This paper examines the replicability of highly cited (>2000 citations) clinical research over the period 2004-2018. In so doing, the paper attempts to provide evidence for or against prior work suggesting that highly cited articles are often unexpectedly contradicted or found to have inflated effects in subsequent research. The replicability of papers in their sample is evaluated by two criteria (with some reasonable exceptions): the presence of a statistically significant effect (p < 0.05) in the same direction, or a finding within the confidence interval (CI) of the original study's estimate.

Methodologically, the authors do not perform original replications. Rather, they cleverly search for “replications” by other authors that were published in PubMed, or as part of meta-analyses, and had a reasonable PICO alignment with the original study (Population, Intervention, Comparator and Outcome). They find that of the 89 highly cited studies they located in the literature, 24 had valid replications (17 meta-analyses and 7 primary studies), of which 21 (88%) had effect sizes with overlapping CIs. Of 15 highly cited studies with a statistically significant difference in the primary outcome, 13 (87%) had a significant effect in the replication as well. When both criteria were considered together, the replicability rate in the sample was 20 out of 24 (83%). No evidence for inflation was found.

The authors also compare the characteristics of replicating and non-replicating papers in an effort to provide descriptive data on how they may differ, such as in citations per year and effect size. Table 3 is a useful qualitative analysis of the papers studied that adds rich detail and context that few replication studies do. The writing is tight, and the methodological work does its best to deal with sampling problems and the use of meta-analysis data.

Bottom line, the authors find there was little evidence of inflation and the replicability rate of highly cited papers was higher than seen in previous studies.

I think the paper should be published, conditional on the following changes being made.

1. Report data limitations of the study up front. While it is admirable that the authors are repurposing the replications by other authors to cost-effectively examine the replication rate of highly cited papers, the authors must clearly state that the results must be cautiously interpreted and are preliminary. These highly cited papers were likely put through replication tests by other researchers for reasons that do not generalize (e.g., the papers are especially surprising, have a large or special population sample, present a first-of-its-kind clinical treatment, etc.), and the sample size of papers is small. Consequently, the authors must clearly report to readers that while their results show higher levels of replicability than previous work, the results rest on methodological and data constraints with unknown sample selection and sample size biases that limit generalization beyond the papers in the study, and that future research is needed before firm conclusions can be drawn.

2. Bring the study up to the literature. Currently, the paper lacks mention of work that makes it feel behind the literature. I would suggest updating the paper in several ways. First, the paper appears to ignore work that has already shown that replicating and non-replicating papers diffuse through the literature over their first 5 years at indistinguishable rates [1,2]. Second, with regard to arguments supporting your sampling issue, I would cite [3,4]. Finally, a paper that looked at highly cited papers in psychology over a 20-year period showed that highly cited papers replicate at a significantly higher frequency than papers with low citations, bolstering your finding of a higher rate of replication than found in earlier studies [5].

1. Yang Y, Youyou W, Uzzi B. Estimating the deep replicability of scientific findings using human and artificial intelligence. Proceedings of the National Academy of Sciences. 2020;117(20):10762-10768.

2. Hardwicke TE, Szűcs D, Thibault RT, et al. Citation patterns following a strongly contradictory replication result: Four case studies from psychology. Advances in Methods and Practices in Psychological Science. 2021;4(3):25152459211040837.

3. Isager PM, Van Aert R, Bahník Š, et al. Deciding what to replicate: A decision model for replication study selection under resource and knowledge constraints. Psychological Methods. 2021;

4. Cohn A, Fehr E, Maréchal MA. Selective participation may undermine replication attempts. Nature. 2019;575(7782):E1-E2.

5. Youyou W, Yang Y, Uzzi B. A discipline-wide investigation of the replicability of Psychology papers over the past two decades. Proc Natl Acad Sci. 2023;120(6):e2208863120. doi:10.1073/pnas.2208863120

Reviewer #2: In the context of the ‘reproducibility crisis’, it is highly important to examine how robust, reliable and replicable clinical trial results are. Furthermore, it also is necessary to examine how replicable the replicability analyses themselves are. In this vein, the present manuscript is a valuable addition to previous studies.

The main issue of the current manuscript is the low rate of replication, with matching studies being found for only 27% of primary studies. As only 24 study pairs could be analysed, the results come with some uncertainty. A second problem is the fact that many studies were replicated by meta-analyses, which included the index study and thus cannot be considered fully independent replications. The authors nevertheless address these two points well, so the present analysis is a very useful addition to the existing body of literature. I have only three minor comments:

1. On page 10, “overall response rates” are mentioned. It appears as if ‘objective response rates’ are meant.

2. On page 26, several reasons are given why “study pairs were … not perfect replicas of each other”. Perhaps it would be useful also to take a look at the duration of the index and replication studies, because it is common that short-term effects wane over time.

3. On page 25, the present analysis is judiciously compared with Ioannidis’ study from 2005. It would be helpful to know the median (and IQR) sample size of the index trials used in Ioannidis’ and those used here, because smaller trials are clearly less reliable than larger ones. It could well be that major journals have become more reluctant to publish smaller trials, even if they show surprisingly positive results. If so, the slightly higher replicability found in the present analysis could be interpreted as a possible improvement in publication policies (or citation patterns).

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Revision 1

Dear Dr. Mathes,

Please find the responses to the reviewer comments below. We hope that our changes to the manuscript will make it acceptable for publication in PLOS ONE.

Yours sincerely,

Gabriel Gonçalves da Costa

Reviewer #1:

This paper examines the replicability of highly cited (>2000 citations) clinical research over the period 2004-2018. In so doing, the paper attempts to provide evidence for or against prior work suggesting that highly cited articles are often unexpectedly contradicted or found to have inflated effects in subsequent research. The replicability of papers in their sample is evaluated by two criteria (with some reasonable exceptions): the presence of a statistically significant effect (p < 0.05) in the same direction, or a finding within the confidence interval (CI) of the original study's estimate.

Methodologically, the authors do not perform original replications. Rather, they cleverly search for “replications” by other authors that were published in PubMed, or as part of meta-analyses, and had a reasonable PICO alignment with the original study (Population, Intervention, Comparator and Outcome). They find that of the 89 highly cited studies they located in the literature, 24 had valid replications (17 meta-analyses and 7 primary studies), of which 21 (88%) had effect sizes with overlapping CIs. Of 15 highly cited studies with a statistically significant difference in the primary outcome, 13 (87%) had a significant effect in the replication as well. When both criteria were considered together, the replicability rate in the sample was 20 out of 24 (83%). No evidence for inflation was found.

The authors also compare the characteristics of replicating and non-replicating papers in an effort to provide descriptive data on how they may differ, such as in citations per year and effect size. Table 3 is a useful qualitative analysis of the papers studied that adds rich detail and context that few replication studies do. The writing is tight, and the methodological work does its best to deal with sampling problems and the use of meta-analysis data.

Bottom line, the authors find there was little evidence of inflation and the replicability rate of highly cited papers was higher than seen in previous studies.

I think the paper should be published, conditional on the following changes being made.

1. Report data limitations of the study up front. While it is admirable that the authors are repurposing the replications by other authors to cost-effectively examine the replication rate of highly cited papers, the authors must clearly state that the results must be cautiously interpreted and are preliminary. These highly cited papers were likely put through replication tests by other researchers for reasons that do not generalize (e.g., the papers are especially surprising, have a large or special population sample, present a first-of-its-kind clinical treatment, etc.), and the sample size of papers is small. Consequently, the authors must clearly report to readers that while their results show higher levels of replicability than previous work, the results rest on methodological and data constraints with unknown sample selection and sample size biases that limit generalization beyond the papers in the study, and that future research is needed before firm conclusions can be drawn.

We thank the reviewer for the comment, and generally agree with the limitations pointed out above, both in terms of the selectivity (and potential biases) of the sample and the small sample size. These limitations are now pointed out more explicitly in the abstract (page 2), discussion (page 29) and conclusion (page 29), using some of the references suggested in subsequent comments by the reviewer.

2. Bring the study up to the literature. Currently, the paper lacks mention of work that makes it feel behind the literature. I would suggest updating the paper in several ways. First, the paper appears to ignore work that has already shown that replicating and non-replicating papers diffuse through the literature over their first 5 years at indistinguishable rates [1,2].

1. Yang Y, Youyou W, Uzzi B. Estimating the deep replicability of scientific findings using human and artificial intelligence. Proceedings of the National Academy of Sciences. 2020;117(20):10762-10768.

2. Hardwicke TE, Szűcs D, Thibault RT, et al. Citation patterns following a strongly contradictory replication result: Four case studies from psychology. Advances in Methods and Practices in Psychological Science. 2021;4(3):25152459211040837.

Both of these studies are now cited in the discussion (pages 28-29), as well as other studies that have looked at this issue, namely Serrano-Garcia & Gneezy, 2021 (https://doi.org/10.1126/sciadv.abd1705), Youyou et al., 2023 (https://doi.org/10.1073/pnas.2208863120) and Tatsioni et al., 2007 (https://doi.org/10.1001/jama.298.21.2517).

Second, with regard to arguments supporting your sampling issue, I would cite [3,4].

3. Isager PM, Van Aert R, Bahník Š, et al. Deciding what to replicate: A decision model for replication study selection under resource and knowledge constraints. Psychological Methods. 2021;

4. Cohn A, Fehr E, Maréchal MA. Selective participation may undermine replication attempts. Nature. 2019;575(7782):E1-E2.

We have cited Isager et al.’s study as suggested (page 28), although we also note that this is a recent proposal from another field of research, and that it is not clear how much the issues brought up are actually taken into account by clinical researchers building upon previous work. Cohn et al.’s study is added as well on page 27.

Finally, a paper that looked at highly cited papers in psychology over a 20-year period showed that highly cited papers replicate at a significantly higher frequency than papers with low citations, bolstering your finding of a higher rate of replication than found in earlier studies [5].

5. Youyou W, Yang Y, Uzzi B. A discipline-wide investigation of the replicability of Psychology papers over the past two decades. Proc Natl Acad Sci. 2023;120(6):e2208863120. doi:10.1073/pnas.2208863120

As far as we could understand, the finding in this paper is that work from highly cited authors has a higher rate of replication success. The correlation between article citations and replicability was found to be non-significant for empirically replicated papers, and weakly negative for the authors’ metric of predicted replication success. This article is now cited along with references 1 and 2 in the discussion of possible selection bias (page 28).

Reviewer #2:

In the context of the ‘reproducibility crisis’, it is highly important to examine how robust, reliable and replicable clinical trial results are. Furthermore, it also is necessary to examine how replicable the replicability analyses themselves are. In this vein, the present manuscript is a valuable addition to previous studies.

The main issue of the current manuscript is the low rate of replication, with matching studies being found for only 27% of primary studies. As only 24 study pairs could be analysed, the results come with some uncertainty. A second problem is the fact that many studies were replicated by meta-analyses, which included the index study and thus cannot be considered fully independent replications. The authors nevertheless address these two points well, so the present analysis is a very useful addition to the existing body of literature. I have only three minor comments:

1. On page 10, “overall response rates” are mentioned. It appears as if ‘objective response rates’ are meant.

We thank the reviewer for pointing out this mistake, which has been corrected in the revised version (page 9). We noted the same mistake in page 10 and corrected it as well.

2. On page 26, several reasons are given why “study pairs were … not perfect replicas of each other”. Perhaps it would be useful also to take a look at the duration of the index and replication studies, because it is common that short-term effects wane over time.

This is an interesting suggestion, which led us to try to systematically compare follow-up duration between initial studies and replications. However, this turned out to be much more complicated than we thought. Many papers are based on survival analyses (particularly applied to progression-free survival in cancer studies), in which the duration of the study (measured as the median or maximum follow-up) is inevitably linked to the treatment’s efficacy: thus, longer studies by definition correlate with larger treatment effects. Moreover, many of the replications are meta-analyses, in which the duration of the included studies is variable – and not necessarily measured by commensurable metrics (e.g. some studies might use survival analyses, while others might use evaluations at a fixed point in time).

Mean duration could be compared for some study pairs and did not seem to be consistently different in these; however, we did not want to base any hard conclusions on those few studies, and chose not to include this analysis in the revised manuscript. That said, we did include a sentence in the discussion (page 23) to acknowledge the possibility that variation in these or other methodological issues could influence results, as suggested by the reviewer.

3. On page 25, the present analysis is judiciously compared with Ioannidis’ study from 2005. It would be helpful to know the median (and IQR) sample size of the index trials used in Ioannidis’ and those used here, because smaller trials are clearly less reliable than larger ones. It could well be that major journals have become more reluctant to publish smaller trials, even if they show surprisingly positive results. If so, the slightly higher replicability found in the present analysis could be interpreted as a possible improvement in publication policies (or citation patterns).

We have performed the analysis, which shows that the median sample size in Ioannidis’s sample is actually larger (median = 1500, IQR = 633–4382) than in ours (median = 332, IQR = 194–730). Thus, the difference (which may be partly due to the presence of cohort studies and to the lower prevalence of phase 1 studies, which were highly prevalent in our sample) does not seem to support the hypothesis brought up by the reviewer (i.e., that journals have become more reluctant to publish smaller trials). This is now mentioned in the discussion section (page 22).

Other changes:

- In the process of revising the manuscript, we realized that some highly cited studies had their sample sizes missing in our primary data file. This led to a minor revision in the third row of Table 6 (predictors of replication success) and Table S4, which does not qualitatively change the conclusions of our analysis (which remains underpowered to accurately detect predictors).

- Small clarifications or updated references were added to other parts of the manuscript (i.e., pages 23-25).

Attachments
Attachment
Submitted filename: Responses to Reviewers - Costa et al.docx
Decision Letter - Tim Mathes, Editor

Estimating the replicability of highly cited clinical research (2004-2018)

PONE-D-23-26404R1

Dear Dr. da Costa,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Tim Mathes

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have addressed all my comments sufficiently. I am satisfied with the revision and recommend that the paper be accepted.

Reviewer #2: The reviewer thanks the authors for revising the manuscript in accordance to the reviewers' comments.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

Formally Accepted
Acceptance Letter - Tim Mathes, Editor

PONE-D-23-26404R1

PLOS ONE

Dear Dr. da Costa,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Tim Mathes

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.