Peer Review History
| Original Submission, August 20, 2024 |
|---|
|
Dear Dr Hall-McMaster,

Thank you for submitting your manuscript entitled "Neural Prioritisation of Past Solutions Supports Generalisation" for consideration as a Research Article by PLOS Biology. Your manuscript has now been evaluated by the PLOS Biology editorial staff and I am writing to let you know that we would like to send your submission out for external peer review.

However, before we can send your manuscript to reviewers, we need you to complete your submission by providing the metadata that is required for full assessment. To this end, please log in to Editorial Manager, where you will find the paper in the 'Submissions Needing Revisions' folder on your homepage. Please click 'Revise Submission' from the Action Links and complete all additional questions in the submission questionnaire. Once your full submission is complete, your paper will undergo a series of checks in preparation for peer review. After your manuscript has passed the checks, it will be sent out for review. To provide the metadata for your submission, please log in to Editorial Manager (https://www.editorialmanager.com/pbiology) within two working days, i.e. by Sep 06 2024 11:59PM.

If your manuscript has been previously peer-reviewed at another journal, PLOS Biology is willing to work with those reviews in order to avoid re-starting the process. Submission of the previous reviews is entirely optional, and our ability to use them effectively will depend on the willingness of the previous journal to confirm the content of the reports and share the reviewer identities. Please note that we reserve the right to invite additional reviewers if we consider that additional/independent reviewers are needed, although we aim to avoid this as far as possible. In our experience, working with previous reviews does save time.
If you would like us to consider previous reviewer reports, please edit your cover letter to let us know and include the name of the journal where the work was previously considered and the manuscript ID it was given. In addition, please upload a response to the reviews as a 'Prior Peer Review' file type, which should include the reports in full and a point-by-point reply detailing how you have addressed, or plan to address, the reviewers' concerns.

During the process of completing your manuscript submission, you will be invited to opt in to posting your pre-review manuscript as a bioRxiv preprint. Visit http://journals.plos.org/plosbiology/s/preprints for full details. If you consent to posting your current manuscript as a preprint, please upload a single Preprint PDF.

Feel free to email us at plosbiology@plos.org if you have any queries relating to your submission.

Kind regards,
Christian

Christian Schnell, PhD
Senior Editor
PLOS Biology
cschnell@plos.org |
| Revision 1 |
|
Dear Sam,

Thank you for your patience while your manuscript "Neural Prioritisation of Past Solutions Supports Generalisation" was peer-reviewed at PLOS Biology. I'd like to apologize again for this very long delay. As I mentioned before, we had difficulties with multiple reviewers dropping out at different stages in the process, meaning I had to sign on new reviewers late in the process. In any case, your manuscript has now been evaluated by the PLOS Biology editors, an Academic Editor with relevant expertise, and by several independent reviewers.

In light of the reviews, which you will find at the end of this email, we would like to invite you to revise the work to thoroughly address the reviewers' reports. As you will see below, the reviewers think that your study is well executed and provides important insights. Reviewer 1 requests only textual clarifications and additional discussion, while Reviewer 2 requests a deeper examination of both the behavioral and neural data to strengthen the study's claims and provide further support for the conclusions. We encourage you to revise your manuscript carefully in light of the reviewers' detailed suggestions.

Given the extent of revision needed, we cannot make a decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is likely to be sent for further evaluation by all or a subset of the reviewers. We expect to receive your revised manuscript within 3 months. Please email us (plosbiology@plos.org) if you have any questions or concerns, or would like to request an extension. At this stage, your manuscript remains formally under active consideration at our journal; please notify us by email if you do not intend to submit a revision so that we may withdraw it.

**IMPORTANT - SUBMITTING YOUR REVISION**

Your revisions should address the specific points made by each reviewer. Please submit the following files along with your revised manuscript:

1. A 'Response to Reviewers' file - this should detail your responses to the editorial requests, present a point-by-point response to all of the reviewers' comments, and indicate the changes made to the manuscript. *NOTE: In your point-by-point response to the reviewers, please provide the full context of each review. Do not selectively quote paragraphs or sentences to reply to. The entire set of reviewer comments should be present in full, and each specific point should be responded to individually, point by point. You should also cite any additional relevant literature that has been published since the original submission and mention any additional citations in your response.

2. In addition to a clean copy of the manuscript, please also upload a 'track-changes' version of your manuscript that specifies the edits made. This should be uploaded as a "Revised Article with Changes Highlighted" file type.

*Re-submission Checklist*

When you are ready to resubmit your revised manuscript, please refer to this re-submission checklist: https://plos.io/Biology_Checklist

To submit a revised version of your manuscript, please go to https://www.editorialmanager.com/pbiology/ and log in as an Author. Click the link labelled 'Submissions Needing Revision' where you will find your submission record.

Please make sure to read the following important policies and guidelines while preparing your revision:

*Published Peer Review*

Please note while forming your response that, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.
Please see here for more details: https://blogs.plos.org/plos/2019/05/plos-journals-now-open-for-published-peer-review/

*PLOS Data Policy*

Please note that as a condition of publication, PLOS' data policy (http://journals.plos.org/plosbiology/s/data-availability) requires that you make available all data used to draw the conclusions arrived at in your manuscript. If you have not already done so, you must include any data used in your manuscript either in appropriate repositories, within the body of the manuscript, or as supporting information (N.B. this includes any numerical values that were used to generate graphs, histograms etc.). For an example, see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5

*Blot and Gel Data Policy*

We require the original, uncropped and minimally adjusted images supporting all blot and gel results reported in an article's figures or Supporting Information files. We will require these files before a manuscript can be accepted, so please prepare them now if you have not already uploaded them. Please carefully read our guidelines for how to prepare and upload this data: https://journals.plos.org/plosbiology/s/figures#loc-blot-and-gel-reporting-requirements

*Protocols deposition*

To enhance the reproducibility of your results, we recommend that, if applicable, you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Thank you again for your submission to our journal. We hope that our editorial process has been constructive thus far, and we welcome your feedback at any time.
Please don't hesitate to contact us if you have any questions or comments.

Sincerely,
Christian

Christian Schnell, PhD
Senior Editor
PLOS Biology
cschnell@plos.org

------------------------------------

REVIEWS:

Reviewer #1: In the current paper, the authors investigate an interesting and timely question about the neural basis of a particular form of generalization, in which human participants could re-use previous solutions in new problems. Behavior matched the predictions of their SF&GPI model, as previously validated behaviorally in an earlier publication. fMRI results focused on decoding representations of choice options, where the previously favored options were reactivated in new situations. Unexpectedly, the effect in the visual cortex was related to generalization behavior. The neural findings are interesting and novel. Overall, the paper was a pleasure to read. It has a very clear and comprehensive description of the experiment and results, and it is obvious that considerable care went into designing an interesting and well-controlled task and selecting appropriate methods. My concerns are primarily about interpretation, regarding a potential alternative account and the comparison to a model-based alternative.

1. The SF&GPI model is clear and it fits behavior well. However, I think it is important to describe how this model overlaps with or differs from an alternative perspective based on configural/conjunctive learning, as highlighted by recent human imaging papers (e.g. Duncan et al. 2018 Neuron; Ballard et al. 2019 Nat Comm). In the current task, one could construct an MF learning model that operates over configurations (a cue screen displaying a +$ associated with a single gem), linking configurations to options (cities) based on reinforcement. This seems to be the same as a 'policy' in the SF&GPI model, but derived instead from an MF model based on conjunctions (and supported in implementation by the papers cited above).
For example, during training, consider the case where the cue screen configuration of [gem1 with +$ below] is followed by reward when city4 is chosen, while a cue screen configuration of [gem3 with +$ below] is followed by reward when city1 is chosen. Individual gems are not associated with reward, and individual cities are not associated with reward, but specifically the conjunction is. Presenting that conjunction again will favor a response of the associated city, based on the stored values. In the weather prediction task (e.g. Duncan et al.) this situation would be similar to a cue configuration of AB being followed by reward when 'sun' is selected, while the configuration of AC or DB is followed by reward when 'rain' is selected. In the current task, if these reinforced configuration-city links are carried over into the test trials, then the prediction is that participants would choose them instead of re-evaluating the alternatives. The behavioral and neural predictions from this configural account seem like they would be very similar or exactly the same as those of the SF&GPI account. The authors should explain the overlap with the SF&GPI model, and whether and how a configural account could explain the current data.

Then a minor note about the neural predictions: in the configural account, and also I think in the SF&GPI model, there is no extra need to activate the features (gem numbers), because the favored training policy can be selected without retrieval of these specific details. In a configural account, it also isn't clear that reactivation of the options would be expected to occur at all, especially if the configuration is well-learned. Perhaps this will not happen in the current experiment, as training is short and there are effectively many 'reversals' of configural responses across all the blocks.

2. I have two concerns about the strong interpretation of the support for the SF&GPI account of behavior over a model-based account.
It seems that an MB agent with some reasonable additions - weighing noisy memory and trading off incentives versus costs of computation - would provide very similar results to the SF&GPI account, and so perhaps the interpretation can be tempered a bit.

First, subjects' actual task experiences during training, as shown in the individual subject data points in Figure 2C, only rarely include the two non-favored cities. Because they only have rare experience of the features (gem numbers) in those cities, and even when these are experienced, they receive negative feedback, memory for them will be noisy and inaccurate relative to the two other cities. The current MB agent assumes perfect memory for features in all cities, but this produces a bit of a straw model for comparison. In an MB agent that takes into account the strength of memory, when gem value estimates are noisy and inaccurate, then when faced with a new situation in a test trial, this agent would weigh options by the degree of expected estimation errors, and, if the weight on memory in the decision were high enough, the MB agent would be biased to choose the previously optimal options - mirroring the SF&GPI agent.

The memory strength issue is something that the authors obliquely address in the supplement, to their credit. In the last block only, participants reported final gem count estimates and confidence ratings for each gem in each city. As this was limited to the last block, it likely reduces the power to detect some effects relating memory errors and confidence to the optimal policy. But nevertheless, the authors find a strong correlation between memory errors and the use of the SF&GPI policy, the same policy that would also be favored by an MB agent with uncertain memory. In the behavioral results, it would help to point to the correlation between error in gem count estimates for options disfavored during training and policy re-use.
Second, and less importantly, an MB agent with reasonable constraints may weigh whether full re-evaluation is worth the effort costs. In the current case, the relative reward improvement at test between the previously favored option and the new optimal option may have been insufficient to overcome the cost of doing a full MB evaluation (perhaps on top of the consideration of noisy memory). Specifically, at test, it seems like the earnings benefit yields improvements of at most 12.5% (or 10-20 points, as noted in the discussion section) over the option preferred during training. In contrast, during training, the earnings benefit over the next-best option seems to be at least a 200% improvement (though some alternatives are all negative). To estimate the reward difference, of course, an MB agent would need to try this evaluation a few times in the first place, but once the minimal point gain in testing is checked in early blocks, the full evaluation could be dropped to save costs. (Adding this effort cost tradeoff requires some extra machinery beyond the simpler memory noise point.)

In the task description, I would suggest the authors add a note on the difference in the average delta between the optimal and next-best options during training and during test. (Based on quick calculations, that delta ranges from 60 to 180 points during training, but as described it is only 10-20 during test.) Of course, the authors were constrained in the experimental design by the arithmetic and conditions, so a greater benefit in test trials may not have been possible, but being clear about the values would help.

Taken together, these points about a more reasonable MB agent suggest that the current strong favoring of the SF&GPI agent be tempered a bit. These points do not affect the interpretation of the neural results.

3. In the behavior-to-neural correlation analysis shown in Fig. 3F, it isn't clear how much this result may be associated with the response made on those trials. Controlling for the response made is an extra analysis done in the previous section, but it is omitted here. Please explain.

Minor points:
- The number of subjects in the abstract isn't what went into the reported analysis, which seems to be 38.
- For classifier training, the authors report the mean number of trials that go into training, but what is the minimum (and maximum) for each of these analyses?
- Did the associative reactivation of points on day 1 lead to significant classification of the point features, when tested on day 1 alone?
- Did the associative reactivation of cities on day 1 work on day 1 data, and did it lead to a similar-performing classifier to the one built from day 2 data?
- Regarding the effects of feature triplet 4, was this related to the relative point difference between the best and next-best training option? I wasn't sure whether the features listed in order in the methods actually mapped to the order in the relevant supplemental figure, so it was hard to check.

Reviewer #2: This study by Hall-McMaster and colleagues aims to test a few predictions about the neural substrates of generalization. Using a task called the gem collector task, the authors present behavioral evidence suggesting that when participants transferred prior knowledge of the environment to adapt to changing reward contingencies, their behavior aligned most closely with the successor features (SF) and generalized policy improvement (GPI) algorithm. By decoding fMRI data from several regions of interest (ROIs), they found that representations of optimal training policies remained stronger during test trials, even when those policies were no longer optimal. Furthermore, among the optimal training policies, the one that yielded higher rewards during test trials was more strongly activated.
Overall, the results are original and compelling, offering insights into the neural substrates of flexible learning in humans. The experimental design builds on a previous study by Tomov et al. (2021), and the data analyses consist of straightforward behavioral analyses and neural decoding. The statistical analyses appear sound and align with standard procedures in fMRI research (though we note that we are not experts in fMRI). The supplementary information and figures are helpful, and both the behavioral and neural data appear to be available online. Although the findings are clearly presented, a deeper examination of both the behavioral and neural data could enhance their impact. Below, we present several questions and propose additional analyses that could further strengthen the study's claims and potentially uncover additional insights. Some of the proposed analyses are essential to fully support the current conclusions.

1. The overall behavioral results are clear and convincing. However, there are a few subtle but important behavioral effects that could benefit from more explanation. First, there are significant differences between the proportion of option 1 and option 4 choices, which is not predicted by any of the models. The discussion section mentioned that a Universal Value Function Approximator (UVFA) could explain this discrepancy. It would be informative to include predictions from that model to demonstrate that it cannot account for the dominant patterns observed in the behavior, and potentially in the neural data, as effectively as SF-GPI. Additionally, some subtleties in participants' behavior are discussed in the discussion section. These would make valuable supplementary analyses for the main findings and could be integrated into the results section for greater clarity and impact.

2. Based on the choice proportions (Fig. 1C), it seems that participants did not use a strictly optimal policy during training, since the non-optimal options were chosen with some frequency. Did participants choose among the non-optimal options randomly (i.e., an epsilon-greedy policy) or did they favor any one of the non-optimal options in each task? Do individual differences in training trials predict behavior in test trials (maybe explaining the subgroups in Fig. S2)? If so, this would provide even stronger evidence for transfer of training policies to test trials.

3. For the fMRI results, we wonder why TRs after feedback onset were used for training the decoder, since it seems plausible that signal related to upcoming city choices could appear as early as cue onset. However, in Fig. S6, it looks like the representation of the chosen city only arose after feedback onset. Does this suggest that during choice, the representation of the city is different from during feedback? If so, would one expect different results if the TRs around cue onset were used for training?

4. It is surprising that there is such a strong and robust effect in occipitotemporal cortex (OTC) but not frontal regions for this value-based task. In general, what is the implication of this result for the neural substrates/mechanisms of reinforcement learning? Where does the value information that modulates OTC activity come from, if not OFC, DLPFC, or MTL? Could these regions represent some other task-related variables (maybe related to motor response)?

5. Since behavioral results suggest that participants are not simply performing SF&GPI, we wonder whether the lack of activation and prioritization of optimal training policies in OFC and MTL reflects a true lack of encoding, or whether these areas encode some other type of decision signal (like more model-based signals), such that on average no policy is favored more than the others. Maybe looking at the decoding evidence on a task-by-task basis will resolve this.
Minor point: The caption for Fig. 3F is incorrectly labeled as Fig. 3D. Additionally, it would be helpful to label each panel in this figure individually for clarity. |
| Revision 2 |
|
Dear Dr Hall-McMaster,

Thank you for your patience while we considered your revised manuscript "Neural Prioritisation of Past Solutions Supports Generalisation" for publication as a Research Article at PLOS Biology. This revised version of your manuscript has been evaluated by the PLOS Biology editors, the Academic Editor and the original reviewers.

Based on the reviews and on our Academic Editor's assessment of your revision, we are likely to accept this manuscript for publication, provided you satisfactorily address the remaining points raised by the reviewers and the following data and other policy-related requests:

* We would like to suggest a different title to improve its accessibility for our broad audience: "Human choices on new tasks rely on reusing strategies that were successful in previous tasks"

* Please add the links to the funding agencies in the Financial Disclosure statement in the manuscript details.

* Please include the approval/license number from the institutional review board.

* Please include information in the Methods section on whether the study has been conducted according to the principles expressed in the Declaration of Helsinki.

DATA POLICY: You may be aware of the PLOS Data Policy, which requires that all data be made available without restriction: http://journals.plos.org/plosbiology/s/data-availability. For more information, please also see this editorial: http://dx.doi.org/10.1371/journal.pbio.1001797

Note that we do not require all raw data. Rather, we ask that all individual quantitative observations that underlie the data summarized in the figures and results of your paper be made available in one of the following forms:

1) Supplementary files (e.g., Excel). Please ensure that all data files are uploaded as 'Supporting Information' and are invariably referred to (in the manuscript, figure legends, and the Description field when uploading your files) using the following format verbatim: S1 Data, S2 Data, etc.
Multiple panels of a single figure, or even several figures, can be included as multiple sheets in one Excel file that is saved using exactly the following convention: S1_Data.xlsx (using an underscore).

2) Deposition in a publicly available repository. Please also provide the accession code or a reviewer link so that we may view your data before publication.

Regardless of the method selected, please ensure that you provide the individual numerical values that underlie the summary data displayed in the following figure panels, as they are essential for readers to assess your analysis and to reproduce it: 2 (all panels), 3BCHI, S1ABC, S3 (all panels), S4 (all panels), S5 (all panels), S6 (all panels), S7CD, S8 (all panels), S9 (all panels) and S12.

NOTE: the numerical data provided should include all replicates AND the way in which the plotted mean and errors were derived (it should not present only the mean/average values). Please also ensure that figure legends in your manuscript include information on where the underlying data can be found, and ensure your supplemental data file/s has a legend. Please ensure that your Data Statement in the submission system accurately describes where your data can be found.

* Please ensure that you are using best practice for statistical reporting and data presentation. These are our guidelines: https://journals.plos.org/plosbiology/s/best-practices-in-research-reporting#loc-statistical-reporting and a useful resource on data presentation: https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002128

* CODE POLICY: Per journal policy, if you have generated any custom code during the course of this investigation, please make it available without restrictions. Please ensure that the code is sufficiently well documented and reusable, and that your Data Statement in the Editorial Manager submission system accurately describes where your code can be found.
Please note that we cannot accept sole deposition of code in GitHub, as this could be changed after publication. However, you can archive this version of your publicly available GitHub code to Zenodo. Once you do this, it will generate a DOI number, which you will need to provide in the Data Accessibility Statement (you are welcome to also provide the GitHub access information). See the process for doing this here: https://docs.github.com/en/repositories/archiving-a-github-repository/referencing-and-citing-content

As you address these items, please take this last chance to review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the cover letter that accompanies your revised manuscript.

In addition to these revisions, you will need to complete some formatting changes, which you will receive in a follow-up email. A member of our team will be in touch with a set of requests shortly. We expect to receive your revised manuscript within two weeks.

To submit your revision, please go to https://www.editorialmanager.com/pbiology/ and log in as an Author. Click the link labelled 'Submissions Needing Revision' to find your submission record. Your revised submission must include the following:

- a cover letter that should detail your responses to any editorial requests, if applicable, and whether changes have been made to the reference list
- a Response to Reviewers file that provides a detailed response to the reviewers' comments (if applicable; if not applicable, please do not delete your existing 'Response to Reviewers' file)
- a track-changes file indicating any changes that you have made to the manuscript
NOTE: If Supporting Information files are included with your article, note that these are not copyedited and will be published as they are submitted. Please ensure that these files are legible and of high quality (at least 300 dpi) in an easily accessible file format. For this reason, please be aware that any references listed in an SI file will not be indexed. For more information, see our Supporting Information guidelines: https://journals.plos.org/plosbiology/s/supporting-information

*Published Peer Review History*

Please note that you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out. Please see here for more details: https://plos.org/published-peer-review-history/

*Press*

Should you, your institution's press office or the journal office choose to press release your paper, please ensure you have opted out of Early Article Posting on the submission form. We ask that you notify us as soon as possible if you or your institution is planning to press release the article.

*Protocols deposition*

To enhance the reproducibility of your results, we recommend that, if applicable, you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Please do not hesitate to contact me should you have any questions.

Sincerely,
Christian

Christian Schnell, PhD
Senior Editor
cschnell@plos.org
PLOS Biology

------------------------------------------------------------------------

Reviewer remarks:

Reviewer #1 (G. Elliott Wimmer): Thank you for your consideration of a model-based alternative with noisy memory; I believe those adjustments have significantly improved the paper. Thank you also for elaborating on the SF&GPI model relative to a configural learning account. I misestimated what would be needed for test phase performance in a configural account, which would indeed involve some kind of generalization or additional combination of features during training, e.g. for the 4th test task [1, 1, -1], as listed in the methods. Otherwise, a simple (but not at all formalized!) 'pick the highest combination that has been previously reinforced' kind of generalization would appear to solve 3 of the 4 tests, given that three have a positive gem value that aligns with training (where there was only 1 positive value in the sets).

As an aside, it is interesting that the behavior for test [1, 1, -1], which has the only mismatch with a (not formalized) configural learning + generalization account, shows the lowest level of behavior consistent with the SF&GPI account. If a configural + generalization strategy was dominating in general, in this case the lack of match could prompt the use of additional evaluation for this specific test case.

Reviewer #2 (Alireza Soltani): The authors have adequately addressed all of our concerns. We have no further comments. |
| Revision 3 |
|
Dear Sam,

Thank you for the submission of your revised Research Article "Neural evidence that humans reuse strategies to solve new tasks" for publication in PLOS Biology. On behalf of my colleagues and the Academic Editor, Matthew Rushworth, I am pleased to say that we can in principle accept your manuscript for publication, provided you address any remaining formatting and reporting issues. These will be detailed in an email you should receive within 2-3 business days from our colleagues in the journal operations team; no action is required from you until then. Please note that we will not be able to formally accept your manuscript and schedule it for publication until you have completed any requested changes.

Please take a minute to log into Editorial Manager at http://www.editorialmanager.com/pbiology/, click the "Update My Information" link at the top of the page, and update your user information to ensure an efficient production process.

PRESS

We frequently collaborate with press offices. If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximise its impact. If the press office is planning to promote your findings, we would be grateful if they could coordinate with biologypress@plos.org. If you have previously opted in to the early version process, we ask that you notify us immediately of any press plans so that we may opt out on your behalf.

We also ask that you take this opportunity to read our Embargo Policy regarding the discussion, promotion and media coverage of work that is yet to be published by PLOS. As your manuscript is not yet published, it is bound by the conditions of our Embargo Policy. Please be aware that this policy is in place both to ensure that any press coverage of your article is fully substantiated and to provide a direct link between such coverage and the published work.
For full details of our Embargo Policy, please visit http://www.plos.org/about/media-inquiries/embargo-policy/.

Thank you again for choosing PLOS Biology for publication and supporting Open Access publishing. We look forward to publishing your study.

Sincerely,
Christian

Christian Schnell, PhD
Senior Editor
PLOS Biology
cschnell@plos.org |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.