Peer Review History
Original Submission: June 28, 2024
Dear Dr. Velazquez-Vargas,

Thank you very much for submitting your manuscript "The Role of Training Variability for Model-based and Model-free learning of an Arbitrary Visuomotor Mapping" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. The reviewers appreciated the attention to an important topic. Based on the reviews, we are likely to accept this manuscript for publication, provided that you modify the manuscript according to the review recommendations. The reviewers had only a few minor comments, which should be easy to address.

Please prepare and submit your revised manuscript within 30 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note, while forming your response, that if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Ulrik R. Beierholm
Academic Editor
PLOS Computational Biology

Daniele Marinazzo
Section Editor
PLOS Computational Biology

***********************

A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately.

Reviewer's Responses to Questions

Comments to the Authors: Please note here if the review is uploaded as an attachment.

Reviewer #1: I am satisfied with the authors' revisions and recommend that the work be published. My last lingering concern with the work is minor: in the authors' response to my questions about Experiment 3, they presented a justification for the experiment that did not make it into the text. In particular, I found the following paragraph helpful to my understanding of their work, and suggest similar language be included:

We saw two possibilities, which are outlined in the main text on Lines 611-621: 1) forcing participants to initially learn the mapping may afford generalization even after a long period of repetitive training at a single target (the null hypothesis, as the reviewer points out), or 2) participants could eventually forget the mapping if they were only required to repeat movements to one target location. This is as if a budding pianist first learned their scales ("mapping") but then practiced only a single melody ("sequence") for an extended period; they might forget the meaning of the piano keys in relation to the scale degrees. The senior author has experienced this firsthand. In addition, in Experiments 1 and 2, it was unclear if the participants in the Single group ever learned the mapping, since the task didn't necessarily demand it. Instead, they could simply memorize a sequence of responses and repeat it throughout the entire experiment.
With these two points in mind, we designed Experiment 3 to test whether repetitive training to a single target would afford generalization even after a long period of training, or whether it would cause forgetting of (or interference with) the full mapping, while also ensuring that the participants had learned the mapping in the first place.

Reviewer #2: Overall, the rebuttal comments are satisfactory for the most part. I commend the authors for conducting additional experiments to address the comments and for clarifying various things where they offer a rebuttal. Here are a few suggestions for improvement. I refer to their point numbers in presenting my concerns/suggestions.

1. Point 2: "However, we were able to replicate our findings of Experiment 1 with an online study where we reduced the number of trials to 100 (80 training and 20 generalization)." A statement to this effect can be incorporated in the manuscript/supplement to highlight that this protocol used a longer training period than what is typically found in the literature.

2. Point 7: The authors misinterpreted the point of this comment. We meant to suggest that SSG group participants are not exposed to the multi-SG condition. Hence, a peak in RT during the generalization period might suggest adaptation to some extent. A potential way to look at this would be a trial-wise comparison of differences in RT during the generalization period.

3. Point 8: The authors do not seem to address this comment; point number 5 refers to a different kind of analysis. We intended to suggest looking at the normalized RT plots (RT divided by the number of moves) here to control for the number of moves.

4. Point 12: While we agree with the authors' argument that RL models with sparse rewards performed poorly compared to those with dense rewards, there is a bit of a contradiction here in terms of the interpretation. The chessboard distance reward metric used in these models may not exactly correlate with the KM-based distance. The authors' observation that participants sometimes 'seemingly' move the cursor away from the target in terms of visual distance while still getting closer in terms of the KM distance, taken as evidence for model-based learning, should be interpreted with caution. Moreover, the point that SARSA with sparse rewards performed poorly could be mentioned in the manuscript or supplement.

5. Point 13: The authors could potentially include a candidate model that is a parametric arbitration model. This model would introduce a good baseline comparison for how an arbitration model (one that calls for a model-free or model-based evaluation at each time step in a binary sense, unlike the hybrid model that employs weighted estimates from the two systems) performs when the weights are not defined using a series of free parameters.

6. Point 14: Given the substantial amount of variance in the RT plots, as well as an (in our opinion) unnecessarily long training period, we think it is crucial to introduce perseveration and attention-lapse parameters in the modeling. Without these parameters, the interpretation of best-fit parameters may be conditioned on model-mimicry effects, wherein the model may adjust its parameters to capture the attention lapses and perseveration in the behavioral data.

7. Point 15: We are not sure what the authors are trying to suggest with "Given the recursive structure of …. It was not efficient to use Gibbs sampling". A full posterior estimation should be plausible even with the kind of model definitions used in this manuscript. Additionally, even with point estimates, the models should be simulated with the best-fit parameters and the simulated data should be compared to the empirical data.
**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Nicholas Franklin

Reviewer #2: Yes: Raju Surampudi Bapi

Figure Files: While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.
Data Requirements: Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility: To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

References: Review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.
Revision 1
Dear Dr. Velazquez-Vargas,

We are pleased to inform you that your manuscript 'The Role of Training Variability for Model-based and Model-free learning of an Arbitrary Visuomotor Mapping' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow-up email. A member of our team will be in touch with a set of requests. Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be coordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology.

Best regards,

Ulrik R. Beierholm
Academic Editor
PLOS Computational Biology

Daniele Marinazzo
Section Editor
PLOS Computational Biology
Formally Accepted
PCOMPBIOL-D-24-01082R1
The Role of Training Variability for Model-based and Model-free learning of an Arbitrary Visuomotor Mapping

Dear Dr Velázquez-Vargas,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Jazmin Toth
PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom
ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.