Peer Review History

Original Submission: December 3, 2019
Decision Letter - Darrell A. Worthy, Editor

PONE-D-19-33458

Quantifying exploration in reward-based motor learning

PLOS ONE

Dear Mrs. van Mastrigt,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

I was able to obtain one review from an expert in the field. Overall, this reviewer had a positive impression of the paper, but also noted some issues that should be addressed in a revision. From my reading of the manuscript I agree with the reviewer, and I invite you to submit a revision. If you choose to submit a revision I will likely ask the same reviewer to review the revised manuscript.

We would appreciate receiving your revised manuscript by Feb 17 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as a separate file and labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as a separate file and labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as a separate file and labeled 'Manuscript'.

Please note, while forming your response: if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Darrell A. Worthy, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please provide additional details regarding participant consent. In the Methods section, please ensure that you have specified (1) whether consent was informed and (2) what type you obtained (for instance, written or verbal). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In this study the authors systematically investigate ways of quantifying exploration in motor learning by separating it from variability due to sensorimotor noise. Using a target-directed weight-shift task, they designed an experiment wherein multiple different baselines for calculating variability due to exploration (which have all been variously used in the literature) can be used and compared. They conclude that using trial-to-trial change following rewarded trials provided a better baseline than trials in a baseline block or trials in no-feedback blocks. They also conclude (based on assessment of the uncertainty in the estimates) that the typical number of trials used may be insufficient for accurate estimations of variability due to exploration.

This is a nice study with a clever design that provides a valuable contribution that can help improve methods for future studies. The methods and analyses seem appropriate and sound. The conclusion seems justified and makes theoretical sense to me. Quantifying the uncertainty in the estimations was a nice touch. I think that it should be accepted for publication, but I think there are some issues the authors can address first.

--

How long did the whole experiment take participants approximately? This should be stated in the methods section somewhere. It is particularly important since you recommend an order of magnitude greater number of trials to be used in the future, and it would be good to have an idea of how long an experiment with that many trials would take. Would it be unreasonably long, or long enough that the participants’ motivation would become an issue?

Here is one suggestion: you could expand on the bootstrapping analysis used to quantify uncertainty by doing some simulations with larger numbers of trials and seeing how large the uncertainty of the estimate is. For example, there could be a graph of uncertainty as a function of number of trials. This could be really useful in determining the minimum number of trials to get a decent estimate. It might be that fewer additional trials are needed than you claim. At the other extreme, it’s possible that it asymptotes and even an unreasonably large number of trials gives a poor estimate. It would be cool (and potentially useful) to see what the curve of uncertainty by number of trials looks like.

Pg. 14 “No marked trends in variability over blocks were observed (Fig 3C).” I think that this point could use a little more discussion. I guess that it is good for the goals of the study that variability was constant across blocks, but shouldn’t you expect some decrease over time? Learning should decrease variability, and in general exploration should be highest early in a task when there is more to learn and should taper off later. Is it that most of the learning happens in the familiarization phase and beginning of the baseline block? Or is it that the stochastic feedback effectively prevents any real learning from occurring? A little discussion of these issues could be helpful. I think it makes sense that variability was stable, but my initial expectation was that it would go down over time.

At the end of the introduction (pg. 4) you say “We also aimed to explore the statistical assessment of sensorimotor noise and exploration: which summary statistic should be used, how does variability change over time and how uncertain are sensorimotor noise and exploration estimates?” It would be nice to return to these questions a little bit more directly in the Discussion. The last question is discussed in detail, but the other two are only briefly touched on. Do you have anything more to say about which summary statistic to use, or the potential implications of that choice? You use median instead of mean because of positively skewed distributions, but is it your general recommendation to use the median? Also see my point above about the second question (variability over time); the Discussion might be the place to address that issue.

Minor points:

In the results section, when talking about the measurements of exploration you state that no statistical tests were performed. It makes sense that these tests were unnecessary since they should be identical to the tests performed above, but it might be worth pointing that out for clarity. I admit to being confused as to why no tests were done at first.

It would be helpful for each graph in the figures to have a title. The information is in the caption, but it would be quicker and easier to understand if each panel had an informative title at the top.

In Figure 2A, what are the vertical bars? I assume the solid line is the median and the dashed one is the mean, but it would be nice to have that information clarified in the figure or caption.

On page 9 you state “We approximated Δ by the median of trial-to-trial changes, instead of the mean” without explanation of why. It is explained later in the text, but it would help to have a quick explanation here since it is the first time it comes up (e.g. “due to extreme outliers” or “because the distributions were positively skewed”)

Typos etc:

On Pg. 7 in this sentence: “They were informed that after a familiarization phase, they would perform two phase phases”, the word phase is repeated.

**********

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

Revision 1

Response to the reviewers

Quantifying exploration in reward-based motor learning

PLOS ONE

Dear Dr. Worthy,

Thank you for the constructive review. We feel that the reviewer’s comments have led to an improvement of the manuscript. We respond to the comments in detail below. We have also addressed the additional requirements mentioned in your decision letter: we have renamed the files according to the guidelines on your website, and we have specified in the Methods section on page 5 of our manuscript that participants provided written informed consent. Furthermore, we slightly changed the figure colors to make them distinguishable for people who are colorblind and when printed in grayscale.

Kind regards, also on behalf of my co-authors,

Nina van Mastrigt, MSc

Reviewer #1: In this study the authors systematically investigate ways of quantifying exploration in motor learning by separating it from variability due to sensorimotor noise. Using a target-directed weight-shift task, they designed an experiment wherein multiple different baselines for calculating variability due to exploration (which have all been variously used in the literature) can be used and compared. They conclude that using trial-to-trial change following rewarded trials provided a better baseline than trials in a baseline block or trials in no-feedback blocks. They also conclude (based on assessment of the uncertainty in the estimates) that the typical number of trials used may be insufficient for accurate estimations of variability due to exploration.

This is a nice study with a clever design that provides a valuable contribution that can help improve methods for future studies. The methods and analyses seem appropriate and sound. The conclusion seems justified and makes theoretical sense to me. Quantifying the uncertainty in the estimations was a nice touch. I think that it should be accepted for publication, but I think there are some issues the authors can address first.

--> Thanks for investing your time and energy in reading our manuscript. We are happy to hear that our conclusion makes theoretical sense to you, and that you consider the contribution of this study to the field valuable.

How long did the whole experiment take participants approximately? This should be stated in the methods section somewhere. It is particularly important since you recommend an order of magnitude greater number of trials to be used in the future, and it would be good to have an idea of how long an experiment with that many trials would take. Would it be unreasonably long, or long enough that the participants’ motivation would become an issue?

--> The experimental procedure took at most 30 minutes. Performing all trials of the two experimental phases took only 13 minutes in total, but with intake, practice, and answering the Quick Motivation Index questions, the full procedure took longer. We agree that the duration of the experiment should be stated in the methods section, and we have therefore added this information on page 7 and in more detail on page 8. In addition, based on one of your other comments on the number of trials in an experiment, we constructed a graph of confidence interval size for different trial numbers. We think that readers can use this graph to weigh motivation against the number of trials needed to quantify variability themselves, as motivation also depends on the experimental task. Thank you for this suggestion.

Here is one suggestion: you could expand on the bootstrapping analysis used to quantify uncertainty by doing some simulations with larger numbers of trials and seeing how large the uncertainty of the estimate is. For example, there could be a graph of uncertainty as a function of number of trials. This could be really useful in determining the minimum number of trials to get a decent estimate. It might be that fewer additional trials are needed than you claim. At the other extreme, it’s possible that it asymptotes and even an unreasonably large number of trials gives a poor estimate. It would be cool (and potentially useful) to see what the curve of uncertainty by number of trials looks like.

--> We really like this suggestion. We have added a graph of the uncertainty of the estimates as a function of trial number, but only up to 50-100 trials. We could not extend the simulations beyond the number of trials that we actually measured: with a higher number of trials, the distributions of variability might change as a result of motivation (as you mentioned in your previous comment), fatigue, or learning (as you mentioned in one of your next comments). We therefore stuck to simulating the estimation uncertainty for trial numbers up to 50-100. This resulted in Fig 3A, which replaces the original Figs 3A and 3B. Indeed, the size of the confidence interval depends on the number of trials and decreases as the number of trials increases.
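To make the analysis concrete, the sketch below illustrates the idea on simulated data; the distribution, noise level, and trial numbers are illustrative stand-ins, not our measured data or our actual analysis code:

```python
# Bootstrap sketch: for increasing numbers of trials, resample the
# trial-to-trial changes and measure how wide the 95% confidence
# interval of the median squared change is.
import numpy as np

rng = np.random.default_rng(0)

def ci_width(changes, n_trials, n_boot=2000):
    """Width of the bootstrapped 95% CI of the median squared change."""
    medians = np.empty(n_boot)
    for b in range(n_boot):
        sample = rng.choice(changes, size=n_trials, replace=True)
        medians[b] = np.median(sample ** 2)
    lo, hi = np.quantile(medians, [0.025, 0.975])
    return hi - lo

# Stand-in for measured trial-to-trial changes (arbitrary noise level).
changes = rng.normal(0.0, 1.0, size=500)

for n in (10, 25, 50, 100):
    print(f"{n:3d} trials: 95% CI width = {ci_width(changes, n):.3f}")
```

The printed interval widths shrink as the number of trials grows, mirroring the trend shown in the new Fig 3A.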

Pg. 14 “No marked trends in variability over blocks were observed (Fig 3C).” I think that this point could use a little more discussion. I guess that it is good for the goals of the study that variability was constant across blocks, but shouldn’t you expect some decrease over time? Learning should decrease variability, and in general exploration should be highest early in a task when there is more to learn and should taper off later. Is it that most of the learning happens in the familiarization phase and beginning of the baseline block? Or is it that the stochastic feedback effectively prevents any real learning from occurring? A little discussion of these issues could be helpful. I think it makes sense that variability was stable, but my initial expectation was that it would go down over time.

--> As we prevented participants from learning by providing stochastic feedback, we indeed did not expect a decrease in variability over time due to learning. Because participants could not learn, we also did not expect higher exploration early in the task: that effect is probably driven by the low success rates participants typically experience early in a task, whereas in our task the success rate was constant. Alternatively, one could have expected an increase in variability over time due to loss of motivation, but we found no significant difference in motivation between phases. We have added a paragraph to the discussion to clarify this for readers.

At the end of the introduction (pg. 4) you say “We also aimed to explore the statistical assessment of sensorimotor noise and exploration: which summary statistic should be used, how does variability change over time and how uncertain are sensorimotor noise and exploration estimates?” It would be nice to return to these questions a little bit more directly in the Discussion. The last question is discussed in detail, but the other two are only briefly touched on. Do you have anything more to say about which summary statistic to use, or the potential implications of that choice? You use median instead of mean because of positively skewed distributions, but is it your general recommendation to use the median? Also see my point above about the second question (variability over time); the Discussion might be the place to address that issue.

--> We agree with your comment. In the introduction, we state that the choice of summary statistic to describe variability can be a problem, especially when extreme values are present. These extreme values may represent either exploration or outliers. For this reason, using the median as a summary statistic provides a “safe” way to quantify the central tendency of the data. In our method, we quantify variability from squared trial-to-trial changes, which are positive by definition. This resulted in positively skewed distributions, whereas signed trial-to-trial changes might not have been skewed. We therefore recommend using the median as the summary statistic in our method. We have added this to the discussion.
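As a toy numerical illustration of this point (simulated numbers, not our data), squaring trial-to-trial changes yields a positively skewed distribution in which a few large exploratory changes inflate the mean far more than the median:

```python
# Toy example: a few large (exploratory) trial-to-trial changes mixed
# into otherwise noise-like changes. After squaring, the distribution
# is positively skewed: the mean is pulled up, the median stays robust.
import numpy as np

rng = np.random.default_rng(1)

changes = np.concatenate([
    rng.normal(0.0, 1.0, 95),  # noise-like trial-to-trial changes
    rng.normal(0.0, 5.0, 5),   # occasional large exploratory jumps
])
squared = changes ** 2         # squared changes: positive by definition

print(f"mean of squared changes:   {np.mean(squared):.2f}")
print(f"median of squared changes: {np.median(squared):.2f}")
```

The mean is dominated by the few large squared changes, whereas the median reflects the bulk of the distribution, which is why we recommend it for this method.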

Minor points:

In the results section, when talking about the measurements of exploration you state that no statistical tests were performed. It makes sense that these tests were unnecessary since they should be identical to the tests performed above, but it might be worth pointing that out for clarity. I admit to being confused as to why no tests were done at first.

--> Thanks for pointing this out. We now mention this explicitly in the Statistics section on page 11, to make sure that readers will not get confused, and we clarify it again in the Results section on page 14.

It would be helpful for each graph in the figures to have a title. The information is in the caption, but it would be quicker and easier to understand if each panel had an informative title at the top.

--> At first, we understood that PLOS ONE does not allow this, but upon checking a second time, we found that we can indeed add a title to each graph. We agree that this will inform readers more efficiently about the content and meaning of each graph. We have now added titles to all figures.

In Figure 2A, what are the vertical bars? I assume the solid line is the median and the dashed one is the mean, but it would be nice to have that information clarified in the figure or caption.

--> Thank you for discovering this omission! Your assumption was correct. We have now clarified the meaning of the lines in the caption of Figure 2A on page 13.

On page 9 you state “We approximated Δ by the median of trial-to-trial changes, instead of the mean” without explanation of why. It is explained later in the text, but it would help to have a quick explanation here since it is the first time it comes up (e.g. “due to extreme outliers” or “because the distributions were positively skewed”)

--> We agree that this should be clarified earlier. We have therefore extended the sentence on page 9 with your latter suggestion (“because the distributions were positively skewed”).

Typos etc:

On Pg. 7 in this sentence: “They were informed that after a familiarization phase, they would perform two phase phases”, the word phase is repeated.

--> We have removed the redundant word on page 7.

Attachment
Submitted filename: Response to Reviewers.docx
Decision Letter - Darrell A. Worthy, Editor

Quantifying exploration in reward-based motor learning

PONE-D-19-33458R1

Dear Dr. van Mastrigt,

We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.

Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.

Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

With kind regards,

Darrell A. Worthy, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors did a good job addressing all of my comments and suggestions, and I think that the paper is ready for publication. It is clearly written, the methods and analyses are sound, and the implications are well explained. The new versions of the figures look nice as well.

I noticed one small typo: at the top of pg. 4 the phrase "an estimate sensorimotor noise" seems to be missing an "of"

**********

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Formally Accepted
Acceptance Letter - Darrell A. Worthy, Editor

PONE-D-19-33458R1

Quantifying exploration in reward-based motor learning

Dear Dr. van Mastrigt:

I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

For any other questions or concerns, please email plosone@plos.org.

Thank you for submitting your work to PLOS ONE.

With kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Darrell A. Worthy

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.