Peer Review History
Original Submission: March 3, 2022
PONE-D-22-06419
Assessing the behavioural profiles following neutral, positive and negative feedback
PLOS ONE

Dear Dr. Dyson,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jun 10 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Alberto Antonioni, PhD
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

2. Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please delete it from any other section.

Additional Editor Comments:

I recommend a major revision following both reviewers' suggestions to improve the manuscript before it is considered for publication. In particular, all the comments by Reviewer 2 should be addressed.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The current paper presents a neat set of experiments that adds nicely to the literature on post-error/post-loss responses by adding a third possible outcome, a draw. This draw somewhat resembles near-misses in the gambling literature, though not entirely, and it is therefore a good addition for investigating decision making in response to different outcomes. I have just a couple of points that the authors might want to take into consideration:

On the theoretical level, I am not entirely sure whether the frequency argument made by the orienting account is the only explanation for speeding vs. slowing. In addition to the fact that Eben et al. (2020) observed speeding after losses with an equal amount of wins and losses (which seems to be the case in your study as well), recent research (e.g. Damaso et al. 2020) suggests that speeding vs. slowing might be determined by the controllability of the outcome (which in turn allows the idea that speeding is not necessarily maladaptive ...).

Personally, I would find a sample size rationale helpful.

I am curious why the authors chose to determine whether rock, paper or scissors was chosen instead of predetermining the outcome to be a win, draw or loss? We usually followed the latter approach to be sure we indeed have an equal amount of outcomes, so I am happy to hear other options.

In the spirit of openness, I always encourage authors not only to share their data but also the analysis scripts and materials, which increases the reproducibility of your experiments and analyses.

Reviewer #2: The current paper reports three experiments that examined the behavioral profiles following wins, losses and draws. The effects of draws on subsequent responses are interesting and, as noted by the authors, certainly largely neglected in the literature so far. From this perspective, the current paper makes a useful contribution.
However, the way the data are analyzed and the results are presented seems to hinder the understanding of the findings. Furthermore, I am not totally convinced that, based on the results, the authors can claim that draws differ from both wins and losses because they show different sensitivity to value manipulation (see major comment 8). I hope my comments will be helpful to the authors in preparing the manuscript for publication.

Major comments:

1. First of all, I would like to commend the authors for making the aggregate data publicly available on the Open Science Framework. Aggregate data will allow other researchers to check the accuracy of the reported results, and are thus probably one of the most important products of a research project. That being said, other products generated during the process of research are also important, and I would encourage the authors to publicly share these products as much as possible. For instance, these may include the experimental materials, the Presentation code for the experiment, raw data files, analysis code used to analyze the data, etc. By making these materials open, other researchers will be able to repeat the analysis, re-analyze the raw data in new ways, or run replications and extensions in the future. I think this will be essential for a cumulative and robust science.

2. The introduction sets the stage very well to highlight the importance of examining 'draw' outcomes, a type of outcome that has largely been neglected so far in the literature. The following paragraphs on reaction times, FRN and shift/stay behavior also did an excellent job of reviewing relevant previous work. However, what seems to be missing are clear statements of the research questions that the current paper aims to address. After learning that both RT and FRN show similar patterns after a draw and a loss, it seems clear that draws are more like unambiguously negative loss outcomes.
If that is the case, what are the remaining questions or gaps that the current paper aims to address? A paragraph at the end of the introduction on what remains unknown, and what this paper will examine, would be helpful.

3. The sample sizes of the experiments seem relatively low (N = 40, 36 and 36), especially given that the designs involve multiple factors and interaction effects are of interest (see e.g., Brysbaert, M. (2019). How Many Participants Do We Have to Include in Properly Powered Experiments? A Tutorial of Power Analysis with Reference Tables. Journal of Cognition, 2(1), 1–38. https://doi.org/10.5334/joc.72). Of course, whether a certain sample size provides sufficient statistical power or not depends on the expected effect size. I would therefore like the authors to provide more information on how the sample sizes were determined, and the achieved statistical power.

4. Figure 1 (and the corresponding analyses on stay/shift behavior) is difficult to follow, partly because it shows the proportion of stay for the win condition, and the proportion of shift for the lose and draw conditions. While stay and shift are complementary (they add up to 100%), they are not directly comparable. That the values in the win-stay condition are lower than the values in the lose-shift and draw-shift conditions in Figure 1 is thus not very informative. I would suggest showing the proportion of shift (or the proportion of stay) for all three outcomes. The authors may then add a horizontal line at 66.6%, showing the expected shift proportion of a player who chooses randomly. Values higher than 66.6% indicate a tendency to shift, while values lower than 66.6% indicate a tendency to stay.

5. The analyses on stay/shift behavior can be organized accordingly. For Experiment 1, for example, this would mean starting with a 3 (previous outcome: win, loss vs. draw) by 3 (draw value: 0, 1 vs. -1) ANOVA on the proportion of shift, followed up by separate ANOVAs or t tests if necessary.
The proportions of shift can then be compared against the baseline of 66.6%, to establish (1) whether they differ from the baseline, and if yes, (2) in which direction (stay vs. shift). Note that in the approach adopted by the authors, they needed to look at the shift behavior after wins (in the very last analysis) anyway. I think consistently using the proportion of shift as the dependent variable may provide a better structure for the analyses and the results.

6. Statistical results are selectively reported, in the sense that for ANOVAs, only the details for significant effects but not non-significant effects are presented. This is OK for the sake of brevity and simplicity, but I think it is still important to show the detailed results for all effects from all analyses, regardless of statistical significance. For instance, the authors may report the full results in the Supplemental Materials and refer readers to it in the main text should they be interested. Relatedly, some of the conclusions are based on non-significant results. However, absence of evidence is not evidence of absence. Non-significant results can also mean that the evidence is inconclusive (given the relatively small sample sizes), rather than supporting the null hypothesis. To obtain evidence for the absence of an effect, the authors will need to adopt other statistical approaches, such as Bayesian analysis.

7. In the discussion of Experiment 2, the authors noted that "losses carry twice the subjective magnitude of their objective value". My understanding of this statement is that in the value function in Prospect theory, the decrease in subjective value from 0 to -1 is about twice as large as the increase in subjective value from 0 to 1. Thus, losses carry twice the subjective magnitude in comparison to wins. I'm not sure if this would necessarily imply that the difference between -2 and -1 in the loss domain would still be twice as large as the difference between 2 and 1 in the win domain.
After all, the value function has an S shape, and as the magnitude of wins and losses increases, any additional win or loss will have an increasingly smaller influence on the subjective value.

8. Relatedly, one of the conclusions of the paper is that "wins and losses do not respond to value manipulations in the same way as draws". However, changing the value from 1 to 2 or from -1 to -2 is probably different from changing the value from -1 to 1. So I am not sure if the authors can ascribe the difference to wins and losses being different from draws. To make such a claim, a better test may be to give a value of -1 or -2 to a draw, and compare it to e.g. a loss when the value for a loss is also varied from -1 to -2. Some of the data may already provide some initial insights into this issue. For instance, what results would the authors get from Experiments 1 and 2 if they compared shift/stay between (a) draw (-1) and loss (-1) and (b) draw (+1) and win (+1)? When the values are matched, are there still any differences between the different types of outcomes?

Minor comments:

1. Page 4: While the rest of the paper mainly uses the term "post-loss speeding" (which is also the term used in Verbruggen et al. and Eben et al.), the term "post-error slowing/speeding" is used here. Losses and errors may both be seen as failures or sub-optimal outcomes, but they are often used in different contexts: losses often denote losing a reward, while errors indicate incorrect responses in mostly cognitive psychology tasks (and may not be explicitly rewarded or punished). I think a short discussion/clarification on how the two may be related would be useful, instead of directly treating them as the same.

2. Page 5: Relatedly, while Notebaert et al. indeed showed that the frequency of errors influences post-error slowing or speeding, Eben et al. (Experiment 3) observed post-loss speeding while losses, wins and non-gamble outcomes occurred equally often.
Outcome frequency alone thus does not seem sufficient to explain post-loss speeding.

3. Page 5: "A second metric to ascertain the subjective interpretation of draws is the flexibility of responding following outcomes." The paragraph then goes on to discuss previous findings on feedback-related negativity, rather than response flexibility.

4. Page 6: I feel the paragraph on behavioral flexibility would be easier to follow if the authors would first start with the phenomenon (loss-shift is not sensitive to objective values while win-shift appears to be sensitive) and then explain the underlying mechanism (that a loss subjectively looms larger than a win).

5. Page 6: "a mixed-strategy-style approach guaranteeing the absence of exploitation". Please explain what this strategy is when it first occurs. Does it mean choosing randomly?

6. Page 7: The introduction of Experiment 1 may be substantially shortened, by saying that since outcome frequencies influence reaction times (Notebaert et al.) and potentially the subjective interpretation of draws (e.g., when wins are frequent, draws may be more likely to be interpreted as failing to win), the authors used an opponent that chose randomly to ensure equal probabilities of win, loss and draw.

7. Page 9: One of the predictions for Experiment 1 is that "the degree to which participants stray from optimal performance should be greater for draw-shift than win-stay". My interpretation of this prediction is that the deviation from 66.6% for draw-shift would be larger than the deviation from 33.3% for win-stay. Or, if the authors adopt the suggestion of consistently using the proportion of shift (see above), this would mean the absolute difference between draw-shift and 66.6% would be larger than the absolute difference between win-shift and 66.6%. This does not seem to be tested. Or did I misunderstand this prediction?

8. An inconsistency in the description of the method.
Page 9: "white-gloved (left; opponent) and blue-gloved (right; participant)". Page 10: "the opponent (blue glove on the left) and the participant (white glove on the right)".

9. Did participants get a reward based on their performance in the task? In other words, was the task incentivized?

10. In the instructions given to the participants, were there any indications as to whether the opponent could potentially be exploited? For instance, were participants explicitly told that they were playing against the computer, which would select the options randomly?

11. Which statistical software (and version) was used to analyze the data?

12. Page 14: "we note changing the value of a draw to -1 did not significant impact". Significant should be significantly?

13. Figure 1 currently only plots the shift/stay proportions. Since reaction times are of interest as well, plotting RT would be useful. So Figure 1 would contain 6 subplots, with one column for RTs and one column for shift/stay.

14. Page 19: The first paragraph of the Discussion seems better suited for the Results section.

15. Page 23: What is 'sterr'?

16. Page 23: What is the dependent variable in the two-way, repeated-measures ANOVA?

17. When participants shift, there are two different options that they can shift to. I wonder if the authors also looked at different types of shift to see if draw and loss differ.

18. Page 25: "a win followed by a draw causes an increase in shift behaviour whereas a loss followed by a draw causes shift behaviour to decrease". This sounds like a win leads to an increase in shift behavior and a loss leads to a decrease, but I think the authors meant that the draws have such an effect.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public.
Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Charlotte Eben
Reviewer #2: Yes: Zhang Chen

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
Assessing behavioural profiles following neutral, positive and negative feedback
PONE-D-22-06419R1

Dear Dr. Dyson,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Alberto Antonioni, PhD
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Considering both reviewers' positive evaluations, the work can be accepted after a very minor revision without going through any additional review process. The authors can simply incorporate the constructive suggestions from Reviewer 2 in the final version of their manuscript.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you very much for your responses. My questions have been answered adequately and I do not have any further comments. Congratulations on this interesting paper.

Reviewer #2: Thank you for the opportunity to review the revised manuscript. I appreciate the authors’ detailed and thoughtful responses, and most of my comments have been addressed very satisfactorily. I have only a few relatively minor comments left for the revised manuscript.

1. P9. The power analysis is based on the difference between draw (0)-shift and loss (-1)-shift observed in two previous studies. However, this particular comparison does not seem to be the main focus of the paper. For instance, none of the four predictions listed for Experiment 1 (on p. 8-9) is about this particular comparison. Given the large number of tests, I think it would be helpful if the authors could clearly state which analyses are theoretically most relevant, and base the power consideration on the theoretically informative effects. Since the sample sizes are known, a sensitivity analysis rather than an a priori power analysis may be more appropriate here.

2. P5. “However, outcome frequency does not provide a complete account of post-loss slowing since post-loss slowing is intact when positive and negative outcomes are experienced to the same degree (eg., Eben et al., 2020)”. Eben et al. found post-loss speeding, rather than slowing.

3. P5.
“A second metric to ascertain the subjective interpretation of draws is the flexibility of responding following outcomes.” Perhaps the authors forgot to change this sentence in the manuscript?

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Charlotte Eben
Reviewer #2: Yes: Zhang Chen

**********
Formally Accepted
PONE-D-22-06419R1
Assessing behavioural profiles following neutral, positive and negative feedback

Dear Dr. Dyson:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Dr. Alberto Antonioni
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.