Peer Review History
| Original Submission (March 21, 2022) |
PONE-D-22-08343
The causal role of affect sharing in driving vicarious fear learning
PLOS ONE

Dear Dr. Müllner-Huber,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Your manuscript has been assessed by two peer reviewers and their reports are appended below. The reviewers comment that your study could be strengthened by improved reporting of the effects observed and the correlational analyses. For example, one of the reviewers suggests that the study should apply a correction for multiple testing. In addition, the reviewers comment that the discussion could be improved by including comments on the key factors driving the effects. Could you please carefully revise the manuscript to address all comments raised?

Please submit your revised manuscript by Oct 26 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

- If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.
- Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.
- If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Maria Elisabeth Johanna Zalm, Ph.D.
Editorial Office
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. We note that Figure 1 includes an image of a participant in the study. As per the PLOS ONE policy (http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research) on papers that include identifying, or potentially identifying, information, the individual(s) or parent(s)/guardian(s) must be informed of the terms of the PLOS open-access (CC-BY) license and provide specific permission for publication of these details under the terms of this license. Please download the Consent Form for Publication in a PLOS Journal (http://journals.plos.org/plosone/s/file?id=8ce6/plos-consent-form-english.pdf). The signed consent form should not be submitted with the manuscript, but should be securely filed in the individual's case notes. Please amend the methods section and ethics statement of the manuscript to explicitly state that the patient/participant has provided consent for publication: "The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details". If you are unable to obtain consent from the subject of the photograph, you will need to remove the figure and any other textual identifying information or case descriptions for this individual.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data, e.g. participant privacy or use of data from a third party, those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The manuscript reports an experiment testing whether affect sharing modulates vicarious threat learning in humans. Participants undertook a vicarious Pavlovian threat conditioning procedure including a learning and a test phase under two different hypnotic suggestion conditions inducing either high or low affect sharing. During the learning phase, a higher unconditioned response (measured with skin conductance response), increased dwell time on the demonstrator's face, and more intense self-reported unpleasantness were found when watching the demonstrator receiving an electric stimulation in the high compared to the low affect sharing condition. During the test phase, participants exhibited a greater conditioned response (operationalised as the difference in skin conductance response to the CS+ minus to the CS-) in the high relative to the low affect sharing condition, whereas no effect of the affect sharing manipulation was observed at the level of the explicit reports of the CS-US contingencies. Overall, the study is interesting, sound, and carefully designed and conducted.
I found the manuscript to be clear, the hypotheses well motivated, and the conclusions balanced, with the limitations of the study being clearly and transparently acknowledged and discussed. I would also like to commend the authors for openly sharing their data and analysis scripts in a well-documented manner. I have a few comments that could be worth considering in a revision, notably with respect to the specificity of the effects observed and the correlational analyses. I describe these comments in detail below, along with some other minor points and a suggestion.

Primary points

1) It is somewhat unclear whether the effects observed were mainly driven by (a) increased vicarious threat conditioning in the high affect sharing condition, (b) impaired vicarious threat conditioning in the low affect sharing condition, or (c) a combination of both. Even though I fully understand that it is difficult to draw strong conclusions on this point without the inclusion of a control condition (e.g., hypnotic suggestion without a manipulation of affect sharing), and that adding such a condition to the current design would likely not be feasible (or advisable), I think that a discussion of the key factors driving the effects could be very beneficial to the manuscript. This could provide some deeper insights into the specificity and characteristics of the role of affect sharing in vicarious threat learning (e.g., is the influence of affect sharing an "all or none", dichotomous effect, or is it more granular, with affect sharing acting as a parametric or continuous modulator of vicarious threat learning?).

2) The fact that the hypnotic suggestions may have manipulated cognitive factors other than affect sharing is well acknowledged and discussed in the manuscript. However, I was wondering whether this manipulation could also have had an impact on other affective phenomena that do not necessarily rely on a social component. For instance, the hypnotic suggestions might have affected the relevance of emotional information in general, without being specific to its social nature. This could in turn have impacted vicarious threat learning. Although I agree this alternative explanation is rather speculative and does not call the current findings into question, a discussion thereof could be of interest and possibly contribute to strengthening the manuscript even further.

3) It was unclear to me whether the correlational analyses performed were corrected for multiple testing. Given the number of correlations run, I would strongly recommend applying a correction for multiple testing, for instance by controlling the false discovery rate (FDR; Benjamini & Hochberg, 1995). If the authors prefer not to apply such a correction, this should be clearly motivated and mentioned, and an explicit notice urging extra caution in the interpretation of the results from the correlation analyses should be added. In any case, more information on how risks of false positives were mitigated or considered in those analyses would be good to report.

Minor points

4) On line 94: I wonder if more information could be provided on the meaning of a hypnotic suggestibility score of 7. This would give readers who are not experts in hypnosis (like me) a bit more context.

5) In the section "Sample and screening", some demographic information about the characteristics of the sample (e.g., age, gender, sex, etc.) tested in the experimental study should be provided.

6) Section "Sample and screening", ll. 105-106: I believe it would be helpful to also report the rule used for terminating data collection here (see Simmons et al., 2011).

7) Section "Test stage", ll. 171-177: I think it would be beneficial for clarity to already mention here that the test stage is performed without any electric stimulation being delivered.

8) Section "Physiological measures", ll. 219-231: For transparency, I think it would be good practice to mention in the main text (in addition to the supplementary materials) that electrocardiography was measured during the experiment.

9) Section "Eye tracking", l. 304: I would suggest that the default settings used to extract fixations be briefly described or reported.

10) Concerning Figure 2, I was not sure why the results were displayed as a function of blocks, as this factor was not included in the statistical analyses. My suggestion would be that Figure 2 specifically illustrate the results associated with the statistical analyses reported in the main text, with the current Figure 2 being moved to the supplementary materials.

11) In the figure legends (see ll. 371, 398, 434, 511): The plots are described as raincloud plots. However, raincloud plots typically also include a visualisation of the distribution (i.e., "the cloud") of the individual datapoints, and not only the individual datapoints themselves ("the rain"; see Allen et al., 2021). I would therefore generally recommend avoiding referring to these plots as raincloud plots. That said, the plotting of the individual datapoints is extremely valuable and the figures are very nice.

12) In the results section, I would suggest that the specific panel of the figure in question be referred to (e.g., "as shown in Fig 4A" instead of "as shown in Fig 4", see l. 418; see also ll. 495, 516) when describing the associated results, as this facilitates the identification of the relevant information in the figure.

13) In the results section, please report the degrees of freedom associated with the correlation analyses.

14) Discussion, ll. 626, 632, 641: I would strongly recommend specifying "vicarious (or social) fear learning" instead of using "fear learning" only when describing the findings.
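[Editorial illustration: the false discovery rate correction recommended in point 3 (Benjamini & Hochberg, 1995) can be sketched in a few lines of plain Python; the p-values below are invented for demonstration and this is not the authors' analysis code.]

```python
# Minimal sketch of the Benjamini-Hochberg step-up FDR procedure.
# The p-values are made-up examples, not data from the study.

def fdr_bh(pvals, alpha=0.05):
    """Return a list of booleans: True where H0 is rejected at FDR level alpha."""
    m = len(pvals)
    # Sort indices by p-value, smallest first, remembering original positions.
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k such that p_(k) <= (k/m) * alpha.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    # Reject all hypotheses with rank <= k_max (the step-up rule).
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(fdr_bh(pvals))
```

Note that the step-up rule can reject a p-value even when it sits above its own threshold, as long as a larger-ranked p-value falls below its threshold; this is what distinguishes it from a simple per-test cutoff.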
Suggestion

15) For the sake of completeness in results reporting, I would suggest that a confidence interval (e.g., 90% for tests relying on a one-sided distribution, 95% for tests relying on a two-sided distribution) around the effect size estimates be calculated and reported (see http://daniellakens.blogspot.com/2014/06/calculating-confidence-intervals-for.html). A useful R package in that regard is effectsize (https://cran.r-project.org/web/packages/effectsize/index.html; https://easystats.github.io/effectsize/).

Signed, Yoann Stussi

REFERENCES

Allen, M., Poggiali, D., Whitaker, K., Marshall, T. R., van Langen, J., & Kievit, R. A. (2021). Raincloud plots: A multi-platform tool for robust data visualization. Wellcome Open Research, 4, Article 63. https://doi.org/10.12688/wellcomeopenres.15191.2

Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1), 289-300. https://doi.org/10.1111/j.2517-6161.1995.tb02031.x

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366. https://doi.org/10.1177/0956797611417632

Reviewer #2: This is an excellent paper that I am happy to recommend for publication pending minor revisions. I only have a few minor comments that the authors might want to take into account if asked to revise the paper.

1. Line 84: It might be good to make explicit why you expected poorer performance under low affect sharing.

2. Line 188: The fact that the results were clearly different in Round 2 than in Round 1 suggests that the attempts to avoid carry-over effects were unsuccessful. Given these carry-over effects, would it not make sense to analyze and report only the Round 1 data?

3. Line 633: It was not entirely clear to me what you mean by "was correlated between learning and test stage". Please clarify.

4. Line 642: It might be good to clarify whether the lack of interaction was due to ceiling effects. The figure suggests that it was not, but still. Also, higher estimates do not necessarily mean better contingency awareness. There were four CS-US pairings, so if someone gives an estimate of 6, he or she has poor memory for the pairings. For this reason, would it not be better to calculate the absolute difference between the number of actual CS-US pairings (i.e., 4 for the CS+) and the estimated number of CS-US pairings? Finally, what if you test the impact of suggestion on CS+ estimates only? Is this effect significant?

5. Discussion: Nothing is said about the theoretical implications of the results. In a way, I am happy about this, because these data (like most other data) do not differentiate between different accounts of observational fear conditioning (e.g., association formation vs. propositional theories). The results as such are interesting because they reveal an important moderator of the effect. Nonetheless, it might be worthwhile to briefly discuss the results in relation to these models.

Signed, Jan De Houwer

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Yoann Stussi
Reviewer #2: Yes: Jan De Houwer

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
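[Editorial illustration: the confidence intervals requested in points 13 and 15 can be computed with the effectsize R package cited above; as a language-neutral sketch, here is the standard Fisher z-transform approximation for a 95% CI around a Pearson correlation. The r and n values are invented for demonstration and are not taken from the study.]

```python
# Approximate CI for a Pearson correlation via the Fisher z-transform.
# The inputs (r = 0.45, n = 40) are made-up example values.
import math
from statistics import NormalDist

def pearson_r_ci(r, n, confidence=0.95):
    """Return (lower, upper) bounds of an approximate CI for a correlation."""
    z = math.atanh(r)                    # transform r into z-space
    se = 1.0 / math.sqrt(n - 3)          # approximate standard error in z-space
    z_crit = NormalDist().inv_cdf(0.5 + confidence / 2)
    lo = math.tanh(z - z_crit * se)      # back-transform bounds to r-space
    hi = math.tanh(z + z_crit * se)
    return lo, hi

lo, hi = pearson_r_ci(r=0.45, n=40)
print(f"r = 0.45, n = 40, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The back-transformed interval is asymmetric around r, which is expected: the sampling distribution of a correlation is skewed except at r = 0, and the z-transform accounts for this.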
| Revision 1 |
The causal role of affect sharing in driving vicarious fear learning
PONE-D-22-08343R1

Dear Dr. Müllner-Huber,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Thiago P. Fernandes, PhD
Academic Editor
PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data, e.g. participant privacy or use of data from a third party, those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have provided a thorough revision of their manuscript. They have carefully and convincingly addressed all my previous comments. Overall, the revision contributed to strengthening the manuscript, and I have no further comments. I believe the paper is ready for publication and will make a valuable contribution to the field of social threat learning in humans. Signed, Yoann Stussi

Reviewer #2: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Yoann Stussi
Reviewer #2: Yes: Jan De Houwer

**********
| Formally Accepted |
PONE-D-22-08343R1
The causal role of affect sharing in driving vicarious fear learning

Dear Dr. Müllner-Huber:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Thiago P. Fernandes
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.