Peer Review History
Original Submission: March 4, 2022
PONE-D-22-06537
Tailored interventions into broad attitude networks towards the COVID-19 pandemic
PLOS ONE

Dear Dr. Chambon,

Thank you for submitting your manuscript to PLOS ONE. I had the pleasure of having two expert reviewers who, as you will read, provide an extremely detailed and constructive review of your manuscript. I share their positive appreciation and invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Jul 07 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Adriaan Spruyt, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

2. We note that Figure 2 in your submission contains copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright. We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

a. You may seek permission from the original copyright holder of Figure 2 to publish the content specifically under the CC BY 4.0 license.
We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

"I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form."

Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission. In the figure caption of the copyrighted figure, please include the following text: "Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year]."

b. If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license, or if the copyright holder's requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.
If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In this paper, the authors present the results of a study in which they surveyed 6093 participants regarding the covid-19 pandemic at five time points and examined structure in responding. The structure observed during the first measurement was also used to select specific factors as intervention targets based on their relation to other factors. During the third and fifth measurement, two factors were the target of a persuasion intervention and it was examined whether the interventions (1) produced changes in these factors and (2) produced changes in related factors. The former hypothesis was confirmed in most cases whereas the second hypothesis was disconfirmed in most cases.

Overall, I found this paper very interesting to read. It reports a very extensive and well-developed research project that addresses an innovative and timely research question. The question of how interventions for important behavioral problems (e.g., related to the covid-19 pandemic) can be designed well is of crucial importance. Taking a network approach here and testing the value of this network approach for changing behavior related to real-world problems is quite novel and may have great potential. The authors are very thoughtful in their explanation of the research rationale, design and results. The writing is clear and the paper presents an interesting story that is easy to follow with sufficient detail (e.g., in terms of design, results, …). The experiments were also adequately powered and well-designed and the analyses seemed suitable.
The data and analysis scripts were also made available on OSF (together with a clear codebook). That said, there are a number of things that came to mind while I was reading the paper that the authors might use to further improve their paper.

(1) Most importantly, the authors should try to more clearly separate the level of description (behavior) and the mental level of explanation (nodes). The authors often talk about effects on "nodes" but if these nodes are defined at the mental level, then such effects cannot be directly observed. The authors may use an intervention to TARGET specific nodes as defined within the framework, but they cannot know whether they INFLUENCE these nodes (or whether they exist). The authors are only postulating a relation of this node to specific questions in the survey and they can only observe responses on these questions. It would therefore be helpful if the authors clearly separate the behavioral effects and their explanation within the framework. In this way, it can also be clarified that the framework is only a tool - e.g., to build more effective treatments - rather than an accurate representation of the human mind.

(2) Related to the previous point, there was some unclarity about the research aims and, relatedly, the relevance of the research question. What were the authors' aims with this research? Was it to test the value of using the framework to influence behavior? Was it to test the value of using the framework to predict behavior? Was it to test the validity of the framework (I hope not, as that would seem impossible with this data - or with any data for that matter)? Given this aim, what was the conclusion of the research? It was a bit difficult to find out what would be the take-home message after reading this paper. The authors did specify hypotheses in their introduction but the hypotheses were not clearly related to the goals nor were they as specific as one might want. How are we to interpret the results in light of these hypotheses? I guess many crucial hypotheses were not confirmed (except for the ones about the manipulation influencing the variable of interest - but I guess that is simply a matter of choosing a targeted intervention rather than a test of the framework), so does this mean that using the framework would not be as valuable as expected? Note that it is of course entirely OK if this research was very exploratory, but it would be good then if this is indicated more clearly.

(3) The authors sometimes seem to interpret their results in terms of causal relations. For instance, p. 24: "The interventions aimed at increasing the central node Trust also resulted in significant effects on two connected nodes (i.e., Measures Support and Social Norm), which indicates a causal relation." This seems unwarranted. It is not because you targeted change in specific questions (related to the Trust node) that the construct that these questions are assumed to probe is indeed what was changed by the intervention. It is entirely possible that the intervention produced changes in several related constructs. Indeed, evidence in this regard was found for the economic consequences interventions, which only influenced responses on questions for a related construct. The authors also seem to use mediation analyses related to the causality question, but it should be noted that mediation analyses are not a straightforward way to make such inferences (e.g., see Agler & De Boeck, 2017), which makes it difficult to understand what these analyses add. It would be good to discuss why the authors want to use mediation analyses here (in relation to their limitations).

(4) It is unclear why the authors did not choose to engage in pre-registration (or did they?). This can be quite valuable, especially for these very extensive research projects with a large number of variables and possible analyses.

(5) There is little information about statistical power.
Given the high drop-out, was there still sufficient power to find between-subject effects in the fifth wave?

(6) Unclear what this sentence meant: "Finally, although network analysis appears to successfully provide targets, the network as a whole appears resilient against the interventions, especially in the long" (p. 29).

Reviewer #2: The authors present what I think is a very important contribution to the applied network psychometrics literature, and I enjoyed reading it. For the editor's information, the network literature is largely missing intervention studies testing some of the hypotheses that have come from network theory. In particular, the question of whether cross-sectional network structures can inform interventions has been an open question and an arguably heavily criticised idea in recent years. In the present piece, the authors study targeted intervention on attitude networks within the context of the COVID-19 pandemic. That is, they estimate a cross-sectional network, and then design and test an intervention based on metrics that are purported to potentially capture the 'importance' of variables in the network. They present preliminary evidence that the network metric of strength centrality may be useful to inform intervention at the population level. I recommend that the article is considered for publication with minor revisions, by either incorporating some changes/edits regarding the queries outlined below, or, of course, by clarifying/arguing some methodological points I allude to below. The rest of my review works through the paper in order.

Introduction:

• Line 78, starting from "Calculating…": I think this is a misleading sentence because it implies the study is about network analysis of panel data, but no panel network models are estimated in the present article.

• Line 87, starting from "Such network properties…" to the end of the paragraph.
I think this would be strengthened with a few edits to include the following points, which I think would further highlight the strengths and importance of the study:

o References are missing that initially suggested the idea of the centrality hypothesis.

o I think the criticisms that the centrality hypothesis has received should be acknowledged.

o Related to the above bullet point, to my knowledge, strength centrality as a metric to inform intervention is disputed not just because of direction of effects but also because of the possibility of missing latent common causes, the boundary specification problem, and whether between-subject networks are appropriate to inform intervention strategies. If I am not mistaken, then I think this should be included, because the importance of the current paper is that it provides evidence that cross-sectional networks and strength centrality might be useful for informing interventions at the population (between-subjects) level.

o From line 98, when discussing Zwicker's study: from my recollection, Zwicker et al., whilst intervening on a high-centrality node, made no comparison to a low-centrality intervention, which the present study does. I think the introduction would be strengthened by including how this study has gone beyond what other intervention studies like Zwicker et al. have done in the literature, i.e. formally compared high v. low centrality and tested mediation.

Methods:

• Line 174: I think the sample size information is unclear. The references cite simulations on Ising and GGM models, not MGMs. If sample size was not determined a priori based on simulations of an expected MGM structure, I think this should be made clearer. (The stability of estimates as provided in the R code is useful, but in the main text what exactly the decisions around sample size were is not clear; if it was as broad as 'collect as large a sample as possible but make sure we check stability of the estimated network', I think this should be made clear.)
• Around line 197: It is unclear to me whether subjects that participated in the 1st intervention (3rd wave) were excluded from or eligible for inclusion in the 2nd intervention (5th wave). I think this should be made clearer and the possible implications discussed.

• Line 214-219, starting from "Node strength…": a minor point, but here it defines strength centrality as the average conditional association, but then explains calculating it as simply the sum of edge weights (and not divided by the number of edge weights, so not an average?).

• The attitude network has lots of negative edge weights - was this expected? Can the authors mention why strength centrality rather than, say, expected influence was used?

• Line 267: I don't know what PLOS policy is on this, but should how randomisation was performed be explained?

• Line 283 paragraph: The authors use 10-fold cross-validation for selecting the tuning parameter in the regularised MGM in a large sample. I think it would be useful to justify the choice of CV for model selection (and perhaps also justify the choice of regularisation?), compared to, say, EBIC. Did the authors want to 'err on the side of discovery rather than caution'?

• Line 294-295: It mentions the alpha level was set at p < 0.01 to "focus on the strongest effects". Is this a correct interpretation of a p value? It reads as though referring to effect size, but would lowering the alpha level not just reduce the type I error rate (i.e. not necessarily focus on the "strongest effects")?

• I think it would be useful for the reader to have all the information related to the intervention comparison statistical analysis in the same section rather than having to refer to footnotes.

• Line 316: Can the authors clarify here what the edge weights are in an MGM for readers unfamiliar with the model?

• One thought I had here is about the validity of strength centrality in a weighted MGM. To my understanding, edges in an MGM can only be compared to edges of the same type, e.g. a gaussian-gaussian edge (G-G) cannot be compared to a gaussian-binary edge (G-B), because one is an average of the same type of coefficient, but the other is an average of two different types of coefficient. In this model, you have three edge weight types: G-G, G-B, and B-B. Is strength centrality affected by whether the edges present have more or less of one type of edge weight? Assume you have two gaussian nodes with exactly the same strength centrality, but one node's edge set is {G-G, G-G, G-B}, and the other's edge set is {G-G, G-B, G-B}: can strength centrality be validly compared? I haven't looked at all the edge types for the intervened nodes, but is this an issue for the current study? If it's not, then this isn't a problem. I'm not sure what the answer is, but it was a thought I had that maybe other readers will have, so I think it should be considered here.

• Related to the MGM, the authors use edge weight comparisons and test these for significance, with results supplied on the OSF link and a few results mentioned in text. Related to the above bullet point, I again wonder whether comparing all edges to each other for significance is valid in an MGM. Is it only that edges of the same type can be compared to each other for significance, and edge weights of different types cannot be compared to each other?

Results:

• Table 3 presents the means and standard deviations for all variables, but the text reports the medians. Is there a reason for this? Is one not more appropriate than the other for a given variable, and should one be used consistently?

• I may have missed something here, but regarding the group comparisons and footnote IV: it mentions a Kruskal-Wallis test was used to compare differences between the three groups, but it doesn't mention what was used for pairwise comparisons. Was this pairwise Mann-Whitney U tests?

• Line 366: this is a qualitative statement about the possible role of negative affect; should this be caveated? e.g.
"possibly plays a role", especially given bridge centrality was not calculated.

• This might not be a problem in print, but in the PDF supplied the resolution of the network figure is not very good, making some nodes difficult to identify.

• Line 374: should "condition association" be "conditional association"? Same for line 388.

• Line 390: "…and therefore important" should probably be caveated, e.g. "and therefore potentially important".

• Line 395: I think the evidence for "importance" is over-stated here. The edge weight connections simply suggest they may be important factors that affect compliance. Relatedly, on line 397, start of the second sentence, I think "Most important" should be 'toned down'.

• This is a minor point that shouldn't affect any conclusions, but can the authors check the strength centrality measures and perhaps clarify how values were rounded, because this isn't reported in text (unless I missed it)? For example, in text (line 438), Trust is reported as 1.80 and Social Norm as 1.00; using the data and R script provided I have Trust as 1.83 (rounded to 2 d.p.) and Social Norm as 1.02 (rounded to 2 d.p.). Likewise for Wave 5, the text (line 489) reports the wave 1 centralities of Measures Support as 2.10 and Economic Consequences as 1.10, but I have Measures Support as 2.05 and Economic Consequences as 1.09.

• Line 456: If I have read this correctly (and assuming I understand the non-parametric tests used correctly!), the text says that participants in the high trust condition scored significantly higher on Social Norm than participants in the low trust condition and control condition. But the median of Social Norm in the high trust condition was 5, and the median in the control condition was 5. I could well be wrong here because I'm not a statistician, but it's worth mentioning because other readers may have the same thought: can we conclude that participants scored higher (with a presumably Mann-Whitney post-hoc test after a significant Kruskal-Wallis test?) when they have the same median? Does this not just imply the distributions are different if the median is the same? Can we conclude the direction of distributional differences is in favour of the high trust group? (Same for lines 466 to 468.)

• Intervention wave 5: with the measures support intervention, there is a significant effect on social norm, but this is not mentioned in the text (just the table). I think this should be added to the text as well, so that it contains the overall picture of the intervention effects.

• Intervention wave 5, from line 503: there was a significant effect on negative affect in this intervention; I think this should also be mentioned in the text for the same reason as the previous bullet point.

• I don't know enough about formal mediation analysis to comment on this, but I do think it is a particular strength of the paper that mediation analysis was conducted to test whether the effects could be explained via mediation of the target node.

Discussion:

• Line 632 (starting from "longitudinal design…") to line 634: I think this is just a wording issue, but the sentence talks about the current design enabling the estimation of directed networks from the panel data structure. It references a study in which this was done, but I think this is misleading because it reads as though this is what was done in the current study. Perhaps consider rewording?

• Line 650: "…interventions, especially in the long" - missing word, 'run'?

Miscellaneous:

• I was able to reproduce the network structure with the code and data supplied, but the R script didn't contain the intervention comparisons, which it states were conducted in SPSS.

• The edge weight accuracy, and edge and centrality difference test figures are missing from the supplement and instead are provided on the OSF. I think the edge weight accuracy/stability plot should be provided in the supplement.

• Does the R script provided also require loading of the psych library for the fisherz function on line 586?
(I had to load the psych library to get this to work.)

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Pieter Van Dessel
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
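Reviewer #2's Methods point about strength centrality (defined in the manuscript as the average conditional association but computed as a sum of edge weights) can be made concrete with a short sketch. This is an illustration only, on a made-up 4-node adjacency matrix, not the study's network; all values are hypothetical.

```python
import numpy as np

# Hypothetical symmetric weighted adjacency matrix (zero diagonal); entries
# stand in for conditional-association edge weights, including negatives.
W = np.array([
    [0.0,  0.3, -0.2,  0.0],
    [0.3,  0.0,  0.1,  0.4],
    [-0.2, 0.1,  0.0,  0.0],
    [0.0,  0.4,  0.0,  0.0],
])

# Strength centrality as commonly computed: the SUM of absolute edge
# weights incident to each node (not divided by degree, so not an average).
strength = np.abs(W).sum(axis=1)

# The "average conditional association" reading would divide by degree:
degree = (W != 0).sum(axis=1)
mean_assoc = strength / np.maximum(degree, 1)

print(strength)    # node 1: 0.3 + 0.1 + 0.4 = 0.8
print(mean_assoc)  # the two definitions can rank nodes differently
```

In this toy matrix, node 1 has the highest strength by the sum definition, but node 3 has the highest average association, which is why the definitional point matters.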
Revision 1
PONE-D-22-06537R1
Tailored interventions into broad attitude networks towards the COVID-19 pandemic
PLOS ONE

Dear Dr. Chambon,

Thank you for submitting your manuscript to PLOS ONE. As you will see, both reviewers are extremely satisfied with the changes made. I share their assessment. Still, Reviewer 2 sees some room for further improvement. I have therefore made the decision to formally request a "minor revision". I am convinced, however, that you can make these last adjustments quickly and easily. I look forward to receiving the updated version soon.

Please submit your revised manuscript by Oct 16 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Adriaan Spruyt, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I thank the authors for their extensive revisions. In my opinion, the authors have decidedly improved the paper. All critical points have been addressed and I applaud the changes that were made.

Reviewer #2: Thank you for re-submitting the paper; I was interested to see this again. I've read the replies to the reviewer comments, and I have just a few comments below which are either minor or are points that I think are appropriate to discuss post-publication.

1) Should Neal and Neal (2021) be cited when mentioning criticism of centrality, specifically re: the boundary specification problem (doi: 10.1037/met0000426)? If this paper is covered in the cited Hallquist paper then perhaps not, but otherwise I think it probably should be cited here.

2) Should the IQR be added to Table 3 for the medians?

3) Regarding my initial comment about whether a node's edge set may influence strength centrality, I'm not sure the authors' reply completely addressed this. I understand that edges in an MGM all have the same substantive interpretation, e.g. conditional associations, but what I don't think the reply addressed (unless I'm mistaken) is what effect averaging different types of regression coefficients may have on calculating strength centrality.
For example (if I understand the MGM correctly), if you have a continuous node connected to two other continuous nodes, then the edge weights are averaged linear regression coefficients and strength centrality is the sum of these two averages. But if you have a continuous node connected to one other continuous node as well as a binary node, then one edge is an average of two linear regression coefficients and the other edge is an average of a linear regression coefficient and a logistic regression coefficient, so strength centrality is the sum of different types of coefficient averages compared to the first node. Is it then valid to compare the two nodes on strength centrality, even if the edges themselves can all still be interpreted as conditional associations? It is, however, quite possible that I am misunderstanding, because I am not an expert in psychometrics or statistics.

Overall, I'm satisfied that the authors addressed all points sufficiently, and as such, my recommendation would be for the paper to be published.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
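Reviewer #2's point 3 can be made concrete with a small numerical sketch. This is not taken from the manuscript: the coefficient values are invented for illustration, and the simple two-coefficient averaging rule is an assumption about how an MGM combines the two nodewise regression estimates into one edge weight.

```python
# Sketch of the reviewer's point 3, with invented coefficient values.
# Assumption: in the MGM, each edge weight is the average of the two
# nodewise regression coefficients estimated from either end of the edge.

def edge_weight(coef_from_a, coef_from_b):
    """Average the two nodewise regression coefficients for one edge."""
    return (coef_from_a + coef_from_b) / 2

def strength(edge_weights):
    """Strength centrality: sum of absolute edge weights incident to a node."""
    return sum(abs(w) for w in edge_weights)

# Node X: continuous, connected to two continuous nodes.
# Both of its edges average two *linear* regression coefficients.
x_edges = [edge_weight(0.30, 0.26), edge_weight(0.10, 0.14)]

# Node Y: continuous, connected to one continuous and one binary node.
# Its second edge averages a *linear* and a *logistic* coefficient,
# which live on different scales (raw units vs. log-odds).
y_edges = [edge_weight(0.30, 0.26), edge_weight(0.10, 0.80)]

print(round(strength(x_edges), 2))
print(round(strength(y_edges), 2))
```

The arithmetic is identical for both nodes, which is the reviewer's concern: nothing in the strength formula distinguishes an average of two same-scale coefficients from an average of mixed-scale ones, so the two sums end up being compared as if they were commensurable.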
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 2
Tailored interventions into broad attitude networks towards the COVID-19 pandemic

PONE-D-22-06537R2

Dear Dr. Chambon,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Adriaan Spruyt, Ph.D.
Academic Editor
PLOS ONE
Formally Accepted
PONE-D-22-06537R2

Tailored interventions into broad attitude networks towards the COVID-19 pandemic

Dear Dr. Chambon:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff
on behalf of
Dr. Adriaan Spruyt
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.