Peer Review History
Original Submission: April 23, 2021
Transfer Alert
This paper was transferred from another journal. As a result, its full editorial history (including decision letters, peer reviews and author responses) may not be present.
PONE-D-21-13365
The effects of quality of evidence communication on perception of public health information about COVID-19: two randomised controlled trials
PLOS ONE

Dear Dr. Schneider,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

I have been fortunate enough to receive two excellent reviews of your work. I would be grateful if you could submit a revised version of the manuscript addressing all the points that they make.

Please submit your revised manuscript by Jul 15 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Richard Rowe
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf

2. During our internal checks, the in-house editorial staff noted that you conducted research or obtained samples in another country. Please check the relevant national regulations and laws applying to foreign researchers and state whether you obtained the required permits and approvals. Please address this in your ethics statement in both the manuscript and submission information.

3. Thank you for stating the following in the Funding Section of your manuscript: "The Winton Centre for Risk & Evidence Communication, thanks to the David & Claudia Harding Foundation". We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows: "The author(s) received no specific funding for this work." Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

4. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to 'Update my Information' (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager. Please see the following video for instructions on linking an ORCID iD to your Editorial Manager account: https://www.youtube.com/watch?v=_xcclfuvtxQ

5. Please amend your list of authors on the manuscript to ensure that each author is linked to an affiliation. Authors' affiliations should reflect the institution where the work was done (if authors moved subsequently, you can also list the new affiliation stating "current affiliation: ..." as necessary).

6. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly.
Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: I Don't Know

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Very important study and well done. Congratulations to the authors. I enjoyed reading this work and think it provides useful information. However, I worry that the presentation of the data is its main limitation. Apologies for any typos or grammatical atrocities below.

Major Comments

1. Would you consider reporting per CONSORT and adhering to reporting standards for both trials, please? http://www.consort-statement.org/consort-2010. This includes adding CONSORT flow diagrams.

2. This is a very long report with somewhat unusual organisation: 1. introduction; 2. general methods linked to three different online locations needed to understand the data (protocols, data, supplement); 3. additional methods for Experiment 1; 4. Experiment 1 results; 5. Experiment 1 discussion; 6. additional methods for Experiment 2; 7. Experiment 2 results; 8. Experiment 2 discussion; 9. conclusions (another 2.5 pages!). Even single reports of two complex trials are much more brief. This paper should be reduced by at least 50% in length. There should be only one section each for introduction, methods, results, and discussion.

2a. Description of outcomes - it is somewhat repetitive to have almost exactly the same verbiage repeated under multiple headings (perceived trustworthiness, perceived effectiveness, etc.) when it could likely be summarized in a single line referring to multiple outcomes, e.g.:
Compared to individuals randomized to an infographic displaying "high quality evidence", those assigned to the infographic displaying "high certainty evidence" reported x ("Quality" group mean [95% CI], "Certainty" group mean [95% CI]; mean difference [95% CI])...

3. Reporting of effects and interpretability - it is helpful to convey the magnitude of effect, such as a mean difference with 95% CI, rather than focusing on p values and technical descriptions of statistical tests in the abstract and main text. As it stands, the paper is very difficult for the general reader to interpret. No generalist will understand what "F(1,946) = 35.61" or "ηG2 = 0.036" or the like means. Likewise, reporting capital M (e.g. "M = 4.11, 95% CI [3.96, 4.26]") and similar uses of M, or "dadj", would not be clear or approachable for a general audience. It is also not completely clear what OR represents (e.g. line 226 or 234). If it is in fact an odds ratio, how exactly was this generated from the data and what does it represent? Presumably you would have had to make some kind of cut point? It would be more helpful to use plain language. That doesn't mean you have to completely eliminate all the statistical reporting, but at minimum the text should be easily interpretable by a general reader.

Minor Comments

4. References mismatched? - lines 85-88: references 11-13 don't seem to match evidence quality rating systems. Please verify that all references match.

5. OSF database and protocols, authors - Spiegelhalter seems absent from the OSF database; I only see the other 3 coauthors ("Contributors: Claudia R. Schneider Alexandra Freeman Dr. Sander van der Linden"). I recognize different individuals contribute in different ways to projects. Please verify the correct contributions in OSF.

6. Line 93 - change "by the GRADE" to "by GRADE" or "using the GRADE approach" or something similar.

7. Oxymoron - lines 101-102 - "in general on COVID-19". Perhaps rephrase to "addressing interventions for COVID-19" or "addressing interventions for SARS-CoV-2" or something similar.

8. Similar terms with two different meanings - "uncertainty" in the manuscript commonly refers to both (1) statistical variance associated with point estimates (e.g. 95% CIs) and (2) quality (aka certainty or confidence) of estimates. Given that "uncertainty" is used mainly for the latter rather than the former, is there verbiage other than "uncertainty" that can be used consistently throughout when referring to 95% CIs?

9. Abstract - please, the preference is for estimates of effect rather than just p values.

10. Questions posed - For the concept of perceived trustworthiness, are "trustworthy", "accurate", and "reliable" synonymous? They may be correlated, but do they truly represent the same construct? Trustworthiness would be the veracity that the data presented are true, e.g. "a trustworthy source of information". One can generate a trustworthy and low certainty estimate of effect. One can also generate an untrustworthy, high certainty estimate of effect. By contrast, an estimate may be neither accurate (the extent to which the reported value approximates a true value) nor reliable (the extent to which repeated measures of the same value result in the same estimate). Please clarify exactly what "trustworthy" was meant to convey, and did you test at all whether the people surveyed agreed with the intended meaning?

11. Any clustering or correlation of participants, e.g. multiple members of the same household? How did you (or the survey company you employed) handle this? Could you describe more how the survey company samples, precisely who the sampled population was, and how they are found by the company? Were individuals paid by the company to partake in the survey?

12. Lines 183-184 - GRADE uses both certainty and quality of the evidence, and if anything favours certainty of the evidence over quality of the evidence. All recent GRADE guidelines refer to "certainty". Hence, "the commonly used GRADE wording" is not correct. Perhaps you could just say something to the effect of "two alternate wordings used by GRADE".

13. Given that Experiment 1 was a 2x2 factorial and Experiment 2 was a 3x2 factorial, please report whether there was any interaction first, then describe the effects of either assignment.

14. Supplemental tables describing demographic variables - linked to the CONSORT main comment. Could this please be a main table encompassing both trials, as is often reported in randomized studies, and please report by assignment (+/- overall studied population) rather than solely the overall studied population, as is presented currently.

15. Linked to reporting - summary display of results - Do you think a simple table reporting the mean values (and SD or 95% CI) of each group per outcome, followed by their associated mean difference with 95% CI, might be more informative than Figures 2 and 4? (The dot plots could be moved to the supplement.)

16. E.g.:

17. A table with two column headings, Experiment 1 and Experiment 2, with their associated (brief) descriptions.

18. Row headings are outcomes.

19. Columns 1-4 for Experiment 1 are mean and 95% CI values for groups 1-4 of the 2x2 trial. Columns 5-9 are between-group mean differences with 95% CI.

20. Similar could be repeated for Experiment 2.

21. The "Tukey HSD" abbreviation is not spelled out.

22. Reporting of groups - rather than describing groups as, for example, "in the high quality evidence group", it might be more clear to describe allocations as "participants assigned to the 'high quality' infographic group", "participants assigned to the infographic labelled as high quality", or similar.

23. Likewise, describing the effect of wording as quality or certainty of the evidence could be more clear, e.g. "no main effect of quality wording" could be "no main effect of wording as quality or certainty of the evidence". Again, it is more informative to describe mean differences between groups with 95% CI than to emphasize descriptions of statistical tests and p value / hypothesis testing. Of course, if you want to include the p value in addition to the estimate of effect, I would not oppose that.

24. Lines 245-259 - understanding of quality vs certainty of evidence - again, effect estimation may be more helpful here than just hypothesis testing. If the M values reported represent means, do you think a between-group mean difference of 0.21 on a 7-point Likert scale is meaningful? Could this be statistically significant but unimportant in practical terms?

25. Lines 287 to 291 - The language is much too technical here and loses readers - myself included.

26. It may be helpful to contrast these findings with those of non-public-health settings. Do they apply to communication of scientific results to the public in general? How do clinicians or other stakeholders react to the same presentation formats? (I understand you did not test this, but is there any data to support these other scenarios?) With regard to clinical guideline panels, it is notable that they are more likely to make strong recommendations when there is high certainty evidence (https://www.jclinepi.com/article/S0895-4356(21)00068-8/fulltext).

27. I would also be curious to know to what extent science communication should show mean effects with or without their associated 95% CI - a topic related to the one you tested. Is there any evidence in that regard?

Reviewer #2: This is a brilliant study. Congratulations. I have some suggestions that you might want to consider.

Background

1. You may want to correct statements suggesting that the GRADE Working Group uses 'quality of evidence'. Although it is correct that it did in the past, it has, for the most part, used 'certainty of evidence' since 2013 - including in the Lancet article that was the basis for the infographic you used.
See, e.g., https://doi.org/10.1016/j.jclinepi.2012.05.011, https://doi.org/10.1016/j.jclinepi.2017.05.006, https://doi.org/10.1016/j.jclinepi.2018.01.013, https://doi.org/10.1016/j.jclinepi.2018.05.011, https://doi.org/10.1016/j.jclinepi.2021.03.026

2. You may want to be less obscure when you explain the concept of 'certainty of evidence' in the background (where you refer to 'known unknowns', deeper uncertainties, and 'indirect' uncertainties) and in the discussion (where, for reasons that are not obvious, you refer to imprecision of effect estimates as 'direct' uncertainty and other sources of uncertainty, without any explanation of what these are, as 'indirect'). In the example you use in your study, GRADE was used to assess the certainty of the evidence, so you could quite easily refer to the sources of uncertainty considered in that approach (risk of bias, inconsistency, indirectness, imprecision, and reporting bias).

3. There is some other relevant research that you might want to reference in the background, e.g., https://doi.org/10.1016/j.jclinepi.2014.04.009, https://doi.org/10.1016/j.jclinepi.2007.03.011, https://doi.org/10.1177/0272989x10375853, https://doi.org/10.2196/15899

Methods

4. I would find it easier to understand your study if you described the methods in a more structured way. You may want to use the CONSORT checklist if you have not already.

5. You may want to provide more information about the participants, including your eligibility/selection criteria. I assume you only included adults. The Respondi website provides almost no information about their panel, so it is hard to know how typical or atypical they might be compared to other U.S. adults.

6. What were potential participants told about the study when they gave informed consent?

7. Why did you measure prior beliefs after showing participants the infographic? Did you check the reliability of these post hoc 'prior' beliefs by checking whether the change from those to their beliefs after seeing the infographic was consistent with their reported shift in beliefs?

8. The only description of the analysis that I found was that 'All analyses were carried out in R version 3.6.'

9. Is there a reason why you did not make the protocol for these studies available together with the other material on OSF?

10. It is not clear why you hypothesized that no information about the certainty of the evidence would be interpreted as high certainty evidence. That seems quite predictable given the way the infographic presented the information (with precise numbers appearing to give a clear answer to the question and no clue to suggest that the information was not trustworthy). Did you tell participants anything before they agreed to participate about what you were going to present that might have influenced how they perceived the information? Also, I don't follow your explanation for what you found in the discussion. There you say there were no 'external' cues about the certainty of the evidence, but there are cues that suggest it is trustworthy. I also don't understand the difference between your first two explanations (implicitly assuming high certainty or assuming high certainty if low certainty is not explicit), and I don't understand how loss aversion is relevant or would explain why participants perceived the infographic as representing high certainty evidence. (Moreover, the evidence suggests that framing effects may only occur under specific conditions - https://doi.org/10.1002/14651858.cd006777.pub2.)

Results

11. I found the statistics difficult to understand. It would be very helpful if you could make the results more understandable for non-statisticians - especially the size of the effects, including the meaning of the odds ratios you report.

12.
There are several places where you use significant or significantly without being explicit that you are referring to statistical significance. That is problematic, since this commonly leads to misunderstanding. In addition, there are several places where you report no or not 'significant effects'. When you do that, it is not possible to tell whether the findings are inconclusive because you did not have sufficient power to rule out a meaningful effect, or whether they rule out a meaningful effect.

13. I would find it helpful to have a table showing the main results for both experiments, i.e., the odds ratios (with confidence intervals) for each of the three main comparisons (low vs high certainty in experiments 1 and 2, and low vs no information about certainty in experiment 2) for trustworthiness, perceived effectiveness, and intended behaviour.

Discussion

14. The findings for trustworthiness and perceived effectiveness were very similar for experiments 1 and 2 (as one would expect). How do you explain the apparent difference in findings for intended behaviour?

15. Although you say that the infographic was "based on empirically-tested good practice recommendations for evidence communication", I thought there were several problems with it, which you might want to consider in the discussion. First, it does not give any contextual information. The question you asked participants about behaviour was about their "intentions to wear eye protection when in busy public places". However, all the studies on which the infographic is based were conducted in healthcare settings (except for one small study with no events). There is also no information about how long participants were followed up. The baseline risk (16%) is very high and misleading. It is highly unlikely that anyone not using eye protection in busy public places would experience that high a risk of getting Covid-19. It also is unlikely that random types of eye protection used by ordinary people in crowded public places would have that large an effect. There is no information about potential harms or downsides. I found the icon array confusing rather than helpful, since I did not understand immediately that each icon represented two people. Also, I am unaware of evidence that indicates that icon arrays effectively communicate information about effects. There is at least one other study that found that an infographic (including an icon array) did not improve understanding (compared to a plain language summary): https://doi.org/10.1016/j.jclinepi.2017.12.003.

16. You may want to reconsider or better justify your recommendation that communicators use the word 'quality' instead of 'certainty' based on participants reporting that it was 'easier to understand'. I do not agree with your recommendation. First, are you confident that they understood correctly what the word means in this context? I suspect more people are likely to misinterpret the meaning of 'quality', given how the word is commonly used, than to misinterpret 'certainty'. Second, whatever term is used, it should always be accompanied by an explanation, since most people are unlikely to understand the basis for judgements about the certainty of the evidence. Third, there is evidence that suggests that people want information about the certainty of the evidence, that they want plain language text - not just numbers - and that using adjectives such as 'probably' and 'may' can help to communicate the certainty of the evidence (https://doi.org/10.1016/j.jclinepi.2014.04.009, https://doi.org/10.1177/0272989x10375853, https://doi.org/10.1016/j.jclinepi.2019.10.014). I suspect it is more likely to be important for communicators to do that (consistently) than it is for them to use 'quality' instead of 'certainty'.

17. You may want to strengthen your discussion of what you call an ethical dilemma. First, I don't think public health authorities often do or should present information about effects in isolation. In fact, they frequently fail to provide any quantitative information. So, to the extent there is a dilemma, it is more in relation to including information about the certainty of evidence in connection with a recommendation or advice. In that context, there can be good reasons for recommending something despite low certainty evidence, and that justification can and should be provided. That has been done by many organisations and people that have made strong recommendations while being transparent about the uncertainty of the evidence for pandemic control measures. Second, many if not most people would consider it unethical to, for example, withhold information about adverse effects of vaccines to persuade people to be vaccinated. How is it different to knowingly withhold information about the certainty of the evidence to persuade people? If you believe there are good reasons to persuade people to wear eye protection in crowded public places, then you should make an honest argument to persuade them. How is it ethically justified to be dishonest and knowingly present untrustworthy information as though it were trustworthy? Third, when there is important uncertainty or disagreement, not being honest and transparent can perpetuate practices that are wasteful and may be harmful. It also can inhibit research to reduce uncertainty and disagreement, and it can undermine trust. Persuasive tactics can infringe on people's autonomy if information is withheld or if the tactics are not justified. They can also inadvertently harm people. If there is a good justification for persuading people, the basis for doing so should be transparent, and persuasive messages should not distort the evidence. This does not mean that clear, actionable messages cannot stand alone. Key messages should be up front, using language that is appropriate for targeted audiences, but it should be easy for those who are interested to dig deeper and find more detailed information, including the justification for a recommendation. When there are important uncertainties, they should be acknowledged.

Best, Andy Oxman

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
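Both reviewers ask for between-group mean differences with 95% confidence intervals in place of F statistics and bare p values. A minimal sketch of that style of reporting is below; the group ratings and sample sizes are hypothetical illustrations, not data from the study under review, and the interval uses a Welch-style standard error with a normal critical value rather than the paper's own analysis.

```python
import math
from statistics import mean, stdev

# Hypothetical 7-point Likert ratings for two infographic wording groups.
# Illustrative numbers only -- not data from the study under review.
quality_group = [5, 4, 6, 5, 4, 5, 6, 4, 5, 5]
certainty_group = [4, 4, 5, 3, 4, 5, 4, 4, 3, 4]

def mean_diff_ci(a, b, z=1.96):
    """Between-group mean difference with an approximate 95% CI
    (Welch-style standard error, normal critical value)."""
    diff = mean(a) - mean(b)
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return diff, diff - z * se, diff + z * se

diff, low, high = mean_diff_ci(quality_group, certainty_group)
# Reported in the format the reviewers suggest:
print(f"Mean difference {diff:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
```

The printed line ("Mean difference 0.90, 95% CI [0.28, 1.52]" for these toy numbers) is the kind of plain-language effect estimate the reviewers consider interpretable by a general reader.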
Revision 1
PONE-D-21-13365R1
The effects of quality of evidence communication on perception of public health information about COVID-19: two randomised controlled trials
PLOS ONE

Dear Dr. Schneider,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Oct 21 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Jun Tanimoto
Academic Editor
PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments (if provided):

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #3: This paper is well written, the topic is interesting, and the results seem correct; overall the work appears methodologically sound and acceptable. I make some recommendations for minor revisions.

##1 The authors should explain the limitations of their survey and analysis in more detail.

##2 Figure captions need to be improved (the authors can explain the figures in more detail).

##3 The Introduction should be improved by adding more recent and related works. For example:

Hypothetical assessment of efficiency, willingness-to-accept and willingness-to-pay for dengue vaccine and treatment: a contingent valuation survey in Bangladesh. Human Vaccines & Immunotherapeutics, DOI: 10.1080/21645515.2020.1796424 (2020).

Evolutionary game theory modelling to represent the behavioural dynamics of economic shutdowns and shield immunity in the COVID-19 pandemic. R. Soc. Open Sci. 7: 201095. http://dx.doi.org/10.1098/rsos.201095 (2020).

“Do humans play according to the game theory when facing the social dilemma situation?” A survey study. EVERGREEN, 07(01), 7-14 (2020).

Prosocial behavior of wearing a mask during an epidemic: an evolutionary explanation. Sci Rep 11, 12621 (2021). https://doi.org/10.1038/s41598-021-92094-2.

An evolutionary game modeling to assess the effect of border enforcement measures and socio-economic cost: export-importation epidemic dynamics. Chaos, Solitons & Fractals 146, 110918 (2021).

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 2
The effects of quality of evidence communication on perception of public health information about COVID-19: two randomised controlled trials

PONE-D-21-13365R2

Dear Dr. Schneider,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Jun Tanimoto
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:
Formally Accepted
PONE-D-21-13365R2

The effects of quality of evidence communication on perception of public health information about COVID-19: two randomised controlled trials

Dear Dr. Schneider:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Prof. Jun Tanimoto
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.