Peer Review History
| Original Submission: April 3, 2023 |
|---|
|
PONE-D-23-07977 Nudging accurate scientific communication PLOS ONE Dear Dr. Clavien, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Reviewers have found it difficult to understand several aspects of your manuscript, including the hypothesis and methods. I kindly ask you to carefully consider their comments to improve the clarity of the article and to address their concerns. Please submit your revised manuscript by Jul 28 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Alberto Molina Pérez, Ph.D. Academic Editor PLOS ONE Journal requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 2. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well. 3. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager. 
Please see the following video for instructions on linking an ORCID iD to your Editorial Manager account: https://www.youtube.com/watch?v=_xcclfuvtxQ. 4. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Partly Reviewer #3: Yes ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. 
For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: Thank you for the opportunity to review this paper. I read it with great interest. The topic is fascinating and, in light of the importance that social media nowadays plays in the sharing and spread of scientific (and other) news, the questions that the authors address in this paper are timely and important. The authors report findings from two large, pre-registered empirical studies, which is great. I also wholeheartedly agree with the authors that publication and discussion of null results emanating from scientific research is often as important as that of positive results. Before the paper gets published, however, I think the paper could be improved in a number of ways. I list my main, more general suggestions and remarks first, and smaller points after that. 
The more general suggestions, remarks, and questions: On the whole, the introduction to the paper is very well written and it motivates clearly the need for empirical studies that the authors conducted. However, as I read the paper, I often found myself flicking back and forth through the pages, and I had to consult the pre-registration document as well as the supplementary PDF files that the authors uploaded onto the Open Science Framework (OSF) database, in order to better understand the hypotheses that the authors set out to address and how their empirical studies did that. I think it’d be good to either have a single supplementary information document to accompany this paper and to make clear and direct references to that document from the main text to help a reader connect the dots, or to add references from the main text to the PDFs already uploaded onto the OSF, along with some additional guidance in the main text. This general blathering aside, below are more specific points in that regard. 1. In the introduction, when setting the scene for the subsequent discussion of empirical studies, the authors discuss the positive results bias and the confirmation bias in the dissemination of research results. This gives the impression that the nudges that the authors subsequently investigate in their empirical studies are meant to counteract these both forms of bias. However, if I understood the rest of the paper correctly, the authors assess the nudges only with respect to their effectiveness to counteract the positive results bias, but not the confirmation bias. Regarding the latter, the empirical studies were only meant to reveal (or confirm) the prevalence of the confirmation bias. It’d be good to make that clearer and more explicit in the introduction. 2. In the introduction the authors discuss the prospect of nudging people towards disseminating rigorous scientific results. 
However, in the context of the whole paper, the meaning of “rigorous” is somewhat obscure. The question of what counts as rigorous comes up, for example, when reading examples of vignettes that the authors used in their empirical studies. In all the provided examples (the one on p. 8 in the main text and the additional ones in the “Materials Main vignettes” PDF document on OSF), the sample size criterion favours one option while the control group criterion (that is, the use of randomized control trials) favours another. In such cases it is ambiguous which of the two presented options would reveal a choice to disseminate rigorous science. Presumably rigorous science is associated with larger sample sizes, the presence of control groups, no bias towards positive results, and no bias towards a specific research institution on the whole, that is, everything else being equal. a) It would be good to explain that somewhere early on in the text, perhaps where the authors discuss their experimental design and hypothesized predictions. I found the authors’ discussion of their predictions listed in the pre-registration document associated with the first study very useful for that matter. b) I think it’d be good to present those predictions somewhere early on in the main text too. c) It might also be helpful to explain to a reader that, despite there being vignettes where what counts as a choice of reporting rigorous science is ambiguous, these cases are counterbalanced with cases where that question has a clear answer (large sample size + the presence of a control group vs. small sample size + no control group). As a result, we should expect rigorous science to be associated with a bias for larger sample sizes and for the presence of a control group across the board, that is, across all participants and all possible vignettes. It took me a while to appreciate that when thinking about the authors’ discussed results.
I think that such slight (explicit) guidance for a reader’s thoughts would make it easier for one to appreciate the authors’ findings. d) Lastly, to better understand the extent to which lay people’s choices diverge from the ideal, it would be nice to present a Figure that would show how participants’ actual choices compared to the choices of an ideal, unbiased reporter. Such a Figure would also visualize the actual sizes of some of the discussed effects that the authors subsequently report from their performed regressions. I don’t insist on the authors necessarily adding such a Figure to their paper, but I think that a Figure like this would be very useful for general clarity and for appreciation of the effect sizes that are at play. 3. In the discussion of how the authors determined their target sample sizes in the paragraph preceding the section “Materials” on p. 7 (lines 141–156), it’d be good if the authors could add a sentence or two on what simulations they performed, i.e., what statistical methods they used (instead of, or in addition to, pointing a reader to the R code uploaded onto OSF). 4. In the section “Materials,” where the authors present an example of a vignette they used, it’d be very good to point a reader towards a supplementary information document (or the “Materials Main vignettes” PDF on OSF) to look at vignettes for each of the 5 scenarios that participants were presented with (medical drug, psychological therapy, etc.). I also strongly recommend adding a bit of variety to the examples presented in the “Materials Main vignettes” PDF. Presently all the presented examples in this supplementary document pit a study with a small sample + no control group + positive result vs. a study with a large sample + control group + null result. It would be great to replace some of these with the following two cases: a) small sample + control group + positive result vs. 
large sample + no control group + null result; b) small sample + no control group + null result vs. large sample + control group + positive result. One can figure out what the various vignettes looked like by going through the “Materials_First_Experiment_Science_Journalism” PDF on OSF. However, one needs to do quite some work inspecting the code that the authors used to put the various bits of text together to form the presented sentences in their vignettes. 5. There’s a sentence in the presented vignette on p. 8 that says “After 5 days, around 50% of the participants had seen their condition improve in both the placebo group and the treatment group, since their blood pressure decreased.” When I read the examples of other vignettes in the “Materials Main vignettes” PDF on OSF, I noticed that in the example associated with the new medical drug condition there, the terms used were not “the placebo group” and “the treatment group,” but “the placebo group” and “the control group.” The latter pair is confusing because usually we refer to a placebo group and a treatment group, or a control group and a treatment group, but not to a placebo group and a control group. When inspecting the code that the authors used to generate these vignettes (presented in the “Materials_First_Experiment_Science_Journalism” PDF on OSF) I noticed that there was indeed this “error” in the code. It would be good if the authors would make a point about that and perhaps re-run their regressions excluding the participants who were exposed to the vignette that used the expressions “the placebo group” and “the control group” to contrast the two groups in the same sentence. If the results remain largely unchanged, it might be worth simply mentioning that in a footnote and leaving the rest of the paper as is. 6. Strictly speaking, the preference ranking that the authors elicited from participants’ choices of which study to report as discussed on p. 
9 (lines 191–200) gives an ordinal measure of preference. For such categorical, ordinal data of a dependent variable, it is often advisable to perform ordinal logistic regressions instead of straightforward linear regressions. However, as far as I know, the scientific community is of two minds on this. Some people insist on ordinal logistic regressions, others are happy with using simple linear regressions. I’m not going to pick a side here and do not insist that the authors perform ordinal logistic regressions. (In fact, interpreting results from ordinal logistic regressions is a nightmare when it comes to interaction terms and for that reason some people advise against their use.) That said, if the authors could add a sentence or two to justify their decision to perform simple linear regressions (or perhaps add a Figure plotting the data that would support their choice), that would be a good thing to do. One possibility is to reference some previous studies that performed simple linear regressions on similar data. 7. Regarding the exclusion of participants for data analysis on pp. 11-12. The authors mention that they excluded participants who “copy-pasted citations from the internet” when providing reasons for their choices (lines 261–262). It’d be good if the authors could explain how they identified statements as copy-pasted. 8. It’d be good if somewhere in the text the authors reported summary demographic statistics of their recruited participants. 9. I think the last few sentences in the conclusion are a tad too strong. These results show that the specific nudges that the authors tested were not very effective. This doesn’t mean that nudges in general won’t work. Perhaps it is possible to design other types of nudge that would work? Smaller points: 10. Introduction, second paragraph, last sentence: “accurate transmission of information” (lines 40–41). 
I think it’s more fitting here to say “transmission of accurate information.” That would also make it consistent with the expression used in the sentence preceding this one. 11. Introduction, the last sentence of the first paragraph on p. 4: “We chose to study the impact of nudges, or minimal interventions that try to influence participants in a desirable direction without changing the incentive structure faced by the participants (Thaler & Sunstein, 2021)” (lines 70-72). That is all fair, but it would be good if the authors could add a sentence or two on why it is particularly important or useful to consider nudge interventions as opposed to other types of intervention, e.g., those that would indeed change the “incentive structure” when it comes to dissemination of scientific news. On the one hand, if everyone agrees on what constitutes good and bad research, why not simply change the “incentive structure” itself? On the other hand, perhaps changing the incentive structure is not always possible or is too costly? 12. In the introduction, the authors present a formal name for the first type of nudge that they set out to investigate: the Attention to the Null Hypothesis nudge. It’d be good to present a formal name for the second type of nudge as well. For example, the Social Norm nudge. That would make it easier to follow the discussion in what follows. 13. Introduction, last paragraph, first sentence: “we study whether it is possible to steer people towards more rigorous research” (line 120). I think it’s more fitting here to say “towards reporting more rigorous research” or “towards propagating more rigorous research.” 14. Methods, last paragraph on p. 9: “participants had to report on the hypothesis that some intervention had a positive impact” (lines 211–212). It’d be good to make this clearer: participants had to report their initial intuition concerning the hypothesis that they were tasked to assess. Similarly in the sentence that follows this one. 
Also on the following p. 10: “participants then had to give plausibility ratings” (line 224). It is probably more accurate to say “their initial intuition concerning the plausibility of their assessed hypotheses” (or similar). 15. Materials, last paragraph on p. 9: “a drug was improving some illness” (line 212). Bad wording: the drug was not improving an illness, but was effective in treating the illness (or something similar). 16. Concerning the elicitation of participants’ initial intuitions regarding the assessed hypotheses on p. 10 (lines 224–229), it’d be good to give a bit more detail. a) Provide all five options that participants had to choose from in the Competing hypotheses treatment (otherwise it’s unclear what statements other than those at the two extremes may have looked like). b) Explain what the options were in the Positive hypothesis only treatment. Alternatively, the authors could point a reader towards this information in supplementary documents (e.g., to the relevant page in one of the PDF documents on OSF). 17. The last paragraph preceding the section “Results” on p. 11. Again, it’d be good to point a reader towards supplementary documents to find the exact wordings that were used to elicit these additional data in surveys. 18. Results, Table 2 on p. 13, 4th row: “Positive Hypothesis.” Should this say “Positive Hypothesis Only”? 19. Results, first paragraph on p. 13. Going back to some of the points I made earlier, I think this is another place where it’d be useful to remind a reader what the various predictions for the interaction of the Positive Hypothesis Only variable with other variables were. 20. Second paragraph on p. 25: “This intervention was quite strong” (lines 504–505). This wording isn’t quite clear. Perhaps there’s a way to rephrase this. 21. Conclusion, penultimate paragraph on p. 26: “Participants from Prolific and Amazon Mechanical Turk tend to be more educated than the general population” (lines 536–537). 
It’d be nice to add a reference here if possible. Really minor points and typos: 22. Materials, first paragraph, first sentence: “Inspired by Bottesini et al., 2021” (line 159). “I” should not be capitalized. 23. Materials, first paragraph on p. 8: “Microfinance on poverty” (line 163). “M” should probably not be capitalized. 24. Materials, p. 8: “For instance, in the drug condition” (line 174). For clarity, perhaps better to use the full name given to this condition earlier on: “the new medical drug condition.” 25. Materials, p. 10: “or two both hypotheses” (line 225). Probably should say “to” instead of “two.” 26. Results, p. 12, last paragraph: “in conformity with our predictions” (line 279). Perhaps worth adding “in conformity with our predictions and results from previous studies” (to reflect the earlier discussion in the introduction). 27. Figure 1 and the Tables were not explicitly referenced in the text. I think it’d be good to add explicit references to them. Also, it’d be good to expand the caption of Figure 1 to explain it in more detail, for example, what is on the y-axis. 28. First paragraph on p. 24: “only one of the effect” (line 487). This should be plural: “effects.” Reviewer #2: The paper has its strength in the data. However, the empirical design is hard to follow. There are two experiments that were carried out at different points in time. One factor is the same, but further variations take place (Yale vs Princeton) that are not explained. Each participant read one text, so there is a need to vary affiliations. Also, asking participants to imagine they were journalists is a problem and should be discussed in the limitations more thoroughly. Nudging accurate science and being a journalist are not always in the same line. One limitation is that no descriptives are available about the samples. Reviewer #3: This is a technically very good article, well written and with a solid experimental design. 
From this point of view, there would be no reason to criticize it. But there are reasons, in my opinion, to question a central aspect of the experimental design. The authors start from a clear and correct statement: "Ordinary citizens are important conveyors of scientific information, in their decision to share information with friends, family, and, in the case of social media, even with random stranger" Immediately afterwards, they state that "In this paper, we study several factors influencing non-specialists’ treatment of scientific information" In particular, the paper focuses on two very important biases: a positive outcome bias, and a confirmation bias. In the end they conclude that people know how to recognize the signs of epistemic credibility, but they are "also attracted towards positive results, and more likely to report on articles that strengthen their own pre-existing beliefs. [...]. nudges are not enough to counteract epistemic vices". Among the limitations, the authors highlight the fact that the experiment is hypothetical. And this is where the problem lies. To find out whether laypeople, as transmitters of scientific information, suffer from the two biases mentioned above, was it necessary to assume that the participants had to imagine that they were science journalists who had to select scientific studies to report in their next article? It seems to me a very forced artifice not explained in the article. It is not just that people lack experience in these tasks, but that the authors could have imagined an experimental design in which people, for example, obtain scientific information (by reading it, watching documentaries, etc.) and transmit it to others, to see to what extent they transmit the information in a way that reinforces their beliefs and positive results. The idea that laypeople are science journalists seems to me to be inadequate to "study several factors influencing non-specialists’ treatment of scientific information". 
Moreover, it should have been said that "nudges are not enough to counteract epistemic vices" in an overly contrived context in which people act as if they were science journalists. Would they have worked in a more realistic context in which laypeople convey information without imagining that they are science journalists? Would they have worked among real science journalists? We don't know. Now, if one accepts that this design is good enough, the article can be published as it is. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No Reviewer #3: No ********** [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
| Revision 1 |
|
Nudging accurate scientific communication PONE-D-23-07977R1 Dear Dr. Clavien, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Alberto Molina Pérez, Ph.D. Academic Editor PLOS ONE Additional Editor Comments (optional): Just a very minor comment: page 5, lines 94-95, it may be preferable to say "large versus small" (rather than "small versus large"). |
| Formally Accepted |
|
PONE-D-23-07977R1 Nudging accurate scientific communication Dear Dr. Clavien: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Alberto Molina Pérez Academic Editor PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.