Peer Review History
Original Submission — March 2, 2021
PONE-D-21-06938
Cognitive contagion: How to model (and potentially counter) the spread of fake news
PLOS ONE

Dear Dr. Rabb,

Thank you for submitting your manuscript to PLOS ONE. Your manuscript has been carefully reviewed and overall regarded as a valuable contribution by the referees. I personally share the appreciation expressed by the referees and consider this work a valuable contribution to a class of models whose importance is rising and that are still open to many important developments. However, the referees have noted some important aspects in need of further consideration. We therefore invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Among others, some comments need particular consideration and should be thoroughly addressed in the revised version of the manuscript:
Please submit your revised manuscript by Aug 02 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Marco Cremonini, Ph.D.
University of Milan
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.
2. Please reupload your manuscript as a .docx or editable .pdf file.
3. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Partly

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

5. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The paper puts forward an agent-based model of opinion dynamics. The main innovation of the paper lies in its assumptions on the behaviour of the agents. Namely, in addition to the mechanistic aspects of information diffusion that are traditionally incorporated in most opinion dynamics models, the paper also makes specific assumptions on the behaviour of the agents, and explicitly models their likelihood to be receptive to new information depending on their current beliefs. This assumption is aimed at modelling phenomena of collective misinformation, such as the refusal to believe in the reality of COVID-19.

I find the paper excellent in every possible way. I really like this modelling approach, and I think it's the only one with some hope of yielding empirically testable predictions, something which is sorely lacking in most of the opinion dynamics literature. I think the results presented by the authors are convincing, and I find their analysis (pages 19-22) very thorough and instructive. I basically have no criticisms or major comments, and I believe the paper should be published in a form very close to its current one. I only have a couple of minor suggestions, listed in the following:

1. The only aspects of the model I didn't find entirely convincing are related to its assumptions on institutional agents (let's call them news sources) and their relationships with their subscribers. Is it realistic to assume that a subscriber should keep listening to (as in maintaining their link with) a news source he/she consistently doesn't agree with? If I understand this aspect of the model correctly, it seems that non-institutional agents are not allowed to re-evaluate their links to news sources. Could this be accommodated, e.g., by expanding the number of news sources in order to simulate the effects of a competitive news market? I fully realise these points are probably way beyond the scope of the current paper, but I still think it could be interesting to see them acknowledged/discussed in the final section of the paper.

2. Have the authors considered the scenario in which the parameters in Eq. (6) are drawn from a suitable distribution? (A minimal sketch of this idea follows this review.) I think this would be a very interesting extension of the model towards a more realistic description of a heterogeneous audience. Again, as in the previous point, I don't think it's necessary to include additional results in this respect; I only recommend including this aspect as a point of discussion.

3. I think the authors should take a look at Sikder et al., "A minimalistic model of bias, polarization and misinformation in social networks", Scientific Reports (2020). As they will see, that paper starts from very similar premises to those of their study, and also makes explicit assumptions about the behaviour of agents with respect to new information depending on their beliefs. The two models are ultimately quite different and focus on different aspects, but it's quite interesting to see how both achieve remarkable consistency in their results across very different network topologies (see, e.g., Fig. 2 in the Scientific Reports paper).
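As an illustration of Reviewer #1's second point, here is a minimal sketch of drawing agent-level parameters from a population distribution. Since Eq. (6) is not reproduced in this letter, the parameter names `alpha` and `gamma` below are hypothetical stand-ins, and the distributions are illustrative choices only.

```python
# Hypothetical illustration of heterogeneous agent parameters.
# "alpha" and "gamma" are stand-in names, NOT the actual Eq. (6) parameters.
import numpy as np

rng = np.random.default_rng(seed=42)
n_agents = 500

# Homogeneous baseline: every agent shares one parameter value.
alpha_shared = np.full(n_agents, 0.5)

# Heterogeneous audience: per-agent values from (for example) a Beta
# distribution, keeping values in [0, 1] while spreading them out,
# and a log-normal for a strictly positive scale parameter.
alpha_hetero = rng.beta(a=2.0, b=2.0, size=n_agents)
gamma_hetero = rng.lognormal(mean=0.0, sigma=0.5, size=n_agents)

# Population-level summary of the heterogeneity introduced.
print(alpha_hetero.mean(), alpha_hetero.std())
```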
Reviewer #2: Dear Nicholas, Lenore, Jan, and Matthias,

Thank you for giving me the opportunity to review your paper “Cognitive contagion: How to model (and potentially counter) the spread of fake news”. You are tackling an important and timely topic with a rigorous approach, and you arrive at an insightful conclusion. You point out (correctly, in my view) that traditional models of social contagion generally don't account for the internal state of the individual, and fail to consider how this internal state influences their adoption decisions. With some minor modifications, I would like to see your paper in print.

Pg 4, Eqn 1 (and other equations describing changes in beliefs): I tripped up here at first because I didn't recognize that this was an equation describing changes to the state at `t+1` as a function of the state at `t`. It might clarify things to include a subscript for the timestep in these equations, i.e. p(b_{u,t+1} = … | b_{u,t}) or p(b_u(t+1) = … | b_u(t)); a rendering of this suggestion follows below. Also here, can you clarify in the text that 'u' is the focal adopter and 'v' is the focal exposer?
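As a schematic rendering of this notational suggestion (the right-hand side is left as a placeholder f, since Eq. (1) itself is not reproduced in this letter):

```latex
% Schematic rendering of the suggested time-indexed notation.
% f is a placeholder for the update rule of Eq. (1), not reproduced here;
% u is the focal adopter and v the focal exposer.
\[
  p\left( b_{u,t+1} = b \mid b_{u,t},\, b_{v,t} \right)
    = f\left( b_{u,t},\, b_{v,t},\, b \right)
\]
```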
Pg 5: Complex contagion. There is some ambiguity in the literature about the precise meaning of “complex contagion”, and how it captures the need for social reinforcement. Kempe, Kleinberg and Tardos (2003) (https://www.cs.cornell.edu/home/kleinber/kdd03-inf.pdf) articulate the difference between a (proportional) threshold model and an independent cascade model (which most folks would call 'simple contagion'). Their description of a threshold model is what you (and a lot of other papers) describe as complex contagion. Centola and Macy actually have a different requirement for “complex” contagion than either mentioned by KKT: that a minimum absolute number (not fraction!) of individuals need to expose you to the belief before you adopt. This is important to their argument because they are making the claim that small-world networks don't lead to fast diffusion unless you have “wide bridges” across the network. (Of course, there are some new papers suggesting this falls apart if you have a stochastic decision rule, so maybe we should take it with a grain of salt.)

- I think if you want to use the term “complex contagion”, that's probably ok (given the ambiguity in common usage); just note that you're using the proportional threshold interpretation, not the strict requirement from the Centola and Macy paper, which you're citing at the moment. Even better, contribute to making the term less ambiguous by differentiating between Centola's complex contagion and what's in the Schelling model. (The sketch below contrasts the two rules.)
- The other place this will be relevant is in your section on complex contagion in WS networks (pg 15). You say that “interestingly, complex contagion was successful on the WS graphs”. I'm guessing that this is interpreted as surprising because of the Centola and Macy finding that complex contagion should be slower than simple contagion in small-world networks. I think the resolution is that you're using a different interpretation of complex contagion (and that WS networks are not degree-regular). I would suggest you choose one of the two usages and stick with it throughout.
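To make the contrast concrete, here is a minimal sketch of the two adoption rules discussed above. The threshold values are illustrative, and neither function is taken from the paper under review.

```python
# Minimal sketch contrasting two "complex contagion" adoption rules.
# An illustration of the reviewer's distinction, not the paper's model.

def adopts_proportional(adopting_neighbors: int, total_neighbors: int,
                        threshold: float = 0.3) -> bool:
    """KKT-style threshold rule: adopt once a FRACTION of neighbors has adopted."""
    if total_neighbors == 0:
        return False
    return adopting_neighbors / total_neighbors >= threshold

def adopts_absolute(adopting_neighbors: int, min_count: int = 2) -> bool:
    """Centola-Macy rule: adopt once an ABSOLUTE NUMBER of neighbors has adopted."""
    return adopting_neighbors >= min_count

# The two rules diverge on high-degree nodes: with 50 neighbors, 2 adopters
# satisfy the absolute rule (2 >= 2) but not the proportional one (0.04 < 0.3).
print(adopts_proportional(2, 50))  # False
print(adopts_absolute(2))          # True
```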
Pg. 6: Cognitive contagion model. DeGroot 1974 (https://www.jstor.org/stable/2285509) has a classic model of updating degrees of belief in response to neighbors' beliefs, with some weighting on neighbors. Your model could be considered an extension of the DeGroot model that makes the weights on individuals a decreasing function of the difference between individuals, i.e. accounting for homophily. In this vein, you probably also want to look at Dandekar, Goel and Lee 2013 (https://www.pnas.org/content/110/15/5791) and see whether you agree with their conclusions, and if not, why not. Axelrod 1997 (https://journals.sagepub.com/doi/abs/10.1177/0022002797041002001) has a model that accounts for individuals paying more attention to similar individuals, as do DellaPosta 2015 (https://www.jstor.org/stable/10.1086/681254) and Baldassarri and Bearman (https://www.jstor.org/stable/25472492). In these cases, homophily is conceptualized as similarity on other belief statements, rather than just on a single belief. The homophily element of their work is similar to what you're doing here with a single belief. If homophily itself is the driving factor, your simulations should give the same results. If it's about gradual belief updating, you may get different results.

There are a couple of folks looking at within-person interactions between beliefs (a third way to conceptualize the interaction between existing beliefs and adoption decisions!). Goldberg and Stein 2018 (https://journals.sagepub.com/doi/abs/10.1177/0003122418797576) is a good start, as is Friedkin et al 2016 (https://science.sciencemag.org/content/354/6310/321/tab-figures-data), although Friedkin's model suffers from an ambiguity in whether the outcome is due to social contagion or to the assumed logic constraints. (I've done some work on this too (https://arxiv.org/abs/2010.02188) but please don't read this as grubbing for citations - if you want to cite someone on those ideas, go with the folks above.)

- Please check that your results are insensitive to the number of levels in your model. 7 points is arbitrary (which is fine), but we don't expect it to be a faithful representation of what goes on in people's heads (regardless of what Likert thinks). Sometimes these things matter - just make sure that they don't matter here. Make a 100-point model or something that approximates a continuous scale, just to be sure. You can put it in the supplement, and put a note in the main body to say that you checked.
- Another consequential parameter choice is the 'stubbornness' of individuals. You do a good job exploring different options here, but your justification for why the stubborn parameters are the ones you carry forward is grounded in the outcomes you want to see. Again, it's not a problem to do so if the purpose of the model is to highlight a possible outcome and describe when it is likely to occur. But it does seem a bit like sampling on the dependent variable. Can you either justify why we should expect this to be the right choice without reference to the simulation outcomes (you could use a micro-level analysis with a single agent) or just make this assumption explicit? Something like “sometimes, people are stubborn. When this is true, here's what we expect to happen”. Then you're just making a micro-level assumption and exploring its consequences, rather than trying to say “the world is this way…”.
- I'm surprised 10 simulation runs is enough to get a stable result. I usually end up increasing the number of runs (10, 20, 50, 100, 200, etc.) until I don't see any difference in the resulting averages, and then do 2-10 times as many as that. If you have done thousands and found the same results, you can say that you did them, but that you show only the results of 10 because the effects are so robust. If you haven't done a large number, it might be a good idea, as they aren't expensive.
- Do you pick a random agent to be the broadcaster, or are they different entities in your model? I had trouble working that out.
- Fig 3: the beliefs don't match what's in the text (in the text, u is 1 but is exposed to 6; in the figure, they believe 6 but are exposed to 0). This had me confused for a bit, as I wasn't sure I was reading the figure correctly.

Pg. 9: Contagion experiments. I'm aware that the term “experiments” is sometimes used to describe running simulations under different conditions, and in the broadest sense (trying something to see what happens) it's an appropriate term. However, I do feel that as a community it would be useful to distinguish between computer-assisted “gedanken-experiments” and experiments that make manipulations in a lab or in the real world with human participants. The first is for theory building - an essential part of the scientific division of labor - and the second for theory testing. I certainly won't twist your arm, but you may find it helpful to be more explicit that you are exploring the macro-level consequences of a micro-level assumption, using a simulation to overcome the brain's inability to see these consequences on its own. This will clarify your contribution for readers, as they'll know what to expect in the next section.

- Do your behavior-over-time charts (Figs. 6-16) show a true T0, i.e. the distribution of individual beliefs *before* any adoption has taken place? I would imagine that the T0 for all charts should be identical (same starting conditions) and fairly similar to the bottom charts in Fig 9. If I'm not mistaken, please include those T0 belief distributions in your plots - it would be helpful to know where individuals are coming from.

Pg 14: Section on comparing contagion models. You vary social network structure and then describe the qualitative differences of your model in each of these graphs. We know that different network structures yield different shaped diffusion curves, so this isn't your point. Instead, I believe you are suggesting that the differences between network structures under the traditional contagion models are not the same as the differences under your own model. This is a pretty complicated comparison to make, and as the text currently reads, I personally (with a reviewer's normal cognitive impairments…) have trouble understanding what I should take away. Specifically, I'm having a hard time distinguishing the effect of your model from the effect of the changes to the network structure, in this section in particular. One idea might be to make a table with different adoption rules on one axis and different social networks on the other, and for a specific metric compare the differences across conditions. Then you can highlight why your model changes our expectation about the effect of network structure. You could have multiple tables for different metrics.

Pg. 19: Section on Analysis of Results. Your analysis suggests that because there is almost always a path for information to reach an individual from the broadcast node, the internal logic of an individual's decision rule is more determinant of the outcome than the social network structure (if I understand it correctly!). A good comparison to make might be a case where there is no social influence at all, essentially an individual adoption model from the “broadcast” source, i.e. a star network with the broadcast source at the center (see the sketch below).
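A minimal sketch of this baseline, assuming networkx (the paper's own tooling may differ): putting the broadcast source at the hub of a star graph means every agent's only social tie is to the broadcaster, isolating individual cognition from network effects.

```python
# Sketch of the suggested baseline: a star network with the broadcast
# source at the center, so agents receive influence only from the broadcaster.
# networkx is an assumed dependency; node 0 is the hub by construction.
import networkx as nx

n_agents = 100
star = nx.star_graph(n_agents)   # node 0 is the center, nodes 1..n the agents
broadcaster = 0

# Every non-broadcaster agent has exactly one neighbor: the broadcaster.
assert all(set(star.neighbors(v)) == {broadcaster}
           for v in star.nodes if v != broadcaster)
print(star.number_of_nodes(), star.number_of_edges())  # 101 nodes, 100 edges
```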
Then you can compare the effects of the other network structures to see which components of the outcome are due to the characteristics exemplified in each of the networks (clustering, short path lengths, high/low degree, etc.). The challenge here is that this sets up a horse race between the effects of individual cognition and the effects of network structure. You're not in a position to really adjudicate between these two effects, as in your simulation the relative magnitudes will be entirely dependent on the parameters of “stubbornness” that get selected in the model. You could turn this into an opportunity, however, by describing (qualitatively) the conditions under which we should expect one of the effects to dominate over the other. Then you get to say something like “and so an important piece of empirical research will be to determine which of the two regimes (the types of social contagion we care about) fall into”.

Pg. 22 Discussion: I appreciate your disclaimer that you are not making policy recommendations based on your results. It is fully appropriate to acknowledge that you are building theory that we as a community should subject to test before using it to derive policy. I also understand how (academically) we feel pressure to say that our work is “policy relevant”. Unfortunately, I think our community has a tendency to try to have it both ways, making policy claims and then hedging them at the same time, and it's never really that clean. I think you may find it more comfortable to move fully away from the “policy recommendation” language, and instead describe how your work sets up what is arguably a very interesting follow-on study to confirm your understanding. (PLOS ONE gives you this luxury - take advantage of it!) Then your description of an intervention can be wholeheartedly about an intervention in an experimental context, without the need for the disclaimer. You can describe how an experiment would differ from individual-level behavioral interventions in the psychology literature, and what that would add to our understanding. Best yet, you'd have your next paper lined up nicely.

Thank you again for reading through what has become an inexcusably long review. I'm certain that you can address the questions I have without too much effort, and I look forward to seeing the revisions.

James Houghton

Reviewer #3: This paper presents a model of diffusion that is grounded in the cognition of each agent. By comparing different graph types with respect to exposure to information/beliefs coming from an institutional source, the authors try to show how adding complexity to the definition of an agent may map more closely onto what happened, for example, with beliefs around COVID-19. While certainly interesting and timely, the article presents a series of concerns that make the message a bit less effective than it could be. The most relevant concerns I had are those around (a) the claim that more complex agents are something “new” in this model, while the ABM community has been discussing them since its beginnings, (b) the use of ABM (especially the “why” of ABM), (c) the complete lack of reference to description and reporting standards in presenting the ABM, and (d) the report of results, which shows that you are probably trying to do too much with just one paper. I have detailed what I mean by this in the attached file; I hope you find the comments useful. I enjoyed reading your paper. Best of luck with your research!
6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Giacomo Livan
Reviewer #2: Yes: James Houghton
Reviewer #3: Yes: Davide Secchi

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
PONE-D-21-06938R1
Cognitive cascades: How to model (and potentially counter) the spread of fake news
PLOS ONE

Dear Dr. Rabb,

Thank you for submitting your manuscript to PLOS ONE. The review process has taken longer than it usually does, but the reviewers have been thorough, and overall they clearly appreciated the improvements over the first version of the manuscript. The work is now close to fully meeting the publication criteria, with some aspects that still require additional consideration. However, the Minor Revision status does not mean that the reviewers' latest comments can be taken lightly. They are comments that could further improve the manuscript's quality, so appropriate consideration is required. We therefore invite you to submit a revised version of the manuscript that addresses the points raised during the review process. In particular:

Reviewer 2 pointed to a single issue regarding the robustness of the tests and the validity of the conclusions, which needs further explanation to be completely convincing. This is an important point.

Reviewer 3, instead, suggests a list of minor modifications, or in some cases asks that explanations given only in the answers to reviewers be added to the manuscript.

Please submit your revised manuscript by Dec 03 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Marco Cremonini, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: (No Response)
Reviewer #3: (No Response)

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Partly
Reviewer #3: Yes

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

6. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: Thank you for responding so thoroughly to my previous comments. The only area of outstanding concern for me is in the sensitivity test to the number of levels in your model. In my previous review, I asked that you check that the conclusions of your model are not sensitive to the arbitrary number of levels you chose. In theory, if the results are rigorous, they should hold equally well regardless of the number of levels. Thank you for conducting this sensitivity test and reporting the results.

I am worried, however, that your tests did not find that your conclusions were truly robust to the number of levels in the model. In your response and supplement you state this, but in the paper itself you merely say that you ran a sensitivity test. The main body of the paper makes no mention of the fact that the results seem to be strongly influenced by the discretization assumption. In fact, the sentence "We additionally ran in-silico experiments with lower and higher resolutions..." seems to imply that you didn't see any difference by going to more continuous levels of belief, especially as you had previously said "b_u can be a continuous variable with the interval from strong disbelief to strong belief, or it can take on discrete values". At the very minimum, you have a disconnect between the scope condition you are claiming (continuous levels of belief) and the domain over which your model predicts the outcome you claim (discrete belief levels). To be completely honest with your readers, I think more of this discussion of the reliance on discrete belief levels belongs in your main text. Don't let anyone suspect you're hiding anything.

What your discretization assumption seems to be doing is acting as a coarse proxy for similarity between neighbors, so that you don't have to justify a rule for who individuals pay attention to that works in the continuous domain. As your results are dependent on this, you need to be more explicit about it. Otherwise, you can update your model to allow for similarity given a continuous measure of belief, and see if your results still hold. This would be a more robust result, and easier for you to justify if it doesn't add too much modeling complexity.

Thanks,
James
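As an illustration of the continuous-domain rule the reviewer asks for, here is a minimal sketch using a Gaussian similarity kernel. The kernel, its width `sigma`, and the update `rate` are illustrative assumptions, not the paper's model: the point is only that attention can decay smoothly with belief distance instead of via discrete levels.

```python
# Hypothetical continuous-similarity update rule, sketching the reviewer's
# suggestion; the Gaussian kernel and its width are illustrative choices,
# not taken from the paper under review. Beliefs live in [0, 1].
import math

def attention_weight(b_u: float, b_v: float, sigma: float = 0.2) -> float:
    """Attention paid by adopter u to exposer v, decaying smoothly
    with the distance between their beliefs."""
    return math.exp(-((b_u - b_v) ** 2) / (2 * sigma ** 2))

def update_belief(b_u: float, b_v: float, rate: float = 0.1) -> float:
    """Move u's belief toward v's, scaled by continuous similarity."""
    return b_u + rate * attention_weight(b_u, b_v) * (b_v - b_u)

# A close exposer moves u noticeably; a distant one barely at all.
print(update_belief(0.50, 0.60))  # strong attention, visible shift
print(update_belief(0.05, 0.95))  # near-zero attention, negligible shift
```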
Reviewer #3: My report is in the file attached to this form. Please open that file to access my comments on the paper.

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Giacomo Livan
Reviewer #2: Yes: James Houghton
Reviewer #3: Yes: Davide Secchi

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
Revision 2
Cognitive cascades: How to model (and potentially counter) the spread of fake news
PONE-D-21-06938R2

Dear Dr. Rabb,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Marco Cremonini, Ph.D.
University of Milan
Academic Editor
PLOS ONE
Formally Accepted
PONE-D-21-06938R2
Cognitive cascades: How to model (and potentially counter) the spread of fake news

Dear Dr. Rabb:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Marco Cremonini
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.