Peer Review History

Original Submission
March 26, 2025
Decision Letter - Anat Gesser-Edelsburg, Editor

Dear Dr. Nygren,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jun 26 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols .

We look forward to receiving your revised manuscript.

Kind regards,

Prof. Anat Gesser-Edelsburg, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. You indicated that you had ethical approval for your study. In your Methods section, please ensure you have also stated whether you obtained consent from parents or guardians of the minors included in the study or whether the research ethics committee or IRB specifically waived the need for their consent.

3. Thank you for stating the following financial disclosure:

“This study was funded by the Swedish Institute for Educational Research, grant no 2020-00009. https://www.skolfi.se/other-languages/english/ Funding awarded TN”

Please state what role the funders took in the study.  If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

If this statement is not correct you must amend it as needed.

Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

4. Please note that your Data Availability Statement currently consists of a direct link through which we cannot access each database. If your manuscript is accepted for publication, you will be asked to provide these details on a very short timeline. We therefore suggest that you provide this information now, though we will not hold up the peer review process if you are unable to do so.

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?


Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

Reviewer #1: This paper investigates the effects of interventions aimed at improving students' ability to identify and evaluate misinformation in upper secondary education in Sweden. The study uses a longitudinal design (with pre-test and post-test) to assess both the immediate and delayed impact of instructional interventions focused on critical thinking and source evaluation. The use of digital tools is taken into account.

The paper has many merits:

- it addresses a timely and socially critical topic—combating misinformation—through a rigorous empirical approach. Education is considered one of the main 'antidotes' against misinformation, and we need many rigorous studies on the long-term effect of 'inoculating' students with small doses of misinformation, along with how to correct misleading information;

- to my understanding, the analysis is solid, methodologically sound, and supported by appropriate statistical tools.

- the experimental design is clearly described, including procedures and instruments, allowing for full or partial replicability in other schools or countries.

- limitations are transparently acknowledged, strengthening the credibility of the conclusions.

However, some of the data tables are dense and require significant effort to interpret; the inclusion of additional summarizing charts or simplified visualizations would greatly enhance clarity and reader engagement. The paper already contains some basic graphs, but more effective visualizations or infographics could be designed to represent the most important and technical aspects.

Reviewer #2: This is a very solid report of a good piece of research. It is very well anchored, clearly written, presents a good argument and a very clear account of the study, the analysis and the outcomes. It’s very interesting to read and has a good discussion.

The reviewer copy notes that an update can be downloaded, so I have reviewed the updated version. However, there is nothing that explains what has changed, which isn’t obvious from a quick look through.

The main points I’d like to note concern fairly specific aspects of clarity in one or two places, and the overall argument.

On p.14 it’s mentioned that the control group were more advanced and with a stronger academic background. This isn’t further commented on, but might it not be noteworthy in some respects? For example, is it surprising that there are no significant differences (p.18) between the control group and the experimental groups at pre-test?

A recurrent point, which is very interesting, is that certain factors seem to enhance ability (of certain groups) to recognise credible news, whereas others enhance ability to detect misinformation. It might be useful to discuss this distinction a little further. One might be tempted to think that, if a news item is either accurate or not, failing to recognise it as accurate amounts to recognising it as not accurate. Perhaps there is a state where judgement on an item is suspended, or just never made, or something like that, but this isn’t discussed. Related to this, there’s a tendency to conflate “accurate”, “credible” and even “true”, which may not be quite right (although perhaps it’s a workable approximation).

There’s an observation in several places, e.g. p.30 that “higher AOT scores correlated with scepticism toward both true and false news, suggesting that instead of improving discernment, AOT may be linked to a generalised distrust of information”. Perhaps the subtlety here is too much for even senior secondary students, but one might have hoped there was room for a notion of confidence in sources. No source or news item, surely, should ever be accepted as infallible, so that its truth is simply a given! But it may be more credible, so that one can have greater confidence in the likelihood of its being true. Is it just that the instrument used in the study was not capable of detecting levels of confidence (or degrees of scepticism), for instance? (From this perspective, a “general scepticism” is an entirely appropriate and necessary part of analytical thinking, and it’s not at all the same thing as “indiscriminate doubt”.)

In many ways the major message of the paper is that the effects of the kind of intervention studied tend not to be long-term or persistent. There seem to be two things to say about this.

(1) It’s not as clear as it might be, at the outset, that this is a focus of the study, at least in the sense that it’s not represented explicitly in any of the hypotheses. Should they all be read as including “as measured after the lapse of <some time>”? (Related to this, only the first three hypotheses are explicitly stated anyway, but perhaps the others are clear enough.)

(2) There is little if any discussion in the background section about this particular point. Hence, where would the hypothesis come from? Much, if not most of the existing literature, across many areas of education and learning (at least as I would read it) suggests that one-off or short-term interventions rarely (or only in particular facilitating circumstances) lead to sustained behavioural or cognitive changes. Hence one would be surprised if these interventions were any different. Surely it’s to be expected that meaningful change requires attitudes and practices to be inculcated over a significant period of time?

Very minor typos:

p.5 para.1 "in as" should be just one of these, probably "in"?

p.9 missing full stop at the end of H2.

p.9 last para. -- capitalisation of "Bad News" (twice).

p.13 again, capitalisation of "Bad News".

Reviewer #3: The authors are thanked for their labor-intensive and engaging research. The study is highly significant in that it provides insights into the long-term effects of various interventions. Although the inclusion of different groups and interventions occasionally makes it challenging to follow the text, the research design stands out as one of the study’s major strengths.

Some specific areas of the text are suggested for revision, as outlined below:

- While the inclusion of hypotheses is appreciated, it is also recommended that the research questions guiding the study be explicitly articulated in the text. Positioning the research questions just before the hypotheses could enhance the clarity of the study's framework and objectives.

- In Table 2, for Analysis 4 (Factors of identifying true and false news), the dependent variable is described as “Mean reliability ratings for true news and false news, from one to seven.” It might be helpful to clarify whether this rating is consistent across all independent variables, or if it varies (e.g., 1 to 5 or 1 to 10), as this could influence interpretation.

- Is it possible that there might be an error in the table numbers mentioned in the second paragraph under the “Analysis” section? In the sentence: “…As shown by Tables 4 and 5, there are no significant differences between the control and experimental groups in the pre-test.”

- In the first paragraph under the heading “Fact-checking skills intervention (News Evaluator)” (page 20), the following is stated: “…However, there might be some indication of a positive effect for the first Facebook post (p = 0.056)…”. Is it possible to locate this reported p-value in any of the tables in the main text or appendix?

**********

PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org.

Revision 1

Response to reviewers

Reviewer #1: This paper investigates the effects of interventions aimed at improving students' ability to identify and evaluate misinformation in upper secondary education in Sweden. The study uses a longitudinal design (with pre-test and post-test) to assess both the immediate and delayed impact of instructional interventions focused on critical thinking and source evaluation. The use of digital tools is taken into account.

The paper has many merits:

- it addresses a timely and socially critical topic—combating misinformation—through a rigorous empirical approach. Education is considered one of the main 'antidotes' against misinformation, and we need many rigorous studies on the long-term effect of 'inoculating' students with small doses of misinformation, along with how to correct misleading information;

- to my understanding, the analysis is solid, methodologically sound, and supported by appropriate statistical tools.

- the experimental design is clearly described, including procedures and instruments, allowing for full or partial replicability in other schools or countries.

- limitations are transparently acknowledged, strengthening the credibility of the conclusions.

However, some of the data tables are dense and require significant effort to interpret; the inclusion of additional summarizing charts or simplified visualizations would greatly enhance clarity and reader engagement. The paper already contains some basic graphs, but more effective visualizations or infographics could be designed to represent the most important and technical aspects.

Response: We appreciate this thoughtful suggestion and fully agree that enhanced visualisations can support clarity and engagement. We have considered adding simplified figures (e.g., summary bar charts of pre- and post-test results for each group), and we experimented with several options. However, due to the complexity of our design—multiple overlapping interventions, several outcome variables, and mixed model estimates across nested data—it proved difficult to produce clear and accurate summary visualisations without risking oversimplification or misrepresentation of the findings.

Reviewer #2: This is a very solid report of a good piece of research. It is very well anchored, clearly written, presents a good argument and a very clear account of the study, the analysis and the outcomes. It’s very interesting to read and has a good discussion.

The reviewer copy notes that an update can be downloaded, so I have reviewed the updated version. However, there is nothing that explains what has changed, which isn’t obvious from a quick look through.

The main points I’d like to note concern fairly specific aspects of clarity in one or two places, and the overall argument.

On p.14 it’s mentioned that the control group were more advanced and with a stronger academic background. This isn’t further commented on, but might it not be noteworthy in some respects? For example, is it surprising that there are no significant differences (p.18) between the control group and the experimental groups at pre-test?

A recurrent point, which is very interesting, is that certain factors seem to enhance ability (of certain groups) to recognise credible news, whereas others enhance ability to detect misinformation. It might be useful to discuss this distinction a little further. One might be tempted to think that, if a news item is either accurate or not, failing to recognise it as accurate amounts to recognising it as not accurate. Perhaps there is a state where judgement on an item is suspended, or just never made, or something like that, but this isn’t discussed. Related to this, there’s a tendency to conflate “accurate”, “credible” and even “true”, which may not be quite right (although perhaps it’s a workable approximation).

There’s an observation in several places, e.g. p.30 that “higher AOT scores correlated with scepticism toward both true and false news, suggesting that instead of improving discernment, AOT may be linked to a generalised distrust of information”. Perhaps the subtlety here is too much for even senior secondary students, but one might have hoped there was room for a notion of confidence in sources. No source or news item, surely, should ever be accepted as infallible, so that its truth is simply a given! But it may be more credible, so that one can have greater confidence in the likelihood of its being true. Is it just that the instrument used in the study was not capable of detecting levels of confidence (or degrees of scepticism), for instance? (From this perspective, a “general scepticism” is an entirely appropriate and necessary part of analytical thinking, and it’s not at all the same thing as “indiscriminate doubt”.)

In many ways the major message of the paper is that the effects of the kind of intervention studied tend not to be long-term or persistent. There seem to be two things to say about this.

(1) It’s not as clear as it might be, at the outset, that this is a focus of the study, at least in the sense that it’s not represented explicitly in any of the hypotheses. Should they all be read as including “as measured after the lapse of <some time>”? (Related to this, only the first three hypotheses are explicitly stated anyway, but perhaps the others are clear enough.)

(2) There is little if any discussion in the background section about this particular point. Hence, where would the hypothesis come from? Much, if not most of the existing literature, across many areas of education and learning (at least as I would read it) suggests that one-off or short-term interventions rarely (or only in particular facilitating circumstances) lead to sustained behavioural or cognitive changes. Hence one would be surprised if these interventions were any different. Surely it’s to be expected that meaningful change requires attitudes and practices to be inculcated over a significant period of time?

Very minor typos:

p.5 para.1 "in as" should be just one of these, probably "in"?

p.9 missing full stop at the end of H2.

p.9 last para. -- capitalisation of "Bad News" (twice).

p.13 again, capitalisation of "Bad News".

Response: We now highlight that control group students came from more advanced, theoretical tracks but still showed no pre-test advantage. In the discussion, we reflect on how academic placement does not guarantee digital source evaluation skills.

We added clarifications emphasizing the difference between perceived reliability and factual accuracy.

We now include a more nuanced interpretation of AOT results.

We revised the introduction and prefaced the hypotheses with a sentence clarifying the focus on long-term outcomes.

Although the challenge of achieving long-term effects is highlighted in the educational literature (as already mentioned and now underlined more clearly in the paper), most studies in the misinformation field use short interventions; this is precisely why studies of this kind are needed to provide evidence of both their potential and their challenges.

Reviewer #3: The authors are thanked for their labor-intensive and engaging research. The study is highly significant in that it provides insights into the long-term effects of various interventions. Although the inclusion of different groups and interventions occasionally makes it challenging to follow the text, the research design stands out as one of the study’s major strengths.

Some specific areas of the text are suggested for revision, as outlined below:

- While the inclusion of hypotheses is appreciated, it is also recommended that the research questions guiding the study be explicitly articulated in the text. Positioning the research questions just before the hypotheses could enhance the clarity of the study's framework and objectives.

- In Table 2, for Analysis 4 (Factors of identifying true and false news), the dependent variable is described as “Mean reliability ratings for true news and false news, from one to seven.” It might be helpful to clarify whether this rating is consistent across all independent variables, or if it varies (e.g., 1 to 5 or 1 to 10), as this could influence interpretation.

- Is it possible that there might be an error in the table numbers mentioned in the second paragraph under the “Analysis” section? In the sentence: “…As shown by Tables 4 and 5, there are no significant differences between the control and experimental groups in the pre-test.”

- In the first paragraph under the heading “Fact-checking skills intervention (News Evaluator)” (page 20), the following is stated: “…However, there might be some indication of a positive effect for the first Facebook post (p = 0.056)…”. Is it possible to locate this reported p-value in any of the tables in the main text or appendix?

Response: We have added two guiding research questions before the hypotheses.

We added the scales of the independent variables to Table 2. We also now present the regression results in the analysis section in standardized form.

Thank you very much; it was indeed an error, which we have corrected.

An extended table with p-values was added as Appendix F.

Attachments
Attachment
Submitted filename: Response to reviewersPLOS250602.docx
Decision Letter - Anat Gesser-Edelsburg, Editor

Investigating the Long-Term Impact of Misinformation Interventions in Upper Secondary Education

PONE-D-25-15019R1

Dear Dr. Nygren,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information’ link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Prof. Anat Gesser-Edelsburg, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Formally Accepted
Acceptance Letter - Anat Gesser-Edelsburg, Editor

PONE-D-25-15019R1

PLOS ONE

Dear Dr. Nygren,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Prof. Anat Gesser-Edelsburg

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.