Peer Review History

Original Submission: May 22, 2024
Decision Letter - Stefano Cresci, Editor

PONE-D-24-20764

X Under Musk’s Leadership: More Hate and No Reduction in Inauthentic Activity

PLOS ONE

Dear Dr. Burghardt,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a slightly revised version of the manuscript that addresses the points raised during the review process. Specifically, the reviewers suggested some improvements concerning the research questions addressed in the manuscript and asked for a very limited set of additional results that could further strengthen the paper. Moreover, they requested a few clarifications about the adopted methodology and part of the results. Overall, the required edits should be pretty straightforward to complete and we look forward to receiving the revised version of this interesting manuscript.

Comments from PLOS Editorial Office: We note that one or more reviewers have recommended that you cite specific previously published works. As always, we recommend that you please review and evaluate the requested works to determine whether they are relevant and should be cited. It is not a requirement to cite these works. We appreciate your attention to this request.

Please submit your revised manuscript by Oct 07 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Stefano Cresci

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. In your Methods section, please include additional information about your dataset and ensure that you have included a statement specifying whether the collection and analysis method complied with the terms and conditions for the source of the data.

3. Thank you for stating the following in the Acknowledgments Section of your manuscript: 

"Funding for this work is provided through NSF (award #2051101), and through DARPA (awards #HR0011260595 and #HR001121C0169)."

We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. 

Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows: 

"DH is funded through the National Science Foundation (award #2051101; https://www.nsf.gov/), who did not play any role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

KL and KB are funded through the Defense Advanced Research Projects Agency (awards #HR0011260595 and #HR001121C0169; https://www.darpa.mil/), who did not play any role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript."

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

4. When completing the data availability statement of the submission form, you indicated that you will make your data available on acceptance. We strongly recommend all authors decide on a data sharing plan before acceptance, as the process can be lengthy and hold up publication timelines. Please note that, though access restrictions are acceptable now, your entire data will need to be made freely accessible if your manuscript is accepted for publication. This policy applies to all data except where public deposition would breach compliance with the protocol approved by your research ethics board. If you are unable to adhere to our open data policy, please kindly revise your statement to explain your reasoning and we will seek the editor's input on an exemption. Please be assured that, once you have provided your new statement, the assessment of your exemption will not hold up the peer review process.

5. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

6. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

********** 

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

********** 

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

********** 

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

********** 

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors examine material posted on Twitter/X from the beginning of 2022 through June 2023, a period that includes Musk’s full tenure as CEO. Through an analysis of hate speech levels, coordinated behavior detection, and account inauthenticity, they investigate whether Musk’s purchase of X is correlated with hate speech and inauthentic account activity. The experiments are performed according to a sound scientific procedure and the paper is written in correct English. However, I have some suggestions:

As you mention in the limitations, you are unable to definitively determine what caused the increased rate of hate speech following Musk’s purchase of X. Given this observation, I would make some comments on the results less assertive. I would start with the title, which states “More Hate”. As far as I understood, there was a peak in hate speech even before Musk’s purchase, and after the purchase there is only one very high peak and no clear long-term trend.

In account coordination, you state “Previous work found relatively little correlation between bots and coordination [33]”. Actually, Nizzoli et al. [33] is a particular case study; the idea of coordination on social media originated from the evolution of botnets [1], until Facebook proposed the concept of Coordinated Inauthentic Behavior (which combines the concepts of inauthenticity and coordination). As you state, an example is the interplay between coordination and information operations [2], which can make use of bots.

In Figure 2, you show the mean engagement with hate speech per week. I would ask why you do not show the same plot for the baseline. Due to possible offline, external events, a general increase in engagement can occur both for hate speech posts and for the baseline.

[1] Mannocci, et al. "Mulbot: Unsupervised bot detection based on multivariate time series." 2022 IEEE international conference on big data (Big Data). IEEE, 2022.

[2] Cima, et al. "Coordinated Behavior in Information Operations on Twitter." IEEE Access (2024).

Reviewer #2: Thank you for allowing me to read this work on the prevalence of hate speech, relative engagement (including exposure to such content), and coordinated and inauthentic behavior on Twitter before and after Elon Musk's takeover. The topic is undoubtedly relevant, and the analysis is methodologically sound and interesting in its findings. Overall, I believe the paper is nearly ready for publication, and I commend the authors for producing a manuscript of such high quality. I also appreciated the methodological choices and the additional analyses conducted to test the robustness of various results.

I have just a few suggestions.

I recommend integrating the research questions (for example, the first one) with a reference to the question on engagement (likes), which is currently missing, even though it is an area of investigation that is extensively addressed and revisited in the discussion. Alternatively, a new research question could be added to cover this aspect. Additionally, this research question needs to be introduced adequately. At present, immediately after the first research question, there is a section dedicated to discussing topics such as engagement, likes, and exposure (views). These are different concepts, and their relationships should be addressed more clearly, as they currently lack depth. The authors seem primarily interested in user exposure to problematic content to test the assertion regarding freedom of speech but not reach. However, they mainly use likes as a proxy for exposure, as reported in the results where the correlation between the two measures is described. When introducing the research question, it should be clarified whether the focus is on engagement, exposure, or both, or if likes are used as a proxy for exposure. In that case, the choice should be justified, and the potential and limitations of this measure should be described. These aspects should be outlined when introducing the research question and in the method section, wherever appropriate, depending on the aspect being discussed.

When presenting the results, many hashtags appear in Eastern characters and languages, despite the authors analyzing English-language content. It would be helpful to elaborate on this aspect in more detail.

Clarifications and minor points:

As someone interested in gender issues and political communication on social media, I found the data on hate speech against transgender and homosexual individuals particularly compelling. I believe this finding could make the paper appealing to researchers working in this field, as these measures help to better understand the social and cultural context and the characteristic communication of a platform. Perhaps the authors could add some further qualitative details on the type of content that constitutes a significant portion of the hate speech on platform X, who share it, and any other details that might be relevant.

Could you provide more details on the methodology used to identify coordinated behavior? For example, for readers who may not be entirely familiar with these methods, it would be helpful to specify how the total volume of reposts is controlled when identifying coordinated actors. Additionally, explaining what TF-IDF represents and why this particular representation was chosen over others would be beneficial.

In the methods section, it is stated that sexual content is filtered out, but it is also mentioned that content with a sexual score above 0.3 is retained. This should be the other way around. Could you please verify?

********** 

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Revision 1

Editor

Editor critique

"Specifically, the reviewers suggested some improvements concerning the research questions addressed in the manuscript and asked for a very limited set of additional results that could further strengthen the paper."

We thank the reviewers for these suggestions and have substantially re-written the manuscript and updated our results to address these concerns.

"Moreover, they requested a few clarifications about the adopted methodology and part of the results."

We have added additional text to improve the clarity of the methodology and results sections.

Reviewer #1

Reviewer critique

"As you mention in the limitations, you are unable to definitively determine what caused the increased rate of hate speech following Musk’s purchase of X. Given this observation, I would make some comments on the results less assertive. I would start with the title, which states “More Hate”. As far as I understood, there was a peak in hate speech even before Musk’s purchase, and after the purchase there is only one very high peak and no clear long-term trend."

We have toned down our claims and the description of our results throughout the manuscript, and have changed the title to “Substantial Hate” (rather than the previous “More Hate”) to address this concern.

"In account coordination, you state “Previous work found relatively little correlation between bots and coordination [33]”. Actually, Nizzoli et al. [33] is a particular case study; the idea of coordination on social media originated from the evolution of botnets [1], until Facebook proposed the concept of Coordinated Inauthentic Behavior (which combines the concepts of inauthenticity and coordination). As you state, an example is the interplay between coordination and information operations [2], which can make use of bots.

[1] Mannocci, et al. "Mulbot: Unsupervised bot detection based on multivariate time series." 2022 IEEE international conference on big data (Big Data). IEEE, 2022.

[2] Cima, et al. "Coordinated Behavior in Information Operations on Twitter." IEEE Access (2024)."

We thank the reviewer for this critique and have added these references to the manuscript. We have also added new references to contextualize hate speech and misinformation (Saha et al., 2023; Martel and Rand, 2023; Schneider et al., 2023; Kennedy et al., 2022).

We have also clarified the relevant section of the paper, which now reads as follows:

“There is often a strong overlap between the presence of bots and patterns of coordination (although a study of coordinated activity during the 2019 UK general election found relatively little correlation between bots and coordination).”

Added references:

Kennedy, B., Jin, X., Davani, A. M., Dehghani, M., & Ren, X. (2020). Contextualizing hate speech classifiers with post-hoc explanation. arXiv preprint arXiv:2005.02439.

Martel, C., & Rand, D. G. (2023). Misinformation warning labels are widely effective: A review of warning effects and their moderating features. Current Opinion in Psychology, 101710.

Saha, P., Garimella, K., Kalyan, N. K., Pandey, S. K., Meher, P. M., Mathew, B., & Mukherjee, A. (2023). On the rise of fear speech in online social media. Proceedings of the National Academy of Sciences, 120(11), e2212270120.

Schneider, P. J., & Rizoiu, M. A. (2023). The effectiveness of moderating harmful online content. Proceedings of the National Academy of Sciences, 120(34), e2307360120.

"In Figure 2, you show the mean engagement with hate speech per week. I would ask why you do not show the same plot for the baseline. Due to possible offline, external events, a general increase in engagement can occur both for hate speech posts and for the baseline."

This is an excellent point; we have updated Figure 2 to include the same plot for the baseline.

Reviewer #2

"Thank you for allowing me to read this work on the prevalence of hate speech, relative engagement (including exposure to such content), and coordinated and inauthentic behavior on Twitter before and after Elon Musk's takeover. The topic is undoubtedly relevant, and the analysis is methodologically sound and interesting in its findings. Overall, I believe the paper is nearly ready for publication, and I commend the authors for producing a manuscript of such high quality. I also appreciated the methodological choices and the additional analyses conducted to test the robustness of various results."

We thank the reviewer for the positive feedback.

"I recommend integrating the research questions (for example, the first one) with a reference to the question on engagement (likes), which is currently missing, even though it is an area of investigation that is extensively addressed and revisited in the discussion. Alternatively, a new research question could be added to cover this aspect."

We have created new research questions to address this apt suggestion, as follows:

How did Musk's purchase of X, and subsequent policy changes, correlate with the volume of hate speech on X?

How did the level of engagement with posts containing hate speech change on X following Musk's purchase of the platform?

How did Musk's purchase of X, and subsequent policy changes, correlate with inauthentic account activity on X?

How did engagement with posts made by inauthentic accounts change following Musk's purchase of X?

"Additionally, this research question needs to be introduced adequately. At present, immediately after the first research question, there is a section dedicated to discussing topics such as engagement, likes, and exposure (views). These are different concepts, and their relationships should be addressed more clearly, as they currently lack depth. The authors seem primarily interested in user exposure to problematic content to test the assertion regarding freedom of speech but not reach. However, they mainly use likes as a proxy for exposure, as reported in the results where the correlation between the two measures is described. When introducing the research question, it should be clarified whether the focus is on engagement, exposure, or both, or if likes are used as a proxy for exposure. In that case, the choice should be justified, and the potential and limitations of this measure should be described. These aspects should be outlined when introducing the research question and in the method section, wherever appropriate, depending on the aspect being discussed."

We have now clarified our discussion of these topics in the Introduction as follows:

“We use the term ‘engagement’ to refer to the number of likes and reposts received by a given post on X. We are primarily interested in users' exposure to hate speech on the platform, yet we do not have data on the number of views each post received before Musk's purchase. Engagement metrics can therefore act as a reasonable proxy for the visibility of posts on X. While we cannot be certain that the number of likes received by a certain post directly corresponds to the number of views received by that post, we nevertheless find that these two metrics are strongly correlated. Previous independent reports show that hate speech has increased since Musk purchased X; however, this prior research did not explore changes in the prevalence of hate speech over the longer term, and did not explore whether an increase in hate posts corresponds to an increase in engagement with hate.”

"When presenting the results, many hashtags appear in Eastern characters and languages, despite the authors analyzing English-language content. It would be helpful to elaborate on this aspect in more detail."

We clarify in the Results section that, “Despite capturing tweets containing English words, we see many Chinese hashtags appear. This is because these hashtags are embedded in nonsensical English phrases such as ‘Measure thing painting admit.’ in the largest cluster, and ‘One thing, however, he could do; and he did. He wrote a note #开云 [#Kering] #世界杯 [#World Cup] #AG真人 [#AG Real Person]’ in the second cluster.”

"As someone interested in gender issues and political communication on social media, I found the data on hate speech against transgender and homosexual individuals particularly compelling. I believe this finding could make the paper appealing to researchers working in this field, as these measures help to better understand the social and cultural context and the characteristic communication of a platform. Perhaps the authors could add some further qualitative details on the type of content that constitutes a significant portion of the hate speech on platform X, who share it, and any other details that might be relevant."

We appreciate the reviewer’s point, and have therefore added substantial additional text to the Discussion section, as follows:

“When qualitatively observing the posts containing hate speech, we noticed that posts in some categories exhibit themes specific to that category, themes that go beyond the denigration of the targeted identities. For example, while both transphobic and homophobic posts often repeated the moldy trope that people who are transgender or gay suffer from mental illness, relative to homophobic posts, transphobic posts more often expressed the poster's political views. Many users who posted transphobic material stated their support for, or antipathy toward, a given politician based on that politician's position on transgender rights. In contrast, while racist posts in our sample often asserted similarly longstanding hateful stereotypes, unlike transphobic posts, racist posts did not frequently connect such statements to evaluations of politicians. Hence, while a substantial portion of hate speech on X consists of reiterations of perennial derogatory stereotypes about the target group or groups, there is variation across categories of hate speech as to the extent to which hate speech is associated with partisan political stances. The latter presumably reflects the political landscape at the time that posts are written, including the prominence of particular types of discrimination in contemporaneous ‘culture-war’ debates in the public sphere.”

"Could you provide more details on the methodology used to identify coordinated behavior? For example, for readers who may not be entirely familiar with these methods, it would be helpful to specify how the total volume of reposts is controlled when identifying coordinated actors. Additionally, explaining what TF-IDF represents and why this particular representation was chosen over others would be beneficial."

We have now added more detail in the Methods section to describe the coordination detection methodology, which we quote below:

“Co-repost similarity is defined as accounts sharing many identical reposts while controlling for overall repost popularity. For any account that posts at least ten reposts, we record the ID of all reposts, then create a TF-IDF vector by calculating each repost ID's ‘term frequency’ (how often it appears for each account) multiplied by its inverse document frequency (how many accounts share this repost). Unlike, e.g., simple repost frequency, TF-IDF gives greater weight to reposts that appear unusually often among a few accounts and less weight to reposts that are popular and broadly shared across X. Accounts whose vectors have a cosine similarity in the top 0.1 percentile are considered coordinated. Finally, activity similarity is defined as accounts posting (reposting or writing original posts, replies, or quote posts) nearly simultaneously. Following previous work, we take all accounts with at least 10 posts and bin their post times into 30-minute intervals. As with repost similarity, we map these bins into a TF-IDF vector, and accounts whose vectors have a cosine similarity in the top 0.1 percentile are considered coordinated. This adds more weight to accounts that are unusually co-active after controlling for times of the day when X accounts are broadly active.”
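For readers unfamiliar with these methods, the TF-IDF and cosine-similarity steps quoted above can be sketched as follows. This is a minimal illustration, not the paper's actual code: all function and variable names are our own, and the percentile thresholding step is omitted.

```python
# Illustrative sketch of co-repost similarity: one TF-IDF vector of repost IDs
# per account, compared pairwise via cosine similarity. Names are hypothetical.
import math
from collections import Counter

def tfidf_vectors(account_reposts):
    """account_reposts: dict mapping account -> list of repost IDs (>= 10 each).
    Returns a sparse TF-IDF vector (dict of repost ID -> weight) per account."""
    n_accounts = len(account_reposts)
    doc_freq = Counter()  # how many accounts share each repost ID
    for ids in account_reposts.values():
        doc_freq.update(set(ids))
    vectors = {}
    for account, ids in account_reposts.items():
        term_freq = Counter(ids)  # how often each repost ID appears for this account
        vectors[account] = {
            rid: (count / len(ids)) * math.log(n_accounts / doc_freq[rid])
            for rid, count in term_freq.items()
        }
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0
```

In the paper's method, account pairs whose cosine similarity falls in the top 0.1 percentile would then be flagged as coordinated; the activity-similarity variant replaces repost IDs with 30-minute time bins.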

"In the methods section, it is stated that sexual content is filtered out, but it is also mentioned that content with a sexual score above 0.3 is retained. This should be the other way around. Could you please verify?"

Thank you for pointing this out; we have corrected the text as follows: “We then use the Perspective API to remove posts with toxicity confidence below 0.7 and with sexually explicit confidence above 0.3.”
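The corrected filtering rule can be expressed as a simple predicate. This is a hypothetical sketch: the attribute names follow Perspective API conventions, and the scores are assumed to be precomputed per post rather than fetched here.

```python
# Illustrative sketch of the corrected filter: keep a post only if its
# Perspective API toxicity confidence is at least 0.7 AND its sexually
# explicit confidence is below 0.3. Names and defaults mirror the quoted text.

def keep_post(scores, toxicity_min=0.7, sexual_max=0.3):
    """scores: dict with 'TOXICITY' and 'SEXUALLY_EXPLICIT' confidences in [0, 1]."""
    return (scores["TOXICITY"] >= toxicity_min
            and scores["SEXUALLY_EXPLICIT"] < sexual_max)
```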

Attachment
Submitted filename: PLOS ONE Rebuttal.pdf
Decision Letter - Stefano Cresci, Editor

X Under Musk’s Leadership: Substantial Hate and No Reduction in Inauthentic Activity

PONE-D-24-20764R1

Dear Dr. Burghardt,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information’ link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Stefano Cresci

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Thank you for your work on the revised manuscript. Both reviewers agree that you have addressed all of their comments.

Therefore the paper can now be accepted for publication, congratulations!

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The suggestions have been addressed, but all of the figures except Figure 2, which is the one that was modified, have disappeared. Please add all of the figures in the final version.

Reviewer #2: Thank you for the detailed response letter and the comprehensive answers to the inquiries. I look forward to seeing this paper published.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

Formally Accepted
Acceptance Letter - Stefano Cresci, Editor

PONE-D-24-20764R1

PLOS ONE

Dear Dr. Burghardt,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Stefano Cresci

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.