Peer Review History

Original Submission: July 15, 2025
Decision Letter - Hui Li, Editor

PONE-D-25-38473

Rare Event Detection by Progressive Clustering Undersampling

PLOS ONE

Dear Dr. Abuzeid,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The reviewers have completed their reviews, and you will find their comments below. The manuscript's scientific rigor, clarity, and completeness must be enhanced before it can be considered for acceptance. The key revisions needed are:

Methodological and Experimental Rigor: It is necessary to elaborate on the specific implementation details of the PCU algorithm, including the criteria for selecting clustering algorithms, determining key parameters (e.g., the "rupture" point), and conducting parameter sensitivity and computational complexity analyses. Please also supplement the description of the experimental setup and incorporate statistical significance tests to strengthen the credibility of the comparative results.

Depth of Analysis and Discussion: The manuscript requires a dedicated subsection to systematically discuss the limitations of the proposed method, potential failure scenarios, computational costs, and its generalizability to other domains (e.g., finance, healthcare). It is also recommended to include a summary table clearly outlining the advantages and disadvantages of related studies.

Presentation and Language Quality: Please enhance the resolution and clarity of the figures (Figures 1-4) and ensure the "Graphical Abstract" mentioned is either included or all references to it are removed. Furthermore, a thorough revision of the text is needed to address language issues, reference formatting (which requires updating), and consistency in terminology (e.g., the "Semi-Guided" branch).

I look forward to receiving your revised manuscript accompanied by a detailed point-by-point response letter. I hope the reviewers' feedback is useful to you.

Please submit your revised manuscript by Dec 11 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Hui Li

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3. When completing the data availability statement of the submission form, you indicated that you will make your data available on acceptance. We strongly recommend all authors decide on a data sharing plan before acceptance, as the process can be lengthy and hold up publication timelines. Please note that, though access restrictions are acceptable now, your entire data will need to be made freely accessible if your manuscript is accepted for publication. This policy applies to all data except where public deposition would breach compliance with the protocol approved by your research ethics board. If you are unable to adhere to our open data policy, please kindly revise your statement to explain your reasoning and we will seek the editor's input on an exemption. Please be assured that, once you have provided your new statement, the assessment of your exemption will not hold up the peer review process.

If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This study explores various resampling techniques and introduces a novel method called Progressive Clustering Undersampling (PCU). This technique removes negative instances that are distant from positive ones. PCU was compared with eight common undersampling and two oversampling techniques, consistently outperforming them on highly imbalanced and noisy datasets. The workflow demonstrates that rare anomalies can be effectively predicted using unsupervised methods based on frequency-driven decision boundaries. Progressive clustering ultimately identifies clusters with the highest concentration of positive instances. These delineated clusters are then saved by supervised models and used in the preparatory phase before prediction. The proposed method produces two outputs: one optimized for a high F1-score and the other for high precision. Overall, this approach presents a promising solution for identifying rare anomalies in complex, imbalanced data environments.
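[For context, the progressive undersampling idea summarized above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: KMeans, the cluster count, and the stage limit are placeholder choices, and the manuscript states that PCU is not tied to a specific unsupervised algorithm.]

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

def pcu_sketch(X, y, n_clusters=8, max_stages=5, random_state=0):
    """Return indices of samples retained after progressive undersampling."""
    keep = np.arange(len(y))
    for _ in range(max_stages):
        Xs, ys = X[keep], y[keep]
        if len(np.unique(ys)) < 2 or len(Xs) <= n_clusters:
            break  # minority can no longer be isolated: stop (analogous to the "rupture" point)
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=random_state).fit_predict(Xs)
        # retain only clusters that contain at least one positive (minority) instance
        pos_clusters = np.unique(labels[ys == 1])
        mask = np.isin(labels, pos_clusters)
        if mask.all():
            break  # no majority-only cluster left to drop
        keep = keep[mask]
    return keep

# toy imbalanced data: 500 negatives around (0, 0), 20 positives around (5, 5)
X, y = make_blobs(n_samples=[500, 20], centers=[[0, 0], [5, 5]],
                  cluster_std=0.8, random_state=0)
keep = pcu_sketch(X, y)
```

[Each stage keeps only clusters containing positive instances, mirroring the description above of removing negatives distant from positives until a stopping point.]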

Good work, keep it up.

But some comments are needed.

All of them are submitted to the editor.

Reviewer #2: Overall Evaluation

This manuscript presents a novel approach named Progressive Clustering Undersampling (PCU) for detecting rare events in highly imbalanced datasets. The study is well-motivated and addresses an important challenge in machine learning and data-driven anomaly detection. The integration of clustering-based unsupervised learning with progressive undersampling and supervised refinement is innovative and potentially impactful.

The manuscript is generally well-written, methodologically sound, and supported by comprehensive experiments on both real-world and synthetic datasets. However, several key areas require major revisions to improve clarity, reproducibility, and scientific rigor before the paper can be accepted for publication.

Major Comments

Methodological Clarity and Reproducibility

The PCU algorithm is interesting but described in a largely conceptual way. The pseudocode in Algorithm 1 lacks specific implementation details such as:

How clustering algorithms are selected or switched between stages.

Criteria used for determining the “rupture” point.

Parameter tuning strategies for clustering algorithms.

Please expand this section with more formal definitions, computational complexity, and parameter sensitivity analysis.

Comparative Evaluation

While the paper includes comparisons with several resampling methods, it would strengthen the work to include statistical significance tests (e.g., t-test or Wilcoxon) across repeated runs.

Please clarify whether the same random seed or data splits were used across all methods to ensure fairness.
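[For context, the paired test suggested above can be run over matched scores from repeated runs. The sketch below uses hypothetical placeholder numbers for illustration only, not results from the manuscript.]

```python
# Paired Wilcoxon signed-rank test over F1-scores from repeated runs,
# one score per run for each method on the same train/test split.
from scipy.stats import wilcoxon

# hypothetical F1-scores for 10 repeated runs (illustrative values only)
f1_pcu  = [0.81, 0.79, 0.83, 0.80, 0.82, 0.78, 0.84, 0.80, 0.81, 0.83]
f1_base = [0.74, 0.76, 0.75, 0.73, 0.77, 0.72, 0.76, 0.74, 0.75, 0.76]

# one-sided test: is the median paired difference greater than zero?
stat, p_value = wilcoxon(f1_pcu, f1_base, alternative="greater")
print(f"Wilcoxon statistic={stat:.1f}, p={p_value:.4f}")
```

[Pairing the runs, as the reviewer's fairness point implies, requires that both methods be evaluated on identical splits with the same random seed.]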

Generalizability

The petroleum dataset is well described, but the method’s generalizability beyond this specific application (e.g., finance, healthcare, cybersecurity) should be better discussed.

Consider adding one more benchmark dataset from a different domain to show robustness.

Figures and Visualizations

Figures 1–4 are valuable but require higher resolution and clearer legends.

The “Graphical Abstract” mentioned in the text is missing from the submission. Please include it or remove all references to it.

Ablation Study

Since PCU combines unsupervised clustering and supervised refinement, it is important to show the contribution of each step individually (e.g., clustering-only, supervised-only, and combined).

Language and Structure

Some sections (e.g., Introduction and Related Work) are overly descriptive. Please focus more on critical comparisons and concise synthesis.

Check for consistency in reference formatting (e.g., DOI style and numbering).

Minor Comments

Typographical and formatting errors occur in several places (e.g., “undersam-pling,” “pos-itive”). Please revise carefully.

Include clear variable definitions when first introduced in equations or pseudocode.

The term “Semi-Guided” and “Fully-Guided” branches should be formally defined and consistently referenced.

Ensure all URLs in the references are correctly formatted and accessible.

Clarify whether “noise” in the Two Moons dataset refers to label noise or feature noise.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

To ensure your figures meet our technical requirements, please review our figure guidelines: https://journals.plos.org/plosone/s/figures

You may also use PLOS’s free figure tool, NAAS, to help you prepare publication quality figures: https://journals.plos.org/plosone/s/figures#loc-tools-for-figure-preparation.

NAAS will assess whether your figures meet our technical requirements by comparing each figure against our figure specifications.

Revision 1

Dear Editor,

We would like to sincerely thank you and the reviewers for your valuable time, insightful comments, and constructive suggestions. Your feedback has been instrumental in improving the clarity and quality of this manuscript. We have carefully addressed all the points raised and believe these revisions have greatly enhanced the manuscript.

All additions or changes in the marked-up copy appear in red, while deletions are shown with strikethrough. The answers below include searchable keywords, written in italics.

Reviewer One:

1. Need to add subsection for more explanation about the limitation, challenges, contribution and structure of the article.

Addressed in Subsec. 5.1.

Structure of the article:

‘The rest of the paper is structured ..’

2. It is better to add table summarizing the x-studies showing the advantages and disadvantages.

We definitely agree that summarizing the related studies in a table would be better. However, we chose to keep the comparison within the text because of limitations on the number of tables and pages; we reserved those for the methodology description and results.

3. A detailed analysis of the limitations and potential failure scenarios of the proposed model is missing.

Detailed in Subsec. 5.1.

4. Some information about datasets and experiment setup is needed.

Subsec. 3.1 (one page) describes the petroleum dataset, while the second dataset is described in:

‘The two moons dataset is a synthetic, ….’

Please indicate whether any specific information is missing so we can modify or enhance the writing.

The experimental setup description is enhanced in Subsec. 5.2.

5. Additional comparative analysis, around computational requirements, cost and robustness of the model with other SOTA methods.

‘Three evaluation metrics were assessed on two datasets’…

Regarding the computational aspect, here is our perspective:

PCU simply integrates one or more common supervised and unsupervised algorithms to undersample the training set. The computational complexity is tied to the specific algorithms used, with their exact complexity functions typically outside the scope of this work and usually obtainable from the foundational algorithm papers.

In general, undersampling methods may require time to reduce the number of samples, but they offer clear advantages during model building compared with oversampling techniques.

6. Provide quantitative remarks of the impact of the proposed method in the abstract and conclusion.

The quantitative results were found to be too extensive to include in the abstract due to character limitations. Listing the scores of PCU branches compared to the next best resampling methods for both datasets, in terms of F-score and precision, would exceed the allowable length. Therefore, we instead summarized these findings in the conclusion:

‘When using the KNN classifier with various resampling …’

7. References needed to be updated (24/25).

Updated with related articles.

Reviewer Two

Methodological Clarity and Reproducibility

The PCU algorithm is interesting but described in a largely conceptual way. The pseudocode in Algorithm 1 lacks specific implementation details such as:

• How clustering algorithms are selected or switched between stages.

A list of user-defined algorithms is required to run PCU and can be used in all stages for simple implementation. PCU compares clusters and selects the unique one regardless of its origin:

‘PCU is not tied to a specific unsupervised algorithm …’,

• Criteria used for determining the “rupture” point.

This was answered in:

‘until the minority instances can …’ to ‘formation of the last cluster’.

In addition, this is clearly stated in the added Subsec. 5.2.

We also:

- Further explained the pseudocode (Algorithm 1).

- Differentiated between the customized version of PCU and the automated one.

• Parameter tuning strategies for clustering algorithms.

We focused more on testing the parameters of the resampling algorithms, since the study's main goal is to compare different resampling methods. Tuning unsupervised parameters would be extensive and could lead to a huge number of trials when combined with other options. This is illustrated in:

‘Clustering algorithms were employed in the baseline setup …’.

Please expand this section with more formal definitions, computational complexity, and parameter sensitivity analysis.

All are described in the new Subsec. 5.2.

Regarding the computational complexity, here is our perspective:

PCU simply integrates one or more common supervised and unsupervised algorithms to undersample the training set. The computational complexity is tied to the specific algorithms used, with their exact complexity functions typically outside the scope of this work and usually obtainable from the foundational algorithm papers.

In general, undersampling methods may require time to reduce the number of samples, but they offer clear advantages during model building compared with oversampling techniques.

Comparative Evaluation

While the paper includes comparisons with several resampling methods, it would strengthen the work to include statistical significance tests (e.g., t-test or Wilcoxon) across repeated runs.

Five repeated runs with different splits were reported to ensure that the results did not occur by chance. Table 2 was modified to include the results of the various runs.

‘Noise versions were tested using different train/test splits ….’

Please clarify whether the same random seed or data splits were used across all methods to ensure fairness.

This is illustrated in:

‘All models use the same training set, test set ….’

Generalizability

The petroleum dataset is well described, but the method’s generalizability beyond this specific application (e.g., finance, healthcare, cybersecurity) should be better discussed. Consider adding one more benchmark dataset from a different domain to show robustness.

Incorporating an additional industry dataset would be very useful but would also greatly expand the paper beyond our page limit. We have outlined our expectations regarding other domains where PCU may be beneficial in the paragraph:

‘Our experiments demonstrate the effectiveness of …’

Additionally, a benchmark dataset was tested and added to improve the understanding of PCU's limitations:

‘Another limitation is that PCU is built on the assumption—that the dataset’

Figures and Visualizations

Figures 1–4 are valuable but require higher resolution and clearer legends.

All modified, thank you.

The “Graphical Abstract” mentioned in the text is missing from the submission. Please include it or remove all references to it.

We apologize for the inaccessibility. The graphical abstract was at the bottom of the original draft; we moved it to after the abstract, since the figure is typically uploaded separately in editorial managers.

Ablation Study

Since PCU combines unsupervised clustering and supervised refinement, it is important to show the contribution of each step individually (e.g., clustering-only, supervised-only, and combined).

The contribution of the unsupervised component is illustrated using both Sankey diagrams and cluster inclusion tables, and is

‘displayed from right to left ….’

in Figure 4. It is also displayed as

‘green and violet boundaries …’

in the Graphical Abstract.

The performance of the supervised component is reported in the petroleum dataset evaluation without resampling (Figure 5) and in the non-resampled results (Table 2). The combined model (i.e., PCU) is evaluated under the ‘Fully-Guided’ and ‘Semi-Guided’ branches in both the corresponding figure and table.

Language and Structure

Some sections (e.g., Introduction and Related Work) are overly descriptive. Please focus more on critical comparisons and concise synthesis.

We summarized some sections and removed others (marked with strikethrough).

Check for consistency in reference formatting (e.g., DOI style and numbering).

Modified, thank you.

Minor Comments

Typographical and formatting errors occur in several places (e.g., “undersam-pling,” “pos-itive”). Please revise carefully.

Hyphenation settings at line ends can be adjusted according to the journal's typesetting settings.

Include clear variable definitions when first introduced in equations or pseudocode.

Clearer definitions were added, specifically in the algorithm definition (pseudocode). Its placement was moved forward to Subsec. 5.2 to leverage the definitions from Section 4, such as cluster inclusion, the number of removals, and the rupture stopping point.

The term “Semi-Guided” and “Fully-Guided” branches should be formally defined and consistently referenced.

‘The Semi-Guided Branch involves training ..’

Ensure all URLs in the references are correctly formatted and accessible.

Done, thank you.

Clarify whether “noise” in the Two Moons dataset refers to label noise or feature noise.

The Two Moons dataset includes both types. Additionally, we added experiments that vary each type individually: Table 2 presents more test variations in label noise, while Figure 7 was added to evaluate variations in feature noise.

‘Similar experiment was conducted where the noise’
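[As background for this distinction: in scikit-learn's `make_moons`, the `noise` parameter injects Gaussian feature noise, whereas label noise must be added separately by flipping labels. A small illustrative sketch follows; the sample size and flip rate are arbitrary choices, not the paper's settings.]

```python
import numpy as np
from sklearn.datasets import make_moons

# Feature noise: Gaussian jitter added to the coordinates via `noise=`.
X, y = make_moons(n_samples=1000, noise=0.2, random_state=42)

# Label noise: flip a fraction of the labels after generation.
rng = np.random.default_rng(42)
flip_rate = 0.05
flip_idx = rng.choice(len(y), size=int(flip_rate * len(y)), replace=False)
y_noisy = y.copy()
y_noisy[flip_idx] = 1 - y_noisy[flip_idx]  # 0 -> 1, 1 -> 0
```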

We appreciate your time and consideration, and we hope that the revised version meets the expectations of the reviewers and the editorial board.

Attachments
Attachment
Submitted filename: Responses to reviewers.docx
Decision Letter - Hui Li, Editor

Rare Event Detection by Progressive Clustering Undersampling

PONE-D-25-38473R1

Dear Dr. Abuzeid,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information’ link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Hui Li

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Capturing rare events in severely imbalanced datasets is challenging, as the learning and optimization processes are often biased toward the majority class. To address this issue, this study explores various resampling techniques and introduces a novel method called Progressive Clustering Undersampling (PCU). This technique removes negative instances that are distant from positive ones. PCU was compared with eight common undersampling and two oversampling techniques, consistently outperforming them on highly imbalanced and noisy datasets. The workflow demonstrates that rare anomalies can be effectively predicted using unsupervised methods based on frequency-driven decision boundaries. Progressive clustering ultimately identifies clusters with the highest concentration of positive instances. These delineated clusters are then saved by supervised models and used in the preparatory phase before prediction. The proposed method produces two outputs: one optimized for a high F1-score and the other for high precision. Overall, this approach presents a promising solution for identifying rare anomalies in complex, imbalanced data environments.

Good work, keep it up.

Reviewer #2: Recommendation: Accept with Minor Revisions

Overall Assessment

The authors have done an excellent job addressing the concerns raised in the previous round of reviews. The manuscript has been substantially improved in terms of clarity, methodological detail, reproducibility, and discussion of limitations. The introduction of subsection 5.1 (limitations/challenges/contributions/structure) and 5.2 (experimental setup, complexity, parameter choices) directly resolves many of the earlier criticisms. Figures have been upgraded, the graphical abstract is now properly placed, and additional experiments on noise variations strengthen the claims. The responses to both reviewers are thoughtful, transparent, and courteous.

The proposed Progressive Clustering Undersampling (PCU) method is novel, well-motivated for highly imbalanced and noisy settings, and the empirical evidence (across petroleum and Two Moons datasets) convincingly shows superior performance over a wide range of established resampling baselines.

The paper is now suitable for publication in PLOS ONE after only minor revisions.

Minor Revisions Required

Abstract – Quantitative Summary

The abstract states that PCU “consistently outperform[s]” other methods but provides no quantitative indication. While the authors correctly note character limits, consider adding a brief quantitative remark, e.g.:

“achieving up to X% higher F1-score and Y% higher precision than the next-best resampling technique on highly imbalanced noisy datasets.”

Even an approximate range or “substantial improvements” would help readers quickly gauge impact.

Conclusion – Strengthen Quantitative Takeaway

The conclusion mentions KNN results but could more prominently highlight the best overall gains of PCU (e.g., from Tables/Figures) across classifiers and datasets. A single sentence summarizing the magnitude of improvement would reinforce the contribution.

Statistical Significance

The authors added multiple runs and report variability, which is appreciated. For the final version, please consider adding a brief statement (or footnote to tables) indicating whether differences were statistically significant (e.g., via paired t-test or Wilcoxon on the repeated runs). This is not mandatory for PLOS ONE but would further strengthen the comparative claims.

Generalizability Discussion

The added benchmark discussion and limitation paragraph are helpful. To further improve readability, consider moving or duplicating the key sentence about expected beneficial domains (finance, healthcare, cybersecurity) into the Conclusion for emphasis.

Minor Editorial / Consistency Issues

Check for any remaining automatic hyphenations (e.g., “undersam-pling”, “pos-itive”) and disable if possible or manually correct.

Ensure all new references added in this revision follow the exact PLOS ONE style (especially DOI formatting).

In the text, “rupture point” is now clearer, but consider using a more standard term (e.g., “stopping criterion” or “termination condition”) or defining it explicitly on first use for broader accessibility.

Graphical Abstract

Confirm that the graphical abstract is uploaded as a separate high-resolution file in the final submission system, as required by PLOS ONE.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

Formally Accepted
Acceptance Letter - Hui Li, Editor

PONE-D-25-38473R1

PLOS ONE

Dear Dr. Abuzeid,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Professor Hui Li

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.