Peer Review History

Original Submission: September 6, 2025
Decision Letter - Alessandro Bruno, Editor

PONE-D-25-47910
FetCAT: Cross-Attention Fusion of Transformer-CNN Architecture for Fetal Brain Plane Classification with Explainability using Motion-degraded MRI
PLOS ONE

Dear Dr. Suha,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

ACADEMIC EDITOR:

  • Address all the remarks raised by the three reviewers
  • Specify what kind of data you employed in your trials
  • Highlight the steps of the proposed methods that are "explainable".

==============================

Please submit your revised manuscript by Dec 04 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Alessandro Bruno, Ph.D.

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3. We note that your Data Availability Statement is currently as follows: [All relevant data are within the manuscript and its Supporting Information files.]

Please confirm at this time whether or not your submission contains all raw data required to replicate the results of your study. Authors must share the “minimal data set” for their submission. PLOS defines the minimal data set to consist of the data required to replicate all study findings reported in the article, as well as related metadata and methods (https://journals.plos.org/plosone/s/data-availability#loc-minimal-data-set-definition).

For example, authors should submit the following data:

- The values behind the means, standard deviations and other measures reported;

- The values used to build graphs;

- The points extracted from images for analysis.

Authors do not need to submit their entire data set if only a portion of the data was used in the reported study.

If your submission does not contain these data, please either upload them as Supporting Information files or deposit them to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories.

If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. If data are owned by a third party, please indicate how others may request data access.

4. If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

Additional Editor Comments:

Dear Authors,

Your paper reveals a certain level of depth on the topic of interest.

However, you need to address some weak points that two reviewers have raised.

Please specify the type of data you adopted in your experimental trials. You may also want to raise the bar on explainability and provide more methodological details, as requested by one of the three reviewers.

Do your best to answer all comments and remarks from the reviewers point by point.

I recommend your manuscript for a Major Revision round.

Kind regards,

A.B.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: I Don't Know

Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

Reviewer #3: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have presented a well-written manuscript highlighting a novel hybrid architecture that integrates a pre-trained Swin Transformer with a custom Adaptive Med-CNN model through cross-attention fusion mechanisms for automated fetal brain MRI plane classification.

However, the authors have not fully explained how the fetal MRI images were obtained and selected: whether the images came from patients with confirmed neurological anomalies; whether the model was trained on normal fetal MRI images as well as those with confirmed anomalies and, if so, what types of anomalies were included or excluded as part of the training process and selection criteria; and how well the model performed in confirming or excluding images with or without anomalies present.

Reviewer #2: Summary

This study presents FetCAT, a hybrid architecture combining a Swin Transformer and a custom AdaptiveMed-CNN via cross-attention for automated classification of fetal brain MRI planes (axial, coronal, sagittal). Using 52,561 motion-degraded fetal MRI slices from 19–39 weeks’ gestation, the authors compare FetCAT with various CNN and transformer models, analyze the effect of data augmentation, and employ Grad-CAM for explainability. The proposed Swin–AdaptiveMedCNN achieved 98.64% accuracy without augmentation, outperforming all other tested models.

Major Comments

1. Generalizability and Validation

The dataset comes entirely from a single institution (Stanford LPCH), which introduces potential bias related to imaging protocol and demographics. The authors should test the model on an external dataset or, if not possible, define a fully independent test cohort held out from the start. Clearly describe how subjects were split to prevent overlapping slices between training and validation.

2. Statistical Analysis

The manuscript reports very high accuracies but does not include uncertainty estimates or statistical comparisons. Confidence intervals, per-class metrics, and statistical tests (e.g., McNemar or bootstrap confidence intervals) are needed to support statements that the proposed method significantly outperforms baselines.

3. Interpretation of Data Augmentation Results

The conclusion that augmentation reduces performance is unusual. The authors should present per-augmentation results, quantify performance differences, and discuss whether certain transformations (e.g., small-angle rotations) might still provide benefits for subsets of data such as early gestational ages or specific motion levels.

4. Data and Code Availability

The paper lists data as available in the manuscript, yet it relies on a repository that may have access restrictions. Please ensure compliance with PLOS ONE’s open-data policy by providing a link to the dataset or a clear process for accessing it. Include source code and train/validation split definitions in a public repository.

5. Ethics Statement

Although the dataset is anonymized, fetal MRI is human-subject data and typically requires an institutional review board statement or waiver. The authors should clarify whether the dataset was covered under an approved secondary-use protocol.

6. Clinical Framing

The Grad-CAM explanation is helpful and well validated by expert review. It would strengthen the paper to report clinical or workflow relevance—for example, whether the model reduces labeling time or shows consistent behavior across gestational age ranges and cases with significant motion.

7. Reproducibility Details

Include full training details such as batch size, number of epochs, early stopping criteria, and exact k-fold parameters. Provide layer sizes, dropout rates, and normalization methods. This will help others reproduce the results.

Minor Comments

• Provide separate accuracy, precision, recall, and F1 for each plane (axial, coronal, sagittal) and include a confusion matrix.

• Include calibration plots or reliability scores if the model outputs probabilities.

• Correct typographical and formatting issues (for example, “architechture” → “architecture,” “Saggital” → “Sagittal”).

• Ensure consistent use of terms such as “cross-attention” and capitalization of plane names.

• Clarify class distribution in the training and validation sets.

Recommendation

Major Revision.

The approach is technically sound and well-motivated, but the manuscript requires stronger validation, additional statistical analysis, and clarification of ethics and data availability before it can be recommended for publication.

Reviewer #3: Notes:

1- In the abstract, some sentences are very long and complex, packing multiple ideas together.

2- The introduction section is generally well-structured and effectively builds a research case. However, the problem statements are long and complex; shorter, focused sentences could improve readability. The novelty is mentioned late in the section, weakening the early impact, and the section lacks a concise research question or hypothesis.

3- The literature review is extensive; the authors should provide more critical synthesis of previous work, highlighting how FetCAT directly addresses the identified gaps.

4- In the methodology section, many equations and algorithmic steps could be summarized conceptually.

5- In the methodology section, Equation (1) has opposite meanings on pages 9 and 10.

6- In the methodology section, what was the specific 'k' in k-fold cross-validation? What was the batch size? These details are important for reproducibility.

7- In Table 5, the table is untidy. The "Swin ConvNext" row is missing "Without Augmentation" values, which looks like an error. The "AdaptiveMed" row without a transformer prefix is confusing; is this the standalone AdaptiveMed-CNN? If so, it should be in Table 3.

8- The results are presented as point estimates (e.g., 98.64% accuracy) without any measures of statistical significance or variance.

9- The discussion section reads like a summary of results rather than a real discussion. It reports what was found but does not synthesize these findings into a higher-level argument or model for why FetCAT works so well.

10- There is no clear separation between “Discussion” and “Conclusion”; both are merged into a single continuous text, which blurs their respective purposes.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

To ensure your figures meet our technical requirements, please review our figure guidelines: https://journals.plos.org/plosone/s/figures

You may also use PLOS’s free figure tool, NAAS, to help you prepare publication quality figures: https://journals.plos.org/plosone/s/figures#loc-tools-for-figure-preparation

NAAS will assess whether your figures meet our technical requirements by comparing each figure against our figure specifications.

Revision 1

1. Concern: Address all the remarks raised by the three reviewers

Response: We sincerely thank the reviewers and the editor for their constructive feedback. We have thoroughly addressed all comments and suggestions from all three reviewers and incorporated the required revisions throughout the manuscript.

2. Concern: Specify what kind of data you employed in your trials

Response: We appreciate the clarification request. The trials employed an open-source fetal brain MRI dataset for plane classification. Comprehensive details regarding the acquisition source, sample size, class distributions, and preprocessing steps have now been clearly provided in Section 3.1 of the methodology.

3. Concern: Highlight the steps of the proposed methods that are "explainable"

Response: Section 3.5 has been fully revised to explicitly highlight the explainable components of the proposed method. We have clarified the post-hoc Grad-CAM workflow and detailed the specific stages of the model that contribute to interpretability.

Reviewer 1:

The authors have presented a well-written manuscript highlighting a novel hybrid architecture that integrates a pre-trained Swin Transformer with a custom Adaptive Med-CNN model through cross-attention fusion mechanisms for automated fetal brain MRI plane classification. However, the authors have not fully explained how the fetal MRI images were obtained and selected: whether the images came from patients with confirmed neurological anomalies; whether the model was trained on normal fetal MRI images as well as those with confirmed anomalies and, if so, what types of anomalies were included or excluded as part of the training process and selection criteria; and how well the model performed in confirming or excluding images with or without anomalies present.

Response: Thank you for your insightful feedback on dataset transparency, which we agree is essential for evaluating clinical generalizability in fetal neuroimaging.

We have addressed this by adding a paragraph to Section 3.1 ("Data Collection and Preprocessing") clarifying that the open-source dataset used in this study includes only developmentally normal fetal MRIs from 741 routine examinations; no anomalous cases were included for training or evaluation. In future work, we plan to collect data specifically covering different anomalies to enhance the generalizability of our study.

Reviewer 2:

1. Generalizability and Validation

The dataset comes entirely from a single institution (Stanford LPCH), which introduces potential bias related to imaging protocol and demographics. The authors should test the model on an external dataset or, if not possible, define a fully independent test cohort held out from the start.

Clearly describe how subjects were split to prevent overlapping slices between training and validation.

Response: We thank the reviewer for this insightful concern, which we have worked on carefully. To address it, we tested the proposed FetCAT model on an entirely external dataset from the OpenNeuro Fetal MRI repository (https://openneuro.org/datasets/ds003090/versions/1.0.0/metadata), where our model also maintained high performance (81.0% accuracy), confirming its generalizability beyond the single-institution Stanford data. We have added the results and statistical analysis in Section 4.1.3 of our revised manuscript.

We have clearly documented that the analysis uses a subject-level split across the 741 unique subjects for 2-fold cross-validation, explicitly guaranteeing that no patient's slices overlap between the training and validation sets, thereby preventing data leakage (updated Section 3.1.1).
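A subject-level split of this kind can be sketched in a few lines. The snippet below is illustrative only: the subject IDs, record structure, and slice counts are invented placeholders, not the actual Stanford cohort, and this is not the authors' code.

```python
import random

def subject_level_folds(slice_records, k=2, seed=0):
    """Partition slices into k folds by subject, so that no subject's
    slices appear in more than one fold (prevents slice-level leakage)."""
    subjects = sorted({rec["subject"] for rec in slice_records})
    rng = random.Random(seed)
    rng.shuffle(subjects)
    # Assign each subject (not each slice) to a fold, round-robin.
    fold_of = {s: i % k for i, s in enumerate(subjects)}
    folds = [[] for _ in range(k)]
    for rec in slice_records:
        folds[fold_of[rec["subject"]]].append(rec)
    return folds

# Hypothetical records: each slice carries its subject ID and plane label.
records = [{"subject": f"sub-{i:03d}", "plane": p}
           for i in range(10) for p in ("axial", "coronal", "sagittal")]
folds = subject_level_folds(records, k=2)
```

Verifying the split afterward is a one-line check: the subject sets of the two folds must be disjoint.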

2. Statistical Analysis

The manuscript reports very high accuracies but does not include uncertainty estimates or statistical comparisons. Confidence intervals, per-class metrics, and statistical tests (e.g., McNemar or bootstrap confidence intervals) are needed to support statements that the proposed method significantly outperforms baselines.

Response: Thank you for this valuable suggestion regarding statistical validation. In response, we have incorporated comprehensive statistical analyses, including 95% confidence intervals, per-class performance metrics, and McNemar's test, to rigorously validate performance differences. These additions are detailed in Sections 4.1.2 and 4.1.3 with supporting result tables. These analyses provide robust statistical evidence supporting our performance claims and enhance the reliability of our findings.
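For readers unfamiliar with McNemar's test, the exact two-sided version for paired classifier comparisons depends only on the two discordant counts. The sketch below is a generic stdlib implementation under that standard formulation, not the authors' code, and the counts in the example are invented.

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar p-value from the discordant counts:
    b = cases model A got right and model B got wrong,
    c = cases model A got wrong and model B got right."""
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    # Two-sided binomial tail probability under H0: p = 0.5, capped at 1.
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Toy example: 2 vs 12 discordant predictions between two classifiers.
p_value = mcnemar_exact(2, 12)
```

With 2 vs 12 discordant cases the exact p-value falls below 0.05, so the two classifiers would be judged significantly different at that level.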

3. Interpretation of Data Augmentation Results

The conclusion that augmentation reduces performance is unusual. The authors should present per-augmentation results, quantify performance differences, and discuss whether certain transformations (e.g., small-angle rotations) might still provide benefits for subsets of data such as early gestational ages or specific motion levels.

Response: We thank the reviewer for this valuable comment. In the revised manuscript (Section 4.3), we have added group-wise augmentation results, covering geometric, intensity-based, and noise/deformation-based augmentation ablation tests, and have quantified their performance differences to clarify the observed trends. As part of our future work, we plan to investigate augmentation effects in greater detail across gestational ages and varying motion levels, applying additional strategies as well.

4. Data and Code Availability

The paper lists data as available in the manuscript, yet it relies on a repository that may have access restrictions. Please ensure compliance with PLOS ONE’s open-data policy by providing a link to the dataset or a clear process for accessing it. Include source code and train/validation split definitions in a public repository.

Response: We thank the reviewer for highlighting the importance of open data and code compliance. In response, we have ensured full adherence to PLOS ONE's policy by providing direct, unrestricted access to the fetal MRI dataset through the Stanford Digital Repository, where all 52,561 anonymized images are immediately available for download. Additionally, we have made our complete implementation publicly available in the FetCAT repository, which includes the hybrid Swin Transformer-CNN architecture, training pipeline, and validation split definitions, thus ensuring full transparency and reproducibility of our study.

5. Ethics Statement

Although the dataset is anonymized, fetal MRI is human-subject data and typically requires an institutional review board statement or waiver. The authors should clarify whether the dataset was covered under an approved secondary-use protocol.

Response: Thank you for raising this important point on ethics for human-subjects data, which underscores the need for clear IRB transparency.

We have added a new subsection at the end detailing that the dataset was collected under Stanford University's IRB protocol with informed consent, and its public, anonymized release permits secondary use without additional approval at our institution.

6. Clinical Framing

The Grad-CAM explanation is helpful and well validated by expert review. It would strengthen the paper to report clinical or workflow relevance—for example, whether the model reduces labeling time or shows consistent behavior across gestational age ranges and cases with significant motion

Response: We thank the reviewer for this valuable suggestion. In response, we have now added a new Figure 6 that illustrates the clinical workflow integration, and a corresponding discussion in Section 4.2.1, explicitly addressing the model's reduction in labeling time and its consistent performance across gestational ages and motion-degraded cases.

7. Reproducibility Details

Include full training details such as batch size, number of epochs, early stopping criteria, and exact k-fold parameters. Provide layer sizes, dropout rates, and normalization methods. This will help others reproduce the results.

Response: Thank you for this valuable suggestion. We have added a comprehensive subsection in the methodology detailing all training hyperparameters, architectural specifications, and preprocessing steps to ensure full reproducibility of our results. Additionally, the complete implementation code has been made publicly available and mentioned at the end of the manuscript to facilitate replication of our experiments.

Minor Comments

8. Provide separate accuracy, precision, recall, and F1 for each plane (axial, coronal, sagittal) and include a confusion matrix.

Thank you for this insightful concern. To address the reviewer's feedback, we added a new table in the Results section with class-wise metrics (accuracy, precision, recall, F1-score) for the Axial, Coronal, and Sagittal planes, including point estimates and 95% CIs from cross-validation. We also included a figure showing the 2-fold CV and test set confusion matrices, which demonstrate strong performance by the proposed technique, with few misclassifications.
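Class-wise precision, recall, and F1 together with a confusion matrix can be derived directly from paired label lists. The sketch below is generic stdlib code, not the authors' evaluation pipeline; the toy labels are invented and merely mirror the three plane classes.

```python
def confusion_and_metrics(y_true, y_pred, labels):
    """Confusion matrix (rows = true, cols = predicted) and per-class
    precision / recall / F1, computed from paired label lists."""
    idx = {lab: i for i, lab in enumerate(labels)}
    cm = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        cm[idx[t]][idx[p]] += 1
    metrics = {}
    for lab in labels:
        i = idx[lab]
        tp = cm[i][i]
        fp = sum(cm[r][i] for r in range(len(labels))) - tp
        fn = sum(cm[i]) - tp
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        metrics[lab] = {"precision": prec, "recall": rec, "f1": f1}
    return cm, metrics

planes = ["axial", "coronal", "sagittal"]
y_true = ["axial", "axial", "coronal", "sagittal", "sagittal", "coronal"]
y_pred = ["axial", "coronal", "coronal", "sagittal", "sagittal", "coronal"]
cm, per_class = confusion_and_metrics(y_true, y_pred, planes)
```

Reporting both the matrix and the per-class table, as the revision does, makes any systematic confusion between two specific planes immediately visible.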

9. Include calibration plots or reliability scores if the model outputs probabilities.

We addressed the reviewer's concern by calculating and including the model's calibration plots and scores with visualization in Section 4.1.2, which confirmed the reliability of the predicted probabilities.
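One common reliability score for probabilistic outputs is the expected calibration error (ECE), which bins predictions by confidence and averages the gap between accuracy and confidence per bin. The stdlib sketch below uses invented confidence values and is not the authors' implementation.

```python
def expected_calibration_error(probs, correct, n_bins=10):
    """ECE: bin predictions by top-class confidence and average the
    |accuracy - confidence| gap, weighted by bin occupancy."""
    bins = [[] for _ in range(n_bins)]
    for p, ok in zip(probs, correct):
        b = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 to last bin
        bins[b].append((p, ok))
    n = len(probs)
    ece = 0.0
    for members in bins:
        if not members:
            continue
        conf = sum(p for p, _ in members) / len(members)
        acc = sum(ok for _, ok in members) / len(members)
        ece += len(members) / n * abs(acc - conf)
    return ece

# Hypothetical confidences and correctness flags for a handful of slices.
confidences = [0.95, 0.91, 0.88, 0.62, 0.55]
is_correct = [True, True, True, False, True]
ece = expected_calibration_error(confidences, is_correct)
```

A perfectly calibrated model would score an ECE of 0; a reliability diagram plots the same per-bin accuracy-versus-confidence data.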

10. Correct typographical and formatting issues (for example, “architechture” → “architecture,” “Saggital” → “Sagittal”).

We thank the reviewer for careful reading; all noted typographical errors, such as "architechture" and "Saggital," have been corrected throughout the manuscript.

11. Ensure consistent use of terms such as “cross-attention” and capitalization of plane names.

We have ensured consistent hyphenation and lowercase for 'cross-attention' throughout the manuscript and standardized the capitalization of plane names (e.g., 'Axial', 'Coronal', 'Sagittal') as class labels.

12. Clarify class distribution in the training and validation sets.

We thank the reviewer for the suggestion and have clarified the class distribution in Section 3.1.1 of the revised manuscript.

Reviewer 3’s Comments to the Author:


1. In the abstract, some sentences are very long and complex, packing multiple ideas together.

Response: Thank you for your observation. We have revised the abstract to break down complex sentences into simpler, more focused statements, ensuring each sentence conveys a single clear idea for improved readability.

2. The introduction section is generally well-structured and effectively builds a research case. However, the problem statements are long and complex; shorter, focused sentences could improve readability. The novelty is mentioned late in the section, weakening the early impact, and the section lacks a concise research question or hypothesis.

Response: We thank the reviewer for the constructive feedback. The introduction has been revised with shorter, more focused sentences, an earlier emphasis on our novel FetCAT architecture, and the explicit inclusion of our central hypothesis on cross-attention fusion.

3. The literature review is extensive; the authors should provide more critical synthesis of previous work, highlighting how FetCAT directly addresses the identified gaps.

Response: We thank the reviewer for this valuable suggestion. We agree that a stronger critical synthesis strengthens the narrative for our proposed method. In direct response to this comment, we have thoroughly revised the final two paragraphs of the Literature Review (Section 2).

4. In the methodology section, many equations and algorithmic steps could be summarized conceptually.

Response: We thank the reviewer for this suggestion. We have added a conceptual summary for clarity but have retained the algorithms to ensure the reproducibility and precise implementation of our novel cross-attention fusion mechanism, which is a core contribution of this work.

5. In the methodology section, Equation (1) has opposite meanings on pages 9 and 10.

Response: Thank you for your careful observation regarding the inconsistency in Equation (1) between pages 9 and 10 of the methodology section. This was a mistake, and we have revised the equation so that its meaning is consistent in both places.

6. In the methodology section, what was the specific 'k' in k-fold cross-validation? What was the batch size? These details are important for reproducibility.

Response: Thank you for this valuable suggestion. We have added a comprehensive subsection in the methodology detailing all training hyperparameters, architectural specifications, and preprocessing steps to ensure full reproducibility of our results. Additionally, the complete implementation code has been made publicly available and mentioned at the end of the manuscript to facilitate reproducibility of our experiments.

7. In Table 5, the table is untidy. The "Swin ConvNext" row is missing "Without Augmentation" values, which looks like an error. The "AdaptiveMed" row without a transformer prefix is confusing; is this the standalone AdaptiveMed-CNN? If so, it should be in Table 3.

Response: Thank you for highlighting this formatting issue in the table; the "AdaptiveMed" row was intended as the CNN backbone shared across hybrid fusions (e.g., BEiT-AdaptiveMed and DeiT-AdaptiveMed) to enable direct comparisons of transformer variants on a consistent local feature extractor, but line wrapping caused it to appear standalone. The omission of "Without Augmentation" values for the "Swin-ConvNeXt" row was an unintentional error during table preparation, which we have now corrected by adding the computed metrics. We have reformatted the table for clarity.

8. The results are presented as point estimates (e.g., 98.64% accuracy) without any measures of statistical significance or variance.

Response: Thank you for this valuable suggestion regarding statistical validation. In response, we have incorporated comprehensive statistical analyses, including 95% confidence intervals, per-class performance metrics, and McNemar's test, to rigorously validate performance differences. These additions are detailed in Sections 4.1.2 and 4.1.3 with supporting result tables. These analyses provide robust statistical evidence supporting our performance claims and enhance the reliability of our findings.

9. The discussion section reads like a summary of results rather than a real discussion. It reports what was found but does not synthesize these findings into a higher-level argument or model for why FetCAT works so well.

Response: We have addressed this concern by restructuring the manuscript and creating a separate discussion section that synthesizes the findings and provides a higher-level interpretation of why FetCAT performs effectively.

10. There is no clear separation between “Discussion” and “Conclusion”; both are merged into a single continuous text, which blurs their respective purposes.

Response: We acknowledge this and have addressed the reviewer's feedback by creating two distinct and focused sections, "Discussion" and "Conclusion", ensuring clear separation of the results synthesis from the final summary statements in our revised manuscript.

Attachments
Attachment
Submitted filename: Review_response_PLOSOne (3).docx
Decision Letter - Alessandro Bruno, Editor

FetCAT: Cross-Attention Fusion of Transformer-CNN Architecture for Fetal Brain Plane Classification with Explainability using Motion-degraded MRI

PONE-D-25-47910R1

Dear Dr. Suha,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager and clicking the ‘Update My Information' link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Alessandro Bruno, Ph.D.

Academic Editor

PLOS One

Additional Editor Comments (optional):

Dear Authors,

I am glad to let you know that I appreciate your efforts to improve the manuscript's quality.

I will, therefore, recommend it for acceptance.

With regards,

A.B.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: (No Response)

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: (No Response)

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: (No Response)

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: (No Response)

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********

Formally Accepted
Acceptance Letter - Alessandro Bruno, Editor

PONE-D-25-47910R1

PLOS One

Dear Dr. Suha,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS One. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Associate Professor Alessandro Bruno

Academic Editor

PLOS One

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.