Peer Review History

Original Submission - August 26, 2025
Decision Letter - Ali Mohammad Alqudah, Editor

PONE-D-25-46399

Towards Practical AI for Agriculture: A Self-Supervised Attention Framework for Spinach Leaf Disease Detection

PLOS ONE

Dear Dr. Khan,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Dec 29 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Ali Mohammad Alqudah

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS One has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3. Please update your submission to use the PLOS LaTeX template. The template and more information on our requirements for LaTeX submissions can be found at http://journals.plos.org/plosone/s/latex.

4. We note that your Data Availability Statement is currently as follows:

“All relevant data are within the manuscript and its Supporting Information files.”

Please confirm at this time whether or not your submission contains all raw data required to replicate the results of your study. Authors must share the “minimal data set” for their submission. PLOS defines the minimal data set to consist of the data required to replicate all study findings reported in the article, as well as related metadata and methods (https://journals.plos.org/plosone/s/data-availability#loc-minimal-data-set-definition).

For example, authors should submit the following data:

- The values behind the means, standard deviations and other measures reported;

- The values used to build graphs;

- The points extracted from images for analysis.

Authors do not need to submit their entire data set if only a portion of the data was used in the reported study.

If your submission does not contain these data, please either upload them as Supporting Information files or deposit them to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories.

If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. If data are owned by a third party, please indicate how others may request data access.

5. If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise. 

6. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The manuscript is of high quality, well-structured, and makes a valuable contribution to the field of AI for agriculture. The research is rigorous, employing a variety of state-of-the-art and custom-designed models, and includes critical components often missing in similar studies, such as self-supervised learning to mitigate data scarcity, an ablation study for robustness, and XAI for model interpretability. The deployment of a functional web application demonstrates a strong commitment to translational research. The writing is clear, and the methodology is described in sufficient detail for reproducibility.

Major Strengths:

1. Addressing a Critical Gap: The focus on Malabar spinach, an under-researched but vital crop in the regional context, is the study's primary novelty and significance. This directly addresses a need in Bangladeshi agriculture.

2. Comprehensive Methodological Pipeline: The authors don't rely on a single approach. They benchmark standard pretrained models (EfficientNet, ResNet), develop custom lightweight architectures (SpinachCNN, Spinach-ResSENet), experiment with Vision Transformers (SpinachViT, SwinV2), and crucially, implement a self-supervised pretraining strategy (SimSiam) to leverage unlabeled data. This provides a rich comparative analysis.

3. Innovative Model Design: The integration of attention mechanisms is a key contribution. The "Spinach-ResSENet" (using Squeeze-and-Excitation) and particularly the "SimSiam-CBAM-ResNet-50" (using Convolutional Block Attention Modules) are thoughtful adaptations that demonstrably improve performance and feature focus.

4. Focus on Practicality and Robustness:

o Self-Supervised Learning: The use of SimSiam on 671 unlabeled images is a practical solution to the common problem of limited annotated agricultural data, making the approach more scalable.

o Hybrid Loss Function: Combining Cross-Entropy with Supervised Contrastive Loss is a sophisticated technique that enhances class separability, leading to better generalization.

o Ablation Study on Noise Robustness: The evaluation of model performance under Gaussian and Salt-and-Pepper noise is highly relevant for real-world field conditions where image quality can be poor. The finding that the best model (SimSiam-CBAM-ResNet-50) maintains >95% accuracy under noise is a major strength.

o Edge Deployment Consideration: The discussion explicitly contrasts the high accuracy of the Swin Transformer with its impracticality for edge devices due to its size (28M parameters) and reliance on large-scale pretraining. This focus on deployable, lightweight solutions (like the 23.6M parameter SimSiam-CBAM-ResNet-50) is commendable.

5. Explainable AI (XAI): The use of Grad-CAM, Grad-CAM++, and LayerCAM is not just a box-ticking exercise. The results show that these techniques successfully highlight biologically relevant lesion regions, which is crucial for building trust with farmers and agronomists who need to understand why a prediction was made.

6. Real-World Deployment: The development and description of a FastAPI-based web application is a standout feature. It moves the research beyond a theoretical exercise, providing a tangible tool for farmers to upload images and receive predictions, visual explanations (heatmaps), and even management advice. This significantly enhances the impact of the work.

7. Strong Results: The reported performance metrics are impressive. The top model, SimSiam-CBAM-ResNet-50, achieves 96.97% test accuracy and a near-perfect macro ROC-AUC of 0.9982. While the Swin Transformer performs slightly better (97.98%), the authors correctly frame the former as the more practical solution.

Minor Weaknesses and Suggestions for Improvement:

1. Dataset Size and Diversity: While the dataset of 2,100 images is reasonable for a focused study, it is still relatively small, especially for training complex models like ViT from scratch. The authors acknowledge this as a limitation. Future work should indeed focus on expanding the dataset, as suggested. A brief discussion on the potential for data bias (e.g., all images collected from one region/university) would be prudent.

2. Model Complexity vs. Performance Trade-off: The manuscript effectively discusses the parameter count of SwinV2 vs. ResNet-50. However, it could provide more context on the computational cost (e.g., inference time, FLOPs) of the SimSiam-CBAM-ResNet-50 model, especially in the context of the deployed web app. How fast is the prediction for a farmer?

3. Web Application Evaluation: The deployment is described, but there is no user study or feedback from actual farmers. While this may be beyond the scope of the current paper, mentioning plans for future field testing or usability studies would strengthen the "practical AI" claim.

4. Clarity in Table 3: In Table 3, the row for "SimSiam-CBAM-ResNet-50(Hybrid)" shows a lower accuracy (95.96%) than its non-hybrid counterpart (96.97%). This is counter-intuitive, as the hybrid loss was shown to improve performance in other models (e.g., vanilla SimSiam-ResNet-50). This needs clarification. Is this a typo, or is there a specific reason (e.g., overfitting) for this result? The text in Section IV.G also seems to conflate the performance of the CE and Hybrid versions of the CBAM model.

5. TTA Results Inconsistency: In Table 3, the TTA accuracy for "SimSiam-ResNet-50(Hybrid)" is listed as 93.94%, which is lower than its test accuracy (96.97%). This is unusual, as TTA typically boosts or at least maintains performance. This should be double-checked or explained.

6. Figure Referencing: Some figures are mentioned in the text (e.g., Fig. 1, 2, 3) but their actual content (the images) are not visible in the provided manuscript draft. While this is common in draft submissions, ensuring all figures are clear and well-labeled in the final version is important.

Conclusion:

This is an excellent manuscript that successfully bridges the gap between advanced AI research and practical agricultural application. The authors have developed a robust, accurate, and interpretable deep learning pipeline for a neglected but important crop. The integration of self-supervised learning, attention mechanisms, and XAI, coupled with a real-world deployment, sets a high standard for research in this domain.

The minor issues noted above, particularly the potential inconsistencies in Table 3, should be addressed. However, they do not detract from the overall significance and quality of the work. The study provides a clear, reproducible blueprint for developing practical AI tools for other underrepresented crops.

Recommendation: Accept with Minor Revisions.

Reviewer #2: The contribution is practical integration (self-supervision + attention + hybrid loss) on an under-studied crop, plus a usable demo. Methodological novelty is incremental/combination-oriented rather than algorithmically radical.

Major Comments:

1) You state a 2,100-image 3-class dataset (6:2:2 split), yet later describe SimSiam pretraining on 671 unlabeled images and a 70/15/15 fine-tuning split with 473/99/99 labeled samples (total 671), which conflicts with 2,100. Please reconcile: total images per class; which subset is unlabeled; exact split logic; and ensure all performance comes from a single, consistent protocol.

2) You apply heavy augmentation and TTA; ensure augmentation is applied only on training and no augmented view of a test image leaks into training. Explicitly document your split-before-augment order and any field/plant-level grouping to avoid correlated images across splits.

3) SwinV2-Small uses ImageNet-21k pretraining while SimSiam models use in-domain pretraining. Discuss fairness: is Swin also fine-tuned from 21k? What happens if you start Swin from 1k only? Conversely, how do SimSiam models compare when initialized from ImageNet vs. from scratch?

4) Provide exact hyperparameters per model, optimizer schedule, epochs, early-stopping criteria, mixup/CutMix settings, batch sizes, RandAugment parameters, layer-wise LRs, seeds, and hardware.

5) For the CBAM-ResNet-50 variant, specify which bottlenecks include CBAM, shapes, and whether BN was frozen in SimSiam pretraining (you mention different BN handling across vanilla vs. CBAM—make consistent and explicit).

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

To ensure your figures meet our technical requirements, please review our figure guidelines: https://journals.plos.org/plosone/s/figures

You may also use PLOS’s free figure tool, NAAS, to help you prepare publication quality figures: https://journals.plos.org/plosone/s/figures#loc-tools-for-figure-preparation.

NAAS will assess whether your figures meet our technical requirements by comparing each figure against our figure specifications.

Revision 1

Response for Reviewers: PONE-D-25-46399R1

We would like to thank the reviewers for their comments. Below, we present our responses to each comment together with a summary of the corresponding changes, which appear as highlighted text in the revised manuscript. All page numbers below refer to the revised manuscript, where changes are shown in red font. Following the reviewers' suggestions, we have made the following revisions:

Reviewer 1:

1. Comment: “Dataset Size and Diversity: While the dataset of 2,100 images is reasonable for a focused study, it is still relatively small, especially for training complex models like ViT from scratch. The authors acknowledge this as a limitation. Future work should indeed focus on expanding the dataset, as suggested. A brief discussion on the potential for data bias (e.g., all images collected from one region/university) would be prudent.”

Response: Following the reviewer’s suggestion, a brief discussion on the potential for data bias is added as, “First, the dataset consists exclusively of Malabar spinach leaf images collected within Bangladesh, which may introduce geographic or environmental bias; cross-regional generalization has not yet been evaluated.” (Page 15)

2. Comment: “Model Complexity vs. Performance Trade-off: The manuscript effectively discusses the parameter count of SwinV2 vs. ResNet-50. However, it could provide more context on the computational cost (e.g., inference time, FLOPs) of the SimSiam-CBAM-ResNet-50 model, especially in the context of the deployed web app. How fast is the prediction for a farmer?”

Response: The inference time of the SimSiam-CBAM-ResNet-50 model in the context of the deployed web app is added as, “As visualized in Figure 19, the trained SimSiam-CBAM-ResNet-50 model was deployed as a lightweight web application … with an average inference time of under 800 ms.” (Page 14)

3. Comment: “Web Application Evaluation: The deployment is described, but there is no user study or feedback from actual farmers. While this may be beyond the scope of the current paper, mentioning plans for future field testing or usability studies would strengthen the "practical AI" claim.”

Response: Following what the reviewer suggested, we added future field testing as, “Future studies will also include field trials to evaluate usability, user experience, and decision-making support of the proposed leaf disease classification system for farmers in real-world settings.” (Page 16)

4. Comment: “Clarity in Table 3: In Table 3, the row for "SimSiam-CBAM-ResNet-50(Hybrid)" shows a lower accuracy (95.96%) than its non-hybrid counterpart (96.97%). This is counter-intuitive, as the hybrid loss was shown to improve performance in other models (e.g., vanilla SimSiam-ResNet-50). This needs clarification. Is this a typo, or is there a specific reason (e.g., overfitting) for this result? The text in Section IV.G also seems to conflate the performance of the CE and Hybrid versions of the CBAM model.”

Response: The SimSiam-CBAM-ResNet-50 (Hybrid) configuration shows a modest reduction in accuracy compared to its CE counterpart, achieving 95.29±0.58% test accuracy (TTA 95.29±0.58%) with macro ROC-AUC 0.9967±0.0006. Although supervised contrastive learning (SupCon) often improves generalization, its effect depends strongly on dataset size, class structure, and batch composition. In our setting, the hybrid objective can reasonably yield lower accuracy for several reasons.

First, the hybrid loss optimizes a representation geometry rather than the evaluation metric. The CE term drives the classifier toward decision-boundary separation, whereas SupCon encourages tighter intra-class clustering and larger inter-class angular margins. For fine-grained leaf disease imagery, where Alternaria and healthy leaves may share subtle texture cues, excessive compactness can unintentionally suppress class-discriminative variations that CE alone would preserve.

Second, the small dataset magnifies gradient variance. SupCon relies on multiple same-class positive pairs per batch, but with moderate mini-batch sizes and class-balanced sampling, the number of positives per class can be low and inconsistent. This increases optimization noise and can create gradient conflict between CE and SupCon, especially when the CE-only model is already operating near ceiling performance. Consequently, while the hybrid loss improves representation structure (as reflected in high ROC-AUC), it may act as mild over-regularization during fine-tuning, leading to the observed small decrease in top-1 accuracy. (Page 13-14)
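For concreteness, the hybrid objective discussed in this response (cross-entropy combined with supervised contrastive loss) can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code; the temperature `tau` and weight `lam` are placeholder values, not the settings used in the paper.

```python
import numpy as np

def cross_entropy(logits, labels):
    # Standard CE over class logits (N x C), numerically stabilized.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def supcon(z, labels, tau=0.1):
    # Supervised contrastive loss on L2-normalized embeddings z (N x D):
    # each anchor is pulled toward all same-class samples in the batch.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau                       # pairwise similarities
    np.fill_diagonal(sim, -np.inf)            # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)
    n_pos = pos.sum(axis=1)
    valid = n_pos > 0                         # anchors with at least one positive
    loss_i = -np.where(pos, log_prob, 0.0).sum(axis=1)[valid] / n_pos[valid]
    return loss_i.mean()

def hybrid_loss(logits, z, labels, lam=0.5, tau=0.1):
    # CE drives decision-boundary separation; SupCon tightens intra-class
    # clusters -- the tension described in the response above.
    return cross_entropy(logits, labels) + lam * supcon(z, labels, tau)
```

Note that SupCon needs several same-class positives per batch to give a low-variance gradient, which is exactly why small, class-balanced batches can make the hybrid objective noisier than CE alone.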

5. Comment: “TTA Results Inconsistency: In Table 3, the TTA accuracy for "SimSiam-ResNet-50(Hybrid)" is listed as 93.94%, which is lower than its test accuracy (96.97%). This is unusual, as TTA typically boosts or at least maintains performance. This should be double-checked or explained.”

Response: Thank you for pointing out the inconsistency in the TTA accuracy for SimSiam-ResNet-50 (Hybrid). We rechecked the evaluation code and corrected the TTA results in Table 3: Performance Comparison of Self-Supervised, Attention-Based, and Transformer-Based Models. (Page 13)

6. Comment: “Figure Referencing: Some figures are mentioned in the text (e.g., Fig. 1, 2, 3) but their actual content (the images) are not visible in the provided manuscript draft. While this is common in draft submissions, ensuring all figures are clear and well-labeled in the final version is important.”

Response: Fixed.

Reviewer 2:

1. Comment: “You state a 2,100-image 3-class dataset (6:2:2 split), yet later describe SimSiam pretraining on 671 unlabeled images and a 70/15/15 fine-tuning split with 473/99/99 labeled samples (total 671), which conflicts with 2,100. Please reconcile: total images per class; which subset is unlabeled; exact split logic; and ensure all performance comes from a single, consistent protocol.”

Response: The earlier numbers referring to 671 images came from an intermediate experiment and were mistakenly retained in the text. All final results in the paper use the full 2,100-image dataset and a single 70–15–15 split across all three classes. SimSiam pretraining was performed on the training split of this same partition, with labels ignored during the self-supervised stage rather than using a separate unlabeled subset. We have corrected the manuscript to make the total dataset size, class composition, and unified split protocol clear and consistent throughout. Details are added in Section IIIA: Dataset Description. (page 3)

2. Comment: “You apply heavy augmentation and TTA; ensure augmentation is applied only on training and no augmented view of a test image leaks into training. Explicitly document your split-before-augment order and any field/plant-level grouping to avoid correlated images across splits.”

Response: In the revised manuscript, we now explicitly state that all images are first split into training/validation/test sets (70/15/15) and that augmentations are applied only to the training split after this partitioning. Test-time augmentation is applied only to the test split at inference time, and no augmented view of any test image is ever used during training. (page 4)
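The split-before-augment order stated in this response can be expressed as a short protocol sketch. The following is illustrative Python, not the authors' pipeline: the 70/15/15 ratios match the paper, but the seed and the augmentation names ("flip", "rotate") are placeholders.

```python
import random

def split_before_augment(image_ids, seed=42, ratios=(0.70, 0.15, 0.15)):
    """Split FIRST, then augment only the training split (illustrative sketch)."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)          # fixed seed for reproducibility
    n = len(ids)
    n_train = round(ratios[0] * n)
    n_val = round(ratios[1] * n)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    # Augmentation happens AFTER partitioning and touches only the train split,
    # so no augmented view of a validation/test image can leak into training.
    augmented_train = [(i, view) for i in train for view in ("orig", "flip", "rotate")]
    return augmented_train, val, test
```

Test-time augmentation would analogously generate views of test images only at inference, averaging predictions without any of those views entering training.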

3. Comment: “SwinV2-Small uses ImageNet-21k pretraining while SimSiam models use in-domain pretraining. Discuss fairness: is Swin also fine-tuned from 21k? What happens if you start Swin from 1k only? Conversely, how do SimSiam models compare when initialized from ImageNet vs. from scratch?”

Response: To address fairness, we note that the pretrained SwinV2-Base uses ImageNet-21k → ImageNet-1k initialization, whereas the SimSiam models rely solely on in-domain self-supervised pretraining. SwinV2-Base was also trained from scratch under the same 70–15–15 protocol, and its performance is reported alongside the pretrained version for a fair comparison. For SimSiam, all results shown in Table 3: Performance Comparison of Self-Supervised, Attention-Based, and Transformer-Based Models come from models initialized from scratch followed by in-domain self-supervised learning, and no ImageNet-based initialization was used. (page 13)

4. Comment: “Provide exact hyperparameters per model, optimizer schedule, epochs, early-stopping criteria, mixup/CutMix settings, batch sizes, RandAugment parameters, layer-wise LRs, seeds, and hardware.”

Response: In the revised manuscript, we have expanded the model descriptions and experimental setup sections to report exact hyperparameters for each model, including optimizer type, learning-rate values and schedules, number of epochs, early-stopping criteria, batch sizes, label smoothing, mixup/RandAugment settings, and (where applicable) layer-wise learning rates. We also state the fixed random seed used for all runs and describe the common hardware configuration on which all experiments were executed. These additions make the training protocol fully transparent and reproducible. See Section IIIC: Applied Deep Learning Models for details. (page 4-11)

5. Comment: “For the CBAM-ResNet-50 variant, specify which bottlenecks include CBAM, shapes, and whether BN was frozen in SimSiam pretraining (you mention different BN handling across vanilla vs. CBAM—make consistent and explicit).”

Response: Thank you for the helpful clarification request. We have updated the manuscript to explicitly state that CBAM is inserted into every bottleneck block of the ResNet-50 backbone and that all BatchNorm layers in the backbone are kept frozen in evaluation mode during SimSiam pretraining. We also clarified this BN handling consistently across both the vanilla and CBAM SimSiam variants. See Section IIIC: Applied Deep Learning Models for details. (page 10)

Attachments
Attachment
Submitted filename: Plos_Malabar__response.pdf
Decision Letter - Ali Mohammad Alqudah, Editor

Towards Practical AI for Agriculture: A Self-Supervised Attention Framework for Spinach Leaf Disease Detection

PONE-D-25-46399R1

Dear Dr. Khan,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging in to Editorial Manager® and clicking the ‘Update My Information’ link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Ali Mohammad Alqudah

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have addressed all my comments, and I therefore recommend acceptance of the manuscript.

Reviewer #2: The authors addressed all the comments and incorporated the changes in the manuscript. I am satisfied with the authors’ response.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

Formally Accepted
Acceptance Letter - Ali Mohammad Alqudah, Editor

PONE-D-25-46399R1

PLOS ONE

Dear Dr. Khan,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission

* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Ali Mohammad Alqudah

Academic Editor

PLOS ONE

Open letter on the publication of peer review reports

PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.

We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.

Learn more at ASAPbio.