Peer Review History
| Original Submission: June 24, 2025 |
|---|
|
Dear Dr. Shah,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Nov 04 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Sohail Saif, Ph.D
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

2. In your Methods section, please include additional information about your dataset and ensure that you have included a statement specifying whether the collection and analysis method complied with the terms and conditions for the source of the data.

3. When completing the data availability statement of the submission form, you indicated that you will make your data available on acceptance. We strongly recommend all authors decide on a data sharing plan before acceptance, as the process can be lengthy and hold up publication timelines.
Please note that, though access restrictions are acceptable now, your entire data will need to be made freely accessible if your manuscript is accepted for publication. This policy applies to all data except where public deposition would breach compliance with the protocol approved by your research ethics board. If you are unable to adhere to our open data policy, please kindly revise your statement to explain your reasoning and we will seek the editor's input on an exemption. Please be assured that, once you have provided your new statement, the assessment of your exemption will not hold up the peer review process.

4. Please amend either the abstract on the online submission form (via Edit Submission) or the abstract in the manuscript so that they are identical.

5. Please include a caption for figure 4.

6. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

7. We are unable to open your Supporting Information files under folder [Federated_code.rar]. Please kindly revise as necessary and re-upload.

8. If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?
Reviewer #1: Yes
Reviewer #2: No

**********

2. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: No
Reviewer #2: No

**********

3.
Have the authors made all data underlying the findings in their manuscript fully available? (See the PLOS Data policy.)
Reviewer #1: Yes
Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?
Reviewer #1: Yes
Reviewer #2: Yes

**********

Reviewer #1: The paper titled “A Distributed Framework for Zero-Day Malware Detection Using Federated Ensemble Models” presents a timely and technically promising approach that combines federated learning and ensemble methods to enhance privacy-preserving malware detection. The topic is relevant, and the integration of deep learning with federated techniques reflects current trends in cybersecurity research. The overall framework demonstrates potential, and the study addresses an important challenge: detecting zero-day malware while preserving data privacy. However, while the idea is sound and up to date, the manuscript in its current form requires substantial refinement to meet the standards of a publishable article. I have outlined the following concerns and suggestions for improvement.

1. The claimed contribution is not sufficiently distinguished from existing malware-detection approaches. The authors should clarify what is genuinely new beyond applying standard federated learning with an ensemble.

2. The description of the ensemble federated learning mechanism in the proposed methodology is brief and lacks sufficient technical explanation and motivation.

3. The paper does not specify the exact train/validation/test split or the number of cross-validation folds, making it difficult to assess robustness. Key training parameters (learning rate, batch size, optimizer, number of epochs) must be clearly stated.

4. Although zero-day detection is mentioned in the article, experimental evidence for this scenario is not described in enough detail to evaluate its validity.

5. The comparative analysis uses only a limited set of baselines; more recent deep or hybrid architectures should be included, and confidence intervals or significance tests should be provided to substantiate the reported performance improvements.

6. Details about the computational environment, such as the hardware used and the federated setup (e.g., number of clients, network conditions), must be included.

7. Some variables in the equations (for example H, W, and C) are introduced without definition, which leads to ambiguity.

8. The manuscript contains some grammatical issues; professional language editing is recommended.

9. The Experiments and Results section would benefit from a concluding subsection that highlights the main findings.

10. Source code, implementation instructions, and dataset access are not provided; without these, independent verification of the results is difficult. A link or clear guidance for requesting the material should be included in the supplementary material.

11. The paper addresses an important and timely problem in privacy-preserving malware detection.

12. The proposed pipeline, transforming PE files into grayscale images and applying deep feature extraction, is technically useful.

13. The methodology is presented with sufficient mathematical detail to follow the main steps of the approach.

14. The integration of federated learning to protect data privacy is appropriate and clearly justified, and the addition of an ensemble strategy instead of FedAvg is a useful contribution.

15. Node-level performance results provide insight into model behavior across clients.

16. The aim of privacy preservation and decentralized training is practically relevant.

17. Comparisons with well-known deep learning models and classical classifiers support the empirical claims.
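For readers outside federated learning, the distinction drawn in point 14 can be sketched in a few lines. The following toy example is hypothetical and is not the manuscript's implementation; the function names and the simple majority vote are purely illustrative of how ensemble aggregation differs from FedAvg's parameter averaging:

```python
# Toy contrast: FedAvg averages client model *parameters* into one model,
# while an ensemble strategy keeps client models separate and combines
# their *predictions*. All names here are illustrative.

def fedavg(client_weights, client_sizes):
    """FedAvg: average client parameter vectors, weighted by local data size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

def ensemble_predict(client_models, x):
    """Ensemble aggregation: combine per-client predictions, here by
    simple majority vote over class labels."""
    votes = [model(x) for model in client_models]
    return max(set(votes), key=votes.count)
```

FedAvg produces a single shared model after each round, whereas the ensemble route preserves per-client specialization at the cost of storing and querying several models; the reviewer's point is that the paper should motivate this trade-off explicitly.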
Overall, this manuscript integrates deep learning and ensemble techniques and presents a federated-learning-based framework for malware detection that is relevant to privacy-preserving cybersecurity research. While the work has several strong points, the paper in its current form requires substantial revision before it can be considered for publication.

Reviewer #2: Here are ten major revision comments for the manuscript.

Novelty and positioning are unclear.
The core claim is a “hybrid CLD-Net” combining YOLOv5 with Faster R-CNN. Similar ensembles and late-fusion detectors are already well explored. You must (i) articulate what is technically new (e.g., a specific fusion rule, calibration scheme, or training curriculum), (ii) explain why this outperforms standard strong baselines (YOLOv8/YOLOv7, EfficientDet, RT-DETR/DETR variants, RTMDet) trained and tuned fairly, and (iii) add a short ablation proving that the proposed fusion (not merely the presence of two models) is responsible for the gains.

Method description is incomplete or placeholder-like.
“Algorithm 1/2” are essentially empty, and “Equations (1)–(20)” are mostly narrative placeholders without definitions. Replace all placeholders with real math (e.g., the exact fusion function $\hat{y} = f(y_{\mathrm{YOLO}}, y_{\mathrm{FRCNN}})$, confidence reweighting, NMS/Soft-NMS thresholds, IoU thresholds, tie-breaking rules). Provide end-to-end pseudo-code with inputs/outputs, and specify training loss terms, anchors, image sizes, augmentations, and optimizer schedules.

Metrics are inappropriate for object detection and not reproducible.
Reporting “accuracy” for detection is non-standard. Replace or augment with mAP@0.5 and mAP@[.5:.95], per-class AP, AR, precision–recall curves, and F1 at a stated confidence threshold. For speed, report FPS and latency (mean ± std) on specified hardware with batch size and image resolution.
Remove the subjective 1–10 “performance/scalability/automation” scores in Tables 2–4; replace them with quantitative, reproducible measurements.

Dataset provenance, labeling, and splits need rigor.
You cite a Kaggle “Cotton Leaf Disease” dataset that is primarily for classification. If you perform detection, clarify: Did you create bounding boxes? How were they annotated (tool, guidelines)? What was the inter-annotator agreement? Provide the exact train/val/test split (no leakage), stratification rules, augmentation pipeline, and any external data. Without this, the 96.7% figure is not verifiable.

Baseline design and fairness are insufficient.
List all baselines with hyperparameters, training budgets, and model sizes (params/GMACs). Include: single-model YOLOv5 (strongly tuned), Faster R-CNN (tuned), at least one modern one-stage detector (YOLOv8/RTMDet), and one transformer-based detector (RT-DETR/DETR). Use the same input sizes, epochs, and augmentations where possible. Add ablation studies: YOLO-only, FRCNN-only, naive late fusion vs. your proposed fusion; show deltas in mAP and latency.

Edge deployment claims lack evidence.
You claim Raspberry Pi 4 deployability and “95% accuracy,” but there are no concrete measurements. Provide device-level benchmarks: model size, quantization (if any), inference latency per frame, sustained FPS, CPU/GPU/NPU utilization, memory footprint, power draw, and thermal stability over a 10–15 minute run. Include qualitative examples and failure cases from the edge device.

Inconsistencies and technical inaccuracies.
There are contradictions, such as using YOLOv5 throughout while the Conclusion says “using the YOLOv8 framework.” Fix all model names consistently. Several definitions around the “equations” are incorrect (e.g., describing accuracy as “correct over false positives,” or reusing the same text for different metrics). Audit the entire paper for such errors and correct them with standard definitions.
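To illustrate the level of concreteness these comments ask for (an explicit fusion rule with stated IoU thresholds, suitable as the "naive late fusion" ablation baseline), a minimal sketch follows. Function names, the box format, and the 0.5 threshold are illustrative assumptions, not details taken from the manuscript:

```python
# Hypothetical naive late-fusion baseline: pool (box, score) detections
# from two detectors, then apply greedy NMS with an explicit IoU threshold.
# Boxes are (x1, y1, x2, y2); scores are confidences in [0, 1].

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def late_fusion(dets_a, dets_b, iou_thr=0.5):
    """Pool detections from both models, sort by score, and greedily
    suppress any box overlapping an already-kept box above iou_thr."""
    pooled = sorted(dets_a + dets_b, key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in pooled:
        if all(iou(box, kb) < iou_thr for kb, _ in kept):
            kept.append((box, score))
    return kept
```

Writing the baseline at this level of detail (exact pooling order, threshold, tie-breaking by score) is what makes the proposed fusion's improvement over it measurable and reproducible.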
Figures, tables, and writing quality need substantial overhaul.
Many figures are referenced (Figs. 1–11) without visible or technically informative content; captions are generic. Replace them with architecture diagrams (with tensor shapes), fusion schematics, PR curves, per-class AP bar charts, latency–accuracy trade-off plots, and qualitative detection visualizations (TP/FP/FN). Table 1 columns like “Power Scheduling/Security/Energy Efficiency” are irrelevant to cotton disease detection; remove them or redesign the table to summarize methods, datasets, metrics, and results. Edit the prose for grammar and technical clarity (e.g., “Cotton fleas” → “Cotton is,” avoid phrases like “expatriated object detection,” and remove off-topic mentions such as “drug and vaccine recognition”).

Related work and references need curation and credibility.
Some citations appear peripheral or from questionable venues, while several crucial agricultural-vision and detection papers are missing. Curate to reputable, relevant sources (TPAMI, IJCV, CVPR/ICCV/ECCV, T-ITS, Computers & Electronics in Agriculture, Frontiers in Plant Science, etc.). Discuss recent cotton/plant disease detectors (lightweight YOLO variants, transformer detectors, few-shot domain adaptation, low-light robustness) and clearly position your contribution against them.

Reproducibility, statistics, and ethical statements need attention.
Provide a reproducibility package: code link, configs, random seeds, exact library versions, and trained weights (or a deterministic recipe). Add statistical rigor: report mean ± std over ≥3 runs, confidence intervals, and significance tests where applicable. Include data license compliance, consent (if field images were captured), and a short error analysis and limitations section (e.g., performance under occlusion, early-stage lesions, and domain shifts across fields/cameras).
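The statistical-rigor request above can be sketched concretely. The mAP values below are invented purely for illustration, and a paired t statistic over matched seeds is only one reasonable choice of significance test:

```python
# Sketch of "mean ± std over >=3 runs" plus a paired significance test.
# The scores are fabricated example numbers, not results from any paper.
import math
import statistics

def mean_std(scores):
    """Sample mean and sample standard deviation."""
    return statistics.mean(scores), statistics.stdev(scores)

def paired_t(scores_a, scores_b):
    """Paired t statistic over matched runs (same seeds/splits).
    Compare against the t critical value for n-1 degrees of freedom."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

fused = [0.671, 0.664, 0.669]   # hypothetical mAP@0.5 over 3 seeds
single = [0.652, 0.648, 0.655]  # hypothetical single-model baseline
m, s = mean_std(fused)
t = paired_t(fused, single)     # significant at 5% if |t| > 4.303 (df = 2)
```

With only three runs the test has little power, which is exactly why the comment asks for at least three runs and, ideally, more; reporting the raw per-run scores alongside mean ± std lets readers apply their own tests.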
If you address these ten areas with concrete math, proper metrics, fair baselines, real figures, and reproducible evidence, the paper will be far stronger and more suitable for a good venue.

**********

Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
|
| Revision 1 |
|
Dear Dr. Shah,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Dec 21 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Sohail Saif, Ph.D
Academic Editor
PLOS ONE

Journal Requirements:

If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #1: All comments have been addressed
Reviewer #3: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?
Reviewer #1: Partly
Reviewer #3: (No Response)

**********

3.
Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #3: (No Response)

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? (See the PLOS Data policy.)
Reviewer #1: Yes
Reviewer #3: (No Response)

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?
Reviewer #1: Yes
Reviewer #3: (No Response)

**********

Reviewer #1: The manuscript addresses malware detection using federated learning; however, the authors could clarify how their contributions differ from existing methods. It would be beneficial to articulate the innovative aspects of their approach beyond the standard application of federated learning with ensemble techniques. A more detailed explanation of how their method represents a significant advance over prior studies would strengthen the manuscript.

In terms of reproducibility and methodological transparency, the manuscript does not currently meet PLOS ONE’s standards for open and verifiable research. The authors are encouraged to provide the complete executable code, detailed implementation instructions, and full access to the dataset used. These materials are essential for enabling independent verification and replication of the results. Additionally, the dataset should be explicitly described and made accessible. If public sharing is not possible, clear guidance on how it can be obtained for verification purposes should be included in the Data Availability Statement and Supplementary Materials, in line with PLOS ONE’s Open Data policy.

Reviewer #3: (No Response)

**********

Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public.
For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

To ensure your figures meet our technical requirements, please review our figure guidelines: https://journals.plos.org/plosone/s/figures. You may also use PLOS’s free figure tool, NAAS, to help you prepare publication-quality figures: https://journals.plos.org/plosone/s/figures#loc-tools-for-figure-preparation. NAAS will assess whether your figures meet our technical requirements by comparing each figure against our figure specifications.
|
| Revision 2 |
|
A Distributed Framework for Zero-Day Malware Detection Using Federated Ensemble Models
PONE-D-25-34169R2

Dear Dr. Shah,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up to date by logging in to Editorial Manager® and clicking the ‘Update My Information’ link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Sohail Saif, Ph.D
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?
Reviewer #3: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #3: Yes

**********

4.
Have the authors made all data underlying the findings in their manuscript fully available? (See the PLOS Data policy.)
Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?
Reviewer #3: Yes

**********

Reviewer #3: (No Response)

**********

Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #3: No

**********
|
| Formally Accepted |
|
PONE-D-25-34169R2
PLOS ONE

Dear Dr. Shah,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing. If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Sohail Saif
Academic Editor
PLOS ONE
|
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.