Peer Review History
| Original Submission: August 17, 2025 |
|---|
|
Dear Dr. Jia,

Thank you for submitting your manuscript to PLOS ONE. If you have any questions, please contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Tomo Popovic, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3. Please note that funding information should not appear in any section or other areas of your manuscript.
We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript.

4. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections does not match. When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section.

5. Thank you for stating the following financial disclosure: “the Anhui Science and Technology University Science Foundation (Grant No. WDRC202103, XWYJ202301), the Open Project Program of Guangxi Key Laboratory of Digital Infrastructure (No. GXDIOP2024010), the Key Project of Natural Science Research of Universities in Anhui (Grant No. 2022AH051642), the Research and Development Fund Project of Anhui Science and Technology University (Grant No. FZ230122).” Please state what role the funders took in the study. If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript." If this statement is not correct you must amend it as needed. Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

6. We note that your Data Availability Statement is currently as follows: "All relevant data are within the manuscript and in Supporting Information files." Please confirm at this time whether or not your submission contains all raw data required to replicate the results of your study. Authors must share the "minimal data set" for their submission. PLOS defines the minimal data set to consist of the data required to replicate all study findings reported in the article, as well as related metadata and methods (https://journals.plos.org/plosone/s/data-availability#loc-minimal-data-set-definition).
For example, authors should submit the following data:
- The values behind the means, standard deviations, and other measures reported;
- The values used to build graphs;
- The points extracted from images for analysis.

Authors do not need to submit their entire data set if only a portion of the data was used in the reported study. If your submission does not contain these data, please either upload them as Supporting Information files or deposit them to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories.

If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization) and state who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. If data are owned by a third party, please indicate how others may request data access.

7. If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

Additional Editor Comments:
Reviewers' comments:

Reviewer's Responses to Questions

1. Is the manuscript technically sound, and do the data support the conclusions?
Reviewer #1: Yes
Reviewer #2: Partly

2. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: No

3. Have the authors made all data underlying the findings in their manuscript fully available (per the PLOS Data policy)?
Reviewer #1: Yes
Reviewer #2: No

4. Is the manuscript presented in an intelligible fashion and written in standard English?
Reviewer #1: Yes
Reviewer #2: No

Comments to the Author

Reviewer #1: The authors propose a new model, the Comparative Concept Tree, for interpretable crop pest and disease identification. It combines conceptual prototype trees, which provide transparent, hierarchical decision paths, with SimCLR contrastive learning to enhance feature representation and discrimination. Evaluation on three datasets shows improvements over baselines and existing prototype tree approaches, in both accuracy and interpretability. Please find my comments below:
- Please quantify the contributions in the Abstract.
- At the end of the Introduction, please provide an overview of the paper by sections.
- In Section 2, it would be of interest to readers to have tables that summarize related work and the advantages and disadvantages of the proposed model.
- Please check grammar/formatting issues (missing spaces before citations, inconsistent figure captions).
- The chosen baselines are quite good, but recent transformer-based models (e.g., Vision Transformers, Swin Transformer) are not included. This limits the novelty claim. Please comment.
- Interpretability is illustrated qualitatively (decision tree visualization), but no user study or quantitative interpretability metric (e.g., faithfulness, comprehensibility scores) is provided.
- It is unclear how much each component (prototype tree vs. SimCLR) contributes to the improvements. A controlled ablation experiment would strengthen the claims.
- The references are comprehensive, but mostly up to 2021. Adding 2022–2024 references on interpretable deep learning in agriculture would strengthen the related work.
- Please describe future work.

Reviewer #2: Thank you for an interesting manuscript. The idea of combining prototype-based reasoning with a soft decision tree and a SimCLR-style contrastive objective for plant-disease classification is well motivated. I especially appreciate the path visualizations through the tree, which help explain why the model reaches a decision. Results on AppleLeaf9, Cashew, and Cassava are encouraging and suggest practical promise.

What I still miss, to fully stand behind the conclusions, is a stronger foundation for reproducibility and fair comparison. Please provide concrete details for the contrastive component: the exact temperature (τ), the architecture and dimensionality of the projection head, the full augmentation recipe with parameter values and probabilities, and the normalization choices. The joint loss weighting between the classification and contrastive terms is introduced but not specified; state the value you use and how it was selected, and add a brief ablation/sensitivity analysis.

The optimization setup also needs to be unambiguous. Two learning rates are mentioned, but it is unclear which applies to the backbone and which to the prototypes/projection head; the scheduler is described only qualitatively, and the weight decay and optimizer momentum (or β1/β2) are not reported. Please provide a single, end-to-end training "recipe" that another group can follow without guesswork.

For the datasets, a transparent account of how the train/val/test splits were constructed and which random seeds were used is essential.
This is particularly important for AppleLeaf9, which aggregates multiple sources: without provenance-aware splits, leakage (the same scene/leaf appearing across splits) is a real risk. Ideally, include scripts that deterministically recreate the splits.

Regarding comparisons, I would like clear evidence that the baseline models were trained under exactly the same conditions (same pretraining, augmentations, scheduler, image size, and compute budget). Without that symmetry, performance gaps are hard to interpret. At present the results are single-point estimates; please add repeated runs with different seeds and report mean ± SD or confidence intervals. Given the class imbalance, macro-averaged metrics and per-class precision/recall/F1 should also be reported.

The interpretability story is appealing but remains qualitative. A few lightweight quantitative indicators would substantiate the claims: prototype purity (class consistency per prototype), path entropy/consistency through the tree, and distances/similarities to retrieved prototypes along the chosen route. Two or three worked examples (input, nearest prototypes at each node, branching probabilities, final decision) would greatly help readers.

On presentation, a round of language and typesetting edits would improve clarity; there are a few typographical issues and symbols that do not render cleanly in formulas. In one place you mention a Sigmoid while the task is standard multi-class (Softmax is expected); please align the terminology to avoid confusion.

The Data Availability statement needs correction. Because you rely on public datasets, please cite the exact sources/links and, consistent with PLOS ONE policy, release your code and trained weights (e.g., on GitHub). That will address the main reproducibility concerns. This is a good idea with practical potential.
If you complete the training details, ensure fair and statistically grounded comparisons, add a small quantitative slice to the interpretability evaluation, and release code/models with an updated availability statement, the manuscript could become a strong contribution. I recommend Major Revision.

Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
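The contrastive details the reviewer asks for (temperature τ, projection-head output, normalization) all enter through the standard SimCLR NT-Xent objective. For context, a minimal stdlib-only sketch of that loss; the temperature of 0.5 and the toy 2-D embeddings below are illustrative placeholders, not the authors' actual settings:

```python
import math

def nt_xent_loss(embeddings, temperature=0.5):
    """SimCLR NT-Xent loss over a batch of 2N embeddings, where
    embeddings[i] and embeddings[i + N] are two augmented views of
    the same image. Embeddings are L2-normalized, so the dot product
    is cosine similarity."""
    n2 = len(embeddings)
    n = n2 // 2

    def normalize(v):
        s = math.sqrt(sum(x * x for x in v))
        return [x / s for x in v]

    z = [normalize(v) for v in embeddings]
    sim = [[sum(a * b for a, b in zip(z[i], z[j])) for j in range(n2)]
           for i in range(n2)]

    loss = 0.0
    for i in range(n2):
        j = (i + n) % n2  # index of the positive (other view of same image)
        denom = sum(math.exp(sim[i][k] / temperature)
                    for k in range(n2) if k != i)
        loss += -math.log(math.exp(sim[i][j] / temperature) / denom)
    return loss / n2

# Two images, two views each; views of the same image point the same way.
batch = [[1.0, 0.0], [0.0, 1.0],   # view 1 of images A, B
         [0.9, 0.1], [0.1, 0.9]]   # view 2 of images A, B
print(f"NT-Xent loss: {nt_xent_loss(batch):.4f}")
```

Reporting τ, the projection head that produces these embeddings, and the weight that mixes this term with the classification loss is exactly the "recipe" the reviewer requests.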
| Revision 1 |
|
Dear Dr. Jia, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
Please submit your revised manuscript by Jan 30 2026 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Tomo Popovic, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

Reviewers' comments:

Reviewer's Responses to Questions

1. Responses to previous review comments:
Reviewer #1: All comments have been addressed
Reviewer #2: (No Response)

2. Is the manuscript technically sound, and do the data support the conclusions?
Reviewer #1: Yes
Reviewer #2: Yes

3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: N/A

4. Have the authors made all data underlying the findings in their manuscript fully available (per the PLOS Data policy)?
Reviewer #1: Yes
Reviewer #2: Yes

5.
Is the manuscript presented in an intelligible fashion and written in standard English?
Reviewer #1: Yes
Reviewer #2: Yes

Comments to the Author

Reviewer #1: (No Response)

Reviewer #2: Thank you for the careful and generally substantial revision. Many of my earlier points have been addressed. The model and contrastive branch are now described more clearly, the training procedure is much easier to follow, transformer-based reference models have been added, and the fidelity score is a useful first step toward quantifying interpretability. Results on all three datasets remain strong, and the path visualizations are helpful.

A few key items still prevent me from fully endorsing the paper. The combined loss is introduced, but the actual loss weight you use is never given, and there is no sensitivity check around this choice. Please state the exact value used in all main experiments, briefly explain how it was chosen, and show how performance changes for a few nearby values on at least one dataset.

For the data, I appreciate the description of fixed, stratified splits, but AppleLeaf9 still raises questions about possible overlap between images of the same leaf or scene across train and test. Either describe how you avoided this, or acknowledge that it was not possible and discuss the risk. Since you reuse the same split for all models, releasing the split files or a simple script to recreate them would greatly improve reproducibility.

On the comparisons, it is not fully clear whether your method and all baselines share the same pretraining steps. If your method uses extra pretraining (for example, on iNaturalist) that the others do not, this should either be equalized or analysed in an ablation. Because you already run each model several times, it would also be helpful to report the mean and standard deviation for at least accuracy, and to add macro precision/recall/F1 (with per-class numbers in a supplement if needed).
For interpretability, please give a precise, short description of how the fidelity score is computed, and add at least one worked decision example that shows an input image, the prototypes visited at each step, and the final choice. With these points resolved, the paper would be much stronger. At this stage I still recommend Major Revision.

Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

To ensure your figures meet our technical requirements, please review our figure guidelines: https://journals.plos.org/plosone/s/figures. You may also use PLOS's free figure tool, NAAS, to help you prepare publication-quality figures: https://journals.plos.org/plosone/s/figures#loc-tools-for-figure-preparation. NAAS will assess whether your figures meet our technical requirements by comparing each figure against our figure specifications. |
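The "simple script to recreate the splits" the reviewer asks for could look roughly like this. A minimal sketch, assuming a flat list of (filename, label) pairs; the 80/10/10 ratios and seed are illustrative, not the authors' actual settings, and it does not handle the provenance grouping the reviewer also raises:

```python
import random
from collections import defaultdict

def stratified_split(samples, seed=42, ratios=(0.8, 0.1, 0.1)):
    """Deterministically split (filename, label) pairs into
    train/val/test, stratified by label. The same seed always
    reproduces the same split."""
    by_label = defaultdict(list)
    for name, label in sorted(samples):  # sort first so input order is irrelevant
        by_label[label].append(name)
    rng = random.Random(seed)
    train, val, test = [], [], []
    for label in sorted(by_label):
        names = by_label[label]
        rng.shuffle(names)
        n_train = int(len(names) * ratios[0])
        n_val = int(len(names) * ratios[1])
        train += [(x, label) for x in names[:n_train]]
        val += [(x, label) for x in names[n_train:n_train + n_val]]
        test += [(x, label) for x in names[n_train + n_val:]]
    return train, val, test
```

Writing the three lists to plain-text split files (one image path per line) and committing them alongside the code would let other groups train every baseline on exactly the same partition.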
| Revision 2 |
|
Dear Dr. Jia,

The manuscript has been substantially improved and is now close to acceptance. Prior to a final decision, the authors are asked to carefully address the remaining minor points raised by Reviewer 2, which mainly concern making the experimental settings fully explicit, clarifying the dataset splitting and its potential limitations, ensuring complete statistical reporting across runs, and slightly strengthening the presentation of the interpretability analysis. These are minor, editorial-level revisions, and provided they are adequately addressed, I am comfortable recommending acceptance without further external review.

Please submit your revised manuscript by Mar 14 2026 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Tomo Popovic, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

1. If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

2. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

1. Responses to previous review comments:
Reviewer #2: All comments have been addressed

2. Is the manuscript technically sound, and do the data support the conclusions?
Reviewer #2: Yes

3.
Has the statistical analysis been performed appropriately and rigorously?
Reviewer #2: Yes

4. Have the authors made all data underlying the findings in their manuscript fully available (per the PLOS Data policy)?
Reviewer #2: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English?
Reviewer #2: Yes

Comments to the Author

Reviewer #2: Thank you for the careful and substantial revision. The paper is much clearer now. The model and contrastive branch are easier to understand, the training pipeline is presented in a way readers can actually follow, the transformer baselines strengthen the comparisons, and the fidelity score is a reasonable first step toward quantifying interpretability. Performance across all three datasets remains strong, and the decision-path visualizations make the method easier to trust. At this point, I think the work is essentially there. I only have a few small items that would make the final version cleaner and more reproducible:

• Please state the exact loss weight used in the combined loss for all main experiments, and add a one- or two-sentence note on how you settled on it. If you already tried a couple of nearby values, a small table in an appendix would be a nice extra, but the main thing is that the final setting is explicit.

• On AppleLeaf9, there is still a lingering question about possible overlap (the same leaf/scene appearing in both train and test). If you could not fully prevent this due to missing identifiers, just say so directly and briefly comment on how it might affect absolute performance. Since you use the same fixed split for every method, please also release the split files (or a small script that recreates them deterministically).

• Please make the pretraining setup completely explicit for all methods. If every method uses the same iNaturalist initialization, say that clearly and point to the exact checkpoints.
• Since you already run multiple seeds, reporting mean ± std for accuracy (and macro precision/recall/F1) would round out the evaluation, with per-class numbers moved to the supplement if space is tight.

• For interpretability, a short, precise definition of fidelity and one worked example (input, prototypes along the path, final class) would make that section much more concrete.

With these minor edits, I'm comfortable recommending acceptance after minor revision.

Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

To ensure your figures meet our technical requirements, please review our figure guidelines: https://journals.plos.org/plosone/s/figures. You may also use PLOS's free figure tool, NAAS, to help you prepare publication-quality figures: https://journals.plos.org/plosone/s/figures#loc-tools-for-figure-preparation. NAAS will assess whether your figures meet our technical requirements by comparing each figure against our figure specifications. |
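The mean ± std reporting over macro-averaged metrics that the reviewer requests takes only a few lines. A stdlib-only sketch, shown here for macro-F1; the labels and per-seed predictions below are made-up placeholders, not results from the manuscript:

```python
import statistics

def macro_f1(y_true, y_pred):
    """Macro-averaged F1: the unweighted mean of per-class F1 scores,
    so minority classes count as much as majority classes."""
    f1_scores = []
    for c in sorted(set(y_true)):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1_scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1_scores) / len(f1_scores)

# One prediction list per training seed (all values illustrative).
y_true = [0, 0, 1, 1, 2, 2]
per_seed_preds = [[0, 0, 1, 1, 2, 2],
                  [0, 1, 1, 1, 2, 2],
                  [0, 0, 1, 2, 2, 2]]
scores = [macro_f1(y_true, p) for p in per_seed_preds]
print(f"macro-F1 = {statistics.mean(scores):.3f} ± {statistics.stdev(scores):.3f}")
```

The same loop works for macro precision and recall; running it once per seed and reporting the mean ± std is exactly the table the reviewer asks for.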
| Revision 3 |
|
Interpretable Crop Pest and Disease Identification Based on Comparative Concept Tree
PONE-D-25-44747R3

Dear Dr. Jia,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Please note the suggested minor editorial issues that need to be addressed:
Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager and clicking the 'Update My Information' link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Tomo Popovic, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Minor issues of editorial nature that need to be addressed:
1) Language polishing: a professional copy-edit would improve fluency and reduce repetition, especially in Sections 1 and 2.
2) On contribution, consider explicitly stating that the contribution is method integration and empirical validation, not a new theoretical learning paradigm.
3) For figure captions, ensure all figures are fully interpretable without extensive reference to the main text.

Reviewers' comments: |
| Formally Accepted |
|
PONE-D-25-44747R3
PLOS ONE

Dear Dr. Jia,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:
* All references, tables, and figures are properly cited;
* All relevant supporting information is included in the manuscript submission;
* There are no issues that prevent the paper from being properly typeset.

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing. If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Prof. Tomo Popovic
Academic Editor
PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.