Peer Review History
| Original Submission: May 31, 2025 |
|---|
|
PONE-D-25-28806
E-RESPINET: AN LLM-ELECTRA DRIVEN TRIPLE-STREAM CNN WITH FEATURE FUSION FOR ASTHMA CLASSIFICATION
PLOS ONE

Dear Dr. Tawfik,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

Your manuscript has been reviewed, and the reviewers' comments are now available. The reviewers have recommended considering the manuscript for major revision. Authors are expected to revise the manuscript according to each of the reviewers' comments.

==============================

Please submit your revised manuscript by Sep 18 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Nishi Shahnaj Haider, Ph.D.
Guest Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3.
If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: No
Reviewer #3: Yes
Reviewer #4: Yes
Reviewer #5: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: No
Reviewer #3: Yes
Reviewer #4: Yes
Reviewer #5: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: No
Reviewer #3: Yes
Reviewer #4: No
Reviewer #5: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: No
Reviewer #3: Yes
Reviewer #4: No
Reviewer #5: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The manuscript presents a technically sound and methodologically rigorous study. The authors propose a novel multi-modal deep learning architecture, E-RespiNet, that combines ELECTRA-based discriminative pre-training with a triple-stream CNN architecture for respiratory sound classification. The integration of three distinct acoustic representations—MFCCs, wavelet transforms, and mel-spectrograms—demonstrates a well-thought-out approach to capturing the richness of respiratory audio signals. The hierarchical fusion network and contextual enhancement via ELECTRA further strengthen the architecture's learning capacity.

The experimental design is comprehensive and includes:
* Multi-class classification tasks on two distinct clinical datasets.
* Binary classification for asthma detection.
* Cross-institutional validation to assess generalization.
* Ablation studies for component-wise contribution analysis.
* Explainability analysis using Grad-CAM to support clinical interpretability.

The results are compelling, with state-of-the-art accuracy scores (up to 98.3% in multi-class tasks and 96.23% in binary classification) and strong Matthews Correlation Coefficient values, indicating robust performance.
Importantly, the authors acknowledge the model’s limitations regarding cross-domain generalizability, providing a realistic and transparent view of the system's current applicability in clinical practice. The dataset sizes (1,211 and 940 samples, respectively) are reasonable given the constraints of clinical audio collection, and the use of data augmentation helps mitigate potential overfitting. While the generalization gap of ~25% in cross-institutional tests indicates caution for immediate deployment, it also aligns with known challenges in medical AI and is appropriately discussed.

The statistical analysis in the manuscript appears to have been performed appropriately and with sufficient rigor. The authors report standard and relevant performance metrics for classification tasks, including:
* Accuracy for both multi-class and binary classification.
* Matthews Correlation Coefficient (MCC) for binary asthma detection, which is a more informative metric than accuracy for imbalanced data.
* Cross-institutional validation metrics, including average accuracy and generalization gap, which are crucial for assessing the model’s robustness across domains.

Additionally, the study includes:
* Ablation studies, which effectively isolate the impact of different architectural components (e.g., each acoustic modality stream, the hierarchical fusion mechanism, and the ELECTRA module). This helps confirm that each contributes meaningfully to the overall performance.
* Comparison with state-of-the-art methods, showing performance improvements and justifying the novel approach.

However, while the reported metrics are comprehensive, the manuscript would benefit from explicitly including the following statistical elements for completeness and transparency:
* Confidence intervals or standard deviations for accuracy and MCC scores across multiple runs (to account for variance in training and test splits).
* Statistical significance testing (e.g., paired t-tests or McNemar's test) when comparing E-RespiNet with baseline models, to validate that the observed performance improvements are not due to random chance.

The manuscript is generally well-written and presented in standard, academic English. The structure is logical, and the flow of ideas from the problem statement through methodology, experiments, results, and conclusions is coherent and easy to follow.

Minor Suggestions for Improvement:
* Sentence Length: Some sentences are quite long and packed with multiple ideas, which may reduce readability. Consider breaking these into shorter, more digestible parts for better clarity.
* Transitions: A few paragraphs, especially in the results and conclusion sections, could benefit from smoother transitions to maintain narrative flow.
* Typographical Issues: A few minor formatting inconsistencies (such as spacing around numbers or acronyms) were observed. These do not significantly affect readability but should be cleaned up in the final version.

Reviewer #2: Here are several high-impact grounds on which a reviewer could recommend outright rejection of this manuscript: PONE-D-25-28806

1. Insufficient Novelty: The core idea—multi-stream CNNs on MFCCs, wavelets, and mel-spectrograms—has been explored extensively in prior respiratory sound studies (e.g., Wang et al. 2022; Lee et al. 2023). The authors’ adaptation of ELECTRA adds only a marginal ~2% gain and does not constitute a clear conceptual advance.

2. Questionable Generalization and Overfitting: A 25.5% drop in accuracy under cross-domain testing signals severe overfitting to the training sites. This level of performance degradation far exceeds acceptable thresholds for clinical tools and suggests the model does not truly capture disease-relevant features.

3. Lack of Statistical Rigor: No confidence intervals, p-values, or hypothesis tests are reported for any comparisons.
Without measures of variance or significance testing, the claimed “98.3% accuracy” cannot be trusted as anything beyond dataset-specific chance performance.

4. Opaque Methodological Reporting: Critical details (e.g., random seed settings, exact train/validation splits, hardware specs, hyperparameter search ranges) are missing. This violates basic reproducibility standards for machine learning studies and precludes any possibility of replication.

5. No Clinical or Ethical Oversight: There is no mention of IRB approval, participant consent, or real-world deployment constraints. For a paper claiming medical diagnostic utility, this omission is a fundamental breach of publication ethics.

6. Excessive Model Complexity with Unrealistic Deployment Path: A 119 M-parameter architecture requiring multiple augmentation steps and high-end GPUs is impractical for most clinical settings, especially in low-resource environments. No discussion of model pruning, latency, or power requirements is provided.

Recommendation to Reject: Given the combination of marginal novelty, overfitting, lack of statistical validation, irreproducibility, and ethical omissions, this manuscript falls well below the standards for a publishable, clinically relevant study. I therefore recommend outright rejection.

Reviewer #3: The manuscript titled "E-RESPINET: AN LLM-ELECTRA DRIVEN TRIPLE-STREAM CNN WITH FEATURE FUSION FOR ASTHMA CLASSIFICATION" presents E-RespiNet, a novel deep learning architecture that combines a triple-stream CNN with a modified ELECTRA transformer to perform multi-class classification of respiratory sound recordings. The work is methodologically sound, with detailed experiments conducted on two independent clinical datasets (Asthma Detection Dataset V2 and KAUH). The results demonstrate excellent performance, surpassing existing state-of-the-art methods. The manuscript is clear, well-organized, and suitable for publication in PLOS ONE after minor revisions.
Strengths:
* Methodological Rigor: The manuscript provides a comprehensive description of model architecture, feature extraction (MFCC, wavelet, mel-spectrogram), and training procedures.
* High Performance: Achieves 98.30% and 98.00% classification accuracy on two separate datasets, showing both robustness and generalizability.
* Extensive Evaluation: Includes performance with/without data augmentation, cross-institutional testing, ablation studies, and interpretability analysis using Grad-CAM.
* Clinical Relevance: The model targets critical respiratory conditions and demonstrates potential for use in resource-constrained healthcare settings.
* Statistical Validity: Paired t-tests confirm that improvements from augmentation are statistically significant (p < 0.001).

Minor Revisions Requested:
* Clarify ELECTRA’s Role: Although ELECTRA is traditionally a language model, it is used here for acoustic signal processing. Please clarify how pretraining was adapted or fine-tuned, and justify its effectiveness in this non-text domain.
* Notation Improvements: Ensure consistent formatting and clarity in mathematical expressions in Sections 3.2 and 3.2.1 (e.g., subscripts, equation alignment, variable definitions).
* Limitations Summary: Although the manuscript discusses generalization gaps, a brief summary of limitations at the end of the conclusion would improve transparency (e.g., need for more diverse datasets, hardware constraints, or need for clinical trials).
* Grad-CAM Figures: Consider adding 1–2 more representative figures showing Grad-CAM-based class activation maps to reinforce the interpretability claim.
* Reproducibility: The manuscript would benefit from a public code repository or a statement regarding the availability of code and models.

Conclusion: This is a well-executed and clearly presented study with practical clinical implications. The proposed model demonstrates strong performance and methodological robustness across datasets and tasks.
With minor revisions to improve clarity and reproducibility, I recommend that this manuscript be accepted for publication in PLOS ONE.

Reviewer #4: The manuscript is technically promising, with strong experimental results, but model validation, generalization, and reproducibility are of concern. A more concise presentation and improvements in visual quality would also enhance the overall impact. The authors may wish to address this in their revisions of the manuscript.

1. Novelty and Justification for ELECTRA Integration: Although the use of ELECTRA in an audio-based diagnostic model is novel, the paper does not present compelling arguments for choosing it over other audio-specific transformer models like Wav2Vec 2.0 or AST. In the absence of comparison or ablation studies, the advantages of ELECTRA remain a claim.

2. Practical Limitations and Deployment Feasibility: The processor-heavy nature of the proposed model (ELECTRA with 12 transformer layers and three parallel CNNs) puts into question the feasibility of deploying it in low-resource settings. The manuscript gives no indication of inference time, memory footprint, or hardware requirements, making it hard to judge how this work would apply to the environments the authors propose.

3. Writing Redundancy and Overstatements: The manuscript tends to make the same points repeatedly, such as the novelty of the integration of ELECTRA and the three-stream CNN. Terms such as “first adaptation” or “new paradigm” are applied without appropriate substantiation. A more muted and neutral tone would be clearer and more credible.

4. Low Quality of Figures: The resolution of Figures 2 and 4 is poor. The text is difficult to read, and visual details are blurred. The authors should replace these with high-resolution vector graphics to ensure clarity for readers and reviewers.

5.
Explainability and Visualization: Although Grad-CAM is referenced as an interpretability technique, there is no clear visual depiction or discussion of the way in which the Grad-CAM outputs correlate with clinical decisions. This undermines the explainability of the AI. It would also be useful to include visualizations and expert interpretations of sample cases.

6. Inconsistent Formatting and Spacing Throughout the Manuscript: Throughout the manuscript, there are several inconsistencies in formatting, including irregular line spacing, mismatched fonts, inconsistent use of equation formatting, and uneven paragraph alignment. These issues make the paper difficult to read and professionally review. A thorough formatting revision is necessary to ensure clarity, readability, and adherence to the journal’s submission standards.

Reviewer #5: The paper presents a technically solid and well-executed approach to automated respiratory disease classification using a novel triple-stream CNN architecture integrated with ELECTRA-based contextual enhancement. The model achieves state-of-the-art performance on two clinical datasets, and the extensive data augmentation pipeline, cross-domain evaluation, and inclusion of explainable AI techniques demonstrate both practical relevance and research rigor. The idea of adapting a language model like ELECTRA to acoustic signal enhancement is innovative and, if further validated, could open up new directions in cross-modal learning.

That said, I have a few concerns regarding the clarity, justification, and completeness of various sections of the manuscript. To provide constructive and actionable feedback, I have divided my review into section-wise suggestions as follows.

INTRODUCTION

1. Currently, ELECTRA is introduced as an LLM, but its actual role in audio feature enhancement is only explained much later.
Please clarify early in the Introduction that ELECTRA is used in a discriminative way to enhance audio features, not for text generation or NLP tasks.
2. Please reduce repetition around the challenges in asthma diagnosis to improve flow and make room for more technical insights.
3. Please condense the statistics into a single paragraph and improve transitions to the motivation for using deep learning.
4. Add stronger justification for why ELECTRA over BERT and other transformers.

RELATED WORK

1. Please reorganize this section for clarity. Consider grouping prior works into 3–4 clear themes, such as CNN-based models, hybrid or transformer models, etc.
2. Please add a brief paragraph at the end of this section summarizing limitations in existing work.
3. Some studies (e.g., Wanasinghe et al., Shi et al.) are mentioned multiple times with slight rewording. Please avoid repeating the same studies more than once unless comparing them.
4. Use a consistent citation style throughout the section and avoid mixing narrative-style citations with bracketed numbers inconsistently.

METHODOLOGY

1. Please clarify whether sampling rate differences between datasets were handled (e.g., upsampling KAUH?), and whether this affects performance/generalization.
2. Add a summary table for the datasets.
3. Please briefly justify why MFCC, wavelets, and mel-spectrogram were selected. Do they capture complementary properties?
4. Add a table summarizing all augmentations.
5. Please add a small experiment/table showing model accuracy with vs. without augmentation to quantify its impact.
6. Please specify whether each CNN stream has shared or independent weights. Also include parameter counts or FLOPs per stream if possible.
7. Please include a comparison or short ablation study to show that ELECTRA performs better than simpler alternatives like BiLSTM, a vanilla transformer, or no LLM.
8.
Please clarify whether ELECTRA weights are frozen or fine-tuned during training, and whether it was pre-trained on any acoustic data.
9. Please revise variable naming in formulas (e.g., x_cutmix) to use formal mathematical notation instead of code-style identifiers.
10. Please put core equations only in the main paper, and move extended derivations to an appendix.
11. Please include a baseline comparison with early fusion or simple concatenation to justify this complex attention-based method.

EXPERIMENTS & RESULTS

1. Please include information about seed initialization and environment setup (e.g., PyTorch version, random seed, CPU/GPU determinism settings) to improve reproducibility.
2. Please include ROC-AUC and PR-AUC metrics for both multi-class and binary asthma detection, as these are critical for medical AI evaluation.
3. Please report confidence intervals (e.g., via bootstrapping) or standard deviations across multiple runs to account for random training variation.
4. Please report average training time per epoch and total training duration.
5. Please clarify: was ELECTRA fine-tuned end-to-end with your acoustic model, or were some layers frozen? Was any domain-specific pretraining used?
6. Please include a table with per-class precision, recall, F1-score, and support for all five classes. This is especially important for diseases like asthma vs. bronchial vs. COPD, which have overlapping symptoms.
7. Please discuss what level of accuracy or MCC is clinically meaningful for asthma detection. Do these results meet those thresholds?
8. Please include a baseline comparison for binary asthma detection.
9. Please include a sensitivity-specificity curve (or cost-benefit discussion) for the binary classifier.
10. Please discuss why cross-dataset performance drops sharply. Is it due to different recording devices (e.g., 4 kHz vs. 22 kHz), patient demographics, or class imbalance?
11.
Please add an experiment where ELECTRA is removed (or replaced by BiLSTM/transformer-lite) to show the gain in using ELECTRA for feature enhancement.
12. Please include a study where one stream is removed at a time to show its individual impact on performance.

CONCLUSION & FUTURE WORK

1. Please revise the conclusion to be more precise and cautious.
2. If any sentence in the conclusion claims improvements to real-world diagnosis or healthcare outcomes, it should be backed up with references or qualified as “potential.”
3. The conclusion should not repeat the full abstract; just summarize what was done, what results you got, and what remains to be done.

Figures

1. Please ensure all figures are properly included.
2. Improve figure resolution and provide clear figures with better and larger fonts.

Please ensure that the manuscript avoids ChatGPT-like or overly generic phrasing.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No
Reviewer #4: No
Reviewer #5: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements.
To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
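Reviewers #1, #2, and #5 all ask the authors to report bootstrap confidence intervals and paired significance tests (e.g., McNemar's test) when comparing E-RespiNet against baselines. As an illustrative sketch only, not drawn from the manuscript under review, both checks can be implemented in a few lines of pure Python, assuming per-sample correctness indicators and paired model predictions are available:

```python
import random

def bootstrap_ci(correct, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for accuracy over per-sample 0/1 outcomes."""
    rng = random.Random(seed)
    n = len(correct)
    # Resample with replacement n_boot times and collect the accuracies.
    stats = sorted(
        sum(rng.choice(correct) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def mcnemar_chi2(y_true, pred_a, pred_b):
    """McNemar's chi-square statistic (with continuity correction).

    b = samples only model A classifies correctly,
    c = samples only model B classifies correctly.
    Compare the statistic against the chi2(1) critical value 3.84
    for significance at p < 0.05.
    """
    b = sum(1 for t, a, p in zip(y_true, pred_a, pred_b) if a == t and p != t)
    c = sum(1 for t, a, p in zip(y_true, pred_a, pred_b) if a != t and p == t)
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)
```

Here `bootstrap_ci` gives a percentile interval for one model's accuracy, while `mcnemar_chi2` tests whether two models' paired error patterns differ; both function names and the per-sample input format are assumptions for illustration, not the manuscript's actual evaluation code.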
|
| Revision 1 |
|
E-RESPINET: AN LLM-ELECTRA DRIVEN TRIPLE-STREAM CNN WITH FEATURE FUSION FOR ASTHMA CLASSIFICATION
PONE-D-25-28806R1

Dear Dr. Tawfik,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Nishi Shahnaj Haider, Ph.D.
Guest Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #3: All comments have been addressed
Reviewer #5: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #3: Yes
Reviewer #5: (No Response)

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #3: Yes
Reviewer #5: (No Response)

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #3: Yes
Reviewer #5: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #3: Yes
Reviewer #5: (No Response)

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #3: The revised manuscript “E-RESPINET: An LLM-ELECTRA Driven Triple-Stream CNN with Feature Fusion for Asthma Classification” (PONE-D-25-28806R1) has been significantly improved and now provides a strong and well-supported contribution to the field of automated respiratory sound analysis. I appreciate the careful effort the authors have taken to address reviewer concerns. The inclusion of a detailed theoretical justification for ELECTRA’s selection over alternative transformer architectures, combined with expanded ablation studies, strengthens the manuscript considerably. The enhancements to the explainability framework through Grad-CAM visualizations, with clear clinical correlations, add important interpretability that is highly relevant for medical AI applications.

The authors also responded well to concerns regarding novelty, statistical rigor, and cross-institutional generalization. The updated results and discussion appropriately contextualize the contributions within existing literature and highlight both the strengths and limitations of the proposed method. The added section on real-time system implementation demonstrates the practical potential of the work and underscores its translational value. From a presentation perspective, the manuscript now benefits from consistent LaTeX formatting, clearer equations, improved figure quality, and a more concise and professional writing style. These revisions greatly enhance readability and scientific clarity.
Overall, the paper presents a novel, methodologically sound, and clinically relevant approach that advances the state of the art in respiratory sound classification. The revisions have adequately addressed the previous concerns, and I find the manuscript suitable for publication in PLOS ONE.

Recommendation: Accept

Reviewer #5: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #3: No
Reviewer #5: No

********** |
| Formally Accepted |
|
PONE-D-25-28806R1
PLOS ONE

Dear Dr. Tawfik,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team. At this stage, our production department will prepare your paper for publication. This includes ensuring the following:
* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing. If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Nishi Shahnaj Haider
Guest Editor
PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.