Peer Review History
| Original Submission: May 8, 2025 |
|---|
|
Dear Dr. Fan,

Specifically, the reviewers were concerned about the lack of detail in describing the parameter settings for the three feature selection methods. In addition, the rationale for selecting the top three base classifiers lacked clarity. One reviewer also stated that the accuracy rate of 73.91% for a screening test requires additional clarification.

When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols .

We look forward to receiving your revised manuscript.

Kind regards,
Colin Johnson, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that your Data Availability Statement is currently as follows: [All relevant data are within the manuscript and its Supporting Information files.]

Please confirm at this time whether or not your submission contains all raw data required to replicate the results of your study. Authors must share the "minimal data set" for their submission.
PLOS defines the minimal data set to consist of the data required to replicate all study findings reported in the article, as well as related metadata and methods (https://journals.plos.org/plosone/s/data-availability#loc-minimal-data-set-definition). For example, authors should submit the following data:

- The values behind the means, standard deviations, and other measures reported;
- The values used to build graphs;
- The points extracted from images for analysis.

Authors do not need to submit their entire data set if only a portion of the data was used in the reported study.

If your submission does not contain these data, please either upload them as Supporting Information files or deposit them to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories.

If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. If data are owned by a third party, please indicate how others may request data access.

3. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to 'Update my Information' (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager.
If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Yes
Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? (See the PLOS Data policy.)

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes
Reviewer #2: No

**********

Reviewer #1: Reviewer Comments:

I. Research Value and Innovation

Clear research significance with clinical application potential. The rising prevalence of allergic rhinitis (AR) and the limitations of current diagnostic methods (subjectivity, high cost, invasiveness) are well established. This study builds an intelligent screening model using routine blood test data, enabling early screening without additional tests. This holds significant practical value for primary healthcare, aligning with trends in precision medicine and AI-assisted diagnosis.

Moderate methodological innovation. The ensemble voting approach integrating multiple feature selection techniques (filter, embedded, and wrapper methods) with machine learning algorithms enhances feature selection reliability and model performance compared with single methods. The soft voting ensemble strategy, weighted by AUC to optimize the classifier combination, is logically sound.
Results demonstrate superior model performance over single algorithms (AUC = 0.862, external validation accuracy 73.91%), providing methodological reference value.

II. Method Design and Implementation

Data Source and Preprocessing

Strengths: Data originate from a single center but have a substantial sample size (n = 1295). Diagnostic criteria (combining symptoms and allergen testing) are clear. Grouping methods are reasonable.

Limitations: Inclusion only of patients from Hohhot introduces geographical limitations potentially affecting model generalizability; this should be explicitly acknowledged in the Discussion. Details of data preprocessing (e.g., missing value handling, outlier correction, feature standardization) are omitted, potentially impacting model stability. Supplementation is recommended.

Feature Selection and Model Construction

Strengths: The integrated hard voting method effectively cross-validates features across multiple strategies (retaining features with frequency ≥2). The final 16 retained features (e.g., eosinophil-related indices, RDW) align with AR immunopathological mechanisms (e.g., elevated eosinophils), demonstrating biological plausibility.

Limitations: Specific parameter settings for the three feature selection methods (mutual information, LASSO, RF-RFE), such as the LASSO penalty coefficient or the RF-RFE iteration count, are not detailed, hindering method reproducibility. The rationale for selecting the top three base classifiers ("based on AUC value") lacks clarity regarding the threshold criterion. Supplementation with the AUC ranking results for all algorithms and statistical tests (e.g., the Wilcoxon test) is recommended.

Model Evaluation

Strengths: Model generalizability is assessed using an external validation dataset. Multi-dimensional metrics (accuracy, AUC, etc.) are reported, and result visualization (confusion matrix) is clear.
Limitations: The source, sample size, and feature distribution of the external validation set are not specified. Clarification is needed on whether it is an independent cohort or cross-center data. Comparison with single algorithms is limited to AUC and accuracy; the absence of a calibration curve or decision curve analysis (DCA) makes it difficult to comprehensively evaluate clinical net benefit.

III. Result Analysis and Discussion

Result Presentation

Strengths: The Venn diagram effectively illustrates the intersection of different feature selection methods. The model performance comparison figure (Fig 3) visually demonstrates the advantage of the ensemble method.

Limitations: The feature importance ranking lacks discussion in the context of clinical significance (e.g., were eosinophil absolute count and percentage assigned the highest weights?). Supplementation with a feature contribution analysis (e.g., SHAP values or permutation importance) is recommended. Table 2 (Parameter Configuration) lacks complete presentation of some algorithm parameters (e.g., the SVM kernel function, the KNN neighbor count), hindering model replication. The current figure legend for the confusion matrix (likely Fig 5) lists only "False positive," "False negative," and "Correct prediction" categories without specific numerical values or percentages; adding these data would enhance clarity. Terminology must be consistent with definitions in the main text; briefly define these terms in the legend or text. Color legends (■) must correspond precisely to chart colors; if figures are intended for potential B&W printing, use distinct patterns (e.g., hatching, dots) instead.

Regarding data presentation: The meaning of numbers like "136, 19, 29" below charts is unclear (possibly sample counts per category or other metrics). Explicit labeling is required to prevent misinterpretation. If they are sample counts, supplement them with relevant statistical metrics (e.g., accuracy, recall) to bolster result credibility.
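The permutation importance analysis the reviewer recommends can be sketched as follows. This is an illustrative example only, run on synthetic data with hypothetical feature names standing in for the study's blood-test indices; it is not the authors' code, and scikit-learn is an assumed tool choice.

```python
# Illustrative sketch of permutation importance on synthetic data.
# Feature names are hypothetical stand-ins, not the study's actual variables.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["EO#", "EO%", "RDW-SD", "BASO#", "MCHC"]  # hypothetical

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature column in turn and record the mean drop in test score;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
ranking = sorted(zip(feature_names, result.importances_mean),
                 key=lambda t: -t[1])
for name, drop in ranking:
    print(f"{name}: {drop:.3f}")
```

Unlike model-internal importances, this measure is computed on held-out data, which is why the reviewer suggests it (or SHAP values) for interpreting clinical relevance.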
Formatting & Norms: Figure filenames (e.g., Fig5.tif) should follow journal style (e.g., "Figure5.tif" or "Fig5.png"). Ensure figure resolution meets journal requirements (typically ≥300 dpi).

Discussion Depth

Strengths: The Discussion objectively identifies study limitations (single data source, lack of symptom/exposure history integration) and proposes future directions (expanding data dimensions, incorporating multimodal information), presenting a clear logic.

Limitations: Comparison with similar studies is insufficient (e.g., Christo et al.'s 97.7% accuracy model). Analyze methodological differences (e.g., data type, algorithm choice) to clarify this model's specific applicability. The impact of dynamic changes in routine blood indicators (e.g., seasonal fluctuations in seasonal AR patients) on the model is not explored and could be a valuable extension.

IV. Ethics and Writing Standards

Ethical Compliance: Mention of Institutional Review Board (IRB) approval is noted, but details regarding patient informed consent and data anonymization procedures are missing. Supplementation of the ethics statement details is required.

Writing and Formatting

Strengths: The manuscript is well structured, the abstract is concise, and the references cover recent relevant studies in ML and AR diagnosis.

Limitations: Some figure/table labels are unclear (e.g., the missing y-axis unit in Fig 3, inconsistent table numbering). Standardize figure/table formatting. Redundant descriptions exist between the English abstract and main text (e.g., repeated steps of the ensemble voting in the Methods). Streamlining is advised.

V. Overall Recommendations

Major Revisions Required:

Data & Methods: Supplement data preprocessing details (missing value handling, standardization methods) and feature selection parameter settings. Clarify the source, sample size, and feature distribution of the external validation set, or consider adding multi-center validation.
Supplement AUC rankings for all base classifiers and provide statistical test results to justify selecting the top three. Supplement charts/tables with specific data and explanations to ensure the information is complete and easily understandable. Check terminology and formatting for consistency with the journal's author guidelines.

Results & Discussion: Add a feature importance analysis (e.g., SHAP values) and interpret key features (e.g., EO#, RDW-SD) in the context of clinical literature and their biological significance. Supplement calibration curve and decision curve analysis (DCA) to evaluate clinical utility. Compare the advantages, disadvantages, and innovation of this model against similar ML models for AR. Provide deeper discussion of chart/table results, explicitly linking them to the research objectives.

Ethics & Formatting: Enhance the ethics statement to include informed consent procedures and data privacy protection measures. Correct figure/table labeling errors, unify table numbering, and eliminate redundant text.

Minor Revisions Suggested (Optional Optimizations): Explore the interaction effect of demographic features (age, sex) with routine blood indicators on AR. Compare other ensemble strategies (e.g., stacking) with the soft voting method used here to further validate model robustness.

Summary: This study constructs an intelligent AR screening model based on routine blood test data. The methodology is sound, and the results hold clinical relevance. Enhancing data diversity, feature interpretability, and model evaluation dimensions would significantly strengthen its scientific rigor and persuasiveness. The authors are recommended to undertake a major revision addressing the points above before resubmission.

Reviewer #2: I agree that there are few objective tests for allergic rhinitis and that this poses several challenges. I have a few comments:

1. I have reservations about whether the blood tests presented in the study can be fully conducted at primary care facilities rather than at tertiary hospitals. Screening tests for allergic rhinitis should ideally be low-cost and accessible at the primary care level.

2. The use of hard and soft voting methods combining KNN, LR, RF, DT, and SVM models appears to yield improved results compared with previous approaches. However, the accuracy rate of 73.91% for a screening test requires cautious interpretation. It is recommended that additional clarification or discussion of the limitations be provided to contextualize this result appropriately.

3. A variety of inference models are employed, and the selection process for these models is considered important. Although numerous references are cited, it would be beneficial to include a clear rationale for the selection of KNN, LR, RF, DT, and SVM. While the performance evaluation and validation process using voting methods is illustrated schematically, the presented models appear to be basic validation models. Therefore, it would be helpful to provide an explanation addressing the absence of more recent models to enhance the study's relevance and rigor.

4. While utilizing the abundant data available in China is advantageous, considering that academic journals are read by a global audience, it would be more beneficial to frame the study from a perspective that offers utility to all patients, rather than focusing solely on advancements in the Chinese healthcare system.

5. It would be nice to add demographic data.

6. It might be good to have your English proofread. Overall, it is understandable, but a review of spacing and similar details is necessary.

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Young Joon Jun, M.D., Ph.D.

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/ . PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org . Please note that Supporting Information files do not need this step.
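The ensemble strategy discussed by both reviewers (selecting the top three of KNN, LR, RF, DT, and SVM by AUC, then combining them with AUC-weighted soft voting) can be sketched as follows. This is an illustrative reconstruction on synthetic data under assumed hyperparameters, not the authors' actual pipeline or settings.

```python
# Illustrative sketch: rank five base classifiers by validation AUC, keep the
# top three, and combine them with soft voting weighted by their AUC values.
# Synthetic data and all hyperparameters are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=16, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

models = {
    "knn": KNeighborsClassifier(),
    "lr": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(random_state=0),
    "dt": DecisionTreeClassifier(random_state=0),
    "svm": SVC(probability=True, random_state=0),  # soft voting needs probabilities
}

# Rank the base classifiers by validation AUC and keep the top three.
aucs = {}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_val, m.predict_proba(X_val)[:, 1])
top3 = sorted(aucs, key=aucs.get, reverse=True)[:3]

# Combine the top three with soft voting, each weighted by its AUC.
ensemble = VotingClassifier(
    estimators=[(n, models[n]) for n in top3],
    voting="soft",
    weights=[aucs[n] for n in top3],
).fit(X_tr, y_tr)
print("top three:", top3)
print("ensemble AUC:", roc_auc_score(y_val, ensemble.predict_proba(X_val)[:, 1]))
```

Soft voting averages predicted class probabilities (here with AUC-proportional weights) rather than counting discrete votes, which is why every base model must expose `predict_proba`.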
|
| Revision 1 |
|
Construction of an Intelligent Screening Model for Allergic Rhinitis Based on Routine Blood Tests

PONE-D-25-12740R1

Dear Dr. Fan,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up to date by logging into Editorial Manager and clicking the 'Update My Information' link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Colin Johnson, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? (See the PLOS Data policy.)

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes
Reviewer #2: Yes

**********

Reviewer #1: The authors have made thorough revisions in line with the suggestions, and I agree to accept this research article.

Reviewer #2: The previously mentioned points have been appropriately addressed. As a screening tool, it may have clinical utility; however, broader, generalizable use will require further research. The authors appear to have recognized this point and acknowledged it as a limitation.

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Young Joon Jun

********** |
| Formally Accepted |
|
PONE-D-25-12740R1

PLOS ONE

Dear Dr. Fan,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing. If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Colin Johnson
Academic Editor
PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.