Peer Review History
| Original Submission (May 10, 2022) |
|---|
|
PONE-D-22-13708
External Evaluation of the Dynamic Criticality Index: A Machine Learning Model to Predict Future Need for ICU Care in Hospitalized Pediatric Patients.
PLOS ONE

Dear Dr. Patel,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please pay close attention to the methodology clarifications raised by the reviewers.

Please submit your revised manuscript by Aug 20, 2022, 11:59 PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript. Kind regards, Bobak J. Mortazavi, PhD, Academic Editor, PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.
Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized. Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. We will update your Data Availability statement to reflect the information you provide in your cover letter.

3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Yes
Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No
Reviewer #2: Yes
Reviewer #3: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above.
You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors compared the performance of an existing model with a customized model on an independent center's healthcare data, for predicting the need for ICU care in hospitalized pediatric patients. While the exercise is interesting and signals caution for applying existing models for local clinical utility, there is somewhat weak support for the implications that the authors wish to draw from their study. Some comments include:
- When comparing the performance of the existing model with the new model, they should be evaluated on the same cohort. So if the new model is evaluated on a test set, the existing model should be evaluated on that set too.
- There is always a need to recalibrate a model if there is a difference in the prevalence of the outcome between the development dataset and the test set. Otherwise, performance metrics based on a threshold on the probabilistic output will be off. This is expected.
- The authors concluded that while the model developed on multi-institutional data has poor performance, the methods can be used to develop customized local models. However, a large number of variables were included in the model, the original model does not go through variable selection, and it is unclear whether the architecture of the model was optimized for the problem. Therefore, it looks like the single-center model can be developed without borrowing any knowledge from the existing model. What then is the value of the multi-institutional model?
- The authors also mentioned that the degradation of performance is probably due to hospital characteristics. If so, the original model should include such information and be made adaptable to different hospital characteristics. Would adjusting for a center's characteristics improve performance on that center's data? Given that the multi-institutional model yields almost no discrimination on the single center's data, it is questionable whether anything meaningful has been learned or whether the model is completely overfitting. Without digging into these issues and truly understanding the root cause of the performance degradation, the study is not going to produce much information for future investigations.

Reviewer #2: The authors evaluate whether a machine learning model previously developed from a multi-institutional database will transfer to a new site. The outcome of interest is whether hospitalized children will need ICU care over a variety of time horizons. The authors find that directly applying the model to the new site leads to very poor performance across a variety of metrics. They do find, however, that the model development process transfers quite well. By following the same development procedure (e.g. variable selection, model design, etc.) as the multi-institutional model, the authors are able to develop a model that performs well when trained with data from the new site. This demonstrates that even when the model itself may not transfer across different sites, the modeling methodology may transfer effectively. The paper is well-written and the motivation of the work is clear. The presentation of results is comprehensive and methodologically sound. The problem of robustness across different institutions is a key challenge in the development and deployment of ML models for healthcare applications. The finding that the modeling methodology may transfer effectively even if the model itself does not is an interesting one that has practical relevance. I realize that a significant focus of this work was to replicate the methodology of prior work, which determined the modeling decisions.
However, I think it would still be beneficial to see a comparison with standard baselines such as logistic regression and decision-tree algorithms (e.g. XGBoost) to put the results into context. Is the neural network necessary for strong performance? The authors acknowledge the drawbacks of utilizing a neural network (e.g. interpretability), but do not demonstrate whether there are benefits to using deep learning instead of other approaches. Given that a neural network is being used, it would also be valuable to report full implementation details to aid replication, even if the information is relegated to the appendix. The only details I see are that it is a fully connected network with 5 hidden layers. How many hidden units were used? How was the network trained (optimizer, learning rate, etc.)? Were regularization techniques used (L2 regularization, dropout, etc.)? How were the hyperparameters selected? There is some discussion of why the model may have failed to transfer, but it would be valuable to gain more insight into this. For example, the calibration plot shows that the multi-institutional model tends to be underconfident in the new setting. Is there any insight into why this may be the case? I may be missing something, but in Figure 1a (> 12 − 18 Hours) the AUROC is reported as .493, yet the ROC curve appears to lie above the dashed y = x line. Shouldn't the AUROC be > 0.5 in that case?

Reviewer #3:
Strengths:
• This study is based on a decent dataset with a large number of examples. The authors provide detailed information about the dataset (Table 1), including admissions, age, gender, etc.
• The authors provide thorough evaluation metrics, including sensitivity, precision, accuracy, specificity, negative predictive value, F1, AUROC, and AUPRC, giving a comprehensive understanding of model performance.
Weaknesses:
• The experiment settings lack detail. The authors mention that 75% of patients were randomly selected as training data, 13% for validation, and 12% for testing (page 6). Please state how many repeated experiments (or cross-validation? I assume not) were done.
• More explanation of the data and data preprocessing is needed. The authors state that “missing data were imputed” (page 7). Please add more detail on how the missing data were imputed (zeros, previous values, average values, etc.). In addition, what is the frequency of the data? What features are chosen? Please make these clear in the paper.
• The paper lacks background information and related work. Please add citations to other methods solving this problem, and please explain why it is important to evaluate the CI-D model on individual sites.
• The paper needs baseline models for comparison. I understand that this paper is an evaluation of a previously proposed model; however, testing only the CI-D model is not convincing. The authors should introduce other methods to compare with the CI-D model to show whether CI-D works.
• The results in Table 2.A seem poor, e.g., F1 scores are below 0.02. The authors mention this (first paragraph of page 10). Please provide some explanation or analysis of why performance is so low here.
• In Figure 1b, how is it possible that the precision-recall curve decreases at the beginning and then increases (the valley of precision values below 0.6 when recall is around 0)? Please check the experimental results and provide an explanation.
• Tables 2 and 3 are not easy to understand. Please highlight the key results.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public.
Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: Yes: Lida Zhang

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
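As background for Reviewer #1's recalibration point, the effect of a prevalence shift on a model's probabilistic output can be sketched with a standard prior-correction formula (an illustrative sketch, not the authors' method; the prevalence numbers below are hypothetical):

```python
def adjust_for_prevalence(p, train_prev, target_prev):
    """Shift a predicted probability p from a model trained where the
    outcome prevalence was train_prev to a site where it is target_prev,
    by rescaling the predicted odds by the ratio of the target-site odds
    to the training-set odds (a standard prior correction)."""
    odds_ratio = (target_prev / (1 - target_prev)) / (train_prev / (1 - train_prev))
    adjusted_odds = (p / (1 - p)) * odds_ratio
    return adjusted_odds / (1 + adjusted_odds)

# Hypothetical example: a model trained where 10% of intervals led to ICU
# care, applied at a site where only 2% do; a raw score of 0.30 is
# revised sharply downward.
raw = 0.30
recalibrated = adjust_for_prevalence(raw, train_prev=0.10, target_prev=0.02)
```

Because this rescaling is monotone in p, rank-based metrics such as AUROC are unchanged by it, but any fixed probability threshold (and hence sensitivity, precision, and F1) is not, which is why the reviewer calls degradation of threshold-based metrics "expected."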
| Revision 1 |
|
PONE-D-22-13708R1
External Evaluation of the Dynamic Criticality Index: A Machine Learning Model to Predict Future Need for ICU Care in Hospitalized Pediatric Patients.
PLOS ONE

Dear Dr. Patel,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please address the reviewer comments on the revision approach.

Please submit your revised manuscript by Jan 27, 2023, 11:59 PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

We look forward to receiving your revised manuscript. Kind regards, Bobak J. Mortazavi, PhD, Academic Editor, PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #3: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: (No Response)
Reviewer #3: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: (No Response)
Reviewer #3: (No Response)

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #1: (No Response)
Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: (No Response)
Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #3: 1. The goal of the paper is to evaluate the CI-D model, assess the single-site performance of CI-D developed from a multi-institutional database, and introduce neural networks to predict future need for ICU care.
Is the neural network intended to help evaluate the CI-D model?
2. The contribution of the paper is confusing. It is fine to evaluate the CI-D model and tell readers where it performs well and where it does not; however, what can the reader learn from the neural network? Is it a newly proposed method to evaluate another algorithm? Please revise the introduction and state the contributions.
3. The machine learning model and the application part have flaws and are not well explained:
a) Why are there four separate models, and why these four intervals? (I believe there is medical support behind this, but please emphasize it in the paper.)
b) What are the neural networks predicting? Is the patient outcome a classification problem or a regression problem? (I assume classification because AUROC and AUPRC are the evaluation metrics, but please make it clear in the paper.)
c) The authors stated that the objective of each training epoch was to maximize the Matthews correlation coefficient and minimize the cross entropy between the predicted score and the patient's outcome; however, it is still not clear what the loss/cost function is. Is it cross entropy only, or a joint loss of cross entropy and a (reversed) correlation coefficient?
d) What does the data look like? Is it time series? Discrete numeric features? Other types of features?
e) What exactly is the structure of the neural network? The authors started with a neural network with a single hidden layer and tried many things later, but did not state their final network design, nor did they show which attempts helped and which did not.
f) What neural network model is applied? Fully connected network? RNN/LSTM? CNN?
4. We previously suggested that the authors add other models for comparison: a) traditional machine learning such as random forests or logistic regression; b) an LSTM if the data are time series. Neural networks do not always perform better than traditional machine learning models, and if the data are time series, an LSTM is a better-suited network than a fully connected one.
5. The network design is ad hoc. The authors tried a few things and searched for what works best, which is not an ideal way to design neural networks. Network design needs reasons and domain-knowledge support. Trying many things does not tell readers anything useful when they want to apply the paper to their own work, and there are always new techniques to try.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #3: Yes: Lida Zhang

********** |
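On Reviewer #3's question 3c, one way a single loss can jointly "minimize cross entropy and maximize the Matthews correlation coefficient" is to subtract a differentiable "soft" MCC (computed from probabilities rather than hard 0/1 predictions) from the cross-entropy term. A minimal sketch of what such a joint objective could look like (an assumption for illustration, not the authors' actual loss; the weight `lam` is hypothetical):

```python
import math

def joint_loss(probs, labels, lam=0.5, eps=1e-7):
    """Binary cross entropy minus a lambda-weighted 'soft' Matthews
    correlation coefficient. The MCC term uses soft confusion-matrix
    counts (probabilities instead of thresholded predictions) so it
    stays differentiable; minimizing the total both lowers cross
    entropy and raises MCC."""
    probs = [min(max(q, eps), 1 - eps) for q in probs]
    bce = -sum(y * math.log(q) + (1 - y) * math.log(1 - q)
               for q, y in zip(probs, labels)) / len(labels)
    tp = sum(q * y for q, y in zip(probs, labels))
    fp = sum(q * (1 - y) for q, y in zip(probs, labels))
    fn = sum((1 - q) * y for q, y in zip(probs, labels))
    tn = sum((1 - q) * (1 - y) for q, y in zip(probs, labels))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) + eps
    soft_mcc = (tp * tn - fp * fn) / denom
    return bce - lam * soft_mcc

labels = [1, 0, 1, 0]
good = joint_loss([0.9, 0.1, 0.8, 0.2], labels)  # confident, correct
bad = joint_loss([0.4, 0.6, 0.3, 0.7], labels)   # anti-correlated
```

Stating the objective at this level of precision (single term vs. weighted sum, and the value of any weight) is exactly the detail the reviewer is asking the authors to add.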
| Revision 2 |
|
PONE-D-22-13708R2
External Evaluation of the Dynamic Criticality Index: A Machine Learning Model to Predict Future Need for ICU Care in Hospitalized Pediatric Patients.
PLOS ONE

Dear Dr. Patel,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

There were some additional clarifications needed regarding modeling. Please address the additional reviewer concerns.

Please submit your revised manuscript by Jun 12, 2023, 11:59 PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript. Kind regards, Bobak J. Mortazavi, PhD, Academic Editor, PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #3: All comments have been addressed

**********

2.
Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: (No Response)
Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: (No Response)
Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #1: (No Response)
Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: (No Response)
Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #3: Thanks for addressing my previous comments. They look good! Some remaining comments on the Machine learning methodology section:
1. Line 210, "at cut points of 0.15, 0.5, and 0.9 in the training and validation sets": it is not very clear what these cut points refer to.
2. I don't think it is appropriate to say "if there was not overfitting". Overfitting will happen as the model is being trained, and it is important to capture the well-trained model before it overfits, e.g., by using a validation set and a test set: save the best-performing model on validation and evaluate it on the test set.
3. Line 219, "with minibatches of 10,000": is this 10,000 batches, or a batch size of 10,000? Also, the paper cited as reference 19 uses a minibatch size of ten.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #3: Yes: Lida Zhang

********** |
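Reviewer #3's second comment in this round describes standard validation-based early stopping: checkpoint the best model on the validation split and evaluate only that checkpoint once on the held-out test set. A framework-agnostic sketch of the pattern (illustrative; `train_step`, `val_loss_fn`, and the toy loss values are stand-ins, not the authors' training code):

```python
def train_with_early_stopping(train_step, val_loss_fn, max_epochs=100, patience=5):
    """Generic early-stopping loop: after each epoch, evaluate on the
    validation set, keep a snapshot of the best model so far, and stop
    once validation loss has not improved for `patience` epochs. The
    returned snapshot (not the final-epoch model) is what should be
    evaluated once on the test set."""
    best_loss, best_state, stale = float("inf"), None, 0
    for epoch in range(max_epochs):
        state = train_step(epoch)      # one epoch of training; returns model state
        loss = val_loss_fn(state)      # loss on the validation split
        if loss < best_loss:
            best_loss, best_state, stale = loss, state, 0
        else:
            stale += 1
            if stale >= patience:
                break                  # validation stopped improving
    return best_state, best_loss

# Toy run: validation loss falls, then rises (overfitting sets in after
# the fourth epoch), so the loop returns the epoch-3 snapshot.
val_losses = [0.9, 0.6, 0.4, 0.35, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
state, loss = train_with_early_stopping(
    train_step=lambda e: e,            # "state" is just the epoch index here
    val_loss_fn=lambda s: val_losses[s],
    max_epochs=10, patience=3)
```

The point of the reviewer's phrasing is that overfitting is not a binary condition to rule out but a trajectory to monitor, which is exactly what the validation-tracked checkpoint captures.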
| Revision 3 |
|
External Evaluation of the Dynamic Criticality Index: A Machine Learning Model to Predict Future Need for ICU Care in Hospitalized Pediatric Patients.
PONE-D-22-13708R3

Dear Dr. Patel,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards, Bobak J. Mortazavi, PhD, Academic Editor, PLOS ONE

Additional Editor Comments (optional): Please do consider the remaining comments from the reviewers for your final version.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #3: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #3: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #3: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #3: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. 
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #3: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #3: A few remaining comments: Limitations section: a) for the third point, there are techniques like SHAP which can provide interpretation of feature importance; b) in addition to the limitations, it would be great if the authors could also provide some information about future plans to address these limitations. Line 230: please add a citation for LIME: "Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "'Why should I trust you?': Explaining the predictions of any classifier." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016." Lines 299-306: please be consistent - a) please give the equation for Sensitivity (= true positives / (true positives + false negatives)); b) please be consistent about parentheses and square brackets in equations. Lines 409-416: This seems to be the same content as Lines 299-306. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #3: Yes: Lida Zhang ********** |
| Formally Accepted |
|
PONE-D-22-13708R3 PLOS ONE Dear Dr. Patel, I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team. At this stage, our production department will prepare your paper for publication. This includes ensuring the following:
* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset
If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps. Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. If we can help with anything else, please email us at customercare@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Bobak J. Mortazavi Academic Editor PLOS ONE |
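For reference, the sensitivity definition that Reviewer #3 asked the authors to state explicitly (true positives divided by true positives plus false negatives) can be sketched in a few lines of Python. This is an illustrative sketch only; the function and variable names are not taken from the authors' code.

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Sensitivity (recall) = TP / (TP + FN), per the reviewer's equation."""
    denominator = true_positives + false_negatives
    if denominator == 0:
        raise ValueError("No positive cases: sensitivity is undefined.")
    return true_positives / denominator

# Hypothetical example: of 100 patients who later needed ICU care,
# the model flagged 80 and missed 20.
print(sensitivity(80, 20))  # 0.8
```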
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.