Peer Review History
| Original Submission: February 11, 2020 |
|---|
|
PONE-D-20-03952

Additional Value of Laboratory Results over Vital Signs in a Machine Learning Algorithm to Predict In-Hospital Cardiac Arrest: A Single-Centre Retrospective Cohort Study

PLOS ONE

Dear Dr. Ueno,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

ACADEMIC EDITOR: The reviewers have raised a number of points, and we believe major modifications are necessary to improve the manuscript, taking into account the reviewers' remarks. Our expert reviewers, in particular the statistical reviewer, have concerns about the methods and statistical analysis, including that the use of random forest does not lead to a generalizable model. Please consider and address each of the comments raised by the reviewers before resubmitting the manuscript. This letter should not be construed as implying acceptance, as a revised version will be subject to re-review.

==============================

We would appreciate receiving your revised manuscript by Apr 13 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that, if applicable, you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:
Please note, while forming your response, that if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Wisit Cheungpasitporn, MD, FACP
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

1. Thank you for including your competing interests statement: "No". Please complete your Competing Interests on the online submission form to state any Competing Interests. If you have no competing interests, please state "The authors have declared that no competing interests exist.", as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now This information should be included in your cover letter; we will change the online submission form on your behalf. Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal.
Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

2. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. Please see http://www.bmj.com/content/340/bmj.c181.long for guidelines on how to de-identify and prepare clinical data for publication. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. We will update your Data Availability statement on your behalf to reflect the information you provide.

3. Please include your tables as part of your main manuscript and remove the individual files. Please note that supplementary tables should remain/be uploaded as separate "supporting information" files. [Note: HTML markup is below. Please do not edit.]
Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly Reviewer #2: Partly ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: No Reviewer #2: No ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: No Reviewer #2: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. 
You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The topic of this article is important; however, substantial revisions are suggested to improve the quality and clarity of the work. Overall, the language has to be improved. Additionally, the authors have often used uncommon words such as “parsimonious”, “plausible”, etc.; it is suggested to use simpler words and sentences. Although the meaning remains the same, a simple sentence or word is less likely to be misinterpreted and is also better received by international readers.

1. Introduction: it is too short and needs major changes. The research gap is not well supported. It is suggested to clarify the state of the art, the motivation, and the research question.

2. Method: the subsection “outcome” might confuse readers. Please assign a different subsection name. Also provide a rationale for the choice of your ML model. It is also suggested to list the limitations of the selected model and address them in the paper.

3. Discussion: it requires major rework. Not only the English but also the flow has to be improved. Many claims need citations. I have highlighted some of them: “Prior studies have extensively investigated the importance of early prediction of IHCA.” [cite] “Clinical deterioration is common prior to the cardiac arrest [4,5] and monitored or witnessed patient have more favorable outcomes [2,3]” — this sentence is very confusing. I suggest rephrasing it. “Historically, multiple early warning scores have been developed, ranging from an analogue model based on vital signs to a digital scoring system using both vital signs and laboratory resu...” [cite] “Similar results were obtained in other studies [28].” — cite more studies or rephrase the sentence (“studies” should be “study”).
In the discussion I also suggest including subsections discussing the effects of biases in such ML models and how these have been addressed in the study. What are the associated risks? Is the outcome clinically meaningful? (Please read the following to build upon the discussion section: https://doi.org/10.12968/bjhc.2019.0066 ; https://doi.org/10.1126%2Fscience.aaw0029) Also, a section dedicated to future research directions is highly recommended.

Reviewer #2: General Comment: The authors present their findings from a diagnostic model development project, aiming to compare two machine learning algorithms for predicting in-hospital cardiac arrest: a standard approach using both vital sign and laboratory measures from an EHR, and a reduced model using only the vital sign components. They report that both models provide similar discrimination of in-hospital cardiac arrest occurrence, with similar results in several settings. The debilitating omission in this manuscript is that the authors provide only the crudest of calibration/validation measurements, and double down on their refusal to calibrate in their Discussion. The use of random forest does not lead to a generalizable model, at least as used here.

Specific Comments:

1. The Introduction and Methods sections combined are shorter than the Discussion. As a result, there is little motivation of the project, and the methods used by the authors are not reported in enough detail, particularly the analytic approach (more on both of these below).

2. Introduction, Second Paragraph: The authors' main justification of why they would like to investigate omitting laboratory measurements is that the effect of doing so is unknown. While true, this is hardly justification for why they should feel doing so will not affect the diagnostic tool. There has to be a conceptual reason as to why they think this would work, and that reason(s) needs to be reported.

3.
Methods Section, Study Population: The authors report that the vital signs and laboratory results are shown in detail in Figure 1. However, this figure shows nothing of the sort.

4. Methods Section, Statistical Methods: The authors use the random forest model for the prognostic tool since -- as they state -- it usually outperforms other algorithms. The reason for that is that it leads to models of great complexity and model-averages across numerous well-fitting models. This leads to an uninterpretable model, which limits the ability of anyone (let alone diagnostic clinicians or scientists) who might want to use it. Because of this method, the authors do not -- and could not even if they wanted to -- report any prognostic model. As such, their approach is not reproducible and appears of little value beyond "use random forests, add these vital sign measurements, and let the computer go to work."

5. Methods Section, Statistical Methods, Second Paragraph: The authors use ROC-AUC to assess their model discrimination, which is good. However, their calibration measures (sensitivity, specificity, PPV, NPV) are not adequate for assessing model calibration. Van Calster (2016) actually refers to these measures as "mean calibration", which is a step below "weak calibration". The authors are strongly encouraged to add a calibration plot of observed vs. predicted risk, which more definitively shows the quality of model calibration. Otherwise, the authors need to explicitly state, in their abstract, methods, results, and discussion, that their model is not calibrated, even in the weak sense.

6. Results, Table 2: It would be clearer if the authors referred to their models as Vitals-Only and Vitals+Labs. Reporting Vitals vs. Labs makes it sound like one model contains only lab measures and the other only vital measures, which is not the case.

7. Results: Only model discrimination is discussed, as only AUC is elaborated upon.
No model calibration measures (sensitivity, specificity, PPV, NPV) are mentioned in the results. This is troubling, as the PPV for this model is abysmally low: less than 5% in most cases. As such, this model would lead to more false positives than true positives at a rate of at least 19:1. Now, false positives might be better than false negatives with respect to cardiac arrest, but the authors make no mention of this in the results.

8. Discussion, Limitations, Sixth Paragraph: It is here that the authors state that they focused on discrimination rather than calibration. They confusingly make a general comment stating that calibration was "unfavorable to our research design," since calibration is "highly unstable as opposed to model discrimination." This is simply unacceptable. The biomedical informatics field has been consistently clear that uncalibrated models are worthless, and both discrimination and calibration for diagnostic and prognostic models are required, especially in supervised learning scenarios such as this. Both the Journal of Biomedical Informatics and the Journal of the American Medical Informatics Association are filled with articles (see anything by Royston, Moons, or Van Calster) stating the importance of calibrated models; they also have numerous articles showing HOW to calibrate.

********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site.
Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step. |
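A note on the arithmetic behind Reviewer #2's 19:1 figure: it follows directly from the definition of positive predictive value. With PPV = TP / (TP + FP), the ratio of false to true positives is FP/TP = (1 − PPV)/PPV, so a PPV of 5% implies at least 19 false positives per true positive. A minimal sketch (the function name is illustrative, not from the manuscript):

```python
def fp_per_tp(ppv):
    """Given PPV = TP / (TP + FP), return the ratio FP / TP."""
    if not 0 < ppv <= 1:
        raise ValueError("PPV must be in (0, 1]")
    return (1 - ppv) / ppv

# A PPV of 5% means about 19 false positives per true positive.
print(round(fp_per_tp(0.05)))  # 19
```

Put differently, at 5% PPV an alert is wrong 95% of the time, which is why the reviewer asks the authors to weigh the relative costs of false positives and false negatives explicitly.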
| Revision 1 |
|
PONE-D-20-03952R1

Value of Laboratory Results in Addition to Vital Signs in a Machine Learning Algorithm to Predict In-Hospital Cardiac Arrest: A Single-Center Retrospective Cohort Study

PLOS ONE

Dear Dr. Ueno,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

ACADEMIC EDITOR: Our expert reviewer(s) have recommended additional revisions to your revised manuscript. There are many areas that need clarification and improvement. Therefore, I invite you to respond to the reviewer(s)' comments as listed and revise your manuscript.

==============================

Please submit your revised manuscript by Jul 17 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Wisit Cheungpasitporn, MD, FACP
Academic Editor
PLOS ONE

Additional Editor Comments: Our expert reviewer(s) have recommended additional revisions to your revised manuscript. There are many areas that need clarification and improvement. Therefore, I invite you to respond to the reviewer(s)' comments as listed and revise your manuscript. [Note: HTML markup is below. Please do not edit.]

Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: (No Response) Reviewer #2: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes Reviewer #2: (No Response) ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: (No Response) ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: (No Response) ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: No Reviewer #2: (No Response) ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have made substantial revisions. However, more work is needed:

1. I understand the motivation of the work; however, it is not enough to just state that vitals are readily available. I suggest elaborating on the "data dimensionality" and "minimal optimal" problems, etc.,
emphasizing the benefits of a smaller or optimal dataset. Additionally, did the computational time reduce significantly after eliminating predictors? It would be better if the authors could report the computation time (for their computer configuration).

2. "A simpler model with fewer input ... " I do not agree. Especially in a healthcare setting, the adoption of any technology takes place based on its performance and economic feasibility.

3. "... it does not require any log-transformation .... computational complexity of the pre-processing is reduced ....". Log transformation and normalization are not computationally complex. I do not suggest stating this as a reason to justify RF.

4. k-fold cross-validation is always preferred to manually dividing data into test and train sets.

5. "Implausible values were removed ...". This is not clear. What are "implausible values"? Outliers or anomalies? Please rewrite and explain; just giving citations is not sufficient.

6. In the STRENGTH section of the paper, the authors stated that "A prior study showed that the existence of missing data itself has predictive value [27]". Here is the quote from that study [27]: "However, missing and incorrect demographic information in both the Death Master File and EHR data can affect the accuracy of the matches and the resulting estimated survival rates. To circumvent these limitations, our outcome was literally whether the EHR indicates that the patient is alive three years after our cohort period ended. We were not modeling time until death or conducting a traditional survival analysis." The study [27] did not replace missing values with binary indicators. Also, the standard practice is to delete the column when more than 50% is missing. Missing data can only be indicative of "non-essential information" (if it is deliberately not captured).

7. The study has three strengths and nine limitations.

8. The implication of the study is unclear. How does eliminating data make an ML model simpler?
The model's complexity is independent of data; it depends on the underlying algorithm. 9. Lastly, the language needs to be improved (tense). The sentence structures are not indicative of technical writing. "Second, clinically, doctors and nurses might intervene in patients ..." Reviewer #2: (No Response) ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: Yes: Avishek Choudhury Reviewer #2: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
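Reviewer #1's fourth point above (preferring k-fold cross-validation to a single manual train/test split) can be sketched briefly. The following stratified k-fold index generator is illustrative only, with assumed names and standard-library code; in practice a library routine such as scikit-learn's StratifiedKFold would be used:

```python
import random

def stratified_kfold_indices(labels, k=5, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation,
    keeping each fold's class balance close to the overall balance."""
    rng = random.Random(seed)
    folds = [[] for _ in range(k)]
    # Group row indices by class label, shuffle within each class,
    # then deal them round-robin so every fold gets a proportional share.
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    for held_out in range(k):
        test = sorted(folds[held_out])
        train = sorted(i for f in range(k) if f != held_out for i in folds[f])
        yield train, test

# With 90 negatives and 10 positives, every test fold of 20 holds exactly
# 2 positives, so each evaluation sees the rare outcome.
labels = [0] * 90 + [1] * 10
for train, test in stratified_kfold_indices(labels, k=5):
    assert sum(labels[i] for i in test) == 2
```

Stratification matters here because in-hospital cardiac arrest is rare: an unstratified split can leave a fold with almost no positive cases, making per-fold performance estimates unstable.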
| Revision 2 |
|
Value of Laboratory Results in Addition to Vital Signs in a Machine Learning Algorithm to Predict In-Hospital Cardiac Arrest: A Single-Center Retrospective Cohort Study

PONE-D-20-03952R2

Dear Dr. Ueno,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Wisit Cheungpasitporn, MD
Academic Editor
PLOS ONE

Additional Editor Comments: I reviewed the revised manuscript and the response to reviewers' comments. The revised manuscript is well written. All comments have been addressed, and the manuscript is thus accepted for publication.

Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: All comments have been addressed Reviewer #3: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #3: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: N/A Reviewer #3: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #3: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. 
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #3: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: (No Response) Reviewer #3: All concerns have been fully elucidated, missing sections and analyses have been completed. Finally, comprehension errors have been corrected. Good work! ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: Yes: Avishek Choudhury — Stevens Institute of Technology Reviewer #3: No |
| Formally Accepted |
|
PONE-D-20-03952R2 Value of Laboratory Results in Addition to Vital Signs in a Machine Learning Algorithm to Predict In-Hospital Cardiac Arrest: A Single-Center Retrospective Cohort Study Dear Dr. Ueno: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Wisit Cheungpasitporn Academic Editor PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.