Peer Review History
| Original Submission (March 30, 2020) |
|---|
|
PONE-D-20-09068
Machine Learning versus Physicians to Predict Mortality in Sepsis Patients Presenting to the Emergency Department
PLOS ONE

Dear Dr. Meex,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Aug 07 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Ivan Olier, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf

2. In your ethics statement in the manuscript and in the online submission form, please provide additional information about the patient records used in your retrospective study. Specifically, please ensure that you have discussed whether all data were fully anonymized before you accessed them and/or whether the IRB or ethics committee waived the requirement for informed consent. If patients provided informed written consent to have data from their medical records used in research, please include this information.

3. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified (1) whether consent was informed and (2) what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed).

4. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager. Please see the following video for instructions on linking an ORCID iD to your Editorial Manager account: https://www.youtube.com/watch?v=_xcclfuvtxQ

5. Please ensure that you refer to Figure 4 in your text as, if accepted, production will need this reference to link the reader to the figure.

6. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: No ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: The manuscript presents the development of a machine learned model predicting 31-day mortality. The predictive performance of the model is compared to the predictive performance of 4 internists (2 consultants and 2 fellows). The machine learned model outperformed the internists by a wide margin. The novelty in the manuscript is the comparison between the machine learning (ML) model and the internists. Developing yet another ML model for sepsis mortality risk is not exciting; but a comparison of the model with human expert internists is exciting. 
As the key contribution of the paper, this comparison has to be solid. Unfortunately, as the paper stands now, I do not feel that this comparison is rock solid. Here are some weaknesses in the comparison. (a) Although the difference between the internists and the ML model is large, with only 4 internists, the differences among the internists can significantly influence the results. It would be helpful to have inter-rater agreement among the internists. Also, how was specificity/sensitivity for the ML model determined? (b) It should also be noted that the machine learned model incorporates physician judgement. Whether a lab test is performed or not is based on clinical judgement and is made available to the model (as the lab absence/presence indicator). A person with fewer labs has fewer problems and is hence less likely to die. How the authors handle missing values is typically completely reasonable and correct; however, in this case, it "leaks" physician knowledge to the ML model.

Beside the comparison, the model development itself raises some concerns. Specifically, the performance differences between the various machine learning methods are incredibly high (.63-ish for ridge regression, .65 for neural networks, .72 for random forest, but .85 for xgboost). We also typically see xgboost outperform other methods, but not to this extreme extent. I am wondering whether the authors may have made some mistake evaluating xgboost. There are other indicators of a possible mistake:

- The performance of xgboost differs across different tables (.813 in Supp Tbl 2; .852 in Supp Tbl 5). With a stated confidence interval of .79-.83, .852 is quite a bit outside the confidence interval.
- The learning rate is stated as .075 in Supp Tbl 5 but .001 in the text; this could be a typo or an actually different hyper-parameterization. A different hyper-parameterization, rather than random chance, could explain the observed AUC values falling outside the confidence interval.
- The "95%" confidence intervals are not 95%. In Supp Fig. 2B, only fold 1 falls consistently within the stated 95% confidence interval; the remaining 4 folds fall outside the "95%" confidence interval for large consecutive portions of the ROC curve. How was the bootstrap estimation performed? Specifically, which data set is resampled (the development or the leave-out)?

I agree with the authors that a full-fledged lattice search of the hyper-parameter space is unnecessary, but I would suggest:

- explaining how the hyper-parameters were determined (e.g. using the default values from the package)
- explaining which, if any, hyper-parameters were changed (based on the CV test set; NOT the 100-patient leave-out set)
- conducting a sensitivity analysis demonstrating that small changes in the hyper-parameters do not lead to major perturbations in the performance.

The SHAP score is poorly explained and Figure 3 is poorly described. Readers not familiar with SHAP and violin plots will not understand this figure. An example could be useful. E.g., high values of urea (red points) impact the risk of mortality positively (large positive impact); low values reduce it slightly (modest negative impact); urea observations in most patients have no impact on mortality (the urea violin plot is widest at 0 impact).

A concern with xgboost is the (somewhat) black-box nature of the model. The authors address this concern by computing the SHAP score for each feature. Given that the key differentiator of trees from the other methods is their ability to detect and use interactions, SHAP may not capture this well. A better approach is to take the (say) top 20 features and build a reduced model to see the performance loss. (I picked 20 because that is what the authors focus on in Supp Tbl 4, but other numbers could illustrate their point better.)

Clinical significance. ML models in sepsis abound with very marginal contributions. Proposing yet another one without explaining how it can improve sepsis care is fairly meaningless.
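[Editor's illustration of the reviewer's question about the bootstrap: a minimal sketch of one common procedure, resampling the hold-out set with replacement to obtain a percentile 95% CI for the AUC. All function and variable names are illustrative; this is not the authors' code.]

```python
import numpy as np

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the AUC, resampling the hold-out set."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, dtype=float)
    aucs = []
    for _ in range(n_boot):
        # Resample the hold-out (leave-out) set with replacement.
        idx = rng.integers(0, len(y_true), len(y_true))
        yt, ys = y_true[idx], y_score[idx]
        if yt.min() == yt.max():        # resample lacks one class; AUC undefined
            continue
        pos, neg = ys[yt == 1], ys[yt == 0]
        # Mann-Whitney form of the AUC; ties count one half.
        greater = (pos[:, None] > neg[None, :]).mean()
        ties = (pos[:, None] == neg[None, :]).mean()
        aucs.append(greater + 0.5 * ties)
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

Whether the development set or the hold-out set is resampled changes what the interval measures, which is precisely the reviewer's point: the choice should be stated explicitly.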
I appreciate the authors mentioning how their model could be used, and I think expanding on this is essential. The authors' suggestion of using the model to identify high-risk patients (>= 50% risk of mortality) is reasonable. However, their evaluation does not quantify their contribution from this perspective. The clinical significance of this work could be significantly improved by (i) showing how much better the ML model is at identifying patients at >= 50% risk (or any other high-risk threshold); (ii) clinically describing where the ML model was correct and the internists were not (again, >= 50% vs < 50% can be used); (iii) understanding the limits of the model, clinically describing where the ML model "fails": predicts low risk of mortality while the patient dies (regardless of whether the internists made the correct prediction). Note that a higher AUC does not guarantee an improved ability to identify high-risk patients; correctly ordering low-risk patients will increase the AUC, yet it has no clinical significance.

Limitations. Some important limitations are omitted. 1. The key limitation is portability. We can expect the risk scores and the internists to have similar performance if they made predictions for patients in a different health system. How about the xgboost model? Will it achieve similar performance? 2. Physicians, who actually see patients, may incorporate other features not captured by the EHR.

SUMMARY. The key contribution of the paper is a comparison between an ML model and human internists. There are minor flaws in the comparison (ML has access to clinical judgement). The model development process appears to have technical problems (confidence intervals appear incorrect; unknown rationale for the hyper-parameterization and dependence of the performance on these parameters). The performance envelope of the model is unexplored: what clinical patient characteristics make it fail? Make it perform better than the internists?
Important limitations are not mentioned. I think this paper has the potential to be an influential piece of work, but its contribution as it stands now is insufficient and some technical aspects may even be incorrect.

Reviewer #2:

- Mortality prediction in patients with sepsis, utilising a small dataset of patients presenting to a single emergency department.
- The title is a little sensational ("Machine Learning versus Physicians") and could be modified to represent the science:
  - "A comparison of machine learning models versus clinical evaluation for mortality prediction in patients with sepsis"
- Authors are testing the hypothesis that machine learning models would outperform physician evaluation and existing clinical risk scores.
- Authors determined that machine learning models outperformed clinicians due to higher sensitivity and specificity; however, discriminatory information is not provided in the abstract.
- Introduction
  - The references noted in the introduction report high discriminatory scores for abbMEDS (up to 0.85), mREMS (up to 0.84) and clinician judgement (up to 0.81), suggesting similar performance to this cohort.
  - The comment is made that “machine learning can extract information from incomplete…data” – although complexity and non-linear relationships are areas where ML does succeed, incomplete data is still a major limitation.
- Methods
  - Patients with missing clinical data were excluded – how many variables needed to be missing? Were the data missing at random? A plot representing the percentage of missing values would be valuable, along with an analysis to confirm that the missing-value distribution was equal in the training and testing sets.
  - Similarly, the authors note that the machine learning model is capable of dealing with missing data – could further information be provided as to how GBMs deal with the issue? Further, this statement is incongruent with the previous one noting that patients with missing clinical data were excluded (unless this does not refer to biomarkers, and instead to other data).
  - As biochemical markers were taken within the first two hours, and a significant proportion of treatment is performed in that time frame, was the variable of time between presentation and time of biomarker withdrawal recorded and adjusted for?
  - The train/test split ratio is unusual – 93:7 – can the authors explain why this was chosen? Internal validation on 100 patients is small and may affect the reproducibility of this result.
  - Mortality was obtained from the electronic health record – was this linked to a national health index? How were deaths out of hospital recorded in the EHR?
  - The septic shock definition mentions a MAP ≤90 – is an alternate threshold intended here?
  - The use of SHAP to explain the findings is an important addition and the authors should be credited for its inclusion.
  - Why were internal medicine physicians chosen, as opposed to emergency or intensive care physicians? Is this typical for this hospital?
  - Calibration measures such as the Brier score and calibration plots should be presented for the models.
  - There is marked class imbalance, with only 13% of the population experiencing the primary outcome of death – were oversampling methods considered?
- Results and Discussion
  - There appears to be a significantly higher percentage of patients with cancer and diabetes in the training/development set.
  - Why is the discrimination/AUC not directly compared between the ML models and the physicians and clinical scores?
  - AUCs should be compared using DeLong’s test to demonstrate statistical superiority in regard to discrimination.
  - Figure 2 does not include the clinical risk scores for comparison.
  - Creatinine does not share the same relationship with model output as does urea (from the SHAP figure) – can this be explained clinically?
  - Several references to other machine learning models in the literature focused on sepsis and mortality prediction have been omitted.

General points

- Funded study with no conflicts of interest.
- Grammar could be slightly improved, however it does not interfere with the message of the paper:
  - “…and categorize from low to high risk” should be “and is categorized from low to high risk”
  - “follow-studies” should be “follow-up studies”
- Authors have made data available without restriction, however they note that it is available in the supplementary material – the code and raw data are not provided?

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
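[Editor's illustration of Reviewer #2's question about oversampling for the 13% class imbalance: a minimal random-oversampling sketch. The function and names are illustrative, not the authors' method; dedicated libraries such as imbalanced-learn offer more principled variants (e.g. SMOTE).]

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Duplicate minority-class rows at random until classes are balanced."""
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X), np.asarray(y)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx_all = []
    for c in classes:
        idx = np.flatnonzero(y == c)
        # Draw extra indices with replacement for under-represented classes.
        extra = rng.choice(idx, n_max - len(idx), replace=True)
        idx_all.append(np.concatenate([idx, extra]))
    idx_all = rng.permutation(np.concatenate(idx_all))
    return X[idx_all], y[idx_all]
```

If used, oversampling should be applied only to the training folds after splitting; oversampling before the train/test split leaks duplicated patients into the evaluation set.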
| Revision 1 |
|
PONE-D-20-09068R1
A comparison of machine learning models versus clinical evaluation for mortality prediction in patients with sepsis
PLOS ONE

Dear Dr. Meex,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. In particular, please consider Reviewer 2's concerns regarding the size of the validation subset. I understand the authors' point about attempting the fairest possible comparison between ML algorithms and clinicians, but I also agree that a very small validation subset could be detrimental to the quality of the ML performance evaluation. The authors might want to consider the inclusion of further model evaluation using an out-of-the-bag resampling strategy on a 70/30 split as suggested, alongside the ones already presented. It would be very helpful if you could address this and the rest of the reviewer points.

Please submit your revised manuscript by Oct 10 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols We look forward to receiving your revised manuscript. Kind regards, Ivan Olier, Ph.D. Academic Editor PLOS ONE [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: All comments have been addressed Reviewer #2: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Partly ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: No ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? 
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: Reviewer comments are attached below:

- The significantly lower discriminatory scores in this data compared with the literature may represent differences in the particular sample, or reflect the small sample size and thus inaccurately portray a lower discriminatory capacity of the physicians.
- The decision to only test on 100 patients is unclear; the model may falsely appear more accurate without robust validation. Did the authors attempt validation on a randomly selected 30%?
- Cancer and diabetes would traditionally be strongly associated with mortality, particularly in septic shock. The authors note that it was considered unnecessary to distribute these evenly; however, it is unclear why.
- A statistical opinion should be sought regarding the direct validity of comparing AUROCs with DeLong’s test; it is the reviewer's opinion that this is a useful comparator for discriminatory evaluations such as this.
- The reverse relationships of urea and creatinine are unclear. Often both are not included in the same score, as they act in the same direction (and consequently one will knock out the other during development); it remains unclear why they would be in opposite directions.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool.
If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
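[Editor's illustration of the DeLong test discussed by Reviewer #2 for comparing two correlated AUROCs on the same test set: a compact sketch of the fast (midrank) algorithm. Names are illustrative; a vetted statistical package should be preferred in practice.]

```python
import numpy as np
from scipy import stats

def delong_test(y_true, scores_a, scores_b):
    """Two-sided DeLong test for H0: AUC_a == AUC_b on the same patients.

    Returns (auc_a, auc_b, p_value)."""
    y_true = np.asarray(y_true).astype(bool)
    m, n = int(y_true.sum()), int((~y_true).sum())
    aucs, v_pos, v_neg = [], [], []
    for s in (scores_a, scores_b):
        s = np.asarray(s, dtype=float)
        x, y = s[y_true], s[~y_true]              # positive / negative scores
        tx, ty = stats.rankdata(x), stats.rankdata(y)      # within-class midranks
        tz = stats.rankdata(np.concatenate([x, y]))        # combined midranks
        aucs.append((tz[:m].sum() - m * (m + 1) / 2) / (m * n))  # Mann-Whitney AUC
        v_pos.append((tz[:m] - tx) / n)           # structural components, positives
        v_neg.append(1.0 - (tz[m:] - ty) / m)     # structural components, negatives
    s_pos = np.cov(np.vstack(v_pos))              # 2x2 covariance over positives
    s_neg = np.cov(np.vstack(v_neg))              # 2x2 covariance over negatives
    var = s_pos / m + s_neg / n
    se = np.sqrt(var[0, 0] + var[1, 1] - 2 * var[0, 1])
    z = (aucs[0] - aucs[1]) / se
    return aucs[0], aucs[1], 2 * stats.norm.sf(abs(z))
```

Because the test accounts for the correlation between the two models' scores on the same patients, it is better suited to this comparison than treating the two AUCs as independent.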
| Revision 2 |
|
PONE-D-20-09068R2
A comparison of machine learning models versus clinical evaluation for mortality prediction in patients with sepsis
PLOS ONE

Dear Dr. Meex,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Dec 05 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols We look forward to receiving your revised manuscript. Kind regards, Ivan Olier, Ph.D. Academic Editor PLOS ONE [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #3: All comments have been addressed Reviewer #4: (No Response) ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #3: Yes Reviewer #4: Partly ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #3: I Don't Know Reviewer #4: I Don't Know ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? 
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #3: Yes
Reviewer #4: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #3: Yes
Reviewer #4: Yes

**********

6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #3: I have no further comments for the authors. I have read the prior comments and believe that they have been addressed in a satisfactory manner.

Reviewer #4: Thank you for the opportunity to review this manuscript. Though interesting and seemingly sound in terms of methods, I have some concerns regarding its application (see comments below). In order for this study to be valid, it needs to be applied to an undifferentiated population with infection who could have sepsis, but do not yet have the diagnosis.
As I read it, it would appear that all of the patients in this study were referred for admission for some reason, which is different from patients presenting to the ED with an infection, as many of those patients would be discharged home. Therefore I don’t believe this score can be directly compared to scores made for an undifferentiated ED population. However, if this is not the case, the authors should state clearly that this study included all ED data for patients meeting their SIRS/qSOFA criteria. I would be interested in knowing how this score would be applied, since the authors specifically chose to test and validate their models based on the first 2 hours of available clinical and laboratory information, and from what I can tell these were all patients who were being admitted to the hospital. If this is the case, it would reduce the applicability of this tool.

How was consent obtained in a retrospective study? How were patients able to refuse participation, since the “ethics committee waived the requirement for informed consent”?

This article should have a separate statistical/methods review, particularly of the use of 100 patients for the validation set, despite this being addressed in this revision with a 70/30 split. The data in Table 1 seem too sparse, with several of the features having N’s in the single digits. I am not sure of the standard for an ML paper. Also, I am not familiar with their method for cross-validation.

I am not familiar with the term “acute internal medicine physicians”. Do these physicians work in the ED, or do they work on the acute inpatient services admitting patients?

How was the subpopulation of 1420 patients selected out of the 5967 patients consulted to internal medicine? Was this cohort selected randomly? There should be a flow diagram showing the total population, those excluded with reasons why, and the final cohort.

The lab model vs the lab + clinical model include features that are quite different.
How do the authors suggest we reconcile these differences, and which model should we consider to be superior? Also, please define “Blood group (present)”. Some terms are not defined in the manuscript, such as GCS. Also, “thrombocytes” should be replaced with “platelet count”. The standard for mortality prediction in sepsis is the SOFA score. There should be a direct comparison with SOFA, or at least a modified version of the SOFA score (there are several versions), in order to conclude that this may have clinical utility. There needs to be more of an explanation of the different models and how they should be interpreted. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #3: No Reviewer #4: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. 
Please note that Supporting Information files do not need this step. |
| Revision 3 |
|
A comparison of machine learning models versus clinical evaluation for mortality prediction in patients with sepsis PONE-D-20-09068R3 Dear Dr. Meex, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Ivan Olier, Ph.D. Academic Editor PLOS ONE Additional Editor Comments (optional): Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. 
Reviewer #4: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #4: Partly ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #4: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #4: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #4: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. 
(Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #4: The manuscript is substantially improved, and I thank the authors for their efforts. However, the main limitation is still present: namely, that the study was performed in a population that had already been consulted for admission to the hospital for sepsis. I am not sure how this score would be applied clinically in the ED setting. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #4: No |
| Formally Accepted |
|
PONE-D-20-09068R3 A comparison of machine learning models versus clinical evaluation for mortality prediction in patients with sepsis Dear Dr. Meex: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Ivan Olier Academic Editor PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.