Peer Review History
| Original Submission: March 22, 2025 |
|---|
|
Dear Dr. Veledar, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. The reviewers found the study important and methodologically strong but identified key issues that must be addressed for acceptance. Required changes include clarifying the outcome definition, detailing data preprocessing, addressing class imbalance with appropriate metrics, and adding supplemental materials defining predictors and reporting full model outputs. The inclusion of hemorrhagic strokes should be justified or evaluated through sensitivity analysis. While the use of SDOH is a strength, the discussion should acknowledge their limited impact on model performance and clarify their value. Please submit your revised manuscript by Jun 27 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org . When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols . We look forward to receiving your revised manuscript. Kind regards, Noah Hammarlund Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse. 3. 
For studies involving third-party data, we encourage authors to share any data specific to their analyses that they can legally distribute. PLOS recognizes, however, that authors may be using third-party data they do not have the rights to share. When third-party data cannot be publicly shared, authors must provide all information necessary for interested researchers to apply to gain access to the data. (https://journals.plos.org/plosone/s/data-availability#loc-acceptable-data-access-restrictions) For any third-party data that the authors cannot legally distribute, they should include the following information in their Data Availability Statement upon submission: 1) A description of the data set and the third-party source 2) If applicable, verification of permission to use the data set 3) Confirmation of whether the authors received any special privileges in accessing the data that other researchers would not have 4) All necessary contact information others would need to apply to gain access to the data [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? Reviewer #1: Yes Reviewer #2: Partly ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: No ********** 3. Have the authors made all data underlying the findings in their manuscript fully available (per the PLOS Data policy)? Reviewer #1: Yes Reviewer #2: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? Reviewer #1: Yes Reviewer #2: Yes ********** Reviewer #1: See attachment for additional information. Three areas are outlined that should be addressed by the authors before the publication criteria are fully satisfied.
This study uses a rigorous analytic approach of multiple XML models, including regression-based, tree-based, and distance-based algorithms. This diversity in modeling approaches enhances the robustness of the findings. Using 10-fold cross-validation ensures that the models are tested on different subsets of the data, reducing the risk of overfitting. The study uses a combined ranking approach, incorporating Weighted Importance Scores and Frequency Counts to identify significant predictors across models. The performance of each model is assessed using standard evaluation metrics, including accuracy, area under the ROC curve (AUC), squared-error loss, logistic loss, and misclassification rate. However, I strongly encourage the inclusion of additional information in the study to increase the interpretability and understanding of the findings. By incorporating additional information, robustness tests, and replicability measures, the study can further enhance its credibility and impact in the field of stroke research. 1. Detailed Description of Data Preprocessing: • The manuscript should include a more detailed description of the data preprocessing steps, such as handling missing values, normalization, and applying feature engineering techniques. 2. Model Hyperparameters: • Providing information on the hyperparameters used for each XML model would enhance the study's transparency and replicability. 3. Sensitivity Analysis: • Including a sensitivity analysis to show how changes in model parameters or the inclusion/exclusion of certain variables affect the results would strengthen the robustness of the findings. I would also encourage the authors to discuss the weaknesses and limitations of their modeling approach and their data sources. The following areas should be addressed: 1. Sample Size and Generalizability: • Acknowledge the limitations related to sample size and the regional focus of the study.
Discuss how these factors might affect the generalizability of the findings. 2. Class Imbalance: • Address the issue of class imbalance in the dataset and how it was managed. Discuss any potential impacts on model performance and the interpretation of results. 3. Model Limitations: • Highlight any limitations of the XML models used, such as their reliance on specific types of data or potential biases in variable selection. 4. Data Availability: • Mention any restrictions on data availability and how they might limit the ability of other researchers to replicate the study. Finally, the only thing missing from this study is a complete outline of the potential implications of these findings. I would like to know how these conclusions can be applied in the following areas: 1. Clinical Practice: • Discuss how the findings can be applied in clinical practice to improve post-stroke care, resource allocation, and patient outcomes. Highlight any specific recommendations for healthcare providers. 2. Policy and Public Health: • Explore the implications for public health policy, particularly in addressing socioeconomic disparities and improving care transitions for stroke survivors. 3. Future Research: • Identify areas for future research, such as developing interventions based on the identified predictors, validating the models in different populations, and exploring additional variables. By including these additional details, the manuscript can provide a more comprehensive and nuanced understanding of the study's findings, their implications, and the context in which they were derived. This will also help address potential concerns and enhance the research's replicability and applicability. Reviewer #2: This is a well-written, important study that attempts to identify important clinical and non-clinical risk factors that help predict mortality and rehospitalization for patients post-stroke using a consolidated approach of multiple explainable ML techniques.
I think the research has merit, but there are a number of statistical problems that should be addressed before a satisfactory evaluation of this manuscript. 1. The introduction is well written and clear, but a number of statements would benefit from citations (e.g., "while recent studies have not substantially improved prediction intervals..."). Furthermore, the introduction could present more substantial information regarding the performance of classifiers from previous studies, as well as the problems and limitations that make this study unique (such as the inclusion of SDOH variables into these types of models). 2. It is unclear how the outcomes were combined into a single variable; what is the frequency of each outcome in the cohort? 3. Class Imbalance: The manuscript suggests that applying multiple machine learning (ML) methods and combining predictors helps address class imbalance. However, I am not convinced this alone mitigates the issue, especially in the absence of intrinsic strategies such as resampling, class weighting, or appropriate performance metrics. Furthermore, this assertion lacks citations, which weakens the credibility of the claim. I recommend supporting this statement with relevant literature or clarifying the methodological justification. 4. I would strongly recommend restricting the analysis to ischemic strokes. The dataset contains only ~100 cases of intracerebral hemorrhage, which likely have distinct clinical and demographic predictors compared to ischemic strokes. Including them may introduce noise that could impair the model's discriminatory performance and interpretability. 5. Table 1 Statistical Analysis: The manuscript does not clearly indicate what statistical tests were applied to generate Table 1. Given the presence of varied data types (e.g., means, medians with ranges, and percentages), it is important to specify whether the appropriate tests were used for each variable type.
Additionally, it is unclear if corrections for multiple comparisons were performed. 6. Lasso, ridge, and elastic nets are not types of ML classifiers distinct from logistic regression; rather, they are regularization techniques that reduce the importance of coefficients by either shrinking their values or removing them altogether. Including all four variants (standard logistic regression, LASSO, ridge, and elastic net) in an ensemble may introduce redundancy, as they share the same underlying structure and linear decision boundaries. While the ensemble includes other diverse model types such as SVM, KNN, and decision tree-based methods, it may be more effective to select one or two representative logistic regression variants to avoid overemphasizing a single modeling framework and to maintain better balance across model types. 7. ML Model Performance: It would be valuable to include a supplemental table showing the full results for each machine learning model (coefficients, predicted probabilities, odds ratios, etc.). Was there an evaluation of performance using either the 12 or the 38 selected variables? Is there an improvement in the prediction when using the most important variables? 8. Imbalanced Data: The dataset is highly imbalanced, yet the performance metrics presented do not adequately reflect this. Metrics such as precision, recall, and F1-score—particularly for the minority class—are essential in this context. Including confusion matrices would also help readers assess the classifiers' performance across both classes. 9. On Multicollinearity: The issue of multicollinearity among predictors is not addressed. Given the number of variables and the potential for correlated features, I recommend performing and reporting multicollinearity diagnostics (e.g., variance inflation factors), particularly for models like logistic regression where interpretation of coefficients is important. 10.
On Clarity and Definition of Predictors: The manuscript lacks a clear definition and description of the predictors used. Many predictor acronyms are not defined in the text, and categorical variables are not described in terms of their levels or categories. I suggest including a supplemental table that clearly lists all predictors, defines each acronym, and indicates the data type (continuous, categorical), as well as levels for categorical variables. 11. Unfortunately, this study did not improve upon the performance metrics reported in previous analyses, despite increasing the number of variables by incorporating several non-clinical factors, such as social determinants of health (SDOH). Since the addition of these variables did not enhance model performance, it raises the question of their practical value—particularly given that SDOH data are often more difficult to obtain than standard clinical variables. 12. There are a few grammatical errors: the last word of the introduction, "unballanced", and, in the sentence "...The TCSD-S enrollee data from 10 collaborating comprehensive stroke canters", is it "centers"? ********** Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No ********** [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
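The multicollinearity diagnostic reviewer #1 and reviewer #2 request (variance inflation factors, point 9 above) is inexpensive to run. A minimal sketch, not drawn from the manuscript's data: VIFs computed from scratch with NumPy on synthetic predictors, where `x3` is deliberately made nearly collinear with `x1` (all variable names here are hypothetical).

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of predictor matrix X.

    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    column j on all remaining columns (with an intercept).
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Synthetic predictors: x3 is nearly a copy of x1, x2 is independent.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
x3 = x1 + 0.05 * rng.normal(size=200)
X = np.column_stack([x1, x2, x3])
print(np.round(vif(X), 1))  # x1 and x3 should show strongly inflated VIFs
```

In practice one would pass the study's predictor matrix; a common rule of thumb flags VIF values above 5 or 10 for closer inspection before interpreting regression coefficients.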
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/ . PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org . Please note that Supporting Information files do not need this step.
|
| Revision 1 |
|
Dear Dr. Veledar, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Aug 01 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org . When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols . We look forward to receiving your revised manuscript. Kind regards, Noah Hammarlund Academic Editor PLOS ONE Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. Additional Editor Comments: Thank you for your revised manuscript. The study presents a thoughtful and innovative approach, and both reviewers noted major improvements. However, they also raise several remaining concerns that must be addressed before the manuscript can be considered for publication. Where the reviewers raise methodological gaps (e.g., around class imbalance, model justification, or variable inclusion), you should either (a) conduct and report targeted additional analyses where feasible, or (b) clearly explain and justify your approach, and acknowledge resulting limitations where appropriate. 
Please address the following points: Composite Outcome: Justify the combination of readmission and mortality into a single outcome, especially given their differing frequencies. If separate models are not feasible, acknowledge this and discuss the implications as a limitation. Stroke Type and Generalizability: Either conduct a sensitivity check restricted to ischemic stroke or justify your inclusion of ICH cases. Generalizability limitations due to geography and stroke subtype distribution should be made more explicit in the abstract and discussion. Class Imbalance: The use of multiple models and evaluation metrics does not directly address class imbalance. Metrics like AUC and log loss can obscure performance issues on the minority class. Please provide a clearer discussion of the implications of imbalance and justify the decision not to apply standard techniques (e.g., weighting or resampling). Highlight how this may affect interpretation of metrics such as recall and variable rankings. Multicollinearity: For regression-based models, address concerns about multicollinearity. Either assess and report relevant diagnostics (e.g., VIF, correlation checks), or justify your approach and acknowledge interpretability limitations. Value of SDOH Variables: You make strong claims about the contribution of SDOH to model performance. While these variables appear in your ranked lists, the added predictive value is not directly assessed. We recommend a simple sensitivity analysis comparing model performance with and without SDOH predictors. If you do not include this, revise your language to more cautiously reflect what is shown, and note this limitation. Explainability and Interpretation: Since the study emphasizes explainable ML, include a brief narrative interpretation of top predictors (e.g., mRS, ATOC) and how they could inform clinical decision-making. 
Scope of Claims: Please revisit some of the manuscript’s stronger claims — such as statements that XML methods “enhance prediction” in small, imbalanced samples. Without comparative analyses or performance improvement tests, such claims should be tempered to reflect the more exploratory and descriptive nature of the work. Strengthening the limitations section will help appropriately frame the contribution. We look forward to your resubmission. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author Reviewer #1: (No Response) Reviewer #2: (No Response) ********** 2. Is the manuscript technically sound, and do the data support the conclusions? Reviewer #1: Yes Reviewer #2: Partly ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available (per the PLOS Data policy)? Reviewer #1: Yes Reviewer #2: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? Reviewer #1: Yes Reviewer #2: Yes ********** Reviewer #1: This study uses explainable machine learning (XML) to identify clinical and non-clinical predictors of 90-day readmission and mortality among stroke survivors. Drawing on data from the Transitions of Care Stroke Disparities Study (TCSD-S), the authors analyzed outcomes for 1,300 patients using 11 machine learning models. The study integrates information from clinical registries, neighborhood-level social determinants of health (SDOH), and transition-of-care metrics. The key contribution is a composite list of 38 variables, derived from model consensus, that predict adverse post-stroke outcomes.
The authors argue for the superiority of XML over traditional regression methods, highlighting the added value of SDOH in predictive modeling and the potential application in digital health systems. Contribution of the Methodology, Insights, Conclusions: 1) The manuscript extends current clinical studies by integrating socioeconomic, neighborhood, and behavioral characteristics into the machine learning framework to enhance model relevance. 2) The authors clearly describe model selection, performance metrics, and variable importance aggregation (frequency count and weighted score), providing transparency and reproducibility. 3) The application of 11 XML models to identify consensus predictors is a novel and methodologically rigorous approach to multidimensional health outcome modeling that is strengthened by the 10-fold cross-validation and inclusion of multiple performance metrics (C-statistic, log loss, precision, etc.). 4) These findings could inform risk-stratification tools and targeted post-discharge interventions. Additional Clarity Needed Before Publication 1) Outcome Construction: The composite outcome of “90-day readmission or mortality” is not adequately justified. The frequency of each outcome (readmission vs. death) is provided late in the paper (n=22 deaths, n=187 readmissions), suggesting imbalanced outcome events. Provide a more explicit rationale for combining these outcomes or present separate models for each. At minimum, sensitivity analyses using each outcome separately would enhance interpretability. 2) Additional Preprocessing Details: While the authors briefly mention normalization and the exclusion of 73 individuals due to missing data, there is insufficient detail on how missing values were handled or imputed. It would be beneficial if the authors explicitly described missing data patterns, imputation methods (if used), and any variable selection criteria. A flowchart could help illustrate data preprocessing. 
3) Limited Generalizability: The sample is geographically confined to Florida, and 92% of strokes are ischemic. Only ~100 ICH cases are included, limiting broader applicability. Please consider performing a secondary analysis that is restricted to ischemic stroke only or provide stratified results. Discuss generalizability limitations more prominently in the abstract and conclusion. 4) Imbalance: The adverse outcome rate is relatively low (15.8%), yet the authors do not apply formal balancing techniques (e.g., SMOTE, class weights). It would be beneficial if the authors included an additional discussion on how class imbalance may or may not influence metrics such as precision and recall. 5) Explainability: The manuscript emphasizes “explainable” ML but does not attempt to interpret the most influential variables beyond summary tables. I suggest including a short narrative interpretation of how top predictors could inform clinical decision-making. Reviewer #2: The manuscript has improved dramatically from the first submission; I believe it is stronger work both analytically and conceptually. The changes to the text and analysis have improved the credibility of the work produced. However, while the authors have attempted to address the reviewer’s concerns, the revisions provided fall short in several critical areas: The authors emphasize that their goal is to produce a robust list of factors that are the most relevant for guiding post-discharge care decisions for stroke patients. They also claim that their ensemble approach helps mitigate biases that individual classification methods bring to the analysis. This emphasis is repeated throughout the responses as a justification for not including performance metrics that would help the reader understand the real power of their final model.
While in theory these justifications can be valid, given the nature of their dataset (highly imbalanced, skewed population) it is extremely important to understand how these models address these biases and what the statistical limitations are. The suggested analyses (sensitivity analysis, multicollinearity evaluation, model performance metrics) are not difficult to carry out and would give the reader the necessary tools to assess the validity and limitations of the analysis. Sensitivity Analysis: Even though this was not part of my comments, I agree that a sensitivity analysis would be important to understand the relationship of variables in the model. While using multiple modeling approaches helps mitigate algorithm-specific biases, the authors did not perform any sensitivity analysis (e.g., testing the impact of excluding variables, varying key parameters, or assessing model robustness under different conditions), as originally requested. This bears directly on the value of adding SDOH to the analysis if there is minimal gain in performance, as understanding the importance of these variables in the model against the outcome would justify their inclusion. Class Imbalance: The authors acknowledge the imbalance and justify their focus on threshold-independent metrics like AUC and loss functions. However, they do not sufficiently address how the models perform on the minority class. Minority-class-specific metrics (e.g., precision, recall, F1) are dismissed due to instability rather than being transparently reported in aggregate. This leaves important aspects of model performance unassessed. Importantly, their recall values revolve around 0.50, meaning they only catch about half of the true positives; this is a classic symptom of class imbalance: high precision (fewer false positives) but lower recall (many false negatives).
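The minority-class reporting the reviewer asks for (precision, recall, F1, and a confusion matrix) can be illustrated with a toy sketch. The labels below are invented to mimic the pattern the reviewer describes: a classifier with perfect precision but recall of 0.50 on an imbalanced outcome; none of these numbers come from the study.

```python
def confusion(y_true, y_pred):
    """Counts for the positive (minority) class: TP, FP, FN, TN."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

def minority_report(y_true, y_pred):
    """Precision, recall, and F1 for the minority (positive) class."""
    tp, fp, fn, tn = confusion(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1,
            "confusion": {"tp": tp, "fp": fp, "fn": fn, "tn": tn}}

# Imbalanced toy example: 20% positives; the classifier flags only
# clear-cut cases, so every flag is right but half the events are missed.
y_true = [1] * 4 + [0] * 16
y_pred = [1, 1, 0, 0] + [0] * 16
print(minority_report(y_true, y_pred))
```

Here precision is 1.0 while recall is 0.5, exactly the "high precision, lower recall" signature the reviewer points to; reporting these alongside AUC makes minority-class behavior visible.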
Multicollinearity: The response deflects the issue by appealing to model diversity, without directly addressing the multicollinearity concern in models like logistic regression where coefficient interpretation is meaningful. Social Determinants of Health (SDOH): The authors justify the inclusion of SDOH variables based on conceptual and ethical grounds but do not provide empirical evidence that these variables improved model performance. Without analyses demonstrating added value (e.g., feature importance, subgroup effects, or calibration improvement), it is unclear whether the additional complexity and burden of collecting SDOH data are warranted in this context. ********** Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No ********** [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/ . PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org
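The SDOH ablation both reviewers suggest amounts to scoring the same patients with and without the SDOH block and comparing a threshold-independent metric. A hedged sketch with entirely synthetic predicted probabilities (nothing here comes from the study), using a rank-based Mann-Whitney AUC so no external library is needed:

```python
import numpy as np

def auc(y_true, scores):
    """Rank-based (Mann-Whitney) AUC: probability that a random positive
    outscores a random negative, with ties counted as 0.5."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Made-up predicted probabilities standing in for cross-validated outputs
# of a clinical-only model and a clinical + SDOH model on the same cohort.
rng = np.random.default_rng(1)
y = rng.binomial(1, 0.16, size=500)            # ~16% adverse outcomes
signal = y + rng.normal(scale=1.0, size=500)   # shared clinical signal
p_clinical = signal + rng.normal(scale=0.5, size=500)
p_with_sdoh = signal + rng.normal(scale=0.4, size=500)

delta = auc(y, p_with_sdoh) - auc(y, p_clinical)
print(f"AUC gain from adding the SDOH block: {delta:+.3f}")
```

A near-zero delta would support tempering claims about SDOH's predictive contribution, while still leaving room for the conceptual and equity arguments the authors make; a bootstrap over patients would put a confidence interval on the difference.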
|
| Revision 2 |
|
Dear Dr. Veledar, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. ============================== Thank you for your revised manuscript and detailed responses. The paper presents a thoughtful and well-structured modeling approach with clear improvements from the previous version. Most of the reviewer concerns have been adequately addressed, including those related to class imbalance, generalizability, and multicollinearity. We especially appreciate your attention to strengthening the limitations and clarifying analytic decisions. However, two points still require further attention before the manuscript can be accepted:
Please submit your revised manuscript by Sep 04 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org . When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols . We look forward to receiving your revised manuscript. Kind regards, Noah Hammarlund Academic Editor PLOS ONE Journal Requirements: If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. [Note: HTML markup is below. Please do not edit.] [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] 
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
| Revision 3 |
|
Identifying Determinants of Readmission and Death Post-Stroke Using Explainable Machine Learning

PONE-D-25-14703R3

Dear Dr. Veledar,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note that if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up to date by logging into Editorial Manager® and clicking the ‘Update My Information’ link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Noah Hammarlund
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments: |
| Formally Accepted |
|
PONE-D-25-14703R3
PLOS ONE

Dear Dr. Veledar,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing.

If we can help with anything else, please email us at customercare@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Noah Hammarlund
Academic Editor
PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.