Peer Review History
Original Submission (December 14, 2021)
PONE-D-21-38902
Identifying potential predictors for automated frailty detection. A registered report protocol
PLOS ONE

Dear Dr. de Vries,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please see my specific comments at the end of this message.

Please submit your revised manuscript by May 26 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Edison I.O. Vidal, MD, MPH, PhD
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

3. Please include a PRISMA-P checklist, for protocols, instead of a PRISMA checklist, as per our submission guidelines for registered report protocols of systematic reviews. For more information, see https://journals.plos.org/plosone/s/submission-guidelines#loc-guidelines-for-specific-study-types.
Additional Editor Comments:

I have evaluated this systematic review protocol and the reviewers’ comments, which I found very helpful.

1. As reviewer #1 pointed out, the text is very unclear regarding its aims. On the one hand, the authors argue about the importance of detecting frailty. On the other hand, they describe the aim of their study and adopt eligibility criteria designed to identify predictors of frailty. Detecting frailty is very different from predicting frailty, and this difference has major implications for the design of this review. In the first paragraph of the discussion section, the authors make the following statement: “The described study is designed to identify as many potentially relevant variables as possible, to test as predictors for frailty in a modeling effort. The retrieved potential predictors will be used to build the foundation for the development of an automated frailty detection tool on the basis of routine health care data collected from the EHR in secondary care.” If the final aim of the authors is indeed as they wrote in that last sentence, then it makes little sense to look for predictors of frailty. It is essential that the authors attain more clarity regarding the true aims of their study and that their justifications and methods are aligned with those goals.

2. It is insufficient to conduct searches only in PubMed and CINAHL. At the very least, the authors must also search EMBASE and Web of Science.

3. Lines 96 to 98: “Two databases will be searched for eligible articles: PubMed (MEDLINE) and CINAHL Plus. Eligible articles are written in English or Dutch and should have an available full text to enable complete review.” The availability of a full text is often related to the authors' resources and tenacity. It is inappropriate to exclude a reference because its text was not available online in the databases subscribed to by the authors' institution or in their library.

4. The methodological approach of starting the selection with articles published from 2018 onwards and looking for “data saturation” is not acceptable for a rigorous systematic review.

5. Lines 147 to 150: “First, title selection will be performed by the first author. To reduce the risk of bias, a random sample of 10% of the titles will be blinded and double checked by the second author, and another random sample of 10% of the titles by the third author.” Lines 151 to 153: “If discordance between two authors is ≥20%, a group discussion to fine tune the eligibility criteria will be held, and the first author will then repeat the title selection.” That approach is not acceptable for a high-quality systematic review. At least two authors must screen and select studies independently. The same observation applies to the data extraction process.

6. Lines 178 to 180: “In this study, a variable is a single data point (e.g. question, item, clinical value, test result) and is considered a potential predictor when it is described as a factor possibly related to frailty, irrespective of its described significance in that article.” If what the authors aim at is only the identification of predictors/risk factors for frailty, then they should also be interested in their statistical significance.

7. Lines 186 to 188: “The extraction and selection will be an iterative process in which searching, selecting, and extracting potential predictors is repeated systematically, to ensure adequate data collection.” It is unclear whether such an iterative process makes sense in a review like this one, which apparently will not address qualitative data, although this is not clear in the eligibility criteria.

8. Lines 188 to 189: “No risk of bias assessment will be performed because no effects on endpoints are quantified.” Systematic reviews must perform evaluations of risk of bias and of the overall certainty of evidence. The authors’ argument that “no effects on endpoints are quantified” as a justification is flawed. The only reason why endpoints are not quantified is that the authors intend only to “count the votes” of included studies in terms of positive, negative, and absent associations. Vote counting is one of the worst possible strategies to perform a literature synthesis and is not acceptable for any high-quality systematic review.

9. I found reviewer #3's comment on the question of acute care data extremely relevant, because it is often important to differentiate frailty present before hospital admission from frailty developed because of an acute problem immediately before or in the course of hospital admission.

10. I also agree with reviewer #3 that the authors should read Searle’s 2008 article, which provides guidance on the development of frailty indexes that can also be used with EHR data.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Does the manuscript provide a valid rationale for the proposed study, with clearly identified and justified research questions?

The research question outlined is expected to address a valid academic problem or topic and contribute to the base of knowledge in the field.

Reviewer #1: Partly
Reviewer #2: Partly
Reviewer #3: Yes

**********

2. Is the protocol technically sound and planned in a manner that will lead to a meaningful outcome and allow testing the stated hypotheses?

The manuscript should describe the methods in sufficient detail to prevent undisclosed flexibility in the experimental procedure or analysis pipeline, including sufficient outcome-neutral conditions (e.g. necessary controls, absence of floor or ceiling effects) to test the proposed hypotheses and a statistical power analysis where applicable. As there may be aspects of the methodology and analysis which can only be refined once the work is undertaken, authors should outline potential assumptions and explicitly describe what aspects of the proposed analyses, if any, are exploratory.
Reviewer #1: Partly
Reviewer #2: Partly
Reviewer #3: Yes

**********

3. Is the methodology feasible and described in sufficient detail to allow the work to be replicable?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

4. Have the authors described where all data underlying the findings will be made available when the study is complete?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception, at the time of publication. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above and, if applicable, provide comments about issues authors must address before this protocol can be accepted for publication. You may also include additional comments for the author, including concerns about research or publication ethics. You may also provide optional suggestions and comments to authors that they might find helpful in planning their study.
(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is an interesting protocol for a systematic review, in which the authors aim to identify predictors for frailty that could serve as a foundation for the future development of EHR-based frailty measures in secondary care. As there have been increasing efforts in various countries to develop automatic frailty tools in clinical settings, this work is important and relevant to the field. I also think that this protocol is well written, with clear and transparent descriptions of the literature search process, data management, and timeline. I have a few comments below, which I hope can further help in improving the clarity of the aims and methods of the protocol:

- My main concern is about the meaning of the word “predictor”, which is not entirely clear to me in this context of EHR-based frailty. I think it can mean slightly different things depending on the operational approach to frailty. If I understand correctly, the authors would like to identify both risk factors for frailty and variables for creating the frailty measures (lines 85–87)? For the latter, I am not sure I would call it a predictor if, for example, the health deficit variables used in creating a Rockwood frailty index are embedded as part of the frailty construct. The authors could perhaps give some more examples of frailty measures and predictors in the Introduction to make this clearer.

- Similarly, I would also expect some description in the Methods of how frailty will be defined. Do the authors plan to include all available frailty measures? “Frailty” can mean quite different things (and the predictors may be different) for a frailty phenotype, a deficit-based frailty index, a data-driven frailty score, or the Clinical Frailty Scale.

- Another point I am a bit confused about is whether the authors are searching only for articles on EHR-based frailty measures, or for any frailty measures in general that are used in secondary care. I think this can be stated more clearly in the aim.

- The authors place a lot of emphasis on the potential development of an automated frailty detection tool based on the predictors identified in this study. A bit more information on that would be of interest; for example, are the authors referring to machine learning methods to build automated frailty tools?

- I don’t think it is accurate to say that “none of them [automated tools using EHR data] in secondary care” (line 248). There are at least several EHR-based frailty indices (PMID: 33770164, 34967685, 34650997), as well as the Hospital Frailty Risk Score (PMID: 29706364), developed for frailty detection in secondary care. Please consider rephrasing it.

- The authors may consider stating explicitly in the title that it is a systematic review (and that it concerns frailty in secondary care) to help the readers understand the study design.

Reviewer #2: The authors are embarking on a worthwhile endeavor (aimed at better using ROUTINE healthcare data for frailty), but a few points need more careful consideration.

1) It is unclear exactly how this proposed review differs from the work of Bery et al (ref 8). Yes, the authors want to include more than just frailty instruments, but wouldn't most of the articles yielded by the search terms (Box 1) be frailty scores? If not, then the authors should outline what kind of articles they seek to capture (in addition to frailty scores/measures). I worry that not specifying this will yield a set of articles that is too heterogeneous.

2) More clarification is needed on the endpoint of the search ("data saturation"). The search outlined will possibly yield many thousand articles. What if each article generates new variables not previously seen? (Note that the "accumulation of deficits" frailty approach gives a near-infinite number of possible deficits.) How will they deal with this volume of articles?

3) As the authors may know, there are two major "schools of thought" regarding frailty measurement: the accumulation of deficits approach versus the phenotype approach. In studies such as the present one, the accumulation of deficits approach will be OVER-represented, as these frailty scores can use a wide variety of variables (in fact, any variable is allowed as long as there are enough deficits). The authors should outline whether they will record the approach used for each study their review identifies.

4) Risk of bias: the authors should consider doing a more systematic "risk of bias" assessment (rather than just discrepancy checking amongst the authors). It will be helpful to consider how to grade the quality of articles included in the final review.

Reviewer #3: This manuscript summarizes the protocol for a literature review to identify predictors as part of automated frailty detection using the electronic health record. The authors attest that the results reported have not been published elsewhere. The process described will include a systematic review and will consider multiple possible sources of data from the inpatient hospital sphere for frailty.

Major Concerns:

1. While frailty is defined correctly according to the most prominent definitions, the manuscript authors do not specifically lay out how their predictors will be selected beyond the fact that they have been used elsewhere. This is unfortunate, because the concept of frailty detection using the frailty index methodology is well described and, significantly, shows that it is vitally important that the elements selected are related to aging. I strongly recommend that the authors review Searle et al. 2008 BMC Geriatrics for a "recipe" for a frailty index.

2. The authors note that most frailty assessments and indices are in community care (primary care in the US). They note two particular frailty indices used in the UK (Clegg et al) and the US (Pajewski et al). Both use routinely collected data. However, they do not engage with the question of using acute care data (when the patient is not in a "steady state") for frailty assessment. Does this truly reflect frailty, or does it reflect acuity of illness? Some consideration of this concern should be integrated into the plan.

3. No description is given of how the authors will determine whether or not to include elements in their frailty assessment, how closely related a concept is to frailty, or whether to weight items as in traditional risk prediction models or to weight all as "1" as in frailty indices.

4. The article is clear and is written in standard English. Given the above serious concerns, I would not recommend this for publication until and unless the authors engage with the extant literature on hospital-based frailty indices, the pros/cons of defining frailty using acute hospital (versus primary care) data, and the existing guidelines for building a frailty index.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Jonathan Mak
Reviewer #2: No
Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
PONE-D-21-38902R1
Registered report protocol: a scoping review to identify potential predictors as features for automated prediction of the risk of frailty in secondary care
PLOS ONE

Dear Dr. de Vries,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please see the end of this message for specific editorial comments.

Please submit your revised manuscript by Sep 29 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Edison I.O. Vidal, MD, MPH, PhD
Section Editor
PLOS ONE

Additional Editor Comments:

I have assessed the reviewers’ comments and reviewed the new version of the manuscript myself. Unfortunately, there are still some major methodological concerns that must be addressed before the manuscript can be accepted as a registered protocol.

1. I agree with reviewer #3 when they write that it is still unclear whether the authors wish to identify predictors of being frail or of becoming frail. This critical issue was already present in the first round of reviews, when reviewers asked the authors to clarify whether they aimed to detect frail patients or to detect patients at risk of developing frailty. That lack of clarity is present in the authors’ response to my first comment, when they wrote that they “intend to carry out this review to gather input for building an automated tool to predict the risk of someone being frail, not to develop another frailty detection instrument”. Unfortunately, “a frailty detection instrument” can be defined as a tool to predict whether someone is frail. Hence, the authors' decision to replace “frailty detection tool” with “tool for prediction of the risk of frailty” did not solve the problem raised in the first round of review. The authors now state that “the aim of our study is to perform a scoping review to build the foundation for the development of an automated prediction tool for the risk of frailty on the basis of routine health care data present in EHR in secondary care, by identifying all potentially relevant features to test as potential predictors in a modelling effort based on machine learning.” To “predict the risk of frailty” is too vague, because it can mean both identifying people who are frail at this moment and predicting who will develop frailty within a certain timeframe. Those remain two different goals that require different methods and justifications. For example, at the end of the introduction section the authors argued that their review will differ from two previous reviews on this subject because those reviews “extracted variables from frailty instruments” whereas the authors aim to “identify all potentially relevant predictors for frailty that can be extracted from the hospital EHR, not limited to frailty instruments only”. Taking into account that most Frailty Indices were constructed based on EHR, it is likely that the results of this review will also include many variables already used to construct those indices and that most of their results will derive from those studies. Hence, unless the authors aim to predict the risk of people becoming frail in the future, it is unclear why the results of the two previous reviews cannot be used to feed their future study using machine learning. The authors should also recognize that there are some very simple frailty detection tools available, such as the Study of Osteoporotic Fractures and the PRISMA-7 tools, which can be applied in less than four minutes by any healthcare professional.

2. Reviewers have requested that the authors specify which kind of frailty definition they intend to adopt as a gold standard. For the purposes of the scoping review the authors intend to conduct, I understand that they may include studies that used different definitions of frailty. However, it is essential that they add a description to their methods explaining that, when summarising the literature, they will present the prediction variables according to the definition of frailty used in the original studies they were extracted from. I agree with the reviewers that, in the next phase of their research, when they use artificial intelligence to develop a model to predict the presence of frailty, they will have to define upfront what kind of definition of frailty they will use as the outcome of their prediction models. However, I understand that the authors may choose not to make that decision at the present moment. The downside is that their review will probably be more difficult to perform than if they decided to restrict their inclusion criteria to predictors of a specific type of frailty.

3. While reading the authors' responses, it appears that there are some misconceptions about the differences between systematic reviews and scoping reviews. Scoping reviews usually have a broader focus than systematic reviews. Nevertheless, both systematic reviews and scoping reviews should strive to perform exhaustive searches of the literature. The data saturation approach proposed by the authors is not compatible with the methodological expectations of either kind of review. The argument described by the authors, stating that “Moreover, by not continuing the search after data saturation has been reached, we prevent an enormous amount of work that would not lead to any increase in the yield of our research”, is not compatible with PLOS ONE’s expectations of high methodological standards. I also find the justification not to search the Web of Science database because the authors are conducting a scoping review inappropriate.

4. The “synthesis of results” section does not provide sufficient information regarding how the authors intend to report their results. “The results will be described in an extensive categorized table of all unique potential predictors for frailty, including total count of articles in which a potential predictor was mentioned and other relevant information.” Specifically, what kind of “other relevant information” do the authors anticipate including in those tables? Will they take into account what kind of reference standard was used to define frailty? Will they describe the population of patients that was used in each study?

5. Lines 104 to 105: “Eligible articles are written in English or Dutch and should have an available full text to enable complete review.” That sentence remains inappropriate even after the authors' response. If a reference cannot be retrieved via the Dutch Royal Library or from the authors, there is still no reason to say that it was not eligible for the review. It would be more accurate to simply explain that it was not retrieved, as expected by the recommendations of the PRISMA 2020 statement.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Does the manuscript provide a valid rationale for the proposed study, with clearly identified and justified research questions?

The research question outlined is expected to address a valid academic problem or topic and contribute to the base of knowledge in the field.
Reviewer #1: Yes
Reviewer #2: Partly
Reviewer #3: Partly

**********

2. Is the protocol technically sound and planned in a manner that will lead to a meaningful outcome and allow testing the stated hypotheses?

The manuscript should describe the methods in sufficient detail to prevent undisclosed flexibility in the experimental procedure or analysis pipeline, including sufficient outcome-neutral conditions (e.g. necessary controls, absence of floor or ceiling effects) to test the proposed hypotheses and a statistical power analysis where applicable. As there may be aspects of the methodology and analysis which can only be refined once the work is undertaken, authors should outline potential assumptions and explicitly describe what aspects of the proposed analyses, if any, are exploratory.

Reviewer #1: Yes
Reviewer #2: Partly
Reviewer #3: Partly

**********

3. Is the methodology feasible and described in sufficient detail to allow the work to be replicable?

Reviewer #1: Yes
Reviewer #2: No
Reviewer #3: Yes

**********

4. Have the authors described where all data underlying the findings will be made available when the study is complete?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception, at the time of publication. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: No
Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above and, if applicable, provide comments about issues authors must address before this protocol can be accepted for publication. You may also include additional comments for the author, including concerns about research or publication ethics. You may also provide optional suggestions and comments to authors that they might find helpful in planning their study.

(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I would like to thank the authors for the detailed response and clarifications. The manuscript has improved considerably, and the remaining issues are out of the scope of this paper. One final comment: while I understand that the aim of the study is to identify predictors of frailty in EHR data, it is still not entirely clear to me how “frailty” is defined in this context (e.g., whether it is considered as a deficit-accumulated frailty index, phenotypic frailty, or clinical frailty). Especially when using machine learning methods in the authors’ future work, which outcome/measure will be used as the reference standard to define frailty? This may perhaps be less relevant to the current protocol for the scoping review, but I encourage the authors to think about it and define more clearly what they mean by “frailty” in their work.

Reviewer #2: Based on the changes to the manuscript, I now better understand the rationale for this work, which is to identify "predictors of frailty" which will then be fed into a machine learning algorithm and used on real patient data.
However, I still have some concerns about the proposed methodology.

1) "Data saturation": having had personal experience with extracting frailty predictors from the literature, the volume of articles is incredible and increases each year. This is especially true of articles deriving FIs based on the accumulation of deficits approach. I do not think saturation will be reached by the authors' current definition; there will always be one more FI that has a few more variables. I suggest the authors revise their plan for ending the search.

2) Definition of the types of articles the search will find: the authors should better define the kinds of articles they are hoping their search will find. Right now it is a bit vague. It is not just frailty measures, but quite a heterogeneous group of articles on older people and assessment. In plain English, under "literature search", they should spell this out.

Reviewer #3:

1) What led your team to choose a scoping review over a systematic review? The field of frailty is not new, and it seems that the team is ready to do a systematic review to inform research, rather than a scoping review to describe the state of the field...

2) If you are developing a prediction tool for frailty, rather than a frailty index, then you will need to choose your "gold standard" definition of frailty. How do you intend to do so? Using which definition? How do you separate out the elements that define frailty (deficits in an index, features in the FRAIL scale or Fried's phenotype) vs. upstream predictors of future frailty?

3) Again, in reviewing the responses to questions: are you differentiating between someone BEING frail vs. BECOMING frail at a point in the future? How are you defining whether frailty is "present in steady state"?

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.
If you choose “no”, your identity will remain anonymous, but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Jonathan Mak

Reviewer #2: No

Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
| Revision 2 |
|
Registered report protocol: a scoping review to identify potential predictors as features for developing automated estimation of the probability of being frail in secondary care.

PONE-D-21-38902R2

Dear Dr. de Vries,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double-check that your user information is up to date. If you have any billing-related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Edison I.O. Vidal, MD, MPH, PhD
Section Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments: |
| Formally Accepted |
|
PONE-D-21-38902R2

Registered report protocol: a scoping review to identify potential predictors as features for developing automated estimation of the probability of being frail in secondary care.

Dear Dr. de Vries:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff
on behalf of Professor Edison I.O. Vidal
Section Editor
PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of the full content of peer review and author responses alongside final, published articles. Reviewers remain anonymous unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.