Peer Review History
Original Submission: April 11, 2022
PONE-D-22-10670
Questionnaire-free machine-learning method to predict depressive symptoms among community-dwelling older adults
PLOS ONE

Dear Dr. Chuang,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The reviewers have raised a number of concerns that you should respond to as part of your revision. Please provide further detail as to the added value of the prediction model over currently available clinical tools. The comments from Reviewer 2 are strongly worded; however, it would be beneficial to provide the rationale for the methods you have chosen to employ here and to present your analyses in a fashion that makes these clear and easy to follow. An additional minor comment is that your 'Summary' should be renamed 'Abstract'.

Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, all author-generated code must be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

Please submit your revised manuscript by Sep 28 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Callam Davidson
Editorial Office
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, all author-generated code must be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3. We note that you have stated that you will provide repository information for your data at acceptance.
Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

4. Thank you for stating the following in the Acknowledgments Section of your manuscript:

"This work was supported by the Ministry of Science and Technology (MOST) in Taiwan (grant no. MOST109-2221-E-038-018 and MOST110-2628-E-038-001) and the Higher Education Sprout Project from the Ministry of Education (MOE) in Taiwan (grant no. DP2-110-21121-01-A-13) to Emily Chia-Yu Su. The sponsor had no role in the research design or contents of the manuscript for publication."

We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows:

"The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions.
Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: No

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Positionality: I am a researcher in information science studying the reliability and appropriate use of digital mental health biomarkers.
Most of my work has focused on developing and analyzing machine learning models of major depressive disorder, generalized anxiety disorder, and schizophrenia. I have not studied the mental health of older adults specifically.

Overall comment: The authors present a paper to predict depression symptoms of older adults. They develop a variety of machine learning models to predict a self-reported depression questionnaire, and then test these models in two holdout test sets, one from a similar population as the training dataset, and one from a different population. The authors then provide two analyses to try and explain the relationship between the model predictors and the outcome variables. Overall, I appreciate the authors’ thorough documentation of results for the main prediction analysis, and the focus on older individuals, a population who is deserving of more attention. I do have questions about the motivation, methods and implications that I feel would clarify the validity of the analysis and improve the manuscript. Please see the details below.

Major comments:

1. In the methods it appears that “depressive symptoms” were used as a predictor variable. How was this variable used? Is this current depressive symptoms, or a history of depressive symptoms? If it contains “current depressive symptoms”, then the prediction problem appears trivial, as you’re using a variable in your prediction that represents the outcome you wish to predict. Please clarify and justify its usage.

2. How did the preprocessing methods (e.g. normalization, PCA, multiple imputation by chained equations [MICE]) interact with the 10-fold cross-validation and testing procedures? Data leakage is a common issue in ML, where data from the holdout set is used within preprocessing. See: Sayash Kapoor and Arvind Narayanan. 2022. Leakage and the Reproducibility Crisis in ML-based Science. Retrieved July 15, 2022 from http://arxiv.org/abs/2207.07048. Please confirm data leakage did not occur.
If preprocessing models (e.g. mean/standard deviation, MICE model) were created on the entire dataset, I would recommend redeveloping these preprocessing models on each training dataset, and applying them to each held-out fold in the revision.

3. How did the authors choose the specific ethnic group to use as the “non-local” external validation set? This specific external validation dataset appeared to have a much more even distribution between GDS-15 positive/negative compared to the training set and test set. Please further justify why the non-local test set is then a good cohort to validate generalization (as stated in the Discussion), and how the differences in outcome distribution between local/non-local data affect the interpretation of the results.

4. Many of the variables used as predictors, and identified as important in the prediction models (e.g. education status, loneliness, deteriorating physical health) are already well-researched variables that are known indicators of mental health symptoms. For example, see: Susan A Everson, Siobhan C Maty, John W Lynch, and George A Kaplan. 2002. Epidemiologic evidence for the relation between socioeconomic status and depression, obesity, and diabetes. Journal of Psychosomatic Research 53, 4: 891–895; Evren Erzen and Özkan Çikrikci. 2018. The effect of loneliness on depression: A meta-analysis. International Journal of Social Psychiatry 64, 5: 427–435. Given many of the variables identified are already known risk factors for depression, what value does the prediction model in the paper add? Please discuss.

5. The authors propose using their model as a screening tool for administering the GDS-15. In the external validation, results showed either moderate sensitivity/specificity (SPC-GBM), or high sensitivity and low specificity (DI-VNN).
The authors should add a discussion of how these results impact the usage of each model, where either (1) the model will incorrectly classify individuals experiencing depressive symptoms, or (2) there will be a large amount of over-prediction.

6. I found the ontology section unclear; I am not as familiar with ontology-based methods. Could the authors add a section to their methods describing the ontology analysis, and perhaps make the relationship between the ontologies, features/PCs, and their underlying meaning clearer in Figure 3?

7. The authors included a link to an online system where clinicians can upload information and the model outputs a prediction, I am assuming using the algorithm in the paper. I am a bit worried about the public nature of this tool, given that the reliability and validity of the tool has not been published, and users could take the prediction model results at face value. I encourage the authors to add a disclaimer about the reliability/validity on the online tool at a basic reading level, so users do not take the prediction result at face value and use it for making care decisions. In addition, a publication was cited online, which I am assuming is about the prediction model. The links to the publication were not working, and I could not find the publication online. See citation below: Anonymous, et al. Questionnaire-free method to predict 15-item geriatric depression scale (GDS-15) among community-dwelling elders by machine learning. EBioMedicine (2021). DOI: 00.0000/x00000-000-0000-0 PMID: 00000000. Could the authors clarify if this is an existing publication, and if so, include it as a supplementary file so we can confirm that the reported results in this manuscript are different from this prior publication?
Finally, when adding my information within the online tool and looking at the GDS-15, I wondered if collecting the information used in the prediction models would really be less burdensome than taking the GDS-15 itself, which is a more direct measure of depression symptoms. In addition, I feared that the tool simply shifts the burden of reporting and entry from the patient to clinicians, who would then need to collect this information for multiple patients and run the tool. Given this, please justify why a prediction model using these types of demographic data still has utility.

Minor comments:

8. The authors state that the 15-item GDS is the “most appropriate” version of the scale. What defines “appropriate” in this context? Please clarify in the text.

9. I found the statement “questions asked later in the long term were shown to lead to greater misclassifications” unclear. What do the authors mean? Are they stating that there is a delay in patients with suspected depressive symptoms receiving the questionnaire? Misclassifying what specifically?

10. I found the paragraph of the introduction beginning with “Depression affects 264 million people globally” to cover broader material than the previous paragraph beginning with “Depressive symptoms in older adults”. It might make sense to rearrange these paragraphs to begin with the global burden of depression, then highlight issues with depression questionnaires in older adults. In addition, I believe the summary statistics of depression rates in each country do not add much value to the manuscript. Perhaps the authors could shorten this sentence, or focus on statistics relevant to the specific population studied in the manuscript.

11. The authors state in the Introduction that logistic regression is an insufficient model to develop a triage test for GDS-15 screening, but do not provide reasoning why it is insufficient.
An LR model to predict the GDS-15 - assuming high sensitivity, specificity, and positive predictive value - would be an ideal model to use due to its explainability, simplicity, and robustness. Please justify further why simple models are insufficient for the specific GDS-15 triage test problem.

12. In the revision, per PLOS ONE’s recommendations, please include the Methods section following the Introduction, before the Results section. Thank you. See: https://journals.plos.org/plosone/s/submission-guidelines

13. How/when was the GDS-15 delivered? During the same screening where the predictor data was collected?

14. In the methods, the authors state: “Meanwhile, over-diagnosis causes an increasing frequency of the use of the GDS-15, which may lead to further misclassification.” Is this true? What does “misclassification” mean in this sense? I believe that administering the GDS-15 after the prediction model would simply validate or mis-validate the prediction model results, not lead to further “misclassification”, as the GDS-15 is the “gold standard” in the paper. Maybe the authors could elaborate on other issues that may arise by over-predicting patients experiencing depression symptoms.

15. In the Methods, the authors claim, when referring to the RF and GBM algorithms: “Both algorithms are the most used competition-winning algorithms for predictions using tabular data.” Do the authors have a citation to back up this claim, and subsequently, why do competition-winning algorithms apply to research and this specific prediction problem? Please provide a better justification.

16. What do the authors mean when they state: “for which characteristics do not imply the data but predict the outcome very well”? Please rewrite this statement for clarity.

17. Why will the data only be available for one year after publication? Can the data be accessed now?

18.
The title for the first subsection of the Results states “Most had not obtained a university education, are not separated/divorced, and are religious believers”. Could the authors be more specific about who “Most” refers to?

19. In the Results, the authors state “Meanwhile, of 17 predictors and 37 PCs for the DI-VNN, only 18 of them had an FDR of <0.05 by the differential analysis with the Benjamini-Hochberg correction.” What differential analysis did the authors perform? What were the null and alternative hypotheses? Please state this in the main text.

20. I would appreciate it if the authors stated the AUROC of the best-performing models for the external validation in the Results section of the main text. I realize it is in Table 2.

21. What methods were used to identify the important features from the SPC-GBM and DI-VNN models? I know that it is often difficult to extract important features in deep learning algorithms. Thus, I would be interested in how the authors identified the important features in the algorithm.

22. Why do the authors believe there was such significant overfitting in the SPC-GBM model from the internal validation to the external validation cohorts?

Reviewer #2: Detailed Review: In this paper, the authors present an evaluation of utilizing demographic information as a method for screening for geriatric depression. The authors offer a plethora of machine learning models, some more common than others, and evaluate the resulting models to identify which markers were strong indicators of depression in patients. This method would allow caregivers with access to demographic and general factors the opportunity to assess the need to evaluate the patient for depression symptoms.
Key strength of the paper: The work is important; instruments that could potentially be used to screen for these kinds of conditions without the immediate input of a patient are highly relevant and an important facet of medical technology.

Main weakness of the paper: The methodology of this study is very poorly done, or poorly presented. The authors give numbers associated with the models’ results, but the metrics they choose to present are not very meaningful given the context, and furthermore the authors spend almost no time explaining exactly what types of inputs were used, how hyper-parameters were set, or even how data-splitting occurred. The paper is not written very strongly, and there are many details of the implementation and data preparation that are missing. Why did the authors not utilize a cross-fold validation approach? What does internally/externally validated data mean in the context of splitting the data for training and testing? Neural networks typically require orders of magnitude more data to justify over conventional models such as decision trees or support vector machines; why not use those? All of these models are very sensitive to the hyper-parameters you choose and the types of data you pass in. Which were chosen, and what was the justification behind them? While the results are generally reported, no meaningful discussion of baselines is provided; how much better than random guessing are these models performing? Why is AUROC the chosen metric, and not F1-score or precision and recall? These questions are just a few of the many that are left to the reader to try to discern, or figure out potentially by having access to the data. While this does not mean that the results presented are not valid ones, there is no understanding of how good they really are. (Would a loaded coin-flip perform better?)
Novelty/originality, taking into account the relevance of the work for the PLOS ONE audience: While the domain might be novel, the general approach of using machine learning models in this way is not particularly novel.

Technical/theoretical correctness, taking into account datasets, baselines, experimental design, affective theory; are there enough details provided to be able to reproduce the experiments and understand the contribution? There is a plethora of missing details that are unjustified and under-reported, making the correctness of this work difficult to evaluate and exact reproducibility difficult.

Quality of references: is it a good mix of older and newer papers? Do the authors show a good grasp of the current state of the literature? Do they also cite other papers apart from their own work? The chosen references seem to be reasonable, but the way they are utilized throughout the paper is not very good. Many things the authors claim go unjustified or uncited, or are cited awkwardly. E.g.: “Nevertheless, the screening frequency of a questionnaire should be limited, because questions asked later in the long term were shown to lead to greater misclassifications (Egleston et al., 2011)”. This sentence doesn't add anything and is unclear. What are "questions asked later in the long term", and how exactly do they lead to greater misclassification? What is the screening frequency that is most appropriate? Does it vary by the depressive state of the individual? “A triage test with questionnaire-free variables is needed to reduce the frequency of questionnaire use. Demographic and physical health data from routine visits can be utilized.” This is uncited. Also, what does it mean that it is "needed"? Wouldn’t a redesigned questionnaire, with repeated evaluation in mind, perform better than one that wasn’t meant for it (GDS-15)?
What about the huge population of people who don’t have access to regular medical attention and physical evaluation, or who might not have access to a good health record/history due to the unavailability of infrastructure for this purpose? Most importantly, none of this is later addressed by references or the methods presented in this work, while it might be corrected by instead offering the perspective: “It would allow physicians a more ready and less invasive approach to screen patients for depression risk”.

Clarity of presentation: the English does not need to be flawless, but the text should be understandable. The presentation of this work is incredibly sub-par. Not only is there a tremendous number of sentences that don’t make sense for logical or grammatical reasons, the discussion of the work at hand is insufficient. It is difficult to follow, and the paper repeats itself without clarifying or adding to previously made assertions and conclusions. Overall, this is not a well-written paper.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/.
PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
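Reviewer #2's "loaded coin-flip" question about baselines has a concrete answer that is worth keeping in mind when reading the reported AUROCs: a classifier that ignores the features entirely scores an AUROC of exactly 0.5, the floor any useful model must clear, regardless of how imbalanced the outcome is. A minimal sketch of this check, assuming a scikit-learn workflow with synthetic data standing in for the study cohort (none of the names below come from the manuscript):

```python
# Baseline check suggested by Reviewer #2's "loaded coin-flip" remark:
# a classifier that ignores the features and always predicts the class
# prior scores an AUROC of exactly 0.5. Synthetic data only -- this is
# an illustration, not the authors' cohort or pipeline.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] > 0.8).astype(int)  # imbalanced outcome, like GDS-15 positives

baseline = DummyClassifier(strategy="prior").fit(X, y)
scores = baseline.predict_proba(X)[:, 1]  # same constant for every subject
auroc = roc_auc_score(y, scores)
print(auroc)  # 0.5 regardless of class imbalance
```

This is also why AUROC is a defensible reporting choice for imbalanced screening data: unlike accuracy, its chance level stays at 0.5 whatever the GDS-15 positive rate, though reporting precision/recall alongside it, as the reviewer asks, would still strengthen the paper.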
Revision 1
PONE-D-22-10670R1
Questionnaire-free machine-learning method to predict depressive symptoms among community-dwelling older adults
PLOS ONE

Dear Dr. Chuang,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================
ACADEMIC EDITOR: Based on the reviewers' comments, you are asked to provide a revised version of the manuscript addressing all their concerns.
==============================

Please submit your revised manuscript by Jan 08 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Tarik A. Rashid, PhD
Academic Editor
PLOS ONE

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)
Reviewer #3: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #3: Partly

**********

3.
Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #3: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No
Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you to the authors for responding to my comments. Many of the edits were very helpful, but I have further recommendations that I believe could strengthen the manuscript. I also recommend having the manuscript revised for language clarity prior to the next submission.
I’ve pointed out some specific examples regarding language clarity in my comments, but it would be difficult to list all of them. I recommend taking another general pass, or potentially asking an external scientific writer to help revise the manuscript. Some of these language issues are grammatical (e.g., see my updated comment 2.1.2), but others concern the specific word choices that justify the conclusions based upon the results or extracted from cited references (e.g., see my updated comment 2.2.1). My points are numbered by the numbering used by the authors in their point-by-point responses. Thank you.

2.1.2: I thank the authors for confirming that data leakage did not occur. I think the language could be made clearer. I believe the authors mean by “data partitioning” that within each cross-validation split, all standardization and MICE parameters were computed within the training data, and applied to both the training and validation data. Please confirm, and if true, clean up the language in the methods. “Data partitioned for model development”, for example, is clearer language with the appropriate tense compared to “data partitioning for model development”.

2.1.3: Thank you for clarifying the intention of the external validation set in the paper. I recommend adding an extra sentence in the Limitations section stating that despite the reliability of the model demonstrated in the paper using external validation, one should still not assume generalizability. More testing is needed.

2.1.4: I understand the clarification between prediction models and risk identification. I am also not sure of the relevance of the multivariable-versus-multivariate distinction the authors are making in the paper, and would recommend its removal. On the former point regarding survey fatigue, I am not sure I agree that the authors’ proposed system is less arduous than depression screening.
The authors’ proposed system requires the collection of clinical data (eg, on comorbidities, health conditions, hearing problems), which would require some amount of interaction with the healthcare system. One could argue that accessing annual clinician check-ins is as arduous as self-reported surveys, or a clinical depression assessment. I am also not sure the point on “untruthful or inconsistent responses” in the Introduction is a good argument, given you’re still validating a model against a yes/no self-report of depression symptoms (something that, by the authors’ argument, could itself be an “untruthful” assessment as ground truth), and thus the model would just propagate this response bias on a large scale. This is analogous to how prediction models propagate bias in data broadly; for example, see https://doi.org/10.1073/pnas.2204529119. I would suggest removing this point as well. That being said, I think the point in the introduction on the developed model acting as an EHR indicator to triage patients for a mental health follow-up is sufficient for justifying the research, and the discussion paragraph regarding “Notably, we conducted this study [...]” can be removed. 2.2.1: I recommend changing the language to not state so objectively the accuracy of the GDS-15, but instead state “of which a recent systematic review and meta-analysis found that the 15-item version (GDS-15) is the most accurate”, since the authors are basing this point on a single paper. 2.2.2: I recommend removing the statement “questions asked later in the long-term”, and as suggested in my response to 2.1.4, I do not think response bias is a good motivation for this prediction model. I recommend removing this section. 2.2.4: I understand the authors’ point regarding the preference for machine learning. I recommend stating it explicitly upfront in the introduction, clarifying the word "insufficient" within the same sentence. For example: “[...] 
using logistic regression (LR) may be insufficient because its simplicity does not accurately reflect the complexity of real-world data [citation justifying this conclusion]. Machine learning [...]”. Reviewer #3: I can see that the authors have incorporated most of the changes requested by the reviewers. 1. The novelty of the proposed approach is either missing or not clearly expressed in the manuscript. The novelty should be explained. 2. The authors should explain the contributions made in the paper by adding a sub-section to the first section, along with a sub-section on motivation. 3. The paper requires a flowchart showing the step-by-step approach of the proposed methodology in tackling the problem. 4. There are formatting mistakes; for example, random forest [RF] should be written as random forest (RF). Similar errors should be corrected. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #3: No ********** [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. 
Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
| Revision 2 |
|
Questionnaire-free machine-learning method to predict depressive symptoms among community-dwelling older adults PONE-D-22-10670R2 Dear Dr. Chuang, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double-check that your user information is up to date. If you have any billing-related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Tarik A. Rashid, PhD Academic Editor PLOS ONE |
| Formally Accepted |
|
PONE-D-22-10670R2 Questionnaire-free machine-learning method to predict depressive symptoms among community-dwelling older adults Dear Dr. Chuang: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Tarik A. Rashid Academic Editor PLOS ONE |
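Reviewer #1's comment 2.1.2 above turns on a general point worth making concrete: to avoid data leakage, scaling and imputation parameters must be fitted on each training fold only and then applied, unchanged, to the held-out fold. The following is a minimal sketch of that pattern, not the authors' actual pipeline: it uses synthetic data, and scikit-learn's IterativeImputer stands in for MICE (both are assumptions for illustration).

```python
# Sketch of leakage-free preprocessing inside cross-validation
# (illustrating reviewer comment 2.1.2). Synthetic data and
# IterativeImputer are stand-ins, not the study's pipeline.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[rng.random(X.shape) < 0.1] = np.nan       # inject missing values
y = (rng.random(200) < 0.3).astype(int)     # hypothetical binary outcome

# Because the imputer and scaler live inside the Pipeline, their
# parameters are re-fitted on the training fold of every CV split
# and merely applied to the held-out fold -- no leakage.
pipe = Pipeline([
    ("impute", IterativeImputer(random_state=0)),
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
print(scores.shape)  # (5,) -- one AUC per fold
```

Fitting the imputer or scaler on the full dataset before splitting, by contrast, would let validation-fold statistics influence the training-fold preprocessing, which is exactly the leakage the reviewer asked the authors to rule out.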
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.