Peer Review History
Original Submission (April 2, 2020)
PONE-D-20-09443
Applying machine learning in motor activity time series of depressed bipolar and unipolar patients.
PLOS ONE

Dear Mr Jakobsen,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. As you work through the reviewers' comments, please pay particular attention to the 'Materials and Methods' and 'Results' sections.

We would appreciate receiving your revised manuscript by Jun 29 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that, if applicable, you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:
Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Kyoung-Sae Na, M.D.
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following in the Competing Interests section: 'The authors have declared that no competing interests exist.' We note that one or more of the authors are employed by a commercial company: SINTEF Digital.
Please also include the following statement within your amended Funding Statement: "The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the 'author contributions' section." If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement.

2. Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc. Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials." (as detailed online in our guide for authors: http://journals.plos.org/plosone/s/competing-interests). If this adherence statement is not accurate and there are restrictions on sharing of data and/or materials, please state these.

Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf. Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

Additional Editor Comments (if provided):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No
Reviewer #2: Yes
Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file).
The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors present the results of a classification analysis using machine learning methods to discriminate between patients with depression and healthy controls based on objective motor activity data collected with an actigraph. The results show that objective activity data can be used to discriminate between patients and healthy controls with high accuracy. Using objective sensor data to diagnose and/or monitor symptoms in mental illness is both important and interesting. Several issues require consideration and should be addressed.

MAJOR ISSUES

1. In the machine learning section on line 133 the authors state that the feature vectors were normalised. It is not clear whether the features were normalised per participant or across all participants. In cross-validation, the held-out data should not be considered when normalising the training data, to avoid learning any information from the held-out data.

2. In the machine learning section on lines 150-153 the authors state how the class weights are computed. However, it is not clear whether the class weights are computed on the training set or across the entire dataset. In cross-validation the weights should be computed on the training set only, to avoid learning from the held-out data.

3. The results section on lines 241-242 states "Weighted DNN performed best without class balancing techniques (no oversampling) […]". Using oversampling and class weighting at the same time will double-compensate for the class imbalance in the training set and result in a biased classifier.

4. Tables 2 and 5 are not fully visible in the manuscript.

5. Figures 1 and 2 are too blurred to see axis labels and units.

6. The motivation for presenting a second run of the classification analysis without the false negatives identified in the first run is not clear. Is it not just making the classification problem easier by removing some of the "difficult" cases? The difference in mean activity between TP and FP is already demonstrated after the first run.

7. On lines 466-468 the authors state that a weighted and random-oversampling DNN achieves higher sensitivity and lower specificity. As stated in a previous comment, if the positive class is both weighted higher and oversampled, the model is double-compensating for the minority class.
8. In Tables 2 and 5, it is not clear why the baseline results are reported multiple times and why the results of the baselines are different every time.

MINOR ISSUES

9. In the introduction on line 94 the authors state that insight into neural networks is virtually impossible. While it may not be straightforward, there is a large research effort to improve the interpretability of neural networks.

10. In the introduction on lines 98-99 the authors state that "the random forest classifier is more flexible and less data-sensitive than neural networks." It is not clear what is meant by "more flexible". A neural network with a non-linear activation function and a large hidden layer is a universal function approximator and thus a very flexible model.

11. In the introduction on line 101 the authors state that the random forest algorithm "has been found to predict with approximate similar quality to neural networks." I think this is highly domain-specific. While random forest is a powerful algorithm for many purposes, neural networks have proved superior to most other methods in areas such as computer vision and speech recognition.

12. It is stated on line 141 that there are 291 depressed and 402 non-depressed states, but on lines 150-153 'depressed' is said to be the majority class and 'not depressed' the minority class, which is contradictory.

13. Lines 174-177 describe how the features are represented as an image for the CNN. It is not clear how the authors chose this feature representation or why it is appropriate for the classification task.

14. The authors already mention limitations of comparing the patient and healthy control groups. Employment status could be another significant reason why patients with depression present with lower overall activity.
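To make major issues 1-3 concrete, below is a minimal sketch of the kind of leakage-free cross-validation loop the reviewer describes: the scaler and the class weights are derived from the training fold only, and class imbalance is compensated once. The scikit-learn classifier, the feature matrix and the ten-fold split are illustrative assumptions, not the authors' actual pipeline; only the 291/402 class counts come from the manuscript.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_class_weight

# Placeholder data: 291 depressed (1) and 402 non-depressed (0) day-level
# feature vectors, matching the counts reported in the manuscript, but with
# simulated feature values and an arbitrary feature dimension.
rng = np.random.default_rng(0)
X = rng.normal(size=(693, 7))
y = np.array([1] * 291 + [0] * 402)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]

    # Issue 1: fit the scaler on the training fold only, then apply it to the
    # held-out fold, so no statistics leak from the held-out data.
    scaler = StandardScaler().fit(X_train)
    X_train_s = scaler.transform(X_train)
    X_test_s = scaler.transform(X_test)

    # Issue 2: compute class weights from the training fold only.
    classes = np.unique(y_train)
    weights = compute_class_weight("balanced", classes=classes, y=y_train)
    class_weight = dict(zip(classes, weights))

    # Issue 3: compensate for imbalance once, either with class weights or
    # with oversampling, not both.
    clf = LogisticRegression(class_weight=class_weight, max_iter=1000)
    clf.fit(X_train_s, y_train)
    print(f"held-out accuracy: {clf.score(X_test_s, y_test):.3f}")
```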
Reviewer #2: This manuscript describes a reanalysis of existing data, applying machine-learning techniques with various data-balancing techniques to activity patterns in depressed patients and healthy controls, and presents promising ability to discriminate between depressed patients and healthy controls in motor activity time series.

1. In the Materials and Methods, more detailed information on how the motor activity was recorded might be necessary for the integrity of the manuscript, e.g., how many time points were there in one day? How many days were recorded for each subject? Did the subjects wear the equipment all day long, even during the night?

2. The description of the ML process is very clear, but I am not sure whether the algorithm performance is affected by the depressive episode or not.

3. For the CNN, each day was represented as an image with 24 rows and 60 columns. The rows represent the hour of the day, and the columns represent the minute within each particular hour. I have two considerations: 1) I am not sure whether such a rearrangement of the data introduces unnecessary artifacts, because the data points at the same minute of different hours most likely do not have a dependent relationship. How do you justify this data representation? 2) Missing values were filled with -1. What is the distribution of these missing values across participants, and what influence might they have on the CNN?

4. From the ML classification results tables, the weighted models did not show any benefit; moreover, the most optimistic overall result was attained by the unweighted DNN with the random oversampling technique. So I wonder how you justify weighting the two conditions, given that you emphasized in particular: 'This weighting informs the algorithm to pay more attention to the underrepresented class.'

5. There must be a significant gender difference between the two groups; this might also introduce some data imbalance, as might the difference in recording days. How do you manage this?

6. Last, the manuscript gives as one of its conclusions that the Deep Neural Network performed preeminently in discriminating between conditions and controls. I suggest being more careful and conservative, since the sample is small and the patient group is composed of both bipolar and unipolar patients.
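To make Reviewer #2's third point concrete, here is a minimal sketch of the 24 x 60 day-image representation described in the manuscript (rows are hours of the day, columns are minutes within the hour), with missing minutes filled with -1. The function name, the NumPy implementation and the simulated input are assumptions for illustration only; they are not the authors' code.

```python
import numpy as np

def day_to_image(minute_counts):
    """Reshape one day of minute-level activity counts (length 1440) into a
    24 x 60 array: row = hour of the day, column = minute of the hour.
    Missing minutes (NaN) are filled with -1, as described in the manuscript."""
    day = np.asarray(minute_counts, dtype=float)
    assert day.size == 24 * 60, "expected one value per minute of the day"
    day = np.where(np.isnan(day), -1.0, day)
    return day.reshape(24, 60)

# Illustrative input: a simulated day of activity counts with a half-hour gap
# where the actigraph was not worn.
rng = np.random.default_rng(1)
minutes = rng.poisson(lam=150, size=24 * 60).astype(float)
minutes[300:330] = np.nan
image = day_to_image(minutes)
print(image.shape)  # (24, 60)
```

In this layout, horizontally adjacent cells are consecutive minutes, while vertically adjacent cells are the same minute of consecutive hours, which is exactly the dependence structure the reviewer asks the authors to justify.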
Reviewer #3: First, I should disclose that I did not fully understand all of the methods applied in this study; please read my comments with that in mind. Using activity data obtained through actigraphy, the authors studied how to predict the depressive mood state by applying various machine learning methods to the activity of depressed patients and normal controls. Strictly speaking, this study explores which machine learning method has the best performance. I think highly of how this paper tried to overcome overfitting and sample imbalance by applying various analysis techniques precisely. However, while this may be meaningful in terms of technical methodology, it remains fundamentally questionable what the value of this study's hypothesis is, what the nature of the sample is, and how valuable the research was in drawing conclusions. It may be a good idea to use the activity level to predict the mood state of a patient with a mood disorder, but the study is weakened by the fact that various variables were not considered. In particular, machine learning using only the most basic values, such as the mean and standard deviation of activity, misses too much. Eventually, this approach will not predict a "mood depressed state", but rather an "activity depressed state" that is presumed to be due to depression. The authors must seriously consider how to interpret and overcome this.

#1. The title reads as if the subject of this study was to differentiate motor activity between patients with bipolar and unipolar depression. It would be better to revise the title to state the subject of the article more clearly.

#2. You need to use a universal English word for the Keywords. It would be good to change the word to "actigraphy".

#3. Introduction, lines 78~80: I agree that the time series should reflect biological rhythms and changes in daily life patterns. I think this means more than that the data do not follow a simple linear model; the key may be to approach the given activity data in a way that fits the characteristics of a time series. What strategies did you use in this study to reflect the characteristics of your data, such as biological rhythms?

#4. Materials and methods, line 113~: The description of the sample is insufficient. Were drugs being administered at the time of the study, how long had the mood disorder been present, were participants receiving other non-pharmacological treatments (e.g. IPSRT) that could affect their condition, was there any compensation for participating in this study, and was the study a simple observational study? If so, the inclusion and exclusion criteria of this study should be provided. Basically, if one is depressed, he or she will find it harder to comply with the study, so it is impressive that there is no significant difference in wearing days. It is necessary to calculate the wearing rate separately; in other words, it is necessary to define what wear days mean.

#5. It would not be easy to perform validation with such a small number of samples. The validation process seems to require a more specific and easier-to-understand methodology. In particular, if you repeat the learning and validation several times, the samples will eventually overlap, and the resulting internal dependence may have an impact on the outcomes. You need some explanation of how to overcome this.

#6. Please revise the figures and tables to make them readable; they are difficult to make out.

#7. The authors mention that previous studies found no significant difference between unipolar and bipolar depression, but I still have questions about this. Of course, since unipolar depression can later be rediagnosed as bipolar disorder, it is not easy to make a judgment based on the current diagnosis. However, one should be cautious about gathering heterogeneous groups into one group for analysis. Because the analysis depends on the level of activity, the sampling may have its own bias. It is suggested that unipolar and bipolar patients be analyzed separately. And if there are data on the normal (euthymic) mood of patients with mood disorders, not only on normal control people, it is necessary to compare and analyze those as well. It is important to distinguish the depressed state of a mood disorder from normal people, but it is more important to distinguish the euthymic and depressed mood states within mood disorders.

#8. Statistics, line 188~: How did you deal with the periods that could be considered sleep? Did you consider an activity level of zero throughout the day, regardless of whether it was sleep or not?

#9. Outcome metrics, 197~: Do you mean a depressed condition? It would be better to describe this a little more clearly; calling it a "condition group" is easily confusing. It is recommended to describe it as a depressed mood.

#10. Table 1: Even for the healthy control group, it is necessary to present and compare the same psychometric values. In this study, MADRS scores were presented, but the results do not tell us the mood state of the normal control group.

#11. When sleep-related data are analyzed together, some limitations of activity data can be mitigated. Consideration should be given to analyzing and presenting sleep data.

#12. I recommend that you train on the activity data after deriving more diverse secondary variables from it. The strength of these data is that they form a time series. In the introduction, the characteristics of time series data and the necessity of proper analysis were explained, but at present only a few ML algorithms were applied. The authors should consider the characteristics of the time series as much as possible and analyze, for example, according to the circadian rhythm, the difference between weekdays and weekends, the difference between morning and afternoon, the difference between the most active and least active periods, and the irregularity of activities. A more sophisticated analysis along these lines is needed.
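Purely as an illustration of the secondary variables Reviewer #3 asks for in point #12, the sketch below derives morning-versus-afternoon, weekday-versus-weekend and day-to-day irregularity summaries from minute-level actigraphy using pandas. The column names ('timestamp', 'activity'), the hour windows and the helper name derived_features are hypothetical and not taken from the published dataset or the authors' code.

```python
import pandas as pd

def derived_features(df: pd.DataFrame) -> dict:
    """Compute a few time-structure features for one participant.

    df: minute-level actigraphy with a datetime column 'timestamp' and an
    activity-count column 'activity' (hypothetical names)."""
    df = df.copy()
    hour = df["timestamp"].dt.hour
    weekend = df["timestamp"].dt.dayofweek >= 5
    daily_mean = df.groupby(df["timestamp"].dt.date)["activity"].mean()

    return {
        # circadian contrast: mean activity 06:00-11:59 minus 12:00-17:59
        "morning_minus_afternoon": df.loc[hour.between(6, 11), "activity"].mean()
        - df.loc[hour.between(12, 17), "activity"].mean(),
        # weekday versus weekend contrast
        "weekday_mean": df.loc[~weekend, "activity"].mean(),
        "weekend_mean": df.loc[weekend, "activity"].mean(),
        # crude irregularity proxy: variability of day-to-day mean activity
        "day_to_day_sd": daily_mean.std(),
    }

# Usage with simulated data (one week of minute-level values):
idx = pd.date_range("2024-01-01", periods=7 * 24 * 60, freq="min")
example = pd.DataFrame({"timestamp": idx,
                        "activity": pd.Series(range(len(idx))) % 300})
print(derived_features(example))
```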
**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
Applying machine learning in motor activity time series of depressed bipolar and unipolar patients compared to healthy controls.
PONE-D-20-09443R1

Dear Dr. Jakobsen,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Kyoung-Sae Na, M.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Jonas Busk
Formally Accepted
PONE-D-20-09443R1
Applying machine learning in motor activity time series of depressed bipolar and unipolar patients compared to healthy controls.

Dear Dr. Jakobsen:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff
on behalf of
Dr. Kyoung-Sae Na
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.