Peer Review History
| Original Submission: April 3, 2020 |
PONE-D-20-09612
Automatic classification of mice vocalizations based on different machine learning methods
PLOS ONE

Dear Dr. Premoli,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The authors should seriously revise the manuscript, explaining the advantages and technical details of the approaches they used, and address the following specific issues. Although the authors made a significant effort in optimizing the hyperparameters of the SVM and RF classifiers, the set of features chosen to represent the vocalizations seems inadequate for the task of spectrogram shape classification. Please revise the introduction, methods, and discussion to address this issue. A detailed description of the ANN architecture should be provided. Did you use any overfitting-prevention techniques?

Major
1. For statistical confidence, a cross-validation technique (e.g., 10-fold) should be applied for model assessment, and the results should be shown together with their confidence intervals.
2. The feature set used as input for the RF, SVM, and non-convolutional ANN does not seem adequate for the desired task. The authors should further justify or extend the feature set and/or use the entire frequency envelope data as the input.
3. The architecture of the ANNs should employ widely used overfitting-prevention techniques: batch normalization/dropout.
4. Discuss the advantages of approaches based on experimenter-derived call categories versus unbiased call classification. Sangiamo, D.T., Warren, M.R. & Neunuebel, J.P. Ultrasonic signals associated with different types of social behavior of mice. Nat Neurosci 23, 411–422 (2020). https://doi.org/10.1038/s41593-020-0584-z
5. Please compare your results to Vogel et al., 2019, which achieved 85% recall. What are the advantages of your approaches? Vogel, A.P., Tsanas, A. & Scattoni, M.L. Quantifying ultrasonic mouse vocalizations using acoustic analysis in a supervised statistical machine learning framework. Sci Rep 9, 8100 (2019). https://doi.org/10.1038/s41598-019-44221-3
6. Please make a statement concerning sharing the training data, code, and finalized classifiers.

Minor
1. The SVM OVA approach requires confidence estimation. Which confidence estimation approach was used?
2. The ANN architecture should be described more thoroughly: convolutional layer kernel sizes, strides, dropout/batch normalization usage, the stochastic gradient descent optimizer used, and the weight initialization strategy.
3. Strain data is not used in the analysis at all. It would be interesting to compare the accuracies of the two strains' shape prediction results taken separately.

Typos
Line 96: 'can not applies'

Please submit your revised manuscript by Aug 01 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Gennady Cymbalyuk, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.
b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. We will update your Data Availability statement on your behalf to reflect the information you provide.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No
Reviewer #2: No

**********

4.
Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors developed several classifiers capable of classifying individual vocalizations according to their spectrographic shapes as described by Scattoni et al., 2008. The work employs a quite comprehensive set of machine learning methods, including Support Vector Machines (SVM), Random Forests (RF), and fully connected and convolutional artificial neural networks (ANNs, CNNs). Two-dimensional spectrograms were used as the input data for the CNN-based classifiers. For the non-convolutional ANN, SVM, and RF classifiers, the vocalizations were represented as sets of 16 automatically extracted features.

Although the authors made a significant effort in optimizing the hyperparameters of the SVM and RF classifiers, the set of features chosen to represent the vocalizations seems inadequate for the task of spectrogram shape classification. It can be hard to differentiate 'chevron' from 'complex' using only overall and marginal max/mean/min values. The confusion matrices provided in the paper confirm that. The description of the ANN architecture provided by the authors lacks details. Only two convolutional layers can be insufficient for good performance in image classification tasks.
The authors do not mention whether they use any overfitting-prevention techniques such as batch normalization, dropout, weight regularization, or data augmentation. The fact that the CNN applied to full spectrograms performs worse than the SVM applied to the ambiguous feature set indicates that more effort could be invested in optimizing the CNN architecture and the training protocol. Note also the performance shown by CNNs in much more complex image classification tasks (CIFAR-100) and the RF performance demonstrated by Vogel et al., 2019, solving a similar problem. According to my assessment, the authors need to address the major and minor points listed below.

Major
1. For statistical confidence, a cross-validation technique (e.g., 10-fold) should be applied for model assessment, and the results should be shown together with their confidence intervals.
2. The feature set used as input for the RF, SVM, and non-convolutional ANN does not seem adequate for the desired task. The authors should extend the feature set and/or use the entire frequency envelope data as the input.
3. The architecture of the ANNs should employ widely used overfitting-prevention techniques: batch normalization/dropout. It would be good to employ data augmentation (e.g., random shifts and crops along the temporal axis) to help the CNN perform better. The batch size could also be increased to help improve the performance and/or convergence speed.

Minor
1. The SVM OVA approach requires confidence estimation. Which confidence estimation approach was used?
2. The ANN architecture should be described more thoroughly: convolutional layer kernel sizes, strides, dropout/batch normalization usage, the stochastic gradient descent optimizer used, and the weight initialization strategy.
3. The report of Vogel et al., 2019, 'Quantifying ultrasonic mouse vocalizations using acoustic analysis in a supervised statistical machine learning framework', solving a similar task, should be mentioned.
4. Strain data is not used in the analysis at all. It would be interesting to compare the accuracies of the two strains' shape prediction results taken separately.

Typos
Line 96: 'can not applies'
Line 124: 'postnatal (PND)'

Reviewer #2: Overview: The authors manually segmented and labeled an impressive number of mouse USVs (48,699) according to categories developed by Scattoni and colleagues (2008). A sampling of 1,199 USVs per category was used to train several supervised classification algorithms: Support Vector Machines (SVM), Random Forests (RF), and Artificial Neural Networks (ANN), in several configurations. The authors conclude that the best results are obtained by Support Vector Machines with the One-vs-All configuration. However, no method was particularly accurate; precision, recall, and accuracy fell between 51.4% and 68.5% for all classifiers. I believe the moderate accuracy of the classifiers described in this report is not due to any deficiencies in the methodology employed by the authors; rather, it is due to the fundamental inaccuracy of human-defined USV classification.

Main Issues: The original creator of these particular USV categories, Maria Luisa Scattoni, has already published a paper using support vector machines (SVM) and random forests (RF) to categorize USVs. They achieved 85% recall. What more does this paper add? Vogel, A.P., Tsanas, A. & Scattoni, M.L. Quantifying ultrasonic mouse vocalizations using acoustic analysis in a supervised statistical machine learning framework. Sci Rep 9, 8100 (2019). https://doi.org/10.1038/s41598-019-44221-3

Experimenter-derived call categories are generally falling out of favor. Substantial new evidence suggests that USVs don’t categorize neatly into discrete groups, including the data in this paper. SVMs/RFs are capable of much higher accuracy when the training data actually come from discrete groups with clear separations. The “short” calls in the present manuscript are a well-defined group and are thus categorized accurately (>90%).
But other calls like “complex” and “composite” are frequently miscategorized. This likely isn’t a fault of the classifier; rather, it captures the uncertainty within the human-created training data. (Discrete vs Continuous Vocalizations) Tim Sainburg, Marvin Thielk, Timothy Q Gentner (2019) Latent space visualization, characterization, and generation of diverse vocal communication signals. bioRxiv 870311; doi: https://doi.org/10.1101/870311

Many methods are now available for unbiased call classification. These categories can then be validated through the behavioral/contextual usage of the calls. Call categories that are used identically during behavior can be collapsed. This method is more sophisticated and ethologically relevant than creating categories based on experimenter visual inspection. Sangiamo, D.T., Warren, M.R. & Neunuebel, J.P. Ultrasonic signals associated with different types of social behavior of mice. Nat Neurosci 23, 411–422 (2020). https://doi.org/10.1038/s41593-020-0584-z

The training data, code, and finalized classifiers are the most valuable elements of this manuscript, but I did not see any indication that they will be distributed to the field. Without this, the manuscript just describes their internally used classifier with moderate accuracy.

Final Thoughts: While there is nothing wrong with the scientific methodology employed in this paper, I would like to see all of the issues above addressed before considering the manuscript for publication.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: Yes: Alexander Ivanenko
Reviewer #2: Yes: Kevin R Coffey

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
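As an illustration of reviewer #1's first major point, k-fold cross-validation with a confidence interval on accuracy can be sketched in a few lines. This is a hedged sketch only: the synthetic 16-feature dataset, the RBF SVM, and the normal-approximation interval are assumptions for demonstration, not the authors' actual data, model, or statistics.

```python
# Sketch of 10-fold cross-validation with a confidence interval on
# accuracy. Dataset and model are placeholders, not the paper's pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for a 16-feature vocalization dataset
X, y = make_classification(n_samples=300, n_features=16, n_informative=8,
                           n_classes=3, random_state=0)

scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=10)  # one score per fold

mean_acc = scores.mean()
# Rough 95% interval from the standard error of the fold scores
half_width = 1.96 * scores.std(ddof=1) / np.sqrt(len(scores))
print(f"accuracy = {mean_acc:.3f} +/- {half_width:.3f}")
```

Reporting the fold-score mean together with such an interval (or simply the fold-score spread) is the kind of "results with confidence intervals" the reviewer asks for; the normal approximation is crude for only ten folds but is a common convention.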
| Revision 1 |
PONE-D-20-09612R1
Automatic classification of mice vocalizations using Machine Learning techniques and Convolutional Neural Networks
PLOS ONE

Dear Dr. Premoli,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please revise the manuscript to address the concerns raised by the reviewer, including changes to the presentation for clarity, providing omitted results, justifying the data analysis, and appropriately citing relevant studies.

1. The results of all classifiers mentioned in the paper should be presented in a table. If the resulting accuracy is especially bad and is not worth considering, then this should be stated explicitly.
2. Please clarify and directly state which approach for data augmentation was used for the best-performing CNN.

Minor

Abstract
1. tested some supervised ... -> tested some other supervised ...
2. extracted by the spectrograms -> extracted from the spectrograms

Introduction
1. "... on such a fixed repertoire of calls typologies". Maybe "on a predefined set of call types"?

USV Recording and analysis
1. "postnatal (PND) 6, 8 ..." -> "postnatal day (PND) 6, 8..."
2. The threshold and hold-time parameter values used in Avisoft for vocalization extraction should be provided here.

Description of the experiments
1. 'the first step consisted into' -> 'the first step consisted of'
2. 'giving this way a statistical soundness' -> 'ensuring the statistical soundness'?

Features extraction section
1. 'identified by the color...' -> 'identified by the brightness...'

Support vector machines
1. 'Maximum confidence strategy was used...' - how was the confidence estimated? SVM provides a binary result out of the box; the actual confidence assessment approaches vary.
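The confidence question raised above can be illustrated with one common convention for one-vs-all (OVA) SVMs: treat each binary classifier's signed distance to its hyperplane as the confidence, and assign the class with the largest margin. The sketch below uses scikit-learn on synthetic data and is an assumption about what "maximum confidence" could mean, not a statement of the authors' actual implementation.

```python
# One-vs-all SVM with a "maximum confidence" decision rule: each binary
# classifier reports a signed margin; the class with the largest margin
# wins. Synthetic data; illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=16, n_informative=8,
                           n_classes=3, random_state=1)

ova = OneVsRestClassifier(LinearSVC(max_iter=10000)).fit(X, y)

margins = ova.decision_function(X[:5])   # shape (5, 3): one margin per class
pred = margins.argmax(axis=1)            # maximum-confidence class
```

An alternative answer to the reviewer's question would be Platt scaling (e.g., `SVC(probability=True)`), which calibrates the raw margins into class probabilities before taking the maximum.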
Please submit your revised manuscript by Dec 21 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols We look forward to receiving your revised manuscript. Kind regards, Gennady Cymbalyuk, Ph.D. Academic Editor PLOS ONE [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: All comments have been addressed Reviewer #2: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? 
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Though the authors addressed the points noted in the previous review, the document has undergone major changes that gave rise to several more major and minor inconsistencies to be addressed. Additionally, for the next review round, I would ask the authors to add line numbers in the document and to provide the source code for the models described in the paper.

Major
1. The Random Forests, MP, and Stacking NN classification performance results are completely omitted from the paper. The SVM performance results are hard to find in the text.
At the same time, both RF and SVM are mentioned in the abstract and multiple times throughout the text. The results of all classifiers mentioned in the paper should be presented in a table. If the resulting accuracy is especially bad and is not worth considering, then this should be stated explicitly. Note that the results are compared with those of the RF classifier in the Vogel et al., 2019 paper, so omitting the RF results looks strange.

2. 'Data-preprocessing' paragraph. 'This operation prevent imbalanced dataset...' - usually, that is not the main reason to apply data augmentation when training a CNN. To deal with imbalanced data one can just use a weighted loss function, as you mention further in the paragraph. Data augmentation is used mostly to prevent overfitting, generating a potentially infinite, though maybe not diverse enough, set of training samples. From your description it is not clear which approach to data augmentation is used for the best-performing CNN: statically generated examples with random crops and shifts, just to balance the source data set, or on-the-fly generation of randomly altered samples during training, with or without loss function weighting? That should be stated explicitly. Additionally, I think it is not necessary to explain that one should not rotate or flip images in this case. 'Data augmentation' is a general term for the generation of pseudo-diverse samples, and it is quite obvious that one should not use the techniques applied for object photograph classification here.

3. 'Proposed classification methods' paragraph: you briefly explain the principles of CNNs, and only after that do you start explaining the trivial theory of the MP, talking about weights, activation functions, and neurons. The 'Multilayer perceptron' sub-section should go first, since it describes the most basic concepts on which the CNN is based as well. Furthermore, I don't think Fig 2 is worth placing in the paper.
That diagram is quite trivial, it occupies half of a page, and it has been known since the 1970s. It can be found in any textbook on ANN basics. Why not use that space to depict the actual architecture of the CNN + ANNs you designed, maybe together with some trained kernel weight visualizations or other data? For example, see the figures in our recently published work, Ivanenko et al., 2020, "Classifying sex and strain from mouse ultrasonic vocalizations using deep learning".

Minor

Abstract
1. tested some supervised ... -> tested some other supervised ...
2. extracted by the spectrograms -> extracted from the spectrograms

Introduction
1. "... on such a fixed repertoire of calls typologies". I think this should be rephrased. Maybe "on a predefined set of call types"?
2. Please cite our paper Ivanenko et al., 2020, "Classifying sex and strain from mouse ultrasonic vocalizations using deep learning", PLOS CB, in the introduction. Though we don't use Scattoni's classical vocalization types there, we also classify vocalizations using CNNs based on their spectrogram shape (ascending, descending, number of jumps, peaks, complexity, etc.), thus implementing the 'top-down' approach you mentioned.
5. 'Simulation results' - I think the word 'simulation' is misleading here.

USV Recording and analysis
1. "postnatal (PND) 6, 8 ..." -> "postnatal day (PND) 6, 8..."
2. The threshold and hold-time parameter values used in Avisoft for vocalization extraction should be provided here.

Description of the experiments
1. 'the first step consisted into' -> 'the first step consisted of'
2. 'giving this way a statistical soundness' -> 'ensuring the statistical soundness'?

Features extraction section
1. 'identified by the color...' -> 'identified by the brightness...'

Support vector machines
1. 'Maximum confidence strategy was used...' - how was the confidence estimated? SVM provides a binary result out of the box; the actual confidence assessment approaches vary.
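The augmentation approaches the reviewer distinguishes above (random crops and shifts along the temporal axis of a spectrogram) can be sketched in a few lines of NumPy. The array shapes and function names below are illustrative assumptions, not the paper's actual preprocessing code.

```python
# Illustrative spectrogram augmentation along the temporal axis only:
# random crops and circular shifts. A rotate/flip, suitable for object
# photographs, would not make sense for time-frequency data.
import numpy as np

def random_time_crop(spec, out_frames, rng):
    """Keep all frequency bins, crop a random window of time frames."""
    start = rng.integers(0, spec.shape[1] - out_frames + 1)
    return spec[:, start:start + out_frames]

def random_time_shift(spec, max_shift, rng):
    """Circularly shift the call left or right along the time axis."""
    shift = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(spec, shift, axis=1)

rng = np.random.default_rng(0)
spec = rng.random((64, 128))                 # 64 freq bins x 128 time frames
augmented = random_time_shift(random_time_crop(spec, 100, rng), 10, rng)
```

Applied on the fly inside the training loop, such transforms answer the reviewer's "on-the-fly generation" variant; applied once before training to balance class counts, they answer the "statically generated" variant.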
Reviewer #2: The authors have made significant efforts to improve their CNN-based classification architecture and have at least discussed and considered the theoretical limitations I posed in review. I agree that the scale of the new experiment and the improved classification accuracy now expand upon, rather than duplicate, the work of the Scattoni Lab. With the addition of a publicly available dataset and classification CNN, this work now makes a tangible contribution to the field, and I recommend it be accepted for publication.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Alexander Ivanenko
Reviewer #2: Yes: Kevin R Coffey

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
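For readers unfamiliar with the batch normalization and dropout techniques requested during review, their training-time behavior can be illustrated with a minimal NumPy forward pass. This is a hand-rolled sketch for intuition only; real models would use the layer implementations provided by a deep learning framework.

```python
# Minimal illustration of training-time batch normalization and
# (inverted) dropout, the two overfitting-prevention techniques
# raised by the reviewers. For intuition only.
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the batch, then scale and shift."""
    return gamma * (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps) + beta

def dropout(x, rate, rng):
    """Zero a random fraction of units and rescale the survivors."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

rng = np.random.default_rng(0)
acts = rng.normal(5.0, 2.0, size=(32, 8))   # a batch of layer activations
normed = batch_norm(acts)                   # per-feature mean ~0, var ~1
thinned = dropout(normed, rate=0.5, rng=rng)
```

Batch normalization stabilizes the activation statistics between layers, while inverted dropout randomly thins the network each step so that no single unit is relied upon, which is why both act as regularizers.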
| Revision 2 |
Automatic classification of mice vocalizations using Machine Learning techniques and Convolutional Neural Networks
PONE-D-20-09612R2

Dear Dr. Premoli,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Please consider the minor edits suggested by one of the reviewers. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Gennady Cymbalyuk, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors carefully addressed all the comments given in the previous review round. I recommend acceptance of the paper. Some minor grammar/stylistic/logic errors could be considered:

Line 391: generating to -> generating of/for
Line 432: the CNN architecture has outperformed the other standard features-based methods -> the CNN architecture has outperformed the standard features-based methods (the CNN is not a feature-based method in the paper ...)
Line 455: are arranged on the ten columns -> are arranged in ten columns
Line 458: The values into the matrices are normalized -> the values in the matrices ...
Lines 463, 502: on the x axis there are the predicted labels: "on x axis there are..." sounds a bit ungrammatical to me; usually something like "the x axis refers to ..." is used to describe the meaning of axes
Line 519: The performance showed that by exploiting the whole time/frequency information of the spectrogram leads to significantly higher performance than considering a subset -> The performance showed that exploiting the whole time/frequency information of the spectrogram leads to significantly higher performance than considering only a subset of numerical features
Line 525: The final set up on an automatic classification method will definitely solve the current main problems in USVs manual classification: long time consuming and operator-dependent. -> The final set up of(?) an automatic classification method will definitely solve the current main problems of manual USV classification: its being a time-consuming process and operator bias.

Reviewer #2: The authors have done a good job responding to the additional reviewers' comments. I recommended accepting the paper in the previous revision.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Aleksandr Ivanenko
Reviewer #2: Yes: Kevin Coffey
Formally Accepted
PONE-D-20-09612R2
Automatic classification of mice vocalizations using Machine Learning techniques and Convolutional Neural Networks

Dear Dr. Premoli:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Gennady Cymbalyuk
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.