Peer Review History
Original Submission: October 14, 2024
PONE-D-24-44181
Evaluating the performance of automated detection systems for long-term monitoring of delphinids in diverse marine soundscapes. PLOS ONE

Dear Dr. White,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Feb 21 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Vitor Hugo Rodrigues Paiva, Ph.D.
Academic Editor
PLOS ONE

Journal requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3.
Thank you for stating the following financial disclosure: “This work was supported by the Natural Environmental Research Council [grant number NE/S007210/1]. The COMPASS project has been supported by the EU’s INTERREG VA Programme, managed by the Special EU Programmes Body. The views and opinions expressed in this document do not necessarily reflect those of the European Commission or the Special EU Programmes Body (SEUPB).” Please state what role the funders took in the study. If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript." If this statement is not correct you must amend it as needed. Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

4. When completing the data availability statement of the submission form, you indicated that you will make your data available on acceptance. We strongly recommend all authors decide on a data sharing plan before acceptance, as the process can be lengthy and hold up publication timelines. Please note that, though access restrictions are acceptable now, your entire data will need to be made freely accessible if your manuscript is accepted for publication. This policy applies to all data except where public deposition would breach compliance with the protocol approved by your research ethics board. If you are unable to adhere to our open data policy, please kindly revise your statement to explain your reasoning and we will seek the editor's input on an exemption. Please be assured that, once you have provided your new statement, the assessment of your exemption will not hold up the peer review process.

5. We note that Figure 1 in your submission contains [map/satellite] images which may be copyrighted.
All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For these reasons, we cannot publish previously copyrighted maps or satellite images created using proprietary data, such as Google software (Google Maps, Street View, and Earth). For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

a. You may seek permission from the original copyright holder of Figure 1 to publish the content specifically under the CC BY 4.0 license. We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text: “I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.” Please upload the completed Content Permission Form or other proof of granted permissions as an “Other” file with your submission. In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

b.
If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

The following resources for replacing copyrighted map figures may be helpful:
- USGS National Map Viewer (public domain): http://viewer.nationalmap.gov/viewer/
- The Gateway to Astronaut Photography of Earth (public domain): http://eol.jsc.nasa.gov/sseop/clickmap/
- Maps at the CIA (public domain): https://www.cia.gov/library/publications/the-world-factbook/index.html and https://www.cia.gov/library/publications/cia-maps-publications/index.html
- NASA Earth Observatory (public domain): http://earthobservatory.nasa.gov/
- Landsat: http://landsat.visibleearth.nasa.gov/
- USGS EROS (Earth Resources Observatory and Science (EROS) Center) (public domain): http://eros.usgs.gov/#
- Natural Earth (public domain): http://www.naturalearthdata.com/

6. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions.
Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1:

1. Could the authors clarify the novelty of this study to highlight its unique contributions?
Since the CNN model used here was originally designed in another work (citation [44]), is the novelty primarily in its application to a new dataset?

2. In sections 3.1-3.4, the authors compare the CNN with Manual analysis and the C-POD, but several issues need attention: in the C-POD, the KERNO classifier detects click trains based on parameters such as intensity, duration, frequency content and inter-click intervals (ICI). To ensure fair comparisons, the Manual Analysis and CNN should also output click train detections specifically.

2.1 CNN: The authors state (Lines 168-169) that the CNN classifies chunks as containing delphinid clicks if the chunks include both click trains and burst pulses, but burst pulses are distinct from click trains. Additionally, the authors state on Line 175 that the CNN model prioritizes whistles over clicks. To align with the aim of detecting click trains, either the CNN model from [44] should be fine-tuned, or a new model should be trained specifically for the exclusive detection of click trains. Without this adjustment, the comparisons presented in Sections 3.1–3.4 lack validity. Furthermore, this adjustment could serve as a significant contribution or highlight the novelty of this study.

2.2 Manual analysis: (Page 8, line 151) The method for identifying click trains in the Manual Analysis needs further clarification. It would be helpful to align the manual analysis with the approach used by the C-POD, considering parameters such as intensity, duration, frequency content, and inter-click intervals (ICI), rather than relying solely on the number of clicks in a click-series. Since the aim of this work is to develop an automated tool for extracting species presence, all types of clicks (e.g. click trains, burst pulses, buzzes, and regular echolocation clicks) can be used. Therefore, the C-POD should be tuned to detect all these signals, not just click trains, and the CNN model from [44] should be adjusted to prioritize click detection over whistles. In summary, any comparisons should be made based on consistent outputs across the different methods to ensure validity and reliability.

3. The identification of vessels (both manual and CNN) is not utilized in this study. To avoid confusion for readers, please consider removing related descriptions.

4. This work lacks a background survey on the application of Deep Learning to marine mammal sound detection or classification. While the authors provide reasons for using a CNN on Page 5, Line 83, could they also include examples of other state-of-the-art Deep Learning applications to further support their choice of CNN?

5. Abstract: please consider merging the two paragraphs into a single cohesive paragraph to provide a concise summary of the work.

6. Page 3, line 44: a period is missing after ‘offline systems’.

7. Page 4, lines 56-57: Does this sentence mean that the C-POD detects individual sound clicks while operating (online), and later, during offline processing, those clicks are grouped together into click trains based on their characteristics? The authors mention that the C-POD performs online detection; however, the sudden introduction of offline processing could confuse readers. It would be helpful to provide a clearer explanation of the processing steps involved in the C-POD detection workflow, including how and when the transition from online detection to offline processing occurs.

8. Page 4, line 60: There are two ‘for which’ at the beginning of this line.

9. Consider adding a sentence at the end of the Introduction to outline the structure of the manuscript. This will help guide readers through the subsequent sections.

10. In Figure 2 or in the text, please provide an example of the labels used for the CNN, similar to the numeric labels presented for Manual Validation and C-POD. This would ensure consistency and help readers better understand the comparison.

11. Page 8, line 157: Please provide a detailed description of the CNN architecture used in this work. Relying on references to another paper for such essential information could hinder readers' understanding of your methodology.

12. The authors have provided the threshold for both whistle and click detections, but have not specified the threshold used to determine the presence of clicks. Please include this information for clarity.

13. Page 8, line 174: Please clarify the meaning of ‘frame’ in the context of this work. Is it synonymous with ‘chunk,’ or does it refer to a different concept?

14. Page 9, lines 182-186: Are there existing statistics in the literature regarding the performance of the C-POD in detecting click trains? If so, please include this information to strengthen the rationale for selecting this method for comparison.

15. For Table 2, please consider adding an additional row to show the overall number of hours for each period. This would provide a clearer summary of the data.

16. Page 10, line 215: The sum of 532, 839 and 566 equals 1937, not 1936. Please double-check this calculation and verify all numerical values in tables for accuracy.

17. Page 10, line 203: The description of FPR in this line is incorrect. FPR refers to the proportion of actual negatives that are incorrectly classified as positives. Please revise this definition for accuracy.

18. Page 11, lines 218-219: Could the authors clarify how the proportions of dph were calculated? Please also double-check that all proportions provided are accurate.

19. Page 15, line 327 and page 16, line 333: In the parentheses on both lines (Table, Figure 5), please specify which sub-figure in Figure 5 is being referred to for the discussion to ensure clarity.

20. Page 16, line 345: the authors mention that the CNN generated a single label for each audio data chunk.
However, on page 8, Line 168, it is stated that the network classifies each chunk into one of four labels: ambient noise, delphinid whistles, delphinid clicks (click trains and burst pulses), or vessel noise. This inconsistency is confusing and needs clarification.

Reviewer #2: The paper "Evaluating the performance of automated detection systems for long-term monitoring of delphinids in diverse marine soundscapes" is well written, interesting and has a solid conclusion. It is a useful paper that will help scientists and conservationists in their choice of a monitoring device. However, I have one serious objection (why say you are only comparing detection software and not the recording device?), and I think the presentation of the manuscript could be improved to enhance the general clarity of the work and help future users of these instruments. I will first mention some general suggestions and then go into detailed comments.

***** General comments:

* On the general purpose (and title): to my judgement, the comparison not only involves two detection techniques, but also two recording devices; it is difficult to separate them, given the integrated nature of the C-POD. CNN analysis of data gathered with a poor recording device would probably have a different efficiency. This 'objection' does not lessen the utility of the work; it is rather a suggestion that the study should clearly mention hardware as well as software in the comparison of the performance.

* Use of metrics: the metrics used are generally well defined (with some exceptions, see detailed comments), but may be too numerous, at the risk of losing the reader in a forest of parameters without always explaining the interest of each. I would generally advise using fewer metrics and sticking to them in all comparisons, justifying which is interesting in each case. This is already partly done in the paper but could maybe be more systematic.

* The comparison is well detailed in the 'results' part (maybe with too much detail?) but very few explanations are given to explain the trends. The high level of false positives in the whistle detection is explained by a deeper insight into the manually revised data. The same should be done for the other main results: why is the C-POD so insensitive? Can it simply be because of the recording device's sensitivity (and not the detection software)? From my experience, individual C-PODs can have different sensitivities, especially if 'old'. You could perhaps check the signal-to-noise ratio of the manual detections in some cases of lost information, for instance. At high frequency, the level of noise is sometimes low and the sensitivity of the recording device can be quite important. I would maybe suggest that 100 m is rather deep for click detection, given the directionality of the clicks and of the sensors at high frequency?

* The quality of the figures should be improved (maybe some of them could be skipped without changing the quality of the paper).

***** Detailed comments:

* Abstract: l.4 "the work compares two methods": as already mentioned in the general comment, I think what is compared is "recording device + method"; the two cannot be separated. Thus I would mention the Soundtrap device as part of the general result.

* Introduction:
l.25 ref (1) does not seem to be exactly to the point?
l.26 "to effectively protect marine these species": not very clear
l.27 ref (3) is about invertebrates and noise, not really habitat use and distribution?
l.34 refs (15) and (16) are only about signature whistles; it could be more general to illustrate your point?
l.35 "signals between 20-100 kHz": what about narrow band high frequency (NBHF) delphinids? Their clicks go up to 135 kHz or more ... Even if you are not concerned in Scotland, the definition should not exclude them.
l.44 "systems Off-line" -> "systems. Off-line"
l.52 "C-POD" presentation: you should definitely explain here that C-PODs are outdated, and mention the F-POD. Since a lot of scientists or NGOs still use C-PODs, it takes nothing away from the interest of your study.
l.94 the CNN has been trained (partly) on data from the same project in one of the places (Tolsta), hasn't it? You should maybe mention this here.

* Methods:
l.113 In my copy, fig 1 has a very low quality.
l.143 In my copy, fig 2 has a very low quality.
l.160 cf my comment on line 94
l.170: I couldn't see where you mentioned what probability threshold you use, and why (because it was the one you used in your previous work?). At the end you mentioned a 0.9 threshold, but I seem to understand it was not your first choice?
l.175 "where whistles were prioritised over clicks due to a shorter temporal length": I don't really understand this sentence.
l.177 "a dph determined for ..." -> "a dph is determined for ..."
l.179 which threshold?
l.193 and supplementary material: OK
l.198 "These calculation use the manual labels as ground truth": I would mention here the remark of l.384 and following, that the "ground truth" is counted only on 1/3rd of the total time! It is rather counter-intuitive, though rather well justified in l.386.
l.199: I would mention (since these are notions that you will use later) that Recall is also called Sensitivity and it is also the TPR (True Positive Rate)
l.203: "the FPR is the proportion of the positive detections that are correct" is incorrect
l.204: FNR should be (better) explained. It is also 1-Recall, which means, I think, that it is not very useful to mention it along with Recall.
l.208: "manually" -> "manual"

* Results:
l.211: Mention (in the 3.1 title?) that here you are comparing click detection only (and not delphinid detection). It is written in l.213, but not very obvious when you read it for the first time.
l.221: table 2: you could maybe mention the % of TP, to make the reading easier (e.g.
N_{TP} = 452 (46%))
l.226: it is interesting to mention the processing time of each method. The CNN has been trained in another study but with the same kind of data and environment; maybe you should mention it? It would be interesting, also, to mention the volume of data for each case. Also, maybe give directly the total time, not "by week of data", so the reader doesn't have to look back at how many weeks you had and do the multiplying for himself ;)
l.231: the C-POD in this case has high precision but low recall (more classically paired with precision than FNR)
l.238: FNR is 1-R so it is not really indispensable to put it in this table, which is already rather large.
l.247-250: a relevant comment
l.263: In my copy, fig 3 has a very low quality.
l.265: why smooth the data?
l.275 "correlation was weakest at Shiant Isles": could you explain why (here or in the discussion part)? It seems very low indeed.
l.291: TPR has not been defined (=R)
l.293: FNR is a bit redundant here, isn't it (=1-TPR = 1-R)?
l.296: In my copy, fig 4 has a very low quality. You could use a more standardized scale (0-1, except maybe for FPR which is usually low in unbalanced populations).
l.311: Could you maybe justify why you chose these metrics, not the same as before? Was precision not considered useful in this case?
l.320: In my copy, fig 5 has a very low quality. I would definitely use the same scales for the different sites.
l.329: Storm may not be of great influence on the performance of the devices then? Did you check in the data to see how 'storm' was translated into noise? White noise or colored; constant, or with interruptions? It's not obvious how a storm is recorded at more than 50 m deep.
l.331-338: It seems to me the comparisons here are a bit confusing; you are using different metrics for the different methods.
l.340: this conclusion seems rather in contradiction with the sentence of l.328-329?
l.349: here, the threshold is mentioned.
l.343: General comment on part 3.5: This part is rather short compared with the previous one and could maybe be a little more detailed (or removed altogether if you want to stick to click detections). The very high numbers of false positives, not only in April (table 6), are rather shocking (they tend to claim that dolphins were present 88% of the time overall, with 100% in November at 2 sites). During the storms, the number is especially high, tripling the detections. This result should be addressed in the first place, I think, with the other comments coming later. Why are none of the metrics that were previously described used in this part?
l.352: In my copy, fig 6 has a very low quality.

* Discussion:
l.377-382: This paragraph is a bit repetitive and not really necessary in my opinion.
l.386: see my comment on l.198
l.391-392: I don't really agree with this comment; I think a lot of the performance of each method can be explained by going into more detail in the data (as is done later, l.440, for the FPR of whistle detection).
l.400 "unseen acoustic data": see my comment on l.94
l.404: The whistle detection improved the correlation coefficient but the very high rate of FP is alarming.
l.416 "when applying ...": I don't get it: was not 0.9 the threshold you chose in the first place (l.349)?
l.427: there are other studies which did not find the same result, I think? Could it be due to this individual C-POD performing poorly? To the set-up, not favorable for this instrument? To the fact that the C-POD performs better for NBHF (narrow band high frequency) species? It could be interesting to include these elements in your discussion, for a more balanced (and thus more useful) conclusion.
l.428: see my comment on l.35 ...
l.428: yes. I think the sensitivity of the instruments, as well as the detection ranges, could have been introduced earlier in the paper; I feel they are constitutive of the comparison.
l.433: ref 31, not 301
l.440: not only in April ...
l.440: this paragraph is interesting; you could maybe show the signal in the supplementary material?
l.452: I am still lost on the threshold that was applied in the first place.
l.461-462: maybe "vocal" activity is not the correct term for clicks?

* Bibliography:
l.542: could you give the editor?
l.604: idem
l.688: idem

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
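The metric identities the reviewers invoke in the comments above (Recall = Sensitivity = TPR, FNR = 1 - Recall, and FPR as the proportion of actual negatives incorrectly flagged as positive) can be sketched from a confusion matrix. The counts below are purely illustrative placeholders, not values from the manuscript:

```python
# Minimal sketch of the detection metrics discussed by the reviewers.
# The counts passed in at the bottom are hypothetical, for illustration only.

def detection_metrics(tp, fp, fn, tn):
    """Return precision, recall, FNR, and FPR from confusion-matrix counts."""
    precision = tp / (tp + fp)  # proportion of positive detections that are correct
    recall = tp / (tp + fn)     # a.k.a. sensitivity / true positive rate (TPR)
    fnr = fn / (fn + tp)        # false negative rate; equals 1 - recall
    fpr = fp / (fp + tn)        # proportion of actual negatives flagged positive
    return precision, recall, fnr, fpr

# Hypothetical example counts (not taken from the paper's tables):
p, r, fnr, fpr = detection_metrics(tp=452, fp=80, fn=114, tn=1291)
```

Because FNR is exactly 1 - Recall, reporting both adds no information, which is the redundancy Reviewer #2 points out for the tables and figures.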
Revision 1
PONE-D-24-44181R1
Evaluating the performance of automated detection systems for long-term monitoring of delphinids in diverse marine soundscapes. PLOS ONE

Dear Dr. White,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by May 09 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Vitor Hugo Rodrigues Paiva, Ph.D.
Academic Editor
PLOS ONE

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1:

1. The quality of most figures in my copy is really low. Are they vector figures? If not, please convert them to vector figures for better quality.

2. Some of the line numbers in your responses do not match the actual line numbers in your revised PDF.
For example: “Line 103, on page 5 now reads: One widely adopted ML approach makes use of Convolutional Neural Networks (CNNs) …”; this sentence is in line 93. For example: “I have now removed the second ‘for which’ from line 80”, but I found the ‘for which’ in line 72. For example: “Line 231: During post-processing a threshold can be…” is in line 223 of your revised PDF.

3. There are still two paragraphs in the abstract.

4. Line 101, ‘This section outlines the methodology use, beginning….’: please change ‘use’ to ‘used’.

5. Adding a sentence about the overall structure of the manuscript at the end of the introduction would give readers a clear picture of your manuscript. You can introduce each section at the beginning of that section, but please provide only a summary rather than listing details of all subsections.

6. For Figure 2, please avoid putting too much text in the caption -- a picture is worth a thousand words. If possible, please add an example of the labels used for the CNN in Figure 2, similar to the numeric labels presented for Manual Validation and C-POD.

7. Let’s revisit the novelty of this manuscript. The purpose of your study is to detect the presence or absence of delphinids, and the objective of this study is to demonstrate the broader capabilities of deep learning and highlight how an off-the-shelf network—without extensive fine-tuning—can still produce meaningful insights into species presence. I believe this work underscores the importance of the study presented in “Automated detection of marine sound sources with a convolutional neural network”, but it still lacks sufficient novelty for a new manuscript. The comparison between the performance of your CNN model and the C-POD is valuable and represents a large amount of data processing and effort.
I truly appreciate your contribution to this field, but comparing an existing CNN model (already published) with a device (C-POD), without a clear understanding of the specific detection methods used in the device, is not sufficient for publication.

Reviewer #2: The authors have carefully considered both reviewers' suggestions. They decided to follow most of the recommendations, if not all, and explained their choices. I don't agree with all of these decisions but consider that their work should be published as is. Figures still have rather poor resolution in my version.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
| Revision 2 |
Evaluating the performance of automated detection systems for long-term monitoring of delphinids in diverse marine soundscapes.

PONE-D-24-44181R2

Dear Dr. White,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager at Editorial Manager® and clicking the ‘Update My Information’ link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Vitor Hugo Rodrigues Paiva, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:
| Formally Accepted |
PONE-D-24-44181R2

PLOS ONE

Dear Dr. White,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Vitor Hugo Rodrigues Paiva
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.