Peer Review History
Original Submission: June 11, 2021
PONE-D-21-19199
Vision-based monitoring and measurement of bottlenose dolphins' daily habitat use and kinematics
PLOS ONE

Dear Dr. Gabaldon,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

First, I must apologize to the authors for the delay in my decision. It was very difficult to secure reviewers for this manuscript. The manuscript has now been reviewed by three experts in the field, and all three generally agree that it is technically quite sound. The reviews did provide a large number of comments, but these were mostly aimed at making the text clearer. Please thoroughly address all of these comments when submitting your revised manuscript.

Please submit your revised manuscript by Dec 12 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
William David Halliday, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

2. Please include your full ethics statement in the 'Methods' section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

3. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references.
Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know
Reviewer #2: Yes
Reviewer #3: Yes

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians, and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: No
Reviewer #3: Yes

4. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters.)

Reviewer #1: This is a well-written manuscript that presents primary scientific research in an intelligible fashion, written in standard English. This research uses a dual-camera system to record the study area and deep-learning computer techniques (Convolutional Neural Networks) to detect dolphin presence and movements within the field of view (FOV). The authors used this information to describe habitat use and to calculate speed throughout the environment during different time periods of the day (blocks) and depending on whether the dolphins were in a training session (ITS) or out of a training session (OTS). Habitat use was described with heatmaps and by association with environmental features such as enrichment, windows, and trainers, though no statistical analyses were presented to support these assertions. Speed was reported as velocity (fluking speed, m/s) and was used to describe how the dolphins moved through the environment, condensed into static or dynamic movement. Kolmogorov-Smirnov tests were used to independently test the differences between blocks during OTS or ITS. Independently analysing the results of ITS and OTS is valuable because during ITS the dolphins are being asked to perform specific behaviours and, therefore, differences from their OTS time would be expected.
More clarification around the statistics used in this manuscript is warranted, as the authors have not justified their use of the Kolmogorov-Smirnov test over other available statistics or provided adequate details on the development and use of their heatmap method. This manuscript appears to present original research, and the authors have made their data accessible. The authors demonstrate that the use of deep-learning computer techniques is achievable for video monitoring of animals within a managed facility, and they provide a good discussion of how the speed of analysis is greatly improved in comparison to manual video analyses. These conclusions are supported by the results supplied in this manuscript. Habitat use information was well presented and reasoned; however, caution should be used when interpreting the conclusions, as there was no formal test showing differences in use and environmental features. The authors state that day-scale temporal trends were able to be detected; this seems like a fair statement given the timing of when the video data were taken.

Page 1, Line 8: The authors state that "there is a strong emphasis on behavioral monitoring to inform welfare practices" but do not mention this again. Is there a suggestion that continual video monitoring and the use of CNN methods would be applicable to monitoring behaviours linked to the welfare of these animals in the future? If so, the authors should provide greater detail on how these methods could benefit behavioural analyses.

Page 3, Line 75: The authors provide an average age of the seven bottlenose dolphins in this study (17 +/- 12). Due to the large range in ages, I do not believe this metric is a good descriptor of dolphin ages, and the authors should simply provide the ages.

Page 8, Line 273 onwards: In this section the authors provide details on their heatmap production method. Supplying the software that was used in generating these heatmaps would be beneficial to readers and for reproducibility.
Additionally, have these methods been described previously in other literature, or is this a method developed by the authors? Either way, the authors should state how and why they used these methods when other commonly used methods exist (for example kernel density estimation, which can have a barrier function for use with hard limits to movement, such as the walls of a pool).

Page 10, Line 325 onwards: Justification of the use of Kolmogorov-Smirnov non-parametric tests is warranted in this section. A statement of the assumptions, and of how the collected data meet these assumptions, is important for assessing their appropriate use.

Page 14, Line 486 onwards: The linking of dolphin distribution to environmental features, such as gates and windows, has sound logic; however, no formal quantitative tests were performed. The links made between these features and kinematics support these assumptions, but it is difficult to tell whether this is a correlate or a driver of habitat use. The authors could consider analysing the differences in habitat use using species distribution models.

General comments:

1. The authors focus their conclusions on the enhanced ability these methods provide to efficiently analyse video data. These conclusions are in line with the results; however, it would be of interest to hear how the results and methods presented in this manuscript may be applicable to different systems. With the increased use of drones, video data are more frequently being collected for wild cetaceans. Can CNN computational methods aid in analysing these data, and what are the benefits or potential uses of the methods presented in this manuscript?

2. Through describing a method that enables a more efficient analysis of video monitoring data, the authors have also created a greater understanding of the distribution (and potential environmental reasoning) of the bottlenose dolphins present in this captive environment.

a. Are the differences observed in kinematics a naturally occurring behaviour, a function of the environment, or due to different stimuli occurring at different times of day?

b. If the dolphins are spending the majority of their OTS in areas where more enrichment is occurring (windows, gates, etc.), are there suggestions the authors could make about how these animals are managed?

Expanding on the habitat use and kinematic results is relevant because the title and points made throughout the manuscript suggest that habitat use and kinematics are the main focus and result. If this is true, then more emphasis should be placed on discussing the implications of these findings for these dolphins. However, upon reading the manuscript, the impression is that the authors are describing new methods for analysing video data, and that habitat use and kinematics are potential uses for this tool. If this latter point is true, then the authors should consider changing the title to suggest a methods manuscript, e.g. "Application of CNN in analysing video-based monitoring data for daily habitat use and kinematics of captive bottlenose dolphins".

3. A point of interest: the kinematics and habitat use of the bottlenose dolphins were provided as a group summary rather than for individuals. Do all dolphins consistently group together and follow the same path while in the enclosure, or is there individual variation present? Can CNN analysis techniques detect and follow individuals through time? If the authors could provide comment on this, I believe it would, at the least, be a valuable discussion point as a future direction for this work.

4. The authors have provided an ethics statement that lists the names of two organisations that approved this research. Do they have an ethics/permit number that they could supply and refer to?

Reviewer #2: This study describes the implementation of a camera-based auto-tracking approach to monitor dolphin locomotion in a managed area.
The approach described is sound, and the results suggest that it is effective for monitoring animal behavior, with the possibility to expand or enhance its performance through additional cameras or sensors. This work is consistent with other auto-tracking programs that have been described recently (reviewed in Panadeiro et al., 2021; https://doi.org/10.1038/s41684-021-00811-1), but is focused on the specific application of dolphins in a managed enclosure. Overall, I don't have any major concerns regarding this study. It is clear and well written, and will likely be of interest to scientists working in this area.

One minor comment: On line 101, formal training sessions are defined as ITS, but under Table 2, it states that OTS blocks 2 & 4 are formal presentations. What is the difference between formal training and formal presentations? Perhaps consider using more distinct language to describe these.

Reviewer #3: This article struggles between two possible narratives: (1) a new ML-based method to track dolphins in captivity, with data to show the utility of the method, and (2) a study of dolphin distribution and behavior in captivity that uses an ML-based method to compare behaviors in two different behavioral conditions. I believe that the goal of the authors is the first narrative, and as such, my comments below reflect this.

Abstract
- Lacks a "so what" or big picture. Why should someone want to read this article? What is the new innovation that helps to advance monitoring of dolphins in captivity?

Introduction
- This introduction does not prepare the reader for what is to come in the article and lacks reference to the body of work done with dolphins in captivity and in machine-learning detection.
- The statement "Here we present an automated..." in paragraph 1 comes too soon. You haven't even described any of the previous research in the field. This should be in your last paragraph.
- The paragraph about tags in the wild ("Biomechanics and behavioral studies...") does not add anything to the article and is not relevant, given that the entire study is done in captivity and there is no comparison or attempt to implement this in the wild.
- I did not feel that the last paragraph describing this study is accurate. It did not prepare me for what was to come. Please make sure they match in content and breadth, as well as in the order in which ideas are presented in the methods.
- Suggested structure: paragraph 1 = what we've previously learned from behavior studies of captive animals; paragraph 2 = methods used to study captive animals and their limitations; paragraph 3 = overview of machine-learning detection algorithms showing other applications of RPN and Fast R-CNN (since you are not inventing them in this study) and remaining gaps; paragraph 4 = here's what we are going to do and show in this article.

Methods
- Overall, the methods are not organized in a manner where the reader feels that they build on each other. We switch from behavior-related material to ML-related material. There are too many details on the behavior material given that it is not the focus.
- The first paragraph in M&M feels unnecessary and almost more appropriate for the intro.
- None of the details about the tank feel necessary or relevant, as they are never really revisited. Maybe add these measurements to the top panel of Figure 1, and then there is no need to repeat them in the text.
- Figures 1 and 2 are far too busy and need to be broken into different panels. I would combine the top panel of Fig 1 and Fig 2 as one figure, with the tracking algorithm as its own figure and the probability distributions as their own as well. Label the panels (a), (b), (c); "top", "bottom", etc. didn't really help because there were so many panels in each figure.
- Table 2: Add your speed and yaw metrics to this table with the time intervals.
- What are the metrics of detector performance?
- False positives should be their own section.
- "Tracklet" is first used in a section title, which doesn't prepare the reader well. Use this term in the intro when describing what is going to be done.
- The tracklet figure should be referenced earlier in this section.
- What is the proximity region? How was it defined?
- The heatmap representation is really confusing given the aims of the study to track. Why isn't an individual track shown? This would be a more powerful representation than the left-column panels in Figures 4 and 5.
- Suggested structure: section 1 = camera setup in the tank area + how much data was collected over what periods of time; section 2 = manual labelling and analysis; section 3 = neural net description + training + metrics of detector performance; section 4 = detection processing to combine frames; 5 = position uncertainty; 6 = tracklets + heatmaps to visualize; 7 = drain detector; 8 = the different behavior states and dolphin training sessions.

Results
- Given that this article seems to focus largely on the detector and tracklets, there is only a single paragraph describing the performance of these algorithms, and most of it is about the detector. How was the tracklet algorithm's performance? Was there any human ground-truthing of animal trajectories to confirm the tracklet trajectories?
- Make a table to summarize the performance of the algorithms.
- The comparison between OTS and ITS feels odd given that it is framed as comparing behavioral states and locations of the animals. In ITS, it is really obvious that the dolphins are going to be where the algorithm finds them, in front of the stage area, because that is where the trainer tells them to be. So it really isn't a "distribution" or their space use of the tank. I would frame this more as a confirmation of your detector working, because the detector found the dolphins where they are supposed to be, and maybe it should be included in detector performance.
- Figures 4 and 5: The differences in space use don't really stand out with these plots. It may be best to plot a single track of an animal in OTS and one in ITS to show the space use. There's also a great deal of text in the figure captions that isn't mentioned in the main text.
- Table 3 should be in the supplementary material.
- I don't think yaw is ever actually defined or described.
- Statistical comparison and entropy should not be their own sections but rather woven into the speed + yaw description of the animals.
- Suggested structure: section 1 = detector performance; 2 = tracklet performance; 3 = animal space use during OTS and ITS; 4 = speed + yaw of the animals.

Discussion
- Most of the paragraphs in the discussion read as results paragraphs and should be moved to the results. These paragraphs explain the results much more clearly than the results section does. I really didn't comprehend the results of the article until I read the discussion.
- Section "kinematic diversity": the first paragraph reads like a results paragraph.
- Section "behavioral classification": were these results even mentioned in the results section? It feels very unclear.
- Order the kinematic diversity, habitat use, and behavioral classification sections in the same order as the results (i.e., habitat use, behavioral classification, and kinematic diversity?).

Overall Comments:
- Distribution is not the same as behavior, and it feels like these terms are used interchangeably throughout. When you are talking about the position of the animal, you are referring to its distribution within the tank. As such, the term "habitat use" should also not be used, as a tank is not a habitat; "space use in the tank" would be more appropriate. Behavior refers to the speed, yaw, and dynamic swimming state.
- Statistics confirm that there are real, statistical differences and similarities in the data. They aren't used to "quantify" them or "give a clearer view". Be careful of how you use the KS statistics in the discussion.
Really, they just tell you that the patterns, differences, and similarities that you see are real and not due to lack of sample size.
- Using the word "managed" is confusing, as there are many wild populations that are managed. Use the word "captivity". Make it clear to the reader that this sort of study is only possible in captivity; this setup would not work in the wild.

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous, but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
Computer-vision object tracking for monitoring bottlenose dolphin habitat use and kinematics
PONE-D-21-19199R1

Dear Dr. Gabaldon,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing-related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
William David Halliday, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:
Formally Accepted
PONE-D-21-19199R1
Computer-vision object tracking for monitoring bottlenose dolphin habitat use and kinematics

Dear Dr. Gabaldon:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. William David Halliday
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.