Peer Review History
Original Submission: October 24, 2019
PONE-D-19-29749 Does training method matter?: Evidence for the negative impact of aversive-based methods on companion dog welfare. PLOS ONE.

Dear Dr Vieira de Castro,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Thank you for your research on this important topic regarding dog training methods and individual welfare. I agree with the two reviewers that this topic is valuable; however, I also agree with their suggestions for improving the manuscript. This will involve some significant revisions, but I would encourage you to take on these re-writing requirements with the perspective that they are intended to allow your findings to have the strongest impact possible.

Before addressing the specific reviewer comments that you need to consider, however, I first need to state my concern about a lack of clarity around your process for recruiting dog training schools to participate in your study, and whether fully informed and on-going consent was possible for the head trainers/dog school management. While you declare in your Ethics Statement that "All head trainers of dog training schools and owners completed a consent form authorizing the collection and use of data", your statement on lines 110-117 indicates that you used partial disclosure or partial deception when recruiting the dog training schools, since you did not reveal that the aim of the study was to compare the effects of different training methods. I believe that the use of partial disclosure could be justified during the recruitment process to avoid biasing the sample; however, the general principle of ethical conduct for research involving humans is that, following the use of partial disclosure or deception, a debriefing be carried out and the participants be allowed the opportunity to withdraw their consent/involvement/data after it has been collected. Although this particular circumstance is different from those that typically involve humans as participants in experiments, I think a case could be made in which the head trainers/business owners claim that their lack of knowledge about the study's purpose might increase their risk from continued participation in the study. That is, particularly in light of the manner in which you categorize training schools as "aversive" and "reward-based", and the details provided about these schools in Appendix S1, Tables S1a and S1c, it is not infeasible that particular training schools could be identified and suffer economic consequences (e.g., lower enrolment in future classes). I have brought this to the attention of the PLOS ONE editorial staff, and the Staff Editors require you to provide further information about the Ethics process you engaged in (please see "Request from Editorial Staff" copied below).

Apart from the Ethics information we require, here are additional revisions for you to address:

1. Statistical Methods: Reviewer 1 has raised some concerns around your choice of statistical methods. Although your data are non-normally distributed for the most part, which I understand to be the reason you use non-parametric statistics, the reviewer points out, using the example of "panting", that such statistics preclude the ability to evaluate interactions among your variables.
I would ask you to consider whether you could use more sophisticated statistics that would allow you to examine possible interactions and, if not, please further justify your choice. As well, as the reviewer points out, the sheer number of statistical tests run greatly inflates your risk of committing Type I errors. While I noted that you mention using Bonferroni corrections in Table S3b, there appears to be no use of a correction for multiple comparisons elsewhere, e.g., when reporting your significant correlations. There are many other ways to address the problem of multiple testing (False Discovery Rate, etc.); please consider adopting one such method (a brief sketch follows after point 3 below).

2. Training Categories: Both Reviewers point out that there may be issues with your categorization of the methods/schools as "aversive" vs. "reward-based"; please see their specific comments below. While your Table S1c shows the total number of "aversive" techniques used, with the schools identified as "reward-based" showing none of these, you do not differentiate between the schools on the basis of the relative amount of positive reinforcement used by these "aversive" schools. As well, the timing of when such techniques are used during learning could affect stress in the dogs. While you address some of this in your Discussion, I would like you to consider how you might be able to integrate this more fully into your results and the interpretation of the results. Related to the above, please provide some more clarity around the 4 videotaped sessions you used to categorize the schools' training methods. Did this occur prior to the beginning of dog/owner recruitment/video-recording? In your results, when you correlate the number of aversive stimuli used by schools with behaviour, cortisol, etc., are you using the data collected during these sessions? If so, have you considered using data on the number of aversives used during training for your actual subjects? Why or why not? I could argue that you would make a much stronger case if you had measures of the numbers of aversives used per dog in each school to correlate with each dog's behaviours/cortisol. Please comment on this.

3. Cognitive Bias Test: Reviewer 1 suggests that you have possibly over-interpreted the CB test results, and I tend to agree. This test can be influenced by both acute and long-term stressors. As suggested, please add some information to your Discussion around interpretation issues of the test itself. Even if you effectively argue for interpreting the CB test results as indicators of long-term welfare, that welfare is unlikely to be due solely to the training methods used earlier. This is an over-simplification, since you clearly show that your two groups differ in other characteristics that might be expected to influence cognitive bias (e.g., as in Appendix S3: breed, age, age when separated from mother, owner/main trainer gender, and whether more than one dog lives in the home). These factors may impact measures of welfare (especially those you call "long term", i.e., occurring outside the training context). This should be clearly acknowledged and the interpretation of the results modified. As well, in Figure 7, the data point for the Reward Group "N" position appears to be missing.
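As a concrete illustration of the multiple-comparison handling suggested under point 1, here is a minimal Python sketch, assuming the family of p-values from the pairwise tests has been collected into a list (the values and the use of statsmodels are illustrative assumptions, not the authors' actual pipeline):

```python
# Benjamini-Hochberg False Discovery Rate correction over a family of
# p-values; the values in `pvals` are hypothetical, not from the study.
from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.008, 0.020, 0.041, 0.120, 0.490]

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for p_raw, p_corr, sig in zip(pvals, p_adj, reject):
    print(f"raw p = {p_raw:.3f}, FDR-adjusted p = {p_corr:.3f}, significant: {sig}")
```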
4. Cortisol Measures: My own experience with cortisol is that it is a "messy" hormone and interpretation can be a challenge. One of the issues with your averaging of cortisol levels by group is that the time of day at which training classes occurred and saliva samples were taken appears to vary widely among training schools/dogs. Please clarify if you think this had no effect, and why; for example, if the time of day for saliva collection between dogs in each group was about the same, then you could argue it had little effect. I do congratulate you on getting multiple samples for most of your dogs. However, your cortisol analysis is based on what might be considered a relatively small sample size (16 vs. 15 per group), and power is likely an issue as well. Is there any relationship for the individual dogs sampled with the specific number of aversives used in training for those dogs? I suspect that the result reported on lines 543-544 uses the average number of aversives per school? Please comment. As well, Figure 5 is not compelling. Instead, the data on baseline vs. post-training CORT levels for the two groups would be better shown in the Figure.

5. Terminology: Reviewer 2 raises an important point about how you use the terms "positive punishment" and "negative reinforcement". Please consider how best to address this, since it's not likely that you have data to show that the techniques were actually reinforcing or punishing behaviour, based on their consequences. This will likely require re-wording throughout the manuscript and in Table S1b. I will note here that your Ethograms were fantastic!

6. Inter-rater Reliability: I commend you on the efforts you made to standardize behavioural coding across your three observers. However, it appears that, following training (which gave a high Cohen's Kappa coefficient), the vast majority of videos were coded by the only observer (ACVC) who was not blind to the dogs' condition/group. Following training, some percentage (~20% seems to be standard) of the videos coded by ACVC should have been coded by at least one other observer, and inter-rater reliability (e.g., intra-class correlation) for specific behaviours provided. If this was done, please provide these data. If this was not done, there is some concern that implicit bias from the "non-blind" coder could influence the data, and this must be addressed.

I hope you find these reviews and editorial comments to be helpful to you. I look forward to your responses. Please see further requirements from the Editorial Staff below (Additional Editor Comments).

We would appreciate receiving your revised manuscript by Jan 27 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols Please include the following items when submitting your revised manuscript:
Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out. We look forward to receiving your revised manuscript. Kind regards, Carolyn J Walsh, PhD, Academic Editor, PLOS ONE.

Additional Editor Comments:

***Request from the Editorial Staff: For research involving human subjects, PLOS ONE expects prior approval by an institutional review board or equivalent ethics committee, and reporting of details on how informed consent for the research was obtained (https://journals.plos.org/plosone/s/human-subjects-research). We noticed that you obtained ethics approval for your study and consent from the head trainers of dog training schools and dog owners for the collection and use of the data. We also noticed that you did not reveal the full purpose of the study to the dog training schools during the recruiting (lines 113-117). We are uncertain whether your ethics approval covered this partial disclosure of the purposes of the study. Please could you clarify whether your ethics approval covered your research with human participants, in particular, the partial disclosure of the purposes of the study. If ethical approval was not required, please provide a clear statement of this and the reason why, and any relevant regulations under which the work is exempt from the requirement for approval. If the ethics approval was waived by your ethics committee, please provide a copy of this documentation formally confirming that ethical approval was not needed in this case, in the original language and in English translation as supporting information files. This is for internal use only and will not be published. We kindly request that you clarify all the above concerns in your revision of the manuscript. Case Number: 06481122

Journal Requirements: When submitting your revision, we need you to address these additional requirements:
[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions. Comments to the Author.

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly. Reviewer #2: Yes.

**********

2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: No. Reviewer #2: Yes.

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes. Reviewer #2: Yes.

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes. Reviewer #2: Yes.

**********

5. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters.)

Reviewer #1: Interesting study that advances the knowledge of training methods & is worth publishing. Nice to extend observations outside the training ring. That said, there are several overarching concerns: the number of pairwise tests without corrections & the resulting inflated risk of Type I errors. Further, nonparametric tests such as pairwise comparisons & Friedman's don't allow for consideration of interactions. This is especially concerning as the authors discuss effects of training session on outcomes specific to groups (i.e. it appears there are indeed interactions, which makes discussion of main effects inappropriate). At a minimum, alpha corrections for multiple comparisons should be included. More elegant statistical approaches would benefit the manuscript. For example, 'panting' could have been considered using logistic regression with training group, training session, & group*session as independent variables.
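To make the reviewer's suggestion concrete, here is a minimal Python sketch of such a logistic regression with an interaction term, using statsmodels (the data file and column names are hypothetical assumptions, not the authors' data):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per dog per training session,
# with a 0/1 'panting' outcome, a 'group' label, and a 'session' number.
df = pd.read_csv("panting_by_session.csv")  # hypothetical file

# The C(group):C(session) coefficients carry the group-by-session
# interaction alongside the two main effects.
model = smf.logit("panting ~ C(group) * C(session)", data=df).fit()
print(model.summary())
```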
A second concern is the (over)interpretation of the judgement bias paradigm used to infer long-term welfare effects. It would be nice to have the same or similar parameters compared both acutely and long-term (i.e. long-term effects on cortisol & behavioral indicators), although I appreciate there are limitations with that approach. Cognitive bias paradigms are widely used, as the authors state, but they are also widely over-interpreted. If one refers back to the original Mendl paper & the sources cited within, it becomes readily apparent that judgement biases can be influenced by *acute* stressors as well as long-term stress. That means it is not de facto evidence of long-term affective states. The discussion does a nice job considering nuance in relation to other elements of the study, but presents no discussion of difficulties with interpretation of cognitive bias paradigms. This needs to be added to the discussion.

Finally, lumping together all schools that use *any* aversives oversimplifies the approaches used by different training schools. While the reward-based training schools used *no* aversives, there is a strong argument to be made that mainly positive punishment differs from truly balanced approaches that use as much or more positive reinforcement as aversives when training operant behaviors but also employ aversives when proofing already-learned behaviors. The authors do discuss this nicely in lines 673-698, but this should be considered in the interpretation of the findings. The methods, per se, may not cause welfare problems, but if used unpredictably (e.g. as found in the Schalke et al (2007) study) or in the absence of instruction during acquisition phases, they may lead to stress during training.

Reviewer #2: It was a pleasure to review "Does training method matter?: Evidence for the negative impact of aversive-based methods on companion dog welfare." In this study the authors evaluated short- and long-term effects of aversive- and reward-based training methods on companion dog welfare. They tested 92 dogs; 42 were tested in reward-based training facilities and 50 were tested from aversive-based training facilities. I really enjoyed the concept of this paper. I think investigating training methods is a very important topic. Little is known about the impacts of training methods overall, and specifically the difference between aversive- and reward-based methods, even though these methods have been around for decades and promoted and televised by many different groups. However, I have some minor revisions for this manuscript.

General Comments: I would argue that the terms positive reinforcement, negative reinforcement and positive punishment are used incorrectly throughout this paper. In Table S1b the authors define Positive Punishment as an "unpleasant stimulus applied to the dog" and Negative Reinforcement as an "unpleasant stimulus that was applied and stopped". However, in the field of behavior analysis, where these terms originated, reinforcement and punishment are not necessarily unpleasant or pleasant stimuli as viewed by an observer. For a stimulus to be considered punishment it must decrease a behavior, and for it to be considered reinforcement it must increase a behavior. In this study, data weren't collected on what happened to the behavior after these methods were implemented; thus we don't know if punishment or reinforcement was used. I would suggest that the authors change these terms to something other than reinforcement and punishment so that it is not confusing to the reader. The authors stated in the methods that they collected the frequency of the methods and labeled the school aversive- or reward-based according to the frequency that they observed. Those that used more aversive methods were labeled aversive and those that used more rewards were labeled reward-based.
It would be helpful if we could see these frequencies and a list of the specific methods for the aversive and reward schools, so that the reader has a better idea of how these schools were determined to be aversive or reward-based. I find this very important, as this is what the whole paper is based on.

Methods: Line 140, "free of behavioral problems…": this should be clarified to "free of certain behavioral problems", or you could use "specific" instead of "certain". Line 116: the authors state that the first 15 minutes of three training sessions were used. Were these training sessions randomly picked? If so, how many training sessions did the dog have during this study that you could have chosen from?

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No. Reviewer #2: No.

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
PONE-D-19-29749R1 Does training method matter? Evidence for the negative impact of aversive-based methods on companion dog welfare. PLOS ONE.

Dear Dr Vieira de Castro,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

Dear Ana,

Thank you for the revisions to your manuscript and the clarification about the ethical procedures for the study. I think that your statement from your reply regarding 'de-briefing' the instructors/owners at the training schools regarding the true goal of the study should be included in your section on Recruitment (2.2.1).

I truly appreciate your and your co-authors' intentions in conducting this study, and I wish to see it published. However, despite the very high quality data collection, analyses, and writing, I feel that the study suffers from one fundamental issue with which I am having much difficulty. This is, in part, why I have been delaying my decision letter, as I have been trying to assess what the best course of action might be. Specifically, my major issue is with the presentation of the study: its overall treatment is as if the study is an experiment, even though you very clearly acknowledge in the Discussion that it is not. Despite this acknowledgement of the non-experimental design, however (along with an acknowledgement that other factors may have influenced your outcomes), you have evaluated the group differences in cortisol change and cognitive bias using "training school approach" as if it is the ONLY significant difference between the three groups. And, as you very transparently state, it is not. Because dogs/owners were NOT randomly assigned to the training schools, these multiple pre-existing differences between the groups could explain both your cort and CB findings just as well as the Training School differences can, but have not been assessed in any way.

In my opinion, most relevant among these factors are the significant BETWEEN-group differences in: 1) breed composition: in the Group Reward, 74% of the dogs are mixed breed or Retrievers vs. lower percentages of these 'breed groups' in the Aversive and Mixed Groups; 2) age of the dogs: e.g., in Group Reward, 76% are under 1 year old vs. only 39% in Group Aversive; 3) owner gender: in Group Reward, 74% of owners are female vs. 46% and 40% in Groups Aversive and Mixed, respectively; and 4) presence of children in the house: about three-quarters of owners in each of Groups Reward and Mixed do NOT have children vs. about half who do in Group Aversive (which also might simply be a proxy for differences in owner age and lifestyle?).

Although I appreciate that you spend considerable effort evaluating for the effects of these differences WITHIN groups, due to the small sample sizes for the categories within each training type group, there is little statistical power to detect any such effects. Thus, this is a weak test. As well, there are suggestions in the literature that some of these factors MIGHT be viable explanations for your findings (e.g., lower stress in dogs with women handlers; breed differences in temperament, etc.).
So, as it currently stands, the fundamental problem with the manuscript is that you are actually unable to dismiss any of the above competing 'alternative' explanations to 'training school approach' as explanations for the differences in your welfare assessment measures. Indeed, it would be fair to criticize your approach as biased in your view that ONLY training approach differences 'created' the welfare differences. Although this is absolutely fine to have as your hypothesis, because of the study design, it cannot be the only difference that is evaluated. Until you can convincingly show that the welfare differences you found are based on training school approach as the most compelling explanation, your study will fall short of being able to advocate for the position that reward-based training generates the best welfare outcomes for dogs. Personally (as a dog owner and agility/rally enthusiast who uses only R+ methods), I very much want your conclusion to be the case!! As well, my understanding of learning theory tells me that your prediction about the relationship between training approach and welfare is likely to hold true. However, in evaluating this work as a researcher and academic editor, I believe that the confounds among your groups have to be given equal (statistical) consideration before we can suggest that, in this quasi-experimental design, your conclusion is justified. I agree with you that it will be rare for any canine researcher to be able to run a randomized controlled trial with pet/companion dogs, for ethical reasons. So, it is important for us to make the strongest and most unbiased case possible based on study designs such as your current one.

My decision for your manuscript is "Major Revision" based on the above concerns. I do not consider myself a statistician, so I would suggest that you consult with one about this broader issue of how to effectively address the confound issues. Whatever the statistical technique used, I believe you have to convincingly show that the BEST explanation for differences in your welfare measures is NOT breed, age of dogs at testing, owner gender, or other lifestyle aspects, but IS the training school approach to which the dogs have been exposed (one possible framing of that comparison is sketched below). If this turns out not to be the case, for example, if some of these other differences explain the welfare measure differences equally as well as the training school approach, then this is also very important to publish, as it will advance our knowledge. Of course, such findings would involve re-framing the paper significantly.

I hope you find this critique helpful. I believe your work is valuable and needs to be published in a way that leaves little room for reproach or criticism about the study's findings. As well, please review all the comments from Reviewer #1 and Reviewer #3 (new). I believe you have adequately dealt with the part of comment #1 of Reviewer #3 regarding baseline cortisol measures (although, as the reviewer correctly points out, there are no 'baseline' measures prior to each training session, and this should be acknowledged in the paper); however, the other suggestions in comment #1, as well as the other comments (particularly #3), should be considered further. I invite you to revisit your manuscript in light of my comments and those of the Reviewers and respond to the above decision.

Best, Carolyn

==============================
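One way to put the competing explanations above on an equal statistical footing is sketched below, comparing single-predictor models by AIC in Python/statsmodels; the file and column names are hypothetical assumptions, and this is only one of several reasonable framings:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-dog table with a welfare outcome and the competing
# explanatory variables; none of these names come from the manuscript.
df = pd.read_csv("dogs.csv")

candidates = {
    "training approach": "welfare_outcome ~ C(group)",
    "breed group":       "welfare_outcome ~ C(breed_group)",
    "dog age":           "welfare_outcome ~ age",
    "owner gender":      "welfare_outcome ~ C(owner_gender)",
}

# A lower AIC indicates a better fit/complexity trade-off, so the
# candidate explanations can be ranked on the same scale.
for label, formula in candidates.items():
    fit = smf.ols(formula, data=df).fit()
    print(f"{label}: AIC = {fit.aic:.1f}")
```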
We would appreciate receiving your revised manuscript by May 30 2020 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols Please include the following items when submitting your revised manuscript:
Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out. We look forward to receiving your revised manuscript. Kind regards, Carolyn J Walsh, PhD, Academic Editor, PLOS ONE.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions. Comments to the Author.

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation. Reviewer #1: (No Response). Reviewer #3: (No Response).

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly. Reviewer #3: Partly.

**********

3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: No. Reviewer #3: Yes.

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes. Reviewer #3: Yes.

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes. Reviewer #3: Yes.

**********

6. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters.)

Reviewer #1: The authors did a nice job editing the text for language & overhauling the statistics section. I'm puzzled as to why a mixed-model negative binomial regression was not used to control for the random effect of individual; this is a repeated-measures design, so partitioning that error is appropriate.
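For illustration of the reviewer's point, here is a minimal Python sketch of a negative binomial count model that accounts for repeated measures per dog. Cluster-robust standard errors are used here as an approximation to a true per-dog random intercept, since a full negative binomial GLMM typically requires specialised tools; the file and column names are hypothetical assumptions:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per dog per session, with a
# count outcome (e.g., total stress behaviours) and a dog identifier.
df = pd.read_csv("stress_counts.csv")

model = smf.glm("stress_count ~ C(group) + C(session)",
                data=df, family=sm.families.NegativeBinomial())

# Clustering the errors by dog partitions the repeated-measures error,
# approximating the role of a per-dog random effect.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["dog_id"]})
print(result.summary())
```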
Though they have made many changes to the manuscript, the cognitive bias issue has not been addressed in a substantive way. Because so much is made of assessing long-term welfare, I think it demands a more detailed examination of why they chose to substitute an entirely novel welfare indicator for Phase II without including any of the behavioral or physiological indicators used in Phase I. It bears discussing how a possibly transient negative affective state indicates long-term poor welfare.

This is a new comment, but I suggest including means + SEMs in the text of the results section. The graphs are difficult to see, and it seems odd to have to go to the supplemental materials to view results used as dependent variables. A summary table with average occurrence by Group would be helpful. It would be nice if the Discussion considered the uniformly low occurrence of stress indicators per training session... while statistically significant, one wonders how biologically relevant some of these differences are (e.g. 3 versus 4 "move away")? Similarly, I think the authors have captured nice data worthy of further consideration, e.g. what do they tell us about potentially robust indicators of acute stress?

Reviewer #3: I have now read the manuscript entitled "Does training method matter? Evidence for the negative impact of aversive-based methods on companion dog welfare" by Vieira de Castro, Fuchs, Pastur, Morello, de Sousa & Olsson. The study investigates the effect of aversive- and reward-based training on the short- and long-term welfare of dogs. The authors grouped the dogs into three categories according to the prevalence of aversive-based methods in the training, i.e., the Group Aversive, the Group Mixed and the Group Reward. They studied short-term welfare by scoring the stress-related behaviours during the training. They also compared the amount of cortisol in dogs' saliva at home and after training. Finally, they studied long-term welfare using a Judgement Bias paradigm, which assessed the affective state of the dogs outside the training context. Results show that stress-related behaviours were prevalent in both the Group Aversive and the Group Mixed. The cortisol in dogs' saliva increased during the training only in the Group Aversive. Both the Group Aversive and the Group Mixed showed a pessimistic-like judgement bias. The findings are not surprising given the previous literature on this topic, but this study is one of the few combining different objective measures of stress to address the relationship between training methods and dogs' welfare. Given the importance of companion dogs in our daily life, it is therefore important to study the appropriate conditions to train dogs while preserving their welfare.

Major concerns:

1. The authors found that the stress-related behaviours changed across the three groups during the training. But we do not know what the dogs' level of stress was before training. It is possible, therefore, that other factors may have led to stress-related behaviours. Consider for instance the living conditions of the dogs. Owners that choose specific training centres may have different approaches to the dogs when interacting with them in their home context. Moreover, several factors in training methods influence dogs' performance and possibly their level of stress, like a tight schedule of reinforcement/punishment, the characteristics of the to-be-punished behaviour, and several other features. There is plenty of literature on the topic (I just mention one review chapter, but there are several others: Hineline, P. N., & Rosales-Ruiz, J. (2013).
Behavior in relation to aversive events: Punishment and negative reinforcement. In G. J. Madden, W. V. Dube, T. D. Hackenberg, G. P. Hanley, & K. A. Lattal (Eds.), APA handbooks in psychology®. APA handbook of behavior analysis, Vol. 1. Methods and principles (pp. 483-512)), which seems to be missing not only from the Introduction but also from the Authors' hypothesis, as revealed by the absence of a baseline measurement for each participating dog.

2. The non-parametric test used for the Judgment Bias paradigm does not allow testing the Group x Bowl Location interaction. This interaction would show that the latency to approach the different locations of the bowl changed according to the training conditions. Therefore, it provides stronger evidence in support of your hypothesis. This interaction should be explored (a brief sketch follows at the end of this letter). On the topic, see Gygax (2014, doi: 10.1016/j.anbehav.2014.06.013).

3. It is not clear to me whether the dogs had finished the training before doing the Judgment Bias task or had just completed Phase 1 of your experiment. If they had not completed the training, then it is misleading to claim that the Judgment Bias task assessed long-term welfare. In general, I think that discussing the results in terms of welfare within and outside the training context would be more appropriate, because "short-" and "long-term" are ambiguous concepts.

Minor points:

4. Could you please report the effect sizes of your results to facilitate future meta-analyses on this topic?
5. Line 165: "… in order to mitigate familiarization to training methods…";
6. Line 456: "… and a tendency for body turn…";
7. Line 495: "… there was a tendency for…";
8. Line 602: "… methods used also matters,…";
9. Line 609: "This result is most likely a consequence of dogs' familiarization with the training context…";
10. Line 690: "… was a quasi-experimental rather than experimental study…".

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No. Reviewer #3: No.

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.
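A minimal sketch of the interaction test Reviewer #3 requests in point 2, using a linear mixed model with a per-dog random intercept in Python/statsmodels (the file and column names are hypothetical assumptions, and in practice latencies may need transformation or a survival-type model):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per dog per bowl position,
# with 'latency' in seconds and a 'position' code (e.g., P, NP, M).
df = pd.read_csv("judgement_bias.csv")

# The C(group):C(position) coefficients carry the Group x Bowl Location
# interaction; the per-dog random intercept handles repeated measures.
model = smf.mixedlm("latency ~ C(group) * C(position)",
                    data=df, groups=df["dog_id"])
print(model.fit().summary())
```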
Revision 2
PONE-D-19-29749R2 Does training method matter? Evidence for the negative impact of aversive-based methods on companion dog welfare. PLOS ONE.

Dear Dr. Vieira de Castro,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

Dear Ana,

Thank you for all your efforts in revising the paper to date; it has become a much stronger manuscript! I believe that you have made significant improvements in the statistical analyses, in accordance with the advice you received. You defended your decisions regarding the new analyses quite well in your response to the reviewers; however, your approach is not explained quite as well in the actual paper itself. In addition, both I and the two reviewers (one of whom, Reviewer 4, is new) have some suggestions and requirements for clarifying the interpretation of your results in the manuscript, as outlined below. Finally, as indicated by Reviewer 4, it is not clear in the manuscript whether there is evidence of strong inter-rater reliability for the behavioural coding you performed. As described below, this section needs to be addressed more fully. These are the reasons for requiring an additional revision.

My hope is that you find the following constructive criticisms useful. I know (from experience!) that once a paper has been revised twice, it can become frustrating for authors to deal with additional changes that are required prior to acceptance. I believe that there are strengths to your study that make a real (and "real world") contribution to the literature on dog training and welfare. Given the topic, it is likely that your manuscript will be widely read and cited. Therefore, I feel it is of utmost importance that your story as demonstrated by your data be "iron-clad" and solid, and free of any criticism around how the data are analyzed and interpreted, given your study design and methodology.

I. Confounds: I appreciate your comments in the response-to-reviewers letter regarding the fact that the confounders that appeared in your study (i.e., that the groups differed in variables other than those you are testing (training method), such as owner gender, dog breed, etc.) were not originally part of your hypotheses, and that you should not test specifically for them. I agree. It is an unfortunate fact that one difficulty in such "real world" research, in which we cannot randomly assign subjects to groups, is the presence of confounders, which makes our interpretation of our variables of interest tricky. If confounds are present (and they almost always are!), they have to be fully explicated, addressed statistically to the greatest extent possible, and then not forgotten when the data are interpreted. You have done each of these steps to a good degree, but I think a few more additions could improve the manuscript.

1) First, please consider whether in the Introduction you can introduce the notion that this is (what I call) a "real world" study, in which you are not randomly assigning dogs/owners to training schools. Indicate that, because of this, you will be evaluating for the presence of specific differences among the training groups that might influence the overall variables of interest.
I believe this, in fact, is a strength of your paper, as you have taken considerable efforts to collect and analyze the demographic data for owners and dogs! However, while you mention the "Questionnaire" in the methods section and the data appear in the results, your efforts are not mentioned anywhere in the Introduction. It is fine not to have hypotheses about the questionnaire outcomes, of course, but I believe it would be good to highlight the care that you are taking in collecting such data, and your awareness that such factors could impact your group differences on the variables of interest. This point was also raised in comment #1 by Reviewer #3.

2) It would be useful to shorten what you write in response to the reviewers (under Editor Comments, 1), and put this in Section 2.7, to provide a fuller description of the choices you made regarding how you analyzed the "other" group differences (confounders) in relation to your variables of interest. This might also address the first part of comment #1 by Reviewer #3, i.e., whether the training method still affects body turn, shake, yawn, and low state after controlling for potential significant confounders. (However, please see the entire comment and respond appropriately.) It also might address the comment on "Statistical analysis" by Reviewer #4.

3) With respect to this reviewer's comment on not including breed as a confounder, even though it differed significantly among groups, please address this more fully in the paper if you decide not to analyze it further. Currently, there is only one line in the Discussion (lines 711-712) that dismisses the potential breed effect, which I (and likely many other readers) feel might be having an effect.

4) For the results of the "Questionnaire", I agree with Reviewer #3 that the interpretation of some of the differences in dog and owner demographics is difficult without looking in Appendix S3. However, instead of placing the appendix in the main text, I would recommend that for Sections 3.1.1 and 3.1.2, the direction of the significant results be placed in the main text by stating the medians/range (means/sd), or proportions, whatever is most appropriate for the variable (vs. just the statistic and associated probability for each finding).

5) Although you describe the possible effects of confounds as limitations in the Discussion, I think more exploration of these variables as alternate possible explanations for some of the findings (OR why they are NOT as strong explanations) is warranted. It is clear that your hypotheses are focused on training method comparisons, and that should be the main focus of the Discussion. But engaging in some more discussion of how the 'other' group differences which appeared might also impact the behaviours during training, the cognitive bias findings, and the cortisol outcome (no difference) could be worthwhile and generate further research ideas. Also, the comment by Reviewer #4 that owner-dog interactions during a training session might reflect daily non-training interactions should be integrated into the Discussion as well.

II. Cognitive Bias Outcomes: The new analysis in line with Gygax's (2014) recommendations is sound. However, it is not clear to me that your interpretation of its meaning is! As I understand the concept of cognitive bias, it is specifically the difference in behaviour towards the "ambiguous" stimulus (M, in this case) which is interpreted as either a more optimistic or a more pessimistic bias.
In your data, the Group Aversive dogs responded to ALL the food bowl locations more slowly, and the latency to bowl M did not change for this group relative to their response to the other bowls. This is interesting indeed, and might indicate that in the Group Aversive dogs there is more behavioural inhibition or the like. However, is it accurate to call this pessimism? Can you please either address this issue in the manuscript and support your interpretation with some citations that also interpret latency to perform "all" (vs. just ambiguous) tasks as a pessimistic bias, or update how this cognitive bias finding is interpreted in the manuscript? Currently, given the new cognitive bias findings, I believe there is NOT much solid support for any welfare effect outside the context of training, as the cortisol shows no differences in "non-training day" measures. So, supporting your current interpretation of the cognitive bias outcome is critical for your argument to stand. If you cannot sufficiently bolster the cognitive bias interpretation as above, it might be necessary to pull back from claims about "poorer welfare" for dogs exposed to aversive training classes, and instead focus on your strong effects, which are the group differences emerging from the behaviour coded during training sessions, and the post-training cortisol levels.

III. Inter-rater Reliability: There is still a lack of clarity on how strong inter-rater reliability (IRR) for the behavioural measures actually is, as pointed out by Reviewer #4. It is critically important for you to be able to convince readers that there is acceptable/high inter-rater reliability for these behaviours, as the behaviour effects are some of your strongest. Currently, as you report it, there were 3 observers coding videos, only 2 of whom were blind to the group assignment of the dogs. The first observer, who was NOT blind to condition, was responsible for coding the vast majority of the videos. This, in and of itself, is not necessarily a problem, IF you can demonstrate convincingly that there is high IRR for each behaviour coded among the observers. This requires reporting: 1) the total number of videos watched/coded and the percentage of videos coded by each observer, and 2) for each behaviour, a value for IRR (whatever statistic best suits your situation), which can be presented as an appendix (a brief sketch of such per-behaviour reporting follows below). Without this additional information, we are unable to ascertain the extent to which it is possible that unconscious bias in coding by the non-blinded observer might have influenced the outcome. So, please augment this section. If it requires additional coding by observers, this is a worthy investment of time and effort.

IV. Effect Sizes: Inclusion of effect sizes is excellent, as pointed out by reviewers. However, they are lost in the Appendix. Please include them in the main text, with each result reported. The magnitude of the effect sizes for the behaviours is a strength!

V. Appendix vs. In-text: Both reviewers recommend moving some of the information in the Appendices to the main text. It is my preference that the ethograms/behavioural definitions appear in the main text in a table, not in an appendix. However, for the other appendices, I believe it is "author's choice".
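As an illustration of the per-behaviour IRR reporting requested in point III, a minimal Python sketch, assuming two coders' categorical codings of the same subsample of videos are aligned row by row (the file names, column names, and the choice of Cohen's kappa over an intra-class correlation are all assumptions; for continuous ratings an ICC would be the better-suited statistic):

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Hypothetical: interval-by-interval codings of the same ~20% subsample
# of videos by two coders, one column per behaviour, rows aligned.
coder1 = pd.read_csv("coder1_subsample.csv")
coder2 = pd.read_csv("coder2_subsample.csv")

for behaviour in ["panting", "yawn", "body_turn", "move_away"]:
    kappa = cohen_kappa_score(coder1[behaviour], coder2[behaviour])
    print(f"{behaviour}: Cohen's kappa = {kappa:.2f}")
```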
As usual, please respond to each of the Reviewer comments in your letter. There are additional recommendations from Reviewer #4 which are quite useful. I encourage you to keep going with this manuscript, as it contributes knowledge not currently in the literature!

Best, Carolyn

==============================

Please submit your revised manuscript by Aug 14 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols We look forward to receiving your revised manuscript. Kind regards, Carolyn J Walsh, PhD Academic Editor PLOS ONE [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #3: (No Response) Reviewer #4: (No Response) ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #3: (No Response) Reviewer #4: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #3: (No Response) Reviewer #4: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #3: (No Response) Reviewer #4: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #3: (No Response) Reviewer #4: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #3: 1) I appreciate the fact that you have considered several potential confounders. 
Since you have found that the three groups differ along several demographic factors, you should report which group is different in the main text, not only in the SI. It is otherwise hard to interpret the results from lines 472 to 475. Also, and more importantly, you should check whether the training method still affects Body Turn, Body Shake, Yawn and Low State after controlling for the potential significant confounders. I am still not convinced that the measurement of cortisol levels during, and not before, Phase 1 can be considered a reliable baseline. In fact, in Phase 2, you demonstrated that the training method had affected dogs' welfare outside of its immediate context. Why should cortisol levels not be affected in the same way? Also, I still think it is important for a potential reader to understand that several factors involved in training methods might have affected your outcome variables, and thereby your results. You should at least mention this in the Introduction.

2) Thanks for your reply; I believe that you have properly addressed this point.

3) Thank you for clarifying this point. Because some dogs had finished the training while others had not, I wonder if the three groups differed in the number of training sessions they had attended before Phase 2.

4) Thank you for adding the effect sizes.

Reviewer #4: I have read the paper and past reviews with interest and commend the authors on their work. It is indeed difficult to disentangle the many factors potentially affecting dogs' welfare. While previous studies regarding relationships between owners' training style and dog welfare have been mostly correlational, this manuscript has several strengths, as the designation of training methods, as well as the welfare indicators, was based on objective measures and not owner report. The authors used a multimodal approach: behavioural indicators of acute stress, cortisol measures, as well as the judgement bias test. I also appreciate that it takes a lot of effort to recruit and test a sample size of 92 dogs. The revised statistics appear to be well-founded, and the authors appropriately acknowledge the limitations of their study. Clearly there are many influencing factors that can affect a dog's daily welfare. Nonetheless, it would not be unreasonable to assume that owners' interactions during the training session are indicative of their interactions during everyday life, and this could potentially explain the differences in the cognitive bias tests. This concern (according to Reviewer 3 of the last round of reviews) is actually something I would view as an advantage, with the results likely not only having implications for the time the dog spends in dog school, but potentially the everyday interactions with their owners. This should probably be discussed, as different reviewers independently brought it up. Effect sizes (Cohen's d) reported in the appendix were large, as many were >1. I think it would be worth pointing out in the main text that there were large effect sizes, which is even more informative than the p values.

Abstract: One of the study's strengths, in my opinion, is that training method was objectively measured.
Since not everybody reads the whole paper, I would recommend including this information in the abstract, such as was stated in the Introduction: "By performing an objective assessment of training methods (through the direct observation of training sessions) and by using objective measures of welfare (behavioral and physiological data to assess effects during training, and a cognitive bias task to assess effects outside training)".

Line 29: I don't think the authors can claim to have investigated the "entire range of aversive-based techniques (beyond shock-collars)". Rather, it is relevant that the observed intended positive punishments were presumably less aversive than shock collars, and still clear differences between the groups were found. So I would rather frame it such that previous studies used very highly aversive stimuli such as shock collars, which may not be relevant to most dogs' everyday lives, whereas the observed techniques were.

Line 104: "we addressed the question of whether aversive-based methods actually compromise the well-being of companion dogs". Perhaps it would be beneficial to state this in a more neutral way, such as "assessed the effects of reward-based and aversive-based methods on the welfare of companion dogs". Although welfare is unlikely to be influenced by time in the training school alone, it is likely to reflect the everyday interaction of the dogs and owners.

Line 125: the term "posteriorly": I believe you mean "Prior to inclusion in the study", rather than after?

Line 147: include a reference for the statement "In order to be coherent with the standard for classification of operant conditioning procedures as reinforcement or punishment (which is based not on the procedure itself but on its effect on behavior)".

Line 155: I feel it is important how the schools were designated as aversive- or reward-based, so personally I would prefer to have this information in the main manuscript, rather than the appendix.

Line 327: As above, I would prefer to know the details of the behaviour codings used to assess welfare from the paper, rather than the appendix.

Line 337: It is not totally clear to me on the basis of how many videos reliability was assessed at the end, and what percentage of videos was coded by each of the coders.

Statistical analysis: Line 397: Why were confounders tested one at a time and not simply included in the full model? (I realise it might possibly be due to power/sample size if too many variables are included in the model.) (A sketch of such a full model follows at the end of this letter.) While I wouldn't insist on it, in my opinion including breed in the model might be worthwhile. The authors commented that they found doing this not useful, given that mixed breeds are not a homogeneous group. There are, however, some potentially relevant systematic differences also between mixed breeds and purebreds: Turcsán, B., Miklósi, Á., & Kubinyi, E. (2017). Owner perceived differences between mixed-breed and purebred dogs. PloS One, 12(2), e0172720. Riemer, S. (2019). Not a one-way road – severity, progression and prevention of firework fears in dogs. PloS One, 14(9), e0218150.

Line 426: Effect sizes could be reported in the results, rather than the appendix.

Line 538: maybe "require" instead of "take"?
Line 426: Effect sizes could be reported in the results rather than in the appendix.

Line 538: maybe "require" instead of "take"?

(English suggestions) Line 619: maybe "possibly reflects" instead of "is possibly a reflex of".

Line 658: also, one year since the "treatment" is a long time for this to still have an effect.

Discussion: perhaps it could be discussed that the cognitive bias test indicates welfare differences between the three groups, but that this was not reflected in the baseline cortisol measures.

Line 714: However, recent studies show that adoption after 8 weeks is also associated with a higher incidence of behaviour problems than adoption at 8 weeks: Jokinen, O., Appleby, D., Sandbacka-Saxén, S., Appleby, T., & Valros, A. (2017). Homing age influences the prevalence of aggressive and avoidance-related behaviour in adult dogs. Applied Animal Behaviour Science. Puurunen, J., Hakanen, E., Salonen, M. K., Mikkola, S., Sulkama, S., Araujo, C., & Lohi, H. (2020). Inadequate socialisation, inactivity, and urban living environment are associated with social fearfulness in pet dogs. Scientific Reports, 10(1), 1-10.

Appendix 1: I would suggest writing "presumably unpleasant"/"presumably pleasant" stimulus rather than having "unpleasant" or "pleasant" in parentheses. I was wondering how often petting the dog was observed compared to feeding (as being petted might not necessarily be perceived as pleasant in a training context, even if it is meant as a reward by the human). I would appreciate a full list of all behaviours included in the definitions of "pleasant" and "unpleasant", and perhaps their frequencies. Perhaps the current Appendix 1 could go into the main text, and the frequencies of the different types of pleasant and unpleasant stimuli into the appendix.

Appendix 2: I think the definition of "move away" should read "dog takes", not "dog gives". The visible lines seem to be slightly mixed up for the vocalisations. Paw lift: "for a brief or a more prolonged time" is very unspecific.

Figs 3 and 4 differ in that Fig 3 has lines at the x-axis and between the labels, while Fig 4 does not have them in the same position.

General: I found some double spaces in the text, which can be located with the search-and-replace function.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #3: No

Reviewer #4: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 3
PONE-D-19-29749R3 Does training method matter? Evidence for the negative impact of aversive-based methods on companion dog welfare PLOS ONE

Dear Dr. Vieira de Castro,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

Dear Ana - I completely agree with the reviewer that this paper is important to the growing literature around ethical training, and I also wish not to create further delays in having it published. However, I also completely agree with the reviewer that you haven't sufficiently addressed the criticisms around inter-rater reliability (IRR). Although the methods you used to evaluate IRR are quite transparent, I believe that they are barely sufficient, and likely insufficient, to convince skeptics that there has been no bias in the coding of behaviours (as the researcher who coded the majority of behaviours was not blind to the training condition of the dogs). This can be fairly easily rectified, with the suggestions made in the last round of revisions and those made currently by Reviewer #4. As an author who has also been asked by reviewers/editors to re-evaluate IRR measures and report them in more detail (i.e., by behaviour), I understand that although this is a straightforward and relatively easy exercise, it is also frustrating and involves coordinating new work by at least one other coder. However, please reconsider doing so to strengthen your IRR reporting in accordance with these suggestions, as it will close a 'window of vulnerability' regarding readers' confidence in your outcome, which, in some circles, might be controversial!

As well, please action and/or respond to the other suggestions of Reviewer #4.

Finally, one of the prior reviewers suggested that I ask you to please take a look at the various affiliations given to the authors on the title page; are these affiliation histories, or are the authors' affiliations current (e.g., cross-appointments)? I recommend limiting affiliations to those held when the research was conducted, along with a "current" address if the affiliation has changed. For reference, here is the instruction from the PLOS ONE website (with which I am sure you are familiar): Each author on the list must have an affiliation. The affiliation includes department, university, or organizational affiliation and its location, including city, state/province (if applicable), and country. Authors have the option to include a current address in addition to the address of their affiliation at the time of the study. The current address should be listed in the byline and clearly labeled "current address." At a minimum, the address must include the author's current institution, city, and country. If an author has multiple affiliations, enter all affiliations on the title page only. In the submission system, enter only the preferred or primary affiliation. Author affiliations will be listed in the typeset PDF article in the same order that authors are listed in the submission.

Looking forward to your response (which I do not anticipate sending back to reviewers), Carolyn

==============================

Please submit your revised manuscript by Oct 11 2020 11:59PM.
If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Carolyn J Walsh, PhD Academic Editor PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #4: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #4: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #4: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #4: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #4: Yes

**********

6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #4: This is a great paper, describing an important piece of work and including a lot of interesting information. The discussion has also gained much from the revision and is exciting to read. I still feel a bit uneasy about the inter-rater reliability, as in my opinion it should generally be assessed for each coded behavior individually rather than lumping all behaviors together, and using four videos out of more than 200 for IRR does not seem like much.
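A minimal sketch of the per-behavior approach the reviewer describes, using Cohen's kappa as one common agreement statistic; the behavior labels and rating sequences below are hypothetical, not the study's actual codings.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codings by two raters (one entry per scoring interval),
# kept separate per behavior rather than pooled across all behaviors.
codings = {
    "panting":    ([0, 1, 1, 0, 1], [0, 1, 0, 0, 1]),
    "lip_lick":   ([1, 1, 0, 1, 0], [1, 1, 0, 0, 0]),
    "body_shake": ([0, 0, 1, 0, 0], [0, 0, 1, 0, 0]),
}

# One kappa per behavior shows whether agreement holds for the rare
# behaviors as well as the common ones, instead of a single pooled figure.
for behavior, (rater_a, rater_b) in codings.items():
    print(behavior, round(cohen_kappa_score(rater_a, rater_b), 2))
```

Reporting one coefficient per behavior, and double-coding more than four videos, would address both halves of the reviewer's concern.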
Nonetheless, I would not wish to delay the publication of this paper any further. I have only a few more comments.

Line 469, 3.2.1.1. Stress-related behaviors -> I would prefer to see the results in a table, making the text easier to read, but this is of course my personal preference.

Line 640: "the use of both shock collars [9] and other negative reinforcement techniques [10]" -> I know shock collars can be used for negative reinforcement, but aren't they most often used for positive punishment? So it looked a bit odd to me to say "shock collars and other negative reinforcement techniques" (although I know the terminology is sometimes inconsistent).

Line 732: "This seems to suggest that, although dogs trained in 'least aversive' schools" -> this could be a little confusing, as the positive reinforcement school was actually the least aversive school. Maybe just refer to "low aversive" and "highly aversive" schools, or refer to "mixed" as you did in the rest of the text?

Line 788: "Presently, there is a lack of scientific evidence regarding the efficacy of different training methods [3], which limits the extent of evidence-based recommendations" -> here a recent study could be cited that supports the higher effectiveness of positive reinforcement methods: China, L., Mills, D. S., & Cooper, J. J. (2020). Efficacy of Dog Training With and Without Remote Electronic Collars vs. a Focus on Positive Reinforcement. Frontiers in Veterinary Science, 7, 508.

Figure 5: unlike in the other figures, there is no x-axis and the names of the different columns are not written in boxes; i.e., the style differs from the other figures.

I may have missed it, but I didn't find where the dataset can be accessed.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #4: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 4
Does training method matter? Evidence for the negative impact of aversive-based methods on companion dog welfare PONE-D-19-29749R4

Dear Dr. Vieira de Castro,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Carolyn J Walsh, PhD Academic Editor PLOS ONE

Additional Editor Comments (optional): Thanks for persevering through the revision process! Best wishes, Carolyn
Formally Accepted
PONE-D-19-29749R4 Does training method matter? Evidence for the negative impact of aversive-based methods on companion dog welfare

Dear Dr. Vieira de Castro:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff on behalf of Dr. Carolyn J Walsh Academic Editor PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.