Peer Review History
| Original Submission: July 30, 2021 |
|---|
PONE-D-21-22448
Examining the effectiveness of blanket curtailment strategies in reducing bat fatalities at terrestrial wind farms in North America
PLOS ONE

Dear Dr. Adams,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Oct 29 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Ignasi Torre
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. Thank you for stating the following in the Acknowledgments Section of your manuscript: "The authors are grateful to the Wind Wildlife Research Fund for providing financial support to make this work possible." Please note that funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement.
Currently, your Funding Statement reads as follows: "This study was funded by the AWWI Wind Wildlife Research Fund (Award B-03. https://awwi.org/wind-wildlife-research-fund/). All authors were funded by this award and the funders provided four anonymous reviewers for an earlier version of the manuscript." Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

3. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

4. Please amend either the title on the online submission form (via Edit Submission) or the title in the manuscript so that they are identical.

5.
Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Additional Editor Comments:

Our author guidelines for systematic reviews/meta-analyses are at http://journals.plos.org/plosone/s/submission-guidelines#loc-systematic-reviews-and-meta-analyses.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: No

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: No

**********

4.
Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors conducted a very impressive meta-analysis. The manuscript is extremely well written and clear, although the Methods section is long and complex to follow. This study not only provides a concrete assessment of the effectiveness of wind turbine curtailment strategies in limiting bat fatality risk, but also provides power analyses and simulations that offer extremely important insights into our ability to detect such effects as a function of parameters at both the meta-analysis level and the study level. This study is, in my opinion, extremely rich and of a very high scientific level. I don't have many comments; I enjoyed reading this study, although it is not easy to understand (I don't see how it could be otherwise with such rich results/analyses and such a complex theme). Please find some specific comments in the pdf.

Reviewer #2: The study uses meta-analysis and a power analysis to attempt to estimate the efficacy of wind turbine curtailment in reducing bat fatalities. The topic is timely and important, given that wind turbines have been widely reported to kill large numbers of bats and numerous individual studies have been conducted to determine the reduction in bat fatalities achieved by changing the cut-in speeds of turbines.
However, there are some structural flaws in the way the analysis appears to have been conducted, and the writing is often quite opaque and confusing, making it difficult to discern exactly what was done and the assumptions and choices made by the authors. The approach to the meta-analysis, and the way those methods and results are presented and then interpreted, raise serious concerns. I've made exhaustive comments, and I would encourage the authors to rethink the categorical vs. linear approach to the meta-analysis, as presenting both fails to lend additional insight and violates some core assumptions of how model selection works. The flaws and problems with the methods need to be addressed. In general, the analysis and interpretation of the results is confusing and appears forced and on shaky ground inferentially.

Title + Line 9 - suggest removing the term 'blanket'. There's no additional meaning/value added by the word blanket in this context. I'm aware that it is jargon used by the wind industry to try to distinguish from 'smart' curtailment, but that distinction is arbitrary and, I suspect, designed to make it sound like curtailment strategies based on wind speeds are insufficiently sophisticated.

Line 9 - 'accepted' -- really? Accepted by whom? Certainly not universally accepted by the wind industry.

Line 11 - reduces impacts? Or reduces fatalities?

Line 14-15 - What is meant by 'tested multiple statistical models'? Did you use statistical models to test hypotheses? Or use multiple statistical models to explore relationships? 'Testing models' seems odd wording and makes the intent unclear.

Line 16-20 - seems a bit detailed for an abstract.

Line 22 - the response ratio is a way to measure the effect size, not an 'approach'.

Line 24 - the power analysis shows power is low if the reduction is <50% (which would be the case with small increases in cut-in speed), so is this interpretation minimizing the existing evidence?

Line 33 - 'explanation' is not the right word here.
Turbine attraction is a hypothesis that has not been proven, but is what we think may be happening (without knowing the 'why'). The explanation for fatalities is that blades hit and kill bats.

Line 38 - specify that the conservation concern is for populations due to cumulative impacts.

Line 39 - not sure what is meant by 'accepted'; also, the citation here doesn't seem to relate to the sentence as written re: operational minimization as an 'accepted' tactic.

Line 42 - reduce blade spinning rates below cut-in speed. The way this is written reads as if blade spinning is reduced all the time, rather than below a specified cut-in speed.

Line 43 - replace 'still' with 'can' spin.

Line 43/44 - make feathering a new sentence and define it more clearly.

Line 48 - this sentence is misleading and incorrect as written. The AWWI summary report cited does not report fatalities associated with turbines operating under curtailment regimes (see page 9 of the AWWI summary report, criterion 2: "Turbines operated at normal procedures (e.g., studies conducted while turbines were operating under a curtailment regime were not included)"). The statement "a great deal of variability has been reported in the level of fatality reduction achieved by curtailment" needs clearer attribution and substantiation with data or an appropriate reference.

Line 49 - replace 'impacts' with 'fatalities'.

Line 49 - the wording here is verbatim as in the abstract. What does 'exact nature of the relationship' mean? It seems like this is written to justify the work, but I'm not sure the point of a meta-analysis is to get at the 'exact nature'... isn't it more to determine general patterns or estimate average/mean effects from site-specific studies?

Line 51 - the fact that you had to put 'blanket' in quotes supports my earlier comment to just delete the term. It's jargon without much useful purpose. What does the leading prepositional phrase "For this study..." mean here in this context?
Doesn't curtailment have both operational and financial implications for wind facility operators outside the context of this study? The cited study does not use the word "blanket".

Line 53 - the phrase 'exact nature' stands out to me here as well. It's very rare that we can estimate or determine the exact nature of anything; almost all relationships have some uncertainty associated with them. This sentence could be simplified to, "The trade-offs between turbine energy production and bat fatality minimization are poorly understood."

Line 55 - the word "Still" doesn't work here. Delete. "This type of assessment" - it isn't clear what type of assessment you are referring to.

Line 57 - delete 'blanket' (here and elsewhere in the manuscript).

Line 57-59 - this confuses the reasons for raising cut-in speed to 6.9 m/s. The reality is that curtailment, especially high-wind-speed curtailment, is done as an avoidance measure to prevent fatalities of endangered species. 6.9 m/s is not often implemented as a minimization method for non-listed species.

Line 62 - meta-analysis is a statistical technique to aggregate studies. It is part of a review framework, but is not a framework itself.

Line 64 - this is not an accurate explanation of random/mixed-effects meta-analysis. This is why moderators are used. Random effects are used to extrapolate results outside the studies that are summarized. See the JSS paper introducing the metafor package.

Line 67 - is 'knowledge' what is being evaluated here? Meta-analysis doesn't evaluate knowledge. It aggregates study results and identifies sources of variation.

Line 68 - objective 1 is long, complex, and likely multiple objectives.

Line 72 - isn't objective 3 listed as part of objective 1?

Line 79 - this first sentence seems like it belongs in the last paragraph of the Introduction rather than the first paragraph of the Methods. Then delete "To achieve this goal," and start the Methods with "We used a response ratio approach...".
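To make the reviewer's "response ratio" point concrete: in meta-analysis the response ratio is conventionally the log of the treatment/control ratio of rates, with a delta-method sampling variance (Hedges et al. 1999). The sketch below is illustrative only; the function and variable names are hypothetical and this is not the authors' actual implementation.

```python
import math

def log_response_ratio(rate_treat, se_treat, rate_ctrl, se_ctrl):
    """Log response ratio ln(treatment/control) and its delta-method
    sampling variance, the standard lnRR effect size (Hedges et al. 1999)."""
    lnrr = math.log(rate_treat / rate_ctrl)
    # Variance approximated from the squared coefficients of variation.
    var = (se_treat / rate_treat) ** 2 + (se_ctrl / rate_ctrl) ** 2
    return lnrr, var

# Hypothetical example: curtailment halves the nightly fatality rate.
lnrr, var = log_response_ratio(2.0, 0.5, 4.0, 1.0)
# lnrr = ln(0.5), i.e. a 50% reduction (1 - exp(lnrr))
```

Because the effect is a ratio, a lnRR of 0 means no reduction, and negative values mean fewer fatalities under curtailment.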
Line 83 - I don't think you need the "(hereafter...)" clause in this sentence; I'd assume the meta-analysis approach would already be referred to as the meta-analysis. That said, I'm not sure saying you used a meta-analysis approach to control for variability among studies is an appropriate explanation of meta-analysis, as it doesn't 'control' for variability so much as quantify variability and estimate efficacy.

Line 85 - similar to my earlier comment, I'm unclear what is specifically meant by "tested multiple statistical models". Did you compare models using a model selection approach? Or another way of testing goodness of fit?

Line 87 - not sure what you mean here by absolute cut-in speed and change in cut-in speed, and were these in competing models?

Line 88 - unclear how the best models were selected and what the framework for model comparison was.

Line 89 - were there other covariates? I think it would be very helpful to describe the model structure and the set of covariates or predictor variables and how they relate to the hypotheses being tested.

Line 108 & 111 & 117 - need to clarify/specify how n=43 + n=22 ends up with a final n=36. *See comments on Fig 1.

Line 119-120 - was a defined correlation matrix used here? See: Gleser, L. J., & Olkin, I. (2009). Stochastically dependent effect sizes. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 357-376). New York: Russell Sage Foundation. See also the metafor documentation/examples on the package website.

Line 121 - given that variability among studies is high and greatly dependent on sample size, this approach is dubious. There is no mention of weighting, which is highly important. It is problematic to give the same weight to a study of 1 vs. 100 turbines.

Table 1 - which studies used the global average SE? What was the global average SE and how was it calculated? I am confused by the Source column and the footnotes re: source.
What's the difference between footnote #2 and listing AWWIC in the Source column (same re: CanWEA and footnote 3)? Add the sample size (number of turbines) of each study to the table for comparison. What were the assumptions that were violated to warrant exclusion for Talbot Wind (footnote 1)?

Line 148-156 - unclear. If I understand correctly, this approach assumes fatalities are distributed evenly during the night, which is likely not the case. Were moderators used to account for study length and seasonal timing (spring vs. fall)?

Line 160/161 - effect size usually takes into account a measure of variability; how was this included?

Line 161/162 - effect size does not account for changes in studies. It is simply a common metric. Moderators can be used to account for differences among studies.

Line 165/168 - this seems to contradict an earlier statement in the Methods on line 87 re: evaluating models based on change in both relative and absolute cut-in speed.

Line 173/175 - fatality estimates are not normally distributed or even close to normal. They are bounded by zero, and therefore using a symmetrical/normal distribution to estimate the variance around the fatality rate is flawed. A different error distribution needs to be specified; simply stating that the normal approximation was the best available strategy is insufficient.

Line 177 - unclear when the delta method was used vs. the CI/SE calculation previously described.

Line 179/181 - the description appears to use an outdated and flawed approach and does not reflect current advice/vignettes from the metafor package, which note that studies with shared controls and multiple treatments need a defined correlation matrix.

Line 182 - the mean SE is likely flawed, given that SE is correlated with mean fatality rate and sample size.

Line 186 - unclear what the value of binning into 3 groups is vs. the linear approach. Was a different question being asked here, or were there problems fitting delta as a continuous variable?
In general, if you can avoid arbitrary binning, you should, unless there is a specific question that requires it.

Line 195-ish - were studies weighted by their sample size at all?

Line 199 - needs a clearer explanation (in the supplemental material is fine) of what constituted high leverage and justified removal.

Line 202 - what level of ecoregions was used? The names/geographic descriptions given here do not correspond to EPA ecoregions. Ecoregions are based on ecological delineations, not geographic descriptors like Northeast, East, etc. It is unclear how geographic regions were delineated and how studies were assigned to those groups.

Line 205 - "There were" [data are plural]. How did you determine a 'lack of data for site dependencies'? This is important and should be included. Stating 'lack of data' without a test result is insufficient.

Line 208 - is there a list of a priori candidate models used for AIC selection?* The statement re: model weights being calculated for each model type separately is confusing, given that model weights are calculated based on the relative likelihood of a given model compared to the set of models. Candidate model lists, their model structure, and model selection criteria (ΔAIC, weights, etc.) should be provided.* Oh, you meant for categorical vs. linear types... why separately? *I found it in the Supplemental. I suggest moving it into the main document. Given the description of how the analysis was conducted, I'm unclear why categorical vs. linear are treated as two separate candidate model sets. Couldn't you include the categorical binning factor as a term in the model, test it within the same candidate model set, and determine which fits the data best? As presented, it leads to a Results section that is somewhat repetitious and unclear, without much additional value in terms of understanding the core questions being asked. After examining the figures, I think the categorical approach should just be scrapped.
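The reviewer's objection about separately calculated model weights follows from the definition of Akaike weights: they are relative likelihoods normalized over a single candidate set, so weights from two separately normalized sets are not comparable. A minimal sketch of the standard formula (the AIC values below are made up for illustration):

```python
import math

def akaike_weights(aic_values):
    """Akaike weights: the relative likelihood of each model,
    normalized over the *same* candidate set."""
    best = min(aic_values)
    rel = [math.exp(-0.5 * (a - best)) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

w = akaike_weights([100.0, 102.0, 110.0])
# Weights sum to 1 only within this set; splitting models into two
# separately normalized sets makes their weights incomparable.
```

A ΔAIC of 2 roughly halves a model's relative likelihood (exp(-1) ≈ 0.37), which is why candidate sets must be compared jointly rather than in isolation.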
What basis is there for binning between Category 1 and 2? It is unclear what value the categorical approach provides. Were null models included in the model selection set? It looks like all models have Δcut-in. Report either AIC or AICc in the table, but not both. The supplemental material mentions that the parameter estimates are provided in Table A3, but no such table exists in the Supplemental document.

Line 212 - what is a "meta-analysis scale"?

Line 217 - B1 and B2 are not subsequent reductions; they are directly compared to B0, since the decrease at B2 is not B1+B2, it is simply B2.

Line 224 - a uniform distribution is not appropriate; why not draw randomly based on the actual distribution?

Line 266 - a negative binomial is likely a superior distribution to the Poisson. Was that tested?

Line 266-278 - how were the distributions chosen here? Does this align with the statistical guidance on fatality estimation available from the USGS Fatality Estimator, which specifies distributions for carcass persistence and searcher efficiency?

Line 280 - insufficient information on the data used.

Line 324 - 16 projects contradicts the earlier statement that site effects could not be calculated due to insufficient data. General guidance on random effects (how you could account for site) suggests only 4-5 levels are needed.

Line 327-329 - I'm still confused as to the value of comparing the binned vs. continuous modeling.

Line 330 - is the forest plot a figure? How would this show publication bias?

Lines 323-371 - generally, the model comparison methods and results need better description and presentation. Table A2 should be in the Results section. See my earlier comments regarding confusion about why the categorical vs. continuous model sets are presented separately. Lines 337-338 are confusing in reporting the parameter significance of rotor diameter and geographic region, as these were not in either best-fit model.
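On the weighting comments above: standard random-effects meta-analysis (as implemented by default in metafor's `rma()`) down-weights imprecise studies through inverse-variance weights, so a one-turbine study with a large sampling variance contributes little to the pooled estimate. A minimal sketch of that standard formulation, not the authors' code; the effect and variance values are invented for illustration:

```python
def pooled_effect(effects, variances, tau2=0.0):
    """Inverse-variance weighted mean effect size.
    tau2 adds between-study heterogeneity to each study's
    variance, giving random-effects weighting when tau2 > 0."""
    weights = [1.0 / (v + tau2) for v in variances]
    wsum = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / wsum
    se = (1.0 / wsum) ** 0.5  # SE of the pooled estimate
    return mean, se

# Hypothetical: a precise study (var 0.05) vs. an imprecise one (var 0.2);
# the precise study receives four times the weight.
mean, se = pooled_effect([-0.7, -0.4], [0.05, 0.2])
```

Note that equal weighting is the special case where all variances are treated as identical, which is the practice the reviewer is objecting to.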
Table A3, which gives parameter estimates and is mentioned in the Supplemental document, is missing.

Line 442 - the beginning of the Discussion should emphasize the main findings of the study rather than recap older findings.

Line 446 - why report the estimate from the Category 1 bin in the categorical analysis rather than the results from the continuous linear analysis? The model selection approach of separating into two different analyses (categorical vs. linear) but then emphasizing the categorical results here is confusing and somewhat misleading. Why not present the linear estimate? If the continuous linear response to change in cut-in speed fits better than the categorical, even if the slope coefficient is only marginally significant (with low power), then that's an indicator of a 'marginally significant' linear relationship between fatality reduction and cut-in speed. I'm not convinced the comparison of the coefficient estimates across the categorical bins is a robust or reasonable way to analyze/interpret the data.

Line 456-7 - report the result of Whitby et al. rather than stating that it corroborates; it's hard to tell what Whitby et al. found, or how it's relevant here, as worded. It is also not clear what is meant by "how volatile these results can be when sample sizes are low" - were the results from Whitby et al. volatile, or the results here? Or do they disagree? Unclear.

Line 490 - why unlikely? It doesn't seem logical (or based on any of the data) to claim the relationship between fatality reduction and absolute cut-in speed would not be linear.

Line 491 - control cut-in speed appears in the top models for both model sets! Why use AIC model selection if you are going to ignore the results? It is also not clear how refs 5 and 39 are appropriate here, as neither of those studies addresses the efficacy of curtailment (or spatial variation in the efficacy of curtailment).
Line 492-502 - the writing and logic are muddled in this paragraph. It seems in part to be justifying the RR approach (I'm not sure why that needs to be done in the Discussion, and the logic is inconsistent on this point between lines 492 and 493), but the rest of the paragraph is about variation in fatality rates and the need for more research on curtailment. This paragraph needs a clearer topic and supporting points to make sense.

Line 545-554 - this seems irrelevant to this paper and is not evaluated or discussed in enough detail, or in the context of the findings, to be included.

Line 558 - this is different from the result presented at the top of the Discussion section and in the Abstract.

Line 563 - not clear what "this is the case" is referring to here, or what "that result" is - which result from Whitby et al.?

Line 566-568 - unclear what is meant here.

Line 568 - the development of the efficacy of "smart" curtailment isn't a conclusion of the work as presented.

Line 575 - what is meant by "adaptive management framework" in this context?

Figure 1 - how does n=26 from 17 project sites turn into n=36?

Figure 2 - change the x-label to Δ cut-in speed. The figure legend says "relationship between bat fatality", but it's the fatality ratio that is plotted on the y-axis. The explanation of how/why Talbot Wind was excluded as an 'outlier' needs more justification. FYI - it appears to have one of the lowest uncertainty estimates, yet is removed because of a lower fatality ratio?

Figure 4 - insert "speed" in the x-label. Are all the grid lines necessary? It seems cluttered. See my comments above re: the value of doing both categorical and linear versions of the analysis. Looking at this plot, there doesn't seem to be a logical binning between Category 1 and 2. I suggest scrapping the categorical approach. If you can justify a reason to keep it, then you should color-code the points on this graph by the category they are placed in.
The arbitrary cut-off at 1.4 m/s, without clear bins between Category 1 and 2, will be pretty obvious when you do that, I suspect.

Figure 5 - not clear what "current knowledge" means in the Scenario legend.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
| Revision 1 |
|---|
A review of the effectiveness of operational curtailment for reducing bat fatalities at terrestrial wind farms in North America
PONE-D-21-22448R1

Dear Dr. Adams,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Ignasi Torre
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:
| Formally Accepted |
|---|
PONE-D-21-22448R1
A review of the effectiveness of operational curtailment for reducing bat fatalities at terrestrial wind farms in North America

Dear Dr. Adams:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Ignasi Torre
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.