Same law, different results: comparative analysis of Endangered Species Act consultations by two federal agencies

Evaluating how wildlife conservation laws are implemented is critical to determining how best to protect biodiversity. Two agencies, the U.S. Fish and Wildlife Service and National Marine Fisheries Service (FWS and NMFS; collectively, the Services), are responsible for implementing the U.S. Endangered Species Act (ESA). This creates a “natural experiment” for understanding how implementation and interpretation of the same law vary between agencies with different histories, cultures, priorities, and funding levels. We take advantage of this natural experiment to quantify differences in how FWS and NMFS implement a core component of the ESA, section 7 consultations. The ESA requires federal agencies to consult with the Services if an action an agency proposes might affect ESA-listed species or their habitats. We quantified the quality of consultations by comparing more than 120 consultations to the requirements laid out in the Services’ consultation handbook. These analyses were complemented with in-person interviews of biologists from the Services to help understand how some of the observed variation arises. Among these consultations, we found those from NMFS had significantly higher quality scores than those from FWS. A common shortcoming for both agencies, but especially severe for FWS, was the failure to account for effects that had been authorized through previous consultations. The biologist interviews indicated some discrepancy between how biologists perceive consultations and the outcomes of our quantitative analysis. Building from these results, we recommend several actions that can improve the quality of consultations, such as using a single database to track previously authorized harm and integrate it into new analyses, and the careful but more widespread use of programmatic consultations.


The U.S. Endangered Species Act (ESA) is considered one of the strongest wildlife laws in the world (Gosnell

are updated periodically when FWS provides a new batched data release. Using PCTS and the Section 7 Explorer, we randomly selected 30 formal and 30 informal consultations from each Service from 2008 to mid-2015. To minimize natural history and geographic variation of the species consulted on by NMFS and FWS, we limited our consultations to those dealing with sea turtles in Florida. We recorded general information for each consultation, such as the start and end dates of the consultation,

year it was completed, regional office it was filed through, species of sea turtles concerned, and page length.
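The stratified random draw described above (30 formal and 30 informal consultations per Service) can be sketched as follows. This is an illustration only: the record structure and pool sizes are invented placeholders, since the real consultation records come from PCTS and the Section 7 Explorer.

```python
import random

# Placeholder pools: 200 hypothetical records per Service/type stratum.
# Real pools differ in size and come from PCTS (NMFS) and the
# Section 7 Explorer (FWS).
consultations = []
for svc in ("FWS", "NMFS"):
    for kind in ("formal", "informal"):
        consultations += [
            {"service": svc, "type": kind, "id": f"{svc}-{kind}-{i}"}
            for i in range(200)
        ]

random.seed(42)  # fixed seed so the draw is reproducible

sample = []
for svc in ("FWS", "NMFS"):
    for kind in ("formal", "informal"):
        stratum = [c for c in consultations
                   if c["service"] == svc and c["type"] == kind]
        # 30 consultations per stratum, sampled without replacement
        sample.extend(random.sample(stratum, 30))

print(len(sample))  # 120
```

Sampling within each Service-by-type stratum, rather than from the pooled records, guarantees the 30/30 split per agency that the comparison requires.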

The full dataset and metadata describing all variables are provided alongside the consultations at OSF (https://dx.doi.org/10.17605/OSF.IO/KAJUQ). Below we describe the scoring methodology, noting that formal and informal consultations required different scoring rubrics because they involve different content.

All scoring rubrics are provided in SI Appendix 1 (formal consultations) and SI Appendix 2 (informal consultations). It is important to note that it is not feasible to blind scorers as to the Service that wrote a biological opinion because of the nature of the documents: any familiarity with the consultation process makes the Service immediately apparent. Therefore, the reviewers were not blind to the Service when analyzing quality. To minimize bias, we used a strict set of standards from the section 7 Handbook to analyze quality to the best of our ability. When there was any ambiguity as to the appropriate score, a second reviewer (JWM) would read the consultation in question, then decide on the appropriate score with the primary reviewer (ME).

For formal consultations, we selected the four core sections from the Handbook to score the quality of each biological opinion: "Status of the Species," "Environmental Baseline," "Effects of the Action," and "Cumulative Effects." While not an exhaustive list of biological opinion sections, these four sections contain the bulk of the information and analysis of the species and the proposed action. Each section received a score from 0-5 or 0-2 based on how well it met the specific requirements set out for that section by the Handbook.

In developing the scoring system, we found that rating the quality of these core sections of the biological opinion was straightforward because the criteria set by the Handbook allowed for a simple present/absent scoring system.

These present/absent scores were summed for each of the four core sections, giving each section a maximum possible score of 2 or 5 points. We calculated total quality by summing the scores across all four sections.

assessed, and whether the consultation was part of a programmatic consultation. We incorporated these variables into a set of nine candidate models for the analysis of overall quality using the GLM (Table 1).
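The present/absent rubric described above reduces to summing Boolean criteria within each section and then across sections. A minimal sketch, in which the specific criteria and their values are invented for illustration (the excerpt does not state which sections carry five criteria and which carry two):

```python
# Each Handbook criterion is recorded as True (present) or False (absent).
# Section names follow the paper; the criterion values here are examples.
rubric = {
    "Status of the Species":  [True, True, False, True, True],  # assumed max 5
    "Environmental Baseline": [True, True, True, True, False],  # assumed max 5
    "Effects of the Action":  [True, False, True, True, True],  # assumed max 5
    "Cumulative Effects":     [True, False],                    # assumed max 2
}

# Section score = number of criteria present (True sums as 1).
section_scores = {name: sum(criteria) for name, criteria in rubric.items()}

# Total quality = sum of the four section scores.
total_quality = sum(section_scores.values())

print(section_scores["Cumulative Effects"])  # 1
print(total_quality)                         # 13
```

Keeping the section scores alongside the total preserves the component-level information used later in the ordinal regressions.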

To evaluate the quality components, we used a set of three candidate ordinal regression models (Table 1, "Ord. regress") with random effects for the consultation document in which the components were nested.
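The excerpt names sets of candidate models but not how they were compared; assuming an information-criterion comparison such as AIC, the selection logic can be sketched with simulated data. This sketch uses simple Gaussian models rather than the paper's GLM or mixed ordinal models, and the +2-point agency effect is invented.

```python
import math
import random

# Simulated quality scores: agency coded 0 = FWS, 1 = NMFS, with an
# assumed +2-point mean difference (for illustration only).
random.seed(1)
agency = [0, 1] * 50
quality = [10 + 2 * a + random.gauss(0, 1) for a in agency]

def gaussian_aic(residuals, n_params):
    """AIC = 2k - 2*lnL for a Gaussian model with MLE error variance."""
    n = len(residuals)
    sigma2 = sum(r * r for r in residuals) / n
    loglik = -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)
    return 2 * n_params - 2 * loglik

# Candidate 1: intercept-only model (k = 2: one mean + variance).
mean_q = sum(quality) / len(quality)
aic_null = gaussian_aic([q - mean_q for q in quality], 2)

# Candidate 2: separate mean per agency (k = 3: two means + variance).
means = {a: sum(q for q, g in zip(quality, agency) if g == a) / agency.count(a)
         for a in (0, 1)}
aic_agency = gaussian_aic([q - means[a] for q, a in zip(quality, agency)], 3)

# Lower AIC = better-supported candidate model.
print(aic_agency < aic_null)
```

With a genuine agency effect in the simulated data, the agency model attains the lower AIC; the same comparison extends directly to a larger candidate set such as the paper's nine GLMs.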