Evaluating the use of semi-structured crowdsourced data to quantify inequitable access to urban biodiversity: A case study with eBird

  • Aaron M. Grade,

    Roles Data curation, Formal analysis, Investigation, Methodology, Project administration, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation George Perkins Marsh Institute, Clark University, Worcester, Massachusetts, United States of America

  • Nathan W. Chan,

    Roles Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    nchan@umass.edu (NWC); pswarren@eco.umass.edu (PSW)

    Affiliation Department of Resource Economics, University of Massachusetts Amherst, Amherst, Massachusetts, United States of America

  • Prashikdivya Gajbhiye,

    Roles Data curation, Formal analysis, Investigation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Department of Resource Economics, University of Massachusetts Amherst, Amherst, Massachusetts, United States of America

  • Deja J. Perkins,

    Roles Investigation, Writing – review & editing

    Affiliation Center for Geospatial Analytics, North Carolina State University, Raleigh, North Carolina, United States of America

  • Paige S. Warren

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    nchan@umass.edu (NWC); pswarren@eco.umass.edu (PSW)

    Affiliation Department of Environmental Conservation, University of Massachusetts, Amherst, Massachusetts, United States of America

Abstract

Credibly estimating social-ecological relationships requires data with broad coverage and fine geographic resolutions that are not typically available from standard ecological surveys. Open and unstructured data from crowdsourced platforms offer an opportunity for collecting large quantities of user-submitted ecological data. However, the representativeness of the areas sampled by these data portals is not well known. We investigate how data availability in eBird, one of the largest and most popular crowdsourced science platforms, correlates with race and income of census tracts in two cities: Boston, MA and Phoenix, AZ. We find that checklist submissions vary greatly across census tracts, with similar patterns within both metropolitan regions. In particular, census tracts with high income and high proportions of white residents are most likely to be represented in the data in both cities, which indicates selection bias in eBird coverage. Our results illustrate the non-representativeness of eBird data, and they also raise deeper questions about the validity of statistical inferences regarding disparities that can be drawn from such datasets. We discuss these challenges and illustrate how sample selection problems in unstructured or semi-structured crowdsourced data can lead to spurious conclusions regarding the relationships between race, income, and access to urban bird biodiversity. While crowdsourced data are indispensable and complementary to more traditional approaches for collecting ecological data, we conclude that unstructured or semi-structured data may not be well-suited for all lines of inquiry, particularly those requiring consistent data coverage, and should thus be handled with appropriate care.

1. Introduction

Addressing the big challenges of the Anthropocene increasingly calls for big datasets [1–3]. Crowdsourced platforms have emerged as a powerful tool for gathering large quantities of ecological data relatively inexpensively [3, 4]. Use of these datasets is particularly appealing for studying social-ecological patterns in cities, where spatial heterogeneity is high and complex patterns of land ownership pose challenges for accessing sampling locations [5, 6]. However, critical reviews have identified a number of potential issues of bias in the collection, use, and dissemination of information gathered by volunteers [7–9]. Many of the same social and technological barriers that have limited data gathering via traditional sampling methods (i.e., by trained experts) are likely to plague volunteer, user-gathered data as well [7]. This may yield socioeconomic, gender, and racial biases in who collects the data and uses the platforms [9–11], in where users go to collect data [10, 12, 13], and consequently in the representativeness of these datasets, particularly for historically marginalized communities [8, 13, 14]. The extent of sample selection bias in large, user-submitted data platforms remains largely unexplored (but see [13, 14]) but may pose particular challenges for open, semi-structured data platforms. This paper aims to assess how sample selection bias may affect studies using eBird, one of the largest crowdsourced platforms for biodiversity sampling [11], to analyze disparities in access to urban biodiversity.

Cities are spatially structured as a function of both historical and ongoing social and political processes that in most US cities yield a patchwork of racially and socioeconomically segregated communities [15–18]. A growing body of literature indicates that this spatial segregation is often associated with access to ecological amenities like open space, tree canopy cover, and biodiversity [18–21]. Non-representative sampling of urban spaces would therefore miss important variation in urban landscapes [13].

Crowdsourced contributory science (also known as citizen science; hereafter, CS) data collection platforms decentralize data collection by crowdsourcing information from a broad base of users [22, 23]. eBird is among the most popular biodiversity-specific CS platforms. Its reach is global; eBird compiles information from users around the world on over 100 million bird sightings annually, including the location, time, and nature of bird encounters [24, 25]. This expansive geographic coverage makes it possible to assess biodiversity access in urban areas throughout the U.S. Given the widespread popularity of birds [26, 27], eBird represents a valuable resource for understanding not just the distribution of birds but also the distribution of an important ecological amenity in urban spaces.

eBird relies on opportunistic, semi-structured data collection. Semi-structured methods allow users to gather data wherever and whenever they wish, while the platform collects information on users’ observation process, such as effort, method, and location [22, 28, 29]. Collecting user process data distinguishes eBird from unstructured platforms like iNaturalist (iNaturalist.org), which is even more flexible and open to users’ submissions [29]. Structured programs, by contrast, set strict protocols. One example of a structured program is the Christmas Bird Count, which has set dates, sampling locations, and routes [30]. For brevity, we will refer to unstructured, semi-structured, and opportunistic CS platforms simply as “SSCS platforms” hereafter, while recognizing the diversity of forms contributory science takes.

The decentralized nature of eBird presents both benefits and challenges. On the one hand, the SSCS model permits data collection at immense scale, and it can also generate public engagement with the scientific process. Platforms like eBird make possible analysis at a broader scope and finer scale than traditional ecological datasets, and can help pave the way for deeper inquiry into the relationship between access to biodiversity and human outcomes (and the disparities thereof). Yet these platforms, and SSCS initiatives more broadly, may also suffer from important shortcomings in terms of the representativeness of their data [31, 32]. This concern is particularly acute for investigations into diversity and socioeconomic disparities, as SSCS user bases and the places they report their data from within a city may be non-representative across racial and income groups. This type of selection bias may present two critical problems for within-city analyses using eBird data:

First, research using such data sources will not be able to shed light on impacts for underrepresented groups. Indeed, prior research has shown that CS users differ from the population as a whole along important demographic and locational dimensions [33, 34] and that the locations sampled by CS users are biased toward particular places [12, 13, 35]. Depending on the line of inquiry, these biases can jeopardize the generalizability of results.

Second, even deeper problems can arise for statistical inference, above and beyond the problem of generalizability. Because sampling sites are selected along variables of analytical interest (e.g., race and income composition of local communities), researchers may uncover spurious relationships between these variables and biodiversity. In particular, analyses of selected samples are prone to collider bias [36], a type of statistical bias that can lead to erroneous conclusions about the relationships between variables in a dataset. What is especially vexing is that one cannot predict, in general, whether collider bias leads to over- or underestimates of correlations in the data. In the presence of collider bias, variables that are truly uncorrelated may appear correlated, while variables that are truly correlated may appear uncorrelated [36, 37]. Both of these issues may pose a challenge to the utility of SSCS databases for conducting fine-scaled social-ecological assessments within cities, as SSCS databases are not set up within an experimental design framework and because of limitations to data coverage. Thus, sample selection presents a critical challenge for statistical inference in studies seeking to understand the relationship between community characteristics (like race and income) and access to urban biodiversity. Notably, this problem is not limited to the use of SSCS databases; it can arise in many settings with large, readily available datasets if researchers are inattentive to potential sample selection in how those datasets are constructed.

A few recent studies point to potential problems of sample selection bias in eBird and other SSCS platforms [12–14, 35]. Perkins [13] found a consistent pattern of geographic bias in the distribution of eBird sampling locations; neighborhoods in lower income groups were generally underrepresented among sites with checklists. In Buffalo, New York, another study found that sites with greater green space connectivity had greater numbers of submitted checklists, particularly sites near Lake Erie [35]. This study did not find a significant effect of socioeconomic factors on the distribution of checklists, but the sample size was relatively small, since eBird checklists were reported for only 50 of the 287 block groups (17%) in the city [35], a fact which in itself suggests the potential for sample selection bias. A study of bird sightings posted via eBird, iNaturalist, and Flickr found that bird observations in Chicago occurred more often in open spaces than in residential areas, with high proportions of observations in recreation areas [12]. In addition, greater numbers of bird observations were posted for neighborhoods with higher median incomes, those with larger populations, and those located closer to Lake Michigan. While this latter study does not explicitly examine sample selection bias, the net outcome in all of these studies is that our information from SSCS data sources on the distribution of birds in the city may be biased and may have large gaps, particularly in areas with little recreational space and/or low income.

We seek to fill three primary gaps with this study. First, prior work has not examined how the racial composition of an area correlates with its likelihood of being sampled for SSCS data collection. We test whether eBird checklists—an indication of the amount of available information about bird communities and biodiversity for a given area—are more prevalent in census tracts with a higher proportion of white residents and with higher median incomes, even after controlling for other relevant factors that may drive birding activity, like tract size and availability of green space. Note that our approach is distinct from related work that focuses on the racial and demographic composition of CS users [38, 39]; instead, we ask whether the spatial coverage of information available from SSCS datasets may be systematically biased toward neighborhoods of particular racial and socioeconomic composition. Second, much of the extant literature comprises case studies within single cities, leaving unclear whether results generalize across locales (but see [14]). We study two cities that are embedded in different ecoregions (temperate forest and arid desert, respectively) and have distinct urban geographies, and we find similar results for both. Although our choice of study cities is not intended to be a comprehensive assessment of potential bias across the United States, the fact that two such distinct cities showed similar patterns of bias for a dataset as large and well-known as eBird suggests that sample selection is a more widespread problem when using SSCS datasets to assess social-ecological relationships. Lastly, we elucidate the broader implications of sample selection in SSCS databases. Not only does selection limit the generalizability of analyses of SSCS data, it may even compromise the validity of statistical inference in such studies due to collider bias. This problem is especially pronounced for studies using SSCS databases, like eBird, to link social and ecological variables, as the datasets themselves may be selected along social factors. Our work is the first, to our knowledge, to pinpoint this challenge for using SSCS databases to study social-ecological relationships.

2. Methods

2.1 Study system

In this study, we chose two United States (U.S.) cities for the analysis. Both cities are major metropolitan areas but represent different bioclimatic and historical contexts, and thus capture much, but not all, of the variation in urban conditions across the U.S. Boston is one of the oldest cities in North America, and its metropolitan region is heterogeneous, dominated by high levels of wildland-urban interface [40] and aging post-industrial urban centers. The Phoenix metropolitan area experienced rapid urban growth beginning after World War II and is dominated by extensive swaths of low-density residential lands that envelop large desert parks [41]. We conducted parallel, within-city analyses in both cities, demonstrating similar patterns of bias regarding which types of tracts are sampled in eBird.

2.2 Dataset compilation and filtering

To examine the possibility of sample selection bias by the racial and income composition of areas sampled by eBird checklists, we compared land cover, socioeconomic, and demographic variables to numbers of eBird checklists in U.S. Census tracts within portions of the Boston, Massachusetts Metropolitan Statistical Area (BOS MSA) and the Phoenix, Arizona Metropolitan Statistical Area (PHX MSA). For spatial bounding purposes, we used metropolitan statistical areas (MSAs), but for ease of reading we refer to locations by their city names. The BOS MSA consisted of five counties within Massachusetts (Essex, Middlesex, Norfolk, Plymouth, and Suffolk Counties; Fig 1A–1C), and the PHX MSA consisted of only Maricopa County (Fig 1D–1F). To obtain socioeconomic and demographic variables, we downloaded and filtered U.S. Census and American Community Survey datasets using the R package tidycensus (version 0.9.9.5; [42]). For all geographic data analysis, the datum was GCS North American 1983 with an Albers Equal Area projection. We filtered the datasets to the counties indicated above and summarized data at the census tract level. We used five-year estimates (2006–2010) at the census tract level from the American Community Survey to determine the proportion of white residents within each census tract and the median household income within a 12-month window, in 2011 inflation-adjusted U.S. dollars. We used 2010 U.S. Census data to obtain overall population density (total people / ha) in each tract, since census tracts vary in total area.
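As a concrete illustration of this step, the minimal R sketch below pulls tract-level ACS and decennial Census variables with tidycensus. The object names are ours for illustration, and while we believe B19013_001, B02001_001/B02001_002, and P001001 are the standard median-income, race, and total-population codes, any replication should verify them against the Census API.

    library(tidycensus)
    library(dplyr)

    # census_api_key("YOUR_KEY")  # a Census API key is required once per machine

    bos_counties <- c("Essex", "Middlesex", "Norfolk", "Plymouth", "Suffolk")

    # ACS 2006-2010 five-year estimates: median household income and race counts
    acs <- get_acs(
      geography = "tract", state = "MA", county = bos_counties,
      variables = c(med_income = "B19013_001",
                    total_pop  = "B02001_001",
                    white_pop  = "B02001_002"),
      year = 2010, survey = "acs5", output = "wide"
    )

    # 2010 decennial Census population, used to compute density once
    # tract areas (ha) are joined from the tract geometry
    pop2010 <- get_decennial(
      geography = "tract", state = "MA", county = bos_counties,
      variables = c(pop = "P001001"), year = 2010
    )

    # "wide" output appends E (estimate) and M (margin of error) suffixes
    acs <- mutate(acs, prop_white = white_popE / total_popE)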

Fig 1.

Study extent of census tracts analyzed for the Boston MSA (A,B,C) and Phoenix MSA (D,E,F). Darker blue indicates (A,D) higher percentages of white residents, (B,E) more completed eBird checklists, and (C,F) higher median household incomes.

https://doi.org/10.1371/journal.pone.0277223.g001

We then examined the total number of eBird checklists collected between 1 Jan 2006 and 1 Jan 2016 [25] within each census tract and linked these to socioeconomic and demographic data from the 2010 U.S. Census [43] and the American Community Survey [44]. The main object of analysis was thus the number of eBird checklists performed per tract, rather than the data collected within the checklists themselves. Checklists are a measure of effort and visitation per tract, which addresses the underlying question of sample selection bias by tract. We performed all data preparation and analysis using program R (version 3.6.2; [45]) and generated all plots using the R package ggplot2 (version 3.2.1; [46]). We downloaded and filtered eBird data using the R package auk (version 0.1.1; [47]; also see [23]). We filtered the dataset by clipping the data extent to the census tracts within the counties above and retaining only complete checklists within the temporal extent indicated above, restricted to the stationary and traveling eBird protocols, with distances traveled of 0–2.5 km and checklist durations of 5–240 min [23].
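A minimal sketch of this filtering step, written against the current auk interface (file names are placeholders; the spatial clip to census tracts would follow separately, e.g., with a point-in-polygon join on the checklist coordinates):

    library(auk)
    library(dplyr)  # provides the %>% pipe

    ebd_filtered <- auk_ebd("ebd_sample.txt",
                            file_sampling = "ebd_sampling.txt") %>%
      auk_date(date = c("2006-01-01", "2016-01-01")) %>%
      auk_protocol(protocol = c("Stationary", "Traveling")) %>%
      auk_distance(distance = c(0, 2.5)) %>%   # km traveled
      auk_duration(duration = c(5, 240)) %>%   # minutes
      auk_complete() %>%                       # complete checklists only
      auk_filter(file = "ebd_filtered.txt",
                 file_sampling = "sampling_filtered.txt")  # runs AWK under the hood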

To account for land cover and the size of tracts, which were likely to be related to the total number of eBird checklists in a tract (e.g., due to birder activity), we quantified key tract-level land cover metrics using ArcMap (version 10.5; [48]). For BOS MSA, we included the proportion of green space in the tract, using the Protected and Recreational Open Space GIS layer available from MassGIS ([49]; https://www.mass.gov/orgs/massgis-bureau-of-geographic-information), as well as the total area of each tract (ha). Since the majority of open space in PHX MSA is not dominated by green vegetation, we included three separate measures of land cover: proportion of green space (a combination of forest, emergent wetlands, golf courses, and parks), proportion of desert/scrub, and proportion of cropland, all of which we compiled from the National Land Cover Database 2011 GIS layer ([50]; https://www.mrlc.gov/data/nlcd-2011-land-cover-conus-0). We also included the total area of each tract (ha). For PHX MSA, we determined that many census tracts in Maricopa County were large, uninhabited or sparsely inhabited areas. To capture only the inhabited census tracts of interest in PHX MSA, we eliminated census tracts that were in the top 5% of land area (i.e., tracts ≥ 2,424 ha in area; tract size ranged from 32.6–345,074 ha).
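The tract-size screen reduces to a simple quantile rule; a minimal sketch, assuming a data frame phx_tracts with a hypothetical area_ha column:

    library(dplyr)

    # Drop the largest 5% of Maricopa County tracts, which are largely
    # uninhabited desert; the 95th percentile was ~2,424 ha in our data
    area_cutoff <- quantile(phx_tracts$area_ha, 0.95)
    phx_tracts_inhabited <- filter(phx_tracts, area_ha < area_cutoff)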

2.3 Statistical analyses

Our goal was to study the relationship between sociodemographic variables and the prevalence of eBird checklists. We hypothesized that for each MSA, the number of eBird checklists in each census tract is related to metrics of race (proportion white) and income (median household income) of residents within the tract, even accounting for land area and areas available to birders (e.g., publicly available green and open space). We used the total number of eBird checklists for each census tract as the response variable in both BOS MSA and PHX MSA. For the predictor variables, we included proportion white and median household income as the hypothesized predictors, and tract area, population density, and publicly available open space as covariate predictors. The publicly available open space comprised proportion of green space for BOS MSA, and proportion of green space, proportion of desert, and proportion of cropland for PHX MSA. We checked for excessive collinearity of predictors using the R package corrplot (version 0.89), as sketched below. In the PHX MSA, median household income was highly correlated with total tract area and population density; thus, we excluded total tract area and population density from the PHX MSA models. We found through exploratory analysis that the relationships between many of the predictor variables and the response were nonlinear. We therefore used generalized linear models (GLMs) with multistage model selection to choose the best-fit probability distribution and link functions, and we validated the assumption of normality of residuals post hoc.
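A minimal sketch of the collinearity screen (the data frame and column names are hypothetical):

    library(corrplot)

    # Pairwise correlations among candidate predictors for the PHX MSA
    predictors <- phx_tracts_inhabited[, c("med_income", "prop_white", "area_ha",
                                           "pop_density", "prop_green",
                                           "prop_desert", "prop_crop")]
    corrplot(cor(predictors, use = "pairwise.complete.obs"), method = "number")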

Because of the possibility of multiple models with competing predictor structures, we used a maximum likelihood model selection and model averaging framework to assess the slope, significance, and effect size of variables [51–53]. To this end, for each MSA we conducted separate two-stage model selection procedures for the total number of eBird checklists in each city. We fit the GLMs using the R packages mgcv (version 1.8–31; [54]) and MASS (version 7.3–51.4; [55]; for models with negative binomial PDFs). For the first stage, we selected the best-fit probability distribution functions (PDFs) and link functions of the GLMs by comparing hypothesized global models with different PDFs via the corrected Akaike’s Information Criterion (AICc; [56]) using the R package AICcmodavg (version 2.3.0; [57]). We considered any models within ΔAICc ≤ 2 to be equally likely [52]. Given the distributions of the eBird checklist count data, we tested the following PDFs: (1) Gaussian (log + 1 transformation), (2) Poisson, (3) negative binomial, (4) inverse hyperbolic sine, (5) zero-inflated Poisson, and (6) zero-inflated negative binomial. The predictor structure for the BOS MSA hypothesized global model was:

number of checklists = proportion of white residents × median household income + proportion of white residents + median household income + proportion of white residents² + median household income² + proportion of green space + proportion of green space² + total tract area × population density + total tract area + population density + total tract area² + population density²

The predictor structure for the PHX MSA hypothesized global model was:

number of checklists = proportion of white residents × median household income + proportion of white residents + median household income + proportion of white residents² + median household income² + proportion of green space + proportion of green space² + proportion of cropland × proportion of desert + proportion of cropland + proportion of desert + proportion of cropland² + proportion of desert²
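For concreteness, the BOS MSA global model under the best-supported PDF (a Gaussian on log(x + 1)-transformed counts; see Results) could be fit as in the sketch below, with hypothetical column names. In R, the * operator expands to main effects plus the interaction, matching the structure written out above.

    m_global_bos <- glm(
      log1p(n_checklists) ~ prop_white * med_income +
        I(prop_white^2) + I(med_income^2) +
        prop_green + I(prop_green^2) +
        area_ha * pop_density +
        I(area_ha^2) + I(pop_density^2),
      family = gaussian(),
      data = bos_tracts
    )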

Once we selected the PDFs for each model set, we fit models using predictor variable structures from an a priori list of hypothesized and plausible models, which included additive and interactive linear terms as well as additive-only squared terms (as seen above in the global models; n = 220 models for each procedure for each MSA; see S1 Table for the full candidate model set). We then assessed the models with the lowest AICc for spatial autocorrelation using the R packages sp (version 1.3–2; [58]) and gstat (version 2.0–6; [59]). We did not find evidence of spatial autocorrelation in the BOS MSA models, but we did find evidence of spatial autocorrelation at relatively short distances for models in the PHX MSA [60]. To account for spatial autocorrelation in PHX MSA, we reran the model selection procedure for PHX MSA and included latitude and longitude as additive terms in every model [61]. Once we fit all plausible models, we used the R package AICcmodavg to average the models and provide multimodel inference, using a 95% unconditional confidence interval to assess variable significance.
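The selection and averaging steps map onto two AICcmodavg calls; a minimal sketch, assuming cand_models is a hypothetical named list holding the fitted candidate GLMs:

    library(AICcmodavg)

    aictab(cand.set = cand_models)  # rank candidates by AICc

    # Model-averaged estimate with 95% unconditional CI for one predictor;
    # note that for terms also appearing in interactions (e.g., median
    # income), modavg() requires the interaction models to be handled
    # explicitly via its exclude argument
    modavg(cand.set = cand_models, parm = "prop_green")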

3. Results

After filtering the checklists, we assessed n = 122,299 eBird checklists in n = 740 census tracts for BOS MSA, and n = 17,779 eBird checklists in n = 861 census tracts for PHX MSA. In BOS MSA, the average census tract that we sampled had a median income of $79,102, and 81% of its residents were white (Table 1). The average census tract that we sampled in PHX MSA had a similar proportion of white residents (81%) and a lower median income ($59,422). However, the variation in racial composition across census tracts is much wider in PHX MSA than in BOS MSA; in PHX MSA, the tract with the lowest proportion of white residents is 4% white, while for BOS MSA this value is an order of magnitude larger (40% white). We found no difference in the relationships between predictors and responses whether we modeled the total number of checklists or the numbers of stationary or traveling checklists, so we present only the results for the total number of checklists here. All models had a selected PDF structure of Gaussian with a log + 1 transformation (see S1 Table).

Table 1. Summary statistics by census tract for BOS and PHX MSA hypothesized predictor variables.

https://doi.org/10.1371/journal.pone.0277223.t001

3.1 Boston MSA model averaging

When we averaged the model set for BOS MSA total eBird checklists, we found that median income, proportion white, proportion green space, total tract area, population density, median income × proportion white, and total area × population density were all significant at a 95% confidence level (Table 2). Proportion of green space × total area and proportion of green space × population density were not significantly related to the total number of eBird checklists. Median household income and proportion of white residents were both included in all top models (ΔAICc < 2, w = 0.57); the R² for these top models is 0.34–0.35. Median household income was positively related to the total number of checklists (β = 0.071, SE = 0.022, CI = 0.028, 0.114; Fig 2). The proportion of white residents in a tract was also positively related to the total number of checklists (β = 1.37, SE = 0.39, CI = 0.60, 2.13; Fig 3). These positive correlations between checklists, proportion of white residents, and median household income are robust and also hold for the top-fitting individual models.

Fig 2. Model averaged relationship between median household income and predicted number of eBird checklists per tract in the Boston Metropolitan Statistical Area (BOS MSA).

Gray ribbon indicates estimated standard error bounds.

https://doi.org/10.1371/journal.pone.0277223.g002

Fig 3. Model averaged relationship between proportion of white residents and predicted number of eBird checklists per tract in the Boston Metropolitan Statistical Area (BOS MSA).

Gray ribbon indicates estimated standard error bounds.

https://doi.org/10.1371/journal.pone.0277223.g003

Table 2. Model averaged parameters for BOS and PHX MSA total eBird checklists by predictor variables.

https://doi.org/10.1371/journal.pone.0277223.t002

3.2 Phoenix MSA model averaging

When we averaged the model set for PHX MSA total eBird checklists, we found that median income, proportion white, proportion desert, proportion cropland, and proportion green space × proportion desert were all significant at a 95% confidence level (Table 2). Proportion green space, median income × proportion white, proportion green space × proportion cropland, and proportion desert × proportion cropland were not significantly related to the total number of eBird checklists. Median household income and proportion of white residents were both included in all top models (ΔAICc < 2, w = 0.34, Table 2); the R² for these top models is 0.13. Median household income was positively related to the total number of checklists (β = 0.060, SE = 0.023, CI = 0.015, 0.106; Fig 4). The proportion of white residents in a tract was also positively related to the total number of checklists (β = 2.31, SE = 0.53, CI = 1.27, 3.34; Fig 5). These positive correlations between checklists, proportion of white residents, and median household income are robust and also hold for the top-fitting individual models.

Fig 4. Model averaged relationship between median household income and predicted number of eBird checklists per tract in the Phoenix Metropolitan Statistical Area (PHX MSA).

Gray ribbon indicates estimated standard error bounds.

https://doi.org/10.1371/journal.pone.0277223.g004

Fig 5. Model averaged relationship between proportion of white residents and predicted number of eBird checklists per tract in the Phoenix Metropolitan Statistical Area (PHX MSA).

Gray ribbon indicates estimated standard error bounds.

https://doi.org/10.1371/journal.pone.0277223.g005

4. Discussion

4.1 Sample selection and bias

We find strong evidence of sample selection in the locations where eBird activity is prevalent. This finding is in line with our hypothesis: census tracts are differentially likely to be surveyed for birds, and these differences are correlated with tract-level income and race. Especially notable is the prominence of tract-level income and proportion of white residents in predicting the number of available eBird checklists.

To our knowledge, only two other studies have directly examined racial and socioeconomic factors in relation to eBird checklist submission [13, 35]. A multi-city examination found significant differences in the distribution of eBird checklist submissions by neighborhood income level, with significant underrepresentation in lower income areas [13]. In Buffalo, New York, socioeconomic factors were not a significant predictor of checklist submission [35]. However, this lack of relationship may have been due to the relatively small sample size (only 50 of 287 block groups had any eBird checklist submissions). The Buffalo study did find that eBird users tended to report checklists more often from places with greater green space connectivity and to avoid areas where active urban demolition projects have taken place [35]. Since green space coverage is greatest in wealthier areas of Buffalo, the biases in sample selection create a similar set of outcomes, i.e., that bird communities are undersampled in lower income portions of the city. Ongoing work by Ellis-Soto and colleagues [14] finds biased sampling with respect to historic redlining in data from the Global Biodiversity Information Facility, a source that includes eBird records. Redlining refers to the racialized zones that the Home Owners’ Loan Corporation developed to guide lending practices in the early 20th century [16]. We note that none of these related studies explicitly examined biases associated with the current racial composition of neighborhoods, though the Buffalo study and the redlining analysis accounted for covariates like the availability of green space [35]. In addition, our study encompassed the entire metropolitan regions of our case cities, which extend far beyond the historically redlined portions of both cities. Together, these findings suggest that racial biases in sampling stem from ongoing social processes as well as historical ones, and that they merit further examination for eBird as well as for other SSCS data platforms.

The reasons for biased sampling in eBird and other SSCS datasets are likely to be multi-faceted. One possibility is that checklist submission might be driven by the availability of public green spaces to which birders are attracted [12]. To the extent that green space or other natural features are less abundant in lower income neighborhoods or communities of color [19, 21, 62], the biases we detected might be attributable to disparities in preferred birding locations. However, as we have already indicated, accounting for percent green space in the tracts and population density did not eliminate race/income biases in the number of checklists submitted from census tracts in Boston or Phoenix (Table 2), nor did similar controls eliminate biased sampling aligned with redlining in a study of 195 cities [14]. There may, of course, be other factors associated with the accessibility of birding locations that we did not examine; a study in Sweden found factors like road density to be negatively associated with sampling intensity across multiple taxa in a similar SSCS program [63]. Racism and classism may contribute to biased sampling if negative perceptions of lower income and non-white communities in cities lead to lower reporting from those areas [14, 18]. Likewise, a perception that certain birds are less interesting or worthy of reporting [64, 65] might contribute to under-sampling of areas perceived to be unlikely to support more prized species.

To the extent that the “where” of eBird sampling is related to who participates in eBird, racial and income biases in sampling may be a function of a lack of diversity among participants. eBird registrants are overwhelmingly white (94.8%; [11]). Analysis of participation in a water monitoring CS program, Illinois RiverWatch, provides some evidence for an association between the “who” and the “where” of SSCS sampling [10]; participants were disproportionately white and affluent, and sites in areas of high environmental justice concern were undersampled. Broadening participation represents a key challenge for SSCS programs generally [11, 66]. Yet a focus solely on the demographics of eBird contributors is likely an oversimplification. eBird participants do not solely or even predominantly submit checklists from their immediate home environments [12, 67]. Black birders and other birders of color may also experience significant barriers to accessing or using urban green spaces [68, 69], leading to further biases in the spatial coverage of data availability. In addition, focusing only on rectifying sampling bias by calling for greater diversity in eBird participants could unfairly place the onus of gathering representative data on members of marginalized groups.

Critical reviews have examined the broad notion of volunteered geographic information (VGI). This term encompasses platforms that go beyond the goal of ecological sampling to include more general mapping tools (e.g., OpenStreetMap [9]), disaster management tools [8], and many other functions. Elwood [7] suggests that critical and feminist geography approaches provide frameworks for thinking about the sources of bias in VGI, and these can be extended to eBird and other environmental CS datasets. These approaches analyze the relationships between data and social and political power. For example, Elwood [7] notes that “unequal social and political relationships [may] influence spatial data access and sharing” and that the dynamics of inclusion and exclusion of spatially-explicit data may be influenced by “socially or politically-grounded motivations for volunteering or withholding.” In low income communities and communities of color, scientists have frequently engaged in extractive data collection [18, 70]. While SSCS data platforms like eBird have the potential to empower communities, there is also a risk that these platforms can contribute to further marginalization, depending on how the platforms are perceived and how the data are used [71, 72]. Further research is needed to assess how user perceptions of eBird might be influencing the spatial distribution of eBird checklist submissions.

4.2 The value of semi-structured crowdsourced data and future work

The preceding section suggests a need for caution when using SSCS data. In particular, unstructured or semi-structured data collection can lead to biased sampling along factors such as race, income, or geography. For this reason, SSCS data will not provide a reliable basis for assessing the relationship between these factors and access to biodiversity. Such analyses may suffer from collider bias, resulting in spurious associations. Fig 6 provides an illustrative example (adapted from Cunningham [36]) of collider bias. Panel A shows the “true” underlying distribution of biodiversity and income for a hypothetical (simulated) set of tracts, while Panel B shows the distribution when tracts with low income or low biodiversity are unobserved due to selection. By construction, biodiversity and income are completely uncorrelated in the underlying simulation dataset (Panel A), but selection will lead the analyst to find a negative correlation between these two variables (Panel B). This figure clearly illustrates how selected sampling can lead to incorrect inferences about the relationship between variables of interest.

Fig 6. Simulated data to demonstrate the statistical bias that can occur with unbalanced sample selection.

Panel A. Full sample with random, independent draws for biodiversity and income (β = -0.0042, SE = 0.031, p-value = 0.89). Panel B. Same sample with selection, where tracts with low income or low biodiversity are unobserved (β = -0.17, SE = 0.032, p-value < 0.001).

https://doi.org/10.1371/journal.pone.0277223.g006
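The mechanism behind Fig 6 is easy to reproduce. The minimal R sketch below draws biodiversity and income independently and then imposes a selection rule in the spirit of Panel B; our exact simulation settings for Fig 6 are not reproduced here, but dropping tracts that are low on both variables is one rule that suffices to induce the spurious negative slope.

    set.seed(42)
    n <- 2500
    income       <- rnorm(n)
    biodiversity <- rnorm(n)  # drawn independently of income by construction

    full <- data.frame(income, biodiversity)
    # Selection: tracts that are low on both variables go unobserved
    selected <- subset(full, income > 0 | biodiversity > 0)

    coef(summary(lm(biodiversity ~ income, data = full)))["income", ]      # slope ~ 0
    coef(summary(lm(biodiversity ~ income, data = selected)))["income", ]  # slope < 0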

These problems are especially pronounced for studies examining within-city variation, where incomplete data coverage can pose challenges for valid statistical inference. Systematically sampled biodiversity data will be better suited for answering questions at such fine scales where SSCS platforms have data gaps. That being said, we note that our primary empirical analysis focuses on the number of checklists, which is not a perfect proxy for biodiversity. Tracts with a low number of checklists may still have a sufficient number of checklists to describe biodiversity in that location, in which case concerns about collider bias will be mitigated. However, collider bias will remain a persistent issue if many tracts have zero checklists. These tracts will appear to the researcher as though they have zero biodiversity, or they may be systematically dropped from the analysis altogether—either of which would introduce bias into subsequent analyses.

In spite of these challenges, SSCS datasets remain important and powerful for many critical lines of inquiry. SSCS platforms like eBird have generated a wealth of data that are available at fine local levels and at unprecedented scales across the globe. These platforms provide unique and precious insights into social-ecological phenomena unfolding at regional and continental scales, such as migratory patterns [73, 74], changes in the geographic extent of plant and animal species [3, 75, 76], and human encounters with nature [67, 77]. In this light, we stress that SSCS data are indispensable and complementary to more traditional approaches for collecting ecological data. The central insight of this paper is that such unstructured or semi-structured data may not be well-suited for all lines of inquiry–particularly those requiring consistent data coverage–and should thus be handled with appropriate care. Against this backdrop, there are also opportunities to improve data collection to overcome some of the obstacles described above. SSCS platforms, like eBird, can create targeted campaigns to increase data coverage in tracts or areas with data gaps, thus ameliorating concerns about sample selection and collider bias. Moreover, expanding the user base of SSCS platforms, especially to underrepresented communities and areas, will improve data coverage by expanding the locations from which SSCS users originate and therefore where they are likely to report bird sightings.

Inclusivity, therefore, is crucial to CS efforts at multiple levels. Inclusivity is important to the outreach value of these efforts by expanding participation in and engagement with the scientific process [33, 66]. Yet, it is also essential to the rigor and validity of analyses that rely upon SSCS datasets. As we have demonstrated in this article, lack of inclusivity and incomplete data coverage can lead to critical errors in analyzing socio-ecological relationships, and these problems ultimately narrow the scope of questions that are answerable with SSCS data. We note, however, that these issues may also occur within many non-CS geospatial datasets across many fields [18, 20].

CS remains a powerful approach to connecting the public with science [13, 22, 66, 78]. Whose responsibility is it to address the gaps identified here and in other studies? Is it the managers of CS projects, or the academics who use the data? CS projects can have a variety of goals depending on who initiated the project and its primary purpose [22, 78, 79]. For example, a project whose aim is engagement and one whose aim is data collection are structured differently and engage people in different ways, yet are both categorized together as CS [66, 78, 80]. We urge ourselves and our colleagues in the field to do a better job of collaborating, and to bridge gaps between academia and project managers, in order to build CS datasets with greater reach, representativeness, and utility for scientific inquiry.

Supporting information

S1 Table. Models included in the model averaging for BOS MSA.

https://doi.org/10.1371/journal.pone.0277223.s001

(DOCX)

S2 Table. Models included in the model averaging for PHX MSA.

https://doi.org/10.1371/journal.pone.0277223.s002

(DOCX)

Acknowledgments

We thank the research groups at the University of Massachusetts Institute of Diversity Sciences (UMass IDS) for review and contribution at various stages of the manuscript. We thank eBird and the Cornell Lab of Ornithology for providing open access to eBird data. We thank the editor and two anonymous reviewers for their comments, which greatly enhanced the manuscript.

References

  1. Hampton SE, Strasser CA, Tewksbury JJ, Gram WK, Budden AE, Batcheller AL, et al. Big data and the future of ecology. Frontiers in Ecology and the Environment. 2013;11: 156–162.
  2. Farley SS, Dawson A, Goring SJ, Williams JW. Situating Ecology as a Big-Data Science: Current Advances, Challenges, and Solutions. BioScience. 2018;68: 563–576.
  3. La Sorte FA, Lepczyk CA, Burnett JL, Hurlbert AH, Tingley MW, Zuckerberg B. Opportunities and challenges for big data ornithology. Condor. 2018;120: 414–426.
  4. Gadsden GI, Malhotra R, Schell J, Carey T, Harris NC. Michigan ZoomIN: Validating Crowd-Sourcing to Identify Mammals from Camera Surveys. Wildlife Society Bulletin. 2021;45: 221–229.
  5. Alberti M, Marzluff JM, Shulenberger E, Bradley G, Ryan C, Zumbrunnen C. Integrating humans into ecology: Opportunities and challenges for studying urban ecosystems. BioScience. 2003;53: 1169–1179.
  6. Pickett STA, Cadenasso ML, Rosi-Marshall EJ, Belt KT, Groffman PM, Grove JM, et al. Dynamic heterogeneity: a framework to promote ecological integration and hypothesis generation in urban systems. Urban Ecosyst. 2017;20: 1–14.
  7. Elwood S. Volunteered geographic information: future research directions motivated by critical, participatory, and feminist GIS. GeoJournal. 2008;72: 173–183.
  8. Haworth B, Bruce E. A Review of Volunteered Geographic Information for Disaster Management. Geography Compass. 2015;9: 237–250.
  9. Gardner Z, Mooney P, De Sabbata S, Dowthwaite L. Quantifying gendered participation in OpenStreetMap: responding to theories of female (under) representation in crowdsourced mapping. GeoJournal. 2020;85: 1603–1620.
  10. Blake C, Rhanor A, Pajic C. The Demographics of Citizen Science Participation and Its Implications for Data Quality and Environmental Justice. Citizen Science: Theory and Practice. 2020;5: 21.
  11. Rutter JD, Dayer AA, Harshaw HW, Cole NW, Duberstein JN, Fulton DC, et al. Racial, ethnic, and social patterns in the recreation specialization of birdwatchers: An analysis of United States eBird registrants. Journal of Outdoor Recreation and Tourism. 2021;35: 100400.
  12. Lopez B, Minor E, Crooks A. Insights into human-wildlife interactions in cities from bird sightings recorded online. Landscape and Urban Planning. 2020;196: 103742.
  13. Perkins DJ. Blind Spots in Citizen Science Data: Implications of Volunteer Bias in eBird Data. M.S. Thesis, North Carolina State University. 2020. Available: https://repository.lib.ncsu.edu/handle/1840.20/38156
  14. Ellis-Soto D, Chapman M, Locke D. Uneven biodiversity sampling across redlined urban areas in the United States. EcoEvoRxiv; 2022.
  15. Palen JJ. The Urban World. New York: McGraw-Hill; 2005.
  16. Boone CG. Environmental Justice as Process and New Avenues for Research. Environmental Justice. 2008;1: 149–154.
  17. Warren PS, Harlan S, Boone C, Lerman SB, Shochat E, Kinzig AP. Urban ecology and human social organization. In: Gaston K, editor. Urban Ecology. Cambridge: Cambridge University Press; 2010. pp. 172–201.
  18. Schell CJ, Dyson K, Fuentes TL, Roches SD, Harris NC, Miller DS, et al. The ecological and evolutionary consequences of systemic racism in urban environments. Science. 2020;369. pmid:32792461
  19. Watkins SL, Gerrish E. The relationship between urban forests and race: A meta-analysis. Journal of Environmental Management. 2018;209: 152–168. pmid:29289843
  20. Kuras ER, Warren PS, Zinda JA, Aronson MFJ, Cilliers S, Goddard MA, et al. Urban socioeconomic inequality and biodiversity often converge, but not always: A global meta-analysis. Landscape and Urban Planning. 2020;198: 103799.
  21. Locke DH, Hall B, Grove JM, Pickett STA, Ogden LA, Aoki C, et al. Residential housing segregation and urban tree canopy in 37 US Cities. npj Urban Sustain. 2021;1: 1–9.
  22. Johnston A, Hochachka WM, Strimas-Mackey ME, Gutierrez VR, Robinson OJ, Miller ET, et al. Best practices for making reliable inferences from citizen science data: case study using eBird to estimate species distributions. bioRxiv. 2019; 574392.
  23. Callaghan CT, Major RE, Lyons MB, Martin JM, Wilshire JH, Kingsford RT, et al. Using citizen science data to define and track restoration targets in urban areas. Journal of Applied Ecology. 2019;56: 1998–2006.
  24. Callaghan CT, Gawlik DE. Efficacy of eBird data as an aid in conservation planning and monitoring. Journal of Field Ornithology. 2015;86: 298–304.
  25. eBird. eBird. Ithaca, NY: Cornell Lab of Ornithology; 2021. Available: https://ebird.org/home
  26. Schuetz JG, Johnston A. Characterizing the cultural niches of North American birds. PNAS. 2019; 201820670. pmid:30988189
  27. USFWS. Birding in the United States: A Demographic and Economic Analysis. Addendum to the 2016 National Survey of Fishing, Hunting and Wildlife-Associated Recreation. Washington, D.C.: USDI Fish and Wildlife Service; 2019. 372 p. Report No.: 2016–2. Available: https://digitalmedia.fws.gov/digital/collection/document/id/2252
  28. Callaghan CT, Rowley JJL, Cornwell WK, Poore AGB, Major RE. Improving big citizen science data: Moving beyond haphazard sampling. PLoS Biol. 2019;17: e3000357. pmid:31246950
  29. Kelling S, Johnston A, Bonn A, Fink D, Ruiz-Gutierrez V, Bonney R, et al. Using Semistructured Surveys to Improve Citizen Science Data for Monitoring Biodiversity. BioScience. 2019;69: 170–179. pmid:30905970
  30. Dunn EH, Francis CM, Blancher PJ, Drennan SR, Howe MA, Lepage D, et al. Enhancing the Scientific Value of the Christmas Bird Count. The Auk. 2005;122: 338–346.
  31. Mentges A, Blowes SA, Hodapp D, Hillebrand H, Chase JM. Effects of site-selection bias on estimates of biodiversity change. Conservation Biology. 2020; cobi.13610. pmid:32808693
  32. Zhang G, Zhu A-X. The representativeness and spatial bias of volunteered geographic information: a review. Annals of GIS. 2018;24: 151–162.
  33. Dibner KA, Pandya R. Demographic Analyses of Citizen Science. In: Learning Through Citizen Science: Enhancing Opportunities by Design. National Academies Press (US); 2018. Available: http://www.ncbi.nlm.nih.gov/books/NBK535967/
  34. Pateman RM, Dyke A, West SE. The Diversity of Participants in Environmental Citizen Science. Citizen Science: Theory and Practice. 2021 [cited 17 Dec 2021]. Available: https://doi.org/10.5334/cstp.369
  35. Walker CM, Colton Flynn K, Ovando-Montejo GA, Ellis EA, Frazier AE. Does demolition improve biodiversity? Linking urban green space and socioeconomic characteristics to avian richness in a shrinking city. Urban Ecosyst. 2017;20: 1191–1202.
  36. Cunningham S. Causal Inference: The Mixtape. 1st ed. Yale University Press; 2021.
  37. Knox D, Lowe W, Mummolo J. Administrative Records Mask Racially Biased Policing. American Political Science Review. 2020;114: 619–637.
  38. Martin VY, Greig EI. Young adults’ motivations to feed wild birds and influences on their potential participation in citizen science: An exploratory study. Biological Conservation. 2019;235: 295–307.
  39. Pateman R, Tuhkanen H, Cinderby S. Citizen Science and the Sustainable Development Goals in Low and Middle Income Country Cities. Sustainability. 2021;13: 9534.
  40. Radeloff VC, Hammer RB, Stewart SI, Fried JS, Holcomb SS, McKeefry JF. The wildland-urban interface in the United States. Ecological Applications. 2005;15: 799–805.
  41. Gammage G. Phoenix in Perspective: Reflection on Developing the Desert. Herberger Center for Design Excellence, College of Architecture and Environmental Design, Arizona State University; 1999.
  42. Walker K, Herman M, Eberwein K. tidycensus. 2020. Available: https://cran.r-project.org/web/packages/tidycensus/tidycensus.pdf
  43. United States Census Bureau. 2010 Census. Washington, D.C., USA: U.S. Census Bureau; 2011. Available: http://www.census.gov/2010census/data/
  44. United States Census Bureau. 2007–2011 American Community Survey. Washington, D.C., USA: U.S. Census Bureau; 2011. Available: http://ftp2.census.gov/
  45. R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2019. Available: https://www.R-project.org/
  46. Wickham H. ggplot2: Elegant Graphics for Data Analysis. New York, NY: Springer; 2016. Available: https://cran.r-project.org/web/packages/ggplot2/
  47. Strimas-Mackey M, Miller ET, Hochachka W, Cornell Lab of Ornithology. auk. 2020. Available: https://cornelllabofornithology.github.io/auk/
  48. ESRI. ArcGIS Desktop. Redlands, CA: Environmental Systems Research Institute; 2011.
  49. John P. MassGIS Data: Land Use (2005). MassGIS (Bureau of Geographic Information); 2018. Available: https://docs.digital.mass.gov/dataset/massgis-data-land-use-2005
  50. NLCD. National Land Cover Database (NLCD) 2011 Land Cover Conterminous United States. U.S. Geological Survey; 2011. Available: https://doi.org/10.5066/P97S2IID
  51. Buckland ST, Burnham KP, Augustin NH. Model Selection: An Integral Part of Inference. Biometrics. 1997;53: 603.
  52. Burnham KP, Anderson DR. Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach. New York, NY: Springer Science & Business Media; 2003.
  53. Symonds MRE, Moussalli A. A brief guide to model selection, multimodel inference and model averaging in behavioural ecology using Akaike’s information criterion. Behav Ecol Sociobiol. 2011;65: 13–21.
  54. Wood S. mgcv: Mixed GAM Computation Vehicle with Automatic Smoothness Estimation. 2019. Available: https://cran.r-project.org/web/packages/mgcv/index.html
  55. Ripley B, Venables B, Bates DM, Hornik K, Gebhardt A, Firth D. MASS. 2020. Available: https://cran.r-project.org/web/packages/MASS/MASS.pdf
  56. Cavanaugh JE. Unifying the derivations for the Akaike and corrected Akaike information criteria. Statistics & Probability Letters. 1997;33: 201–208.
  57. Mazerolle MJ. AICcmodavg: Model selection and multimodel inference based on (Q)AIC(c). 2017. Available: https://cran.r-project.org/package=AICcmodavg
  58. Pebesma E, Bivand R, Rowlingson B, Gomez-Rubio V, Hijmans R, Sumner M, et al. sp. 2020. Available: https://cran.r-project.org/web/packages/sp/index.html
  59. Pebesma E, Graeler B. gstat. 2020. Available: https://cran.r-project.org/web/packages/gstat/index.html
  60. Dormann CF, McPherson JM, Araújo MB, Bivand R, Bolliger J, Carl G, et al. Methods to account for spatial autocorrelation in the analysis of species distributional data: a review. Ecography. 2007;30: 609–628.
  61. Nishida T, Chen D-G. Incorporating spatial autocorrelation into the general linear model with an application to the yellowfin tuna (Thunnus albacares) longline CPUE data. Fisheries Research. 2004;70: 265–274.
  62. Gerrish E, Watkins SL. The relationship between urban forests and income: A meta-analysis. Landscape and Urban Planning. 2018;170: 293–308. pmid:29249844
  63. Mair L, Ruete A. Explaining Spatial Variation in the Recording Effort of Citizen Science Data across Multiple Taxa. PLOS ONE. 2016;11: e0147796. pmid:26820846
  64. Garnett ST, Ainsworth GB, Zander KK. Are we choosing the right flagships? The bird species and traits Australians find most attractive. PLOS ONE. 2018;13: e0199253. pmid:29944681
  65. Andrade R, Franklin J, Larson KL, Swan CM, Lerman SB, Bateman HL, et al. Predicting the assembly of novel communities in urban ecosystems. Landscape Ecol. 2021;36: 1–15.
  66. Cooper CB, Hawn CL, Larson LR, Parrish JK, Bowser G, Cavalier D, et al. Inclusion in citizen science: The conundrum of rebranding. Science. 2021;372: 1386–1388.
  67. Kolstoe S, Cameron TA. The Non-market Value of Birding Sites and the Marginal Value of Additional Species: Biodiversity in a Random Utility Model of Site Choice by eBird Members. Ecological Economics. 2017;137: 1–12.
  68. Byrne J. When green is White: The cultural politics of race, nature and social exclusion in a Los Angeles urban national park. Geoforum. 2012;43: 595–611.
  69. Finney C. Black Faces, White Spaces: Reimagining the Relationship of African Americans to the Great Outdoors. Chapel Hill, NC: UNC Press Books; 2014.
  70. Trisos CH, Auerbach J, Katti M. Decoloniality and anti-oppressive practices for a more ethical ecology. Nature Ecology & Evolution. 2021; 1–8. pmid:34031567
  71. Sieber RE, Haklay M. The epistemology(s) of volunteered geographic information: a critique. Geo: Geography and Environment. 2015;2: 122–136.
  72. Christine DI, Thinyane M. Citizen science as a data-based practice: A consideration of data justice. Patterns. 2021;2: 100224. pmid:33982019
  73. La Sorte FA, Fink D. Migration distance, ecological barriers and en-route variation in the migratory behaviour of terrestrial bird populations. Global Ecology and Biogeography. 2017;26: 216–227. https://doi.org/10.1111/geb.12534
  74. Zaifman J, Shan D, Ay A, Jimenez AG. Shifts in Bird Migration Timing in North American Long-Distance and Short-Distance Migrants Are Associated with Climate Change. International Journal of Zoology. 2017;2017: e6025646.
  75. Girish KS, Srinivasan U. Preliminary evidence for upward elevational range shifts by Eastern Himalayan birds. bioRxiv. 2020; 2020.10.13.337121.
  76. Kelly JF, Horton KG, Stepanian PM, de Beurs KM, Fagin T, Bridge ES, et al. Novel measures of continental-scale avian migration phenology related to proximate environmental cues. Ecosphere. 2016;7: e01434. https://doi.org/10.1002/ecs2.1434
  77. Kolstoe S, Cameron TA, Wilsey C. Climate, Land Cover, and Bird Populations: Differential Impacts on the Future Welfare of Birders across the Pacific Northwest. Agric Resour Econom Rev. 2018;47: 272–310.
  78. Callaghan CT, Poore AGB, Major RE, Rowley JJL, Cornwell WK. Optimizing future biodiversity sampling by citizen scientists. Proc R Soc B. 2019;286: 20191487. pmid:31575364
  79. Tauginienė L, Butkevičienė E, Vohland K, Heinisch B, Daskolia M, Suškevičs M, et al. Citizen science in the social sciences and humanities: the power of interdisciplinarity. Palgrave Commun. 2020;6: 89.
  80. Dickinson JL, Zuckerberg B, Bonter DN. Citizen Science as an Ecological Research Tool: Challenges and Benefits. Annual Review of Ecology, Evolution, and Systematics. 2010;41: 149–172.