Statistically-Estimated Tree Composition for the Northeastern United States at Euro-American Settlement

We present a gridded 8 km-resolution data product of the estimated composition of tree taxa at the time of Euro-American settlement of the northeastern United States and the statistical methodology used to produce the product from trees recorded by land surveyors. Composition is defined as the proportion of stems larger than approximately 20 cm diameter at breast height for 22 tree taxa, generally at the genus level. The data come from settlement-era public survey records that are transcribed and then aggregated spatially, giving count data. The domain is divided into two regions, eastern (Maine to Ohio) and midwestern (Indiana to Minnesota). Public Land Survey point data in the midwestern region (ca. 0.8-km resolution) are aggregated to a regular 8 km grid, while data in the eastern region, from Town Proprietor Surveys, are aggregated at the township level in irregularly-shaped local administrative units. The product is based on a Bayesian statistical model fit to the count data that estimates composition on the 8 km grid across the entire domain. The statistical model is designed to handle data from both the regular grid and the irregularly-shaped townships and allows us to estimate composition at locations with no data and to smooth over noise caused by limited counts in locations with data. Critically, the model also allows us to quantify uncertainty in our composition estimates, making the product suitable for applications employing data assimilation. We expect this data product to be useful for understanding the state of vegetation in the northeastern United States prior to large-scale Euro-American settlement. In addition to specific regional questions, the data product can also serve as a baseline against which to investigate how forests and ecosystems change after intensive settlement. The data product is being made available at the NIS data portal as version 1.0.


Introduction
Historical datasets provide critical context to understand forest ecology. They allow researchers to define 'baseline' conditions for conservation management, to understand ecosystem processes at decadal and centennial scales, to track forest responses to shifting climates, and, particularly in regions with widespread land use change, to understand the extent to which forests after conversion and regeneration differ from the original forest cover.
Euro-American settlement and subsequent land use change occurred in a time-transient fashion across North America and were accompanied by land surveys needed to demarcate land for land tenure and use. Various systems were used by surveyors to locate legal boundary markers, usually by recording and marking trees adjacent to survey markers. These data provide vegetation information that can be mapped and used quantitatively to represent the period of settlement. Early surveys (from 1620 until 1825) in the northeastern United States provide spatially-aggregated data at the township level [1,2], with typical township size on the order of 200 km 2 and no information about the locations of individual trees; we refer to these as the Town Proprietor Survey (TPS). Later surveys after the establishment of the U.S. Public Land Survey System (PLS) by the General Land Office (GLO) provide point-level data along a regular grid, with one-half mile (800 m) spacing, for Ohio and westward during the period 1785 to 1907 [3][4][5][6]. At each point 2-4 trees were identified, and the common name, diameter at breast height, and distance and bearing from the point were recorded. Survey instructions during the PLS varied through time and by point type. Accounting for this variation requires data screening to maximize consistency among points and the application of spatially-varying correction factors [6] to accurately assess tree stem density, basal area and biomass from the early settlement records, but the impact on composition estimates is limited [7]. Surveyors sometimes used ambiguous common names, which requires matching to scientific names and standardization [6,8].
Logging, agriculture, and land abandonment have left an indelible mark on forests in the northeastern United States [2,6,9,10]. However, most studies have assessed these effects in individual states or smaller domains [11,12] and at various spatial resolutions, from townships (36 square miles) to forest zones of hundreds or thousands of square miles. [6] provide a new dataset of forest composition, biomass, and stem density based on PLS data for the upper Midwest that is resolved to an 8 km by 8 km grid cell scale, providing broad spatial coverage at a spatial scale that can be compared to modern forests using Forest Inventory and Analysis products [13]. Combined with additional, coarsely-sampled PLS data from Illinois and Indiana, newly-digitized data from southern Michigan, and with the TPS data, this gives us raw data for much of the northeastern United States. However, there are several limitations of the raw data that can be alleviated by the use of a statistical model to develop a statistically-estimated data product. First, the PLS and TPS data only provide estimates of within-cell variance that do not account for information from nearby locations. Second, there are data gaps: the available digitized data from Illinois and Indiana represent a small fraction of those states, and missing townships are common in the TPS data. Third, the TPS and PLS data have fundamentally different sampling designs and spatial resolutions. Our statistical model allows us to provide a spatially-complete data product of settlement-era tree composition for a common 8 km grid with uncertainty across the northeastern U.S.
In Data we describe the data sources, while in Statistical model we describe the statistical model used to create the data product. In Model comparison we quantitatively compare competing statistical specifications, and in Data Product we describe the final data product. In Discussion we discuss the uncertainties estimated by and the limitations of the statistical model, and we list related data products under development.

Data
The raw data were obtained from land division survey records collated and digitized from across the northeastern U.S. by a number of researchers (Fig 1). For the states of Minnesota, Wisconsin, Illinois, Indiana, and Michigan (the midwestern subdomain), digitized data are available at PLS survey point locations and have been aggregated to a regular 8 km grid in the Albers projection. (Note that for Indiana and Illinois, at the moment trees are associated with township centroids and then assigned to 8 km grid cells based on the centroid, but in the near future we will have point locations available for each tree.) For the states of Ohio, Pennsylvania, New Jersey, New York and the six New England states (the eastern subdomain), data are aggregated at the township level. We make predictions for all of the states listed above; these constitute our core domain. There are also data from a single township in Quebec and a single township in northern Delaware; these data help inform predictions in nearby locations within our core domain, but predictions are not made for Quebec or Delaware. Digitization of PLS data in Minnesota, Wisconsin and Michigan is essentially complete, with PLS data for nearly all 8 km grid cells, but data in Illinois and Indiana represent a sample of the full set of grid cells, with survey record transcription ongoing. Data for the eastern states are available for a subset of the full set of townships covering the domain; the TPS data for some townships were lost, incomplete, or have not been located [1].
Note that surveys occurred over a period of more than 200 years as European colonists (before U.S. independence) and the United States settled what is now the northeastern and midwestern United States. Our estimates are for the period of settlement represented by the survey data and therefore are time-transgressive; they do not represent any single point in time across the domain, but rather the state of the landscape at the time just prior to widespread Euro-American settlement and land use [1,14]. These forest composition datasets do include the effects of Native American land use and early Euro-American settlement activities (e.g., [15]), but it is likely that the imprint of this earlier land use is highly concentrated rather than spatially extensive [16].
Extensive details on the upper Midwest (Minnesota, Wisconsin, Michigan) data and processing steps are available [6]; key elements include the use of only corner points, the use of only the two closest trees at each corner point, spatially-varying correction factors for sampling effort, and a standardized taxonomy table. The lower Midwest (Illinois, Indiana) data were purchased from the Indiana State Archives (Indiana) and Hubtack Document Resources (hubtack.com; Illinois) and processed using similar steps as for the upper Midwest data. Digitization of the Illinois and Indiana data is still underway, so many grid cells contained no data at the time the statistical model was fit. Note that the number of trees per grid cell varies depending on the number of survey points in a cell, with an average of 124 trees per cell. The gridded data at the 8 km resolution for the midwest subdomain are available through the NIS data portal [17]. The TPS data were compiled by C.V. Cogbill from a myriad of archival sources representing land division surveys conducted in connection with local settlement and are available through the NIS data portal [18,19].
The aggregation into taxonomic groups is primarily at the genus level but is at the species level in some cases of monospecific genera. We model 22 taxa plus an "other hardwood" category. Note that in several cases (black gum/sweet gum, ironwood, poplar/tulip poplar, cedar/juniper), because of ambiguity in the common tree names used by surveyors, a group represents trees from different genera or even families. For the midwestern subdomain we do not fit statistical models for Atlantic white cedar and chestnut, as these taxa have 0 and 7 trees present, respectively. The taxa grouped into the other hardwood category are those for which fewer than roughly 2000 trees were present in the dataset; however, we include Atlantic white cedar explicitly, despite its having only 336 trees in the dataset, because of specific ecological interest in Atlantic white cedar wetlands.
Diameters are only recorded in the PLS data. Although surveyors avoided using small trees, there was no consistent lower diameter limit. The PLS data generally represent trees greater than 8 inches (ca. 20 cm) diameter at breast height (dbh), but with some trees as small as 1 inch dbh (smaller trees were much more common in far northern Minnesota). TPS data have no information about dbh, but the trees were large enough to blaze and are presumed to be relatively large trees useful for marking property boundaries.
There are approximately 860,000 trees in the midwestern subdomain and 420,000 trees in the eastern subdomain. In the midwestern subdomain, oak is the most common taxon and pine the second most common, while in the eastern subdomain oak is the most common and beech the second most common.
Our domain is a rectangle covering all of the states using a metric Albers (Great Lakes and St. Lawrence) projection (PROJ4: EPSG:3175), with the rectangle split into 8 km cells, arranged in a 296 by 180 grid of cells, with the centroid of the cell in the southwest corner located at (-71000 m, 58000 m). For the midwestern subdomain we use the western-most 146 by 180 grid of cells when fitting the statistical models. For the eastern subdomain we use the eastern-most 180 by 180 grid of cells and then omit 23 rows of cells in the north and 17 rows of cells in the south, as these grid cells are outside of the states containing data.

Statistical model
We fit a Bayesian statistical model to the data, with two primary goals: 1. To estimate composition on a regular grid across the entire domain, filling gaps where no data are available, and 2. To quantify uncertainty in composition at all locations. Even in grid cells and townships with data, we wish to quantify uncertainty because the empirical proportions represent estimates of the true proportions that could be calculated using the full population of all the trees in a grid cell or township.
At a high level, the Bayesian statistical model estimates composition across the domain, even in locations with sparse or no data, by combining the raw composition data with the assumption that composition varies in a smooth spatial fashion across the domain. The information in the data is quantified by the data model, also known as the likelihood. The assumption of smoothness is built into the model by representing the true unknown spatially-varying composition using a statistical spatial process representation that induces smoothing of estimates across nearby locations. This spatial process representation is a form of prior distribution and is a function of model parameters called hyperparameters that determine the correlation structure of the process and are also estimated based on the data.
The result of fitting the Bayesian model via Markov chain Monte Carlo (MCMC) is a set of representative samples from the posterior distribution for the composition in the 23 taxonomic groupings at each of the grid cells. These samples are the data product (described further in Data Product) and can be used in subsequent analyses. The mean and standard deviation of the samples for each pair of cell and taxon represent our best estimate (i.e., prediction) of composition and a Bayesian "standard error" quantifying the uncertainty in the estimate.
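To make this concrete, the sketch below (in Python, with Dirichlet draws standing in for actual MCMC output) shows how per-cell, per-taxon posterior means and standard deviations would be computed from an array of samples; the dimensions and values are purely illustrative, not the real product's 8 km grid or 23 groupings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for MCMC output: 250 posterior samples of composition for
# 3 hypothetical grid cells and 4 hypothetical taxa.
samples = rng.dirichlet(alpha=[2.0, 1.0, 1.0, 0.5], size=(250, 3))

post_mean = samples.mean(axis=0)  # best estimate per (cell, taxon)
post_sd = samples.std(axis=0)     # Bayesian "standard error"

# Each sample sums to one over taxa, so the posterior mean does too.
print(np.allclose(post_mean.sum(axis=1), 1.0))
```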
In the remainder of this section we provide the technical specification of the model and of the computations involved in fitting the model.
Data model. We start by describing the basic model for those states for which we have raw data on the 8 km grid, and in Model for township data we describe the extension of the model to accommodate data aggregated at the township level.
The statistical model treats the observations as coming from a multinomial distribution with a (latent) vector of proportions for each grid cell, $y_i \sim \text{Multinomial}(n_i, \theta(s_i))$, where $y_i$ is the vector of counts for the $P$ taxa at the $i$th cell, $n_i$ is the number of trees counted in the cell, and $\theta(s_i)$ is the vector of unknown proportions for those taxa at that cell. Note that we use a standard multinomial distribution without overdispersion, because the set of trees in the dataset is roughly uniformly sampled across the cells or townships [6].
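A minimal sketch of this data model (Python; the proportion vector is invented, though the tree count of 124 matches the average trees per grid cell reported above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical latent proportions theta(s_i) for P = 4 taxa at one cell.
theta_i = np.array([0.5, 0.3, 0.15, 0.05])
n_i = 124  # average number of trees per grid cell in the midwestern data

# The data model: y_i ~ Multinomial(n_i, theta(s_i)).
y_i = rng.multinomial(n_i, theta_i)

# The empirical proportions y_i / n_i are a noisy estimate of theta(s_i);
# the spatial prior smooths this noise using information from nearby cells.
print(y_i.sum() == n_i)
```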
The proportions, $\theta_p(s_i)$, $p = 1, \ldots, P$, are modeled spatially by a set of $P$ Gaussian spatial processes, one per taxon, $\alpha_p(s_i)$, $p = 1, \ldots, P$. This collection of processes defines a multivariate spatial process for composition. The $\alpha_p(s)$ processes are defined on the 8 km grid, $\alpha_p = \{\alpha_p(s_1), \ldots, \alpha_p(s_m)\}$ for the $m$ grid cells. In Latent variable model we introduce a multinomial probit model that relates the $\alpha_p(s)$ processes to the proportion processes, $\theta_p(s)$, via the introduction of latent variables, with an implicit sum-to-one constraint, $\sum_{p=1}^{P} \theta_p(s) = 1$. A multinomial probit model is similar to logistic regression, which models a binary outcome based on an underlying probability that the outcome will occur, but generalizes to modeling a categorical variable based on probabilities for each category.
The critical component of the statistical model is the representation of α p (s) as a spatial process. A spatial process is a statistical representation that models spatially-correlated values. It provides a prior structure that serves to smooth across noise in the observations and allows for prediction at locations based on information from nearby locations, including interpolation to locations with no data. Apart from the sum-to-one constraint, the taxa are considered to be independent in the prior. We did not want to impose any structure that ties the different taxa together, as any correlation will likely vary across space.
In the next section, we consider two spatial models to define the structure of the α p (s) processes, a standard conditional autoregressive model [20] and a Gaussian Markov random field (MRF) approximation to a Gaussian process with Matérn covariance [21]. These models are specific statistical formulations of spatial processes that represent spatial correlation by defining neighborhoods around each location that are used to help inform predictions at the location.
Spatial process models. MRF models represent the neighborhood information by working directly with the precision matrix (the inverse of the covariance matrix) of the values of the spatial process, so calculation of the prior density of α p is computationally simple [22]. However, in situations in which the likelihood is not normal, such as our multinomial likelihood, it can be difficult to set up effective MCMC algorithms that are able to move in the high-dimensional space of α p . The latent variable representation helps to alleviate this problem. Next we describe two alternative spatial models that we considered; in Model comparison, we evaluate the models on held-out data to choose between the two.
Standard conditional autoregressive models. Our first model is a standard conditional autoregressive (CAR) model; technical details can be found in [20]. We use a standard form of this model that treats the four cardinal neighbors of each grid cell as the neighbors of that cell. The corresponding precision matrix has diagonal elements, $Q_{ii}$, equal to the number of neighbors of the $i$th area (i.e., four except for cells on the boundary of the domain), while $Q_{ik} = -1$ (the negative of a weight of one) when areas $i$ and $k$ are neighbors and $Q_{ik} = 0$ when they are not. This gives the following model for the values of $\alpha_p(s_i)$ collected as a vector across all of the grid cells, $i = 1, \ldots, m$: $\alpha_p \sim N(0, \sigma_p^2 Q^{-})$. The generalized inverse notation indicates that $Q$ is not full rank, but rather of rank $m - 1$; this gives an improper prior on an implicit overall mean for the process values. Note that we specify an explicit mean of zero because a non-zero mean would not be identifiable in light of the implicit mean. This specification is called an intrinsic conditional autoregression (ICAR), and we can write $Q = D - C$, where $C$ is the $m \times m$ adjacency matrix defining the neighborhood relation of the locations; that is, $C_{ik} = 1$ if locations $i$ and $k$ are neighbors and zero otherwise. The matrix $D$ is an $m \times m$ diagonal matrix containing the row sums of $C$ on the diagonal. We refer to this as the CAR model.

Gaussian process approximation. Gaussian processes (GPs) are also standard models for spatial processes [20]. GP models are computationally challenging for large datasets because of manipulations involving large covariance matrices. Given this, [21] proposed a framework for using Gaussian MRFs (GMRFs) as approximations to GPs, based on stochastic partial differential equations (SPDEs).
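The ICAR precision construction $Q = D - C$ just described can be sketched on a toy grid (Python; the 3 × 3 grid is illustrative, and a dense matrix is used for clarity, whereas the actual implementation relies on sparse matrices):

```python
import numpy as np

def car_precision(nrow, ncol):
    """Build the ICAR precision matrix Q = D - C for a regular grid,
    using the four cardinal neighbors of each cell as its neighborhood."""
    m = nrow * ncol
    C = np.zeros((m, m))
    for r in range(nrow):
        for c in range(ncol):
            i = r * ncol + c
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < nrow and 0 <= cc < ncol:
                    C[i, rr * ncol + cc] = 1.0
    D = np.diag(C.sum(axis=1))
    return D - C

Q = car_precision(3, 3)
# The interior cell has 4 neighbors; corner cells have 2. Rows of Q sum
# to zero, which is why Q has rank m - 1 and the prior is improper.
print(Q[4, 4], Q[0, 0], np.allclose(Q.sum(axis=1), 0.0))
```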
Gaussian processes are generally constructed using one of a number of correlation functions that define how the correlation between the values of the process at two locations decays as a function of the distance between the locations. We consider Gaussian processes in the commonly-used Matérn class, with the Matérn correlation function parameterized as $R(d) = \frac{2^{1-\nu}}{\Gamma(\nu)} \left(\frac{d}{\rho}\right)^{\nu} K_{\nu}\left(\frac{d}{\rho}\right)$, where $d$ is Euclidean distance, $\rho$ is the spatial range parameter, and $K_{\nu}(\cdot)$ is the modified Bessel function of the second kind, whose order is the smoothness (differentiability) parameter, $\nu > 0$; $\nu = 0.5$ gives the exponential covariance. For any pair of locations, $R(d)$ defines the correlation of the process (i.e., $\alpha_p(s)$ in our context) as a function of the distance between the locations. Considering all pairs of locations, this defines a correlation matrix for all locations of interest. The approach of [21] allows us to consider MRF approximations to the Matérn-based GP for $\nu = 1$ and $\nu = 2$. Our second spatial model is this Lindgren approximation for Matérn-based GPs with $\nu = 1$. To implement this Lindgren model, one modifies the $Q$ matrix defined previously as follows, based on the technical specification of the precision matrix provided in [21]. Let $a = 4 + \frac{1}{\rho^2}$. The diagonal elements of $Q$ are $4 + a^2$. The entries corresponding to cardinal neighbors are $-2a$, those for diagonal neighbors are $2$, and those for second-order cardinal neighbors are $1$. This extends the neighborhood structure relative to the CAR model and parameterizes it as a function of $\rho$.
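The stencil just described can be sketched as follows (Python; the toy grid size and ρ value are invented, and boundary cells are handled naively by truncating the stencil, a simplification relative to a careful implementation):

```python
import numpy as np

def spde_precision(nrow, ncol, rho):
    """Sketch of the GMRF precision stencil for the Matern nu = 1
    approximation of [21]: with a = 4 + 1/rho^2, diagonal entries are
    4 + a^2, cardinal neighbors -2a, diagonal neighbors 2, and
    second-order cardinal neighbors 1 (interior cells)."""
    a = 4.0 + 1.0 / rho**2
    m = nrow * ncol
    Q = np.zeros((m, m))
    offsets = {(0, 0): 4.0 + a**2,
               (1, 0): -2.0 * a, (-1, 0): -2.0 * a,
               (0, 1): -2.0 * a, (0, -1): -2.0 * a,
               (1, 1): 2.0, (1, -1): 2.0, (-1, 1): 2.0, (-1, -1): 2.0,
               (2, 0): 1.0, (-2, 0): 1.0, (0, 2): 1.0, (0, -2): 1.0}
    for r in range(nrow):
        for c in range(ncol):
            i = r * ncol + c
            for (dr, dc), val in offsets.items():
                rr, cc = r + dr, c + dc
                if 0 <= rr < nrow and 0 <= cc < ncol:
                    Q[i, rr * ncol + cc] = val
    return Q

Q = spde_precision(5, 5, rho=1.0)
a = 4.0 + 1.0           # a = 4 + 1/rho^2 with rho = 1
center = 2 * 5 + 2      # index of the central cell of the 5 x 5 grid
print(Q[center, center] == 4 + a**2)    # diagonal entry
print(Q[center, center + 1] == -2 * a)  # cardinal neighbor
print(Q[center, center + 2] == 1.0)     # second-order cardinal neighbor
```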
The primary difference between the CAR and Lindgren models is that the Lindgren model provides an additional degree of freedom by estimating ρ. In particular, ρ allows us to estimate the locality of the spatial smoothing. As ρ decreases, the model uses increasingly localized data to estimate the compositional proportions at a given location, effectively averaging the empirical proportions over smaller neighborhoods. The [21] model will generally provide a smoother estimate than the CAR model [23].
To ensure that the $\sigma^2$ parameter is mathematically equivalent between the two models, we reparameterize, producing our second model; we refer to this model as the SPDE model.

Prior distributions. The ICAR specification contains a set of hyperparameters $\{\sigma_p^2\}$, $p = 1, \ldots, P$. Following [24] we use a uniform distribution on each $\sigma_p$ parameter, with an upper bound of 1000. For the SPDE model we also have hyperparameters $\{\mu_p\}$, which we give flat, non-informative priors (truncated at $\pm 10$), and $\{\rho_p\}$, which we give uniform priors on the interval $(0.1, \exp(5))$. These hyperparameters control the spatial structure of the two spatial models and are estimated under the Bayesian approach from both the data and the prior distributions just specified.
Latent variable model. It is well-known that devising an effective MCMC algorithm for models with latent Gaussian process(es) and a non-Gaussian likelihood is difficult [22,25,26]. To develop an algorithm, we make use of a latent variable representation for the multinomial probit model [27]. The representation introduces latent variables that allow one to develop an MCMC sampling strategy that takes advantage of closed-form full conditional distributions (so-called Gibbs sampling steps) for α p .
Suppose that compositional counts are available at a number of locations. At location $i$, a sample of $n_i$ observations is collected, and each observation (i.e., each tree) can be classified into one of $P$ distinct categories. For a given tree $j$ at location $i$, let $Y_{ij}$ denote the response variable indicating the category. Let $Y_{ij}$ be associated with $P$ latent variables $W_{ij1}, \ldots, W_{ijP}$ such that $Y_{ij} = p$ if and only if $W_{ijp} = \max_{p'} \{W_{ijp'}\}$; in other words, the maximum of the set of latent variables $\{W_{ijp}\}_{p=1}^{P}$ determines the category of observation $j$ at location $i$. The final piece of the latent variable representation is the relationship between the $W$ variables and the $\alpha_p(s)$ processes: $W_{ijp} \sim N(\alpha_p(s_i), 1)$, independently for all of the $W_{ijp}$ values. Consider the following example with two neighboring locations and $P = 2$ categories. Each tree $j$ at location $i$ is associated with two variables, $W_{ij1}$ and $W_{ij2}$, governed by the latent process values $\alpha_1(s_i)$ and $\alpha_2(s_i)$, respectively. Suppose that $\alpha_1(s_i) > \alpha_2(s_i)$ for a given location $i$. Then this model implies that any tree $j$ at location $i$ is more likely to be labeled 1 than 2. The difference between $\alpha_1(s_i)$ and $\alpha_2(s_i)$ explains the difference in probability of categories 1 and 2 at location $i$, and the similarity between $\alpha_p(s_1)$ and $\alpha_p(s_2)$ explains the correlation between the probabilities at locations 1 and 2 for category $p$.
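The two-category example above can be checked by simulation (Python; the α values and number of simulated trees are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# One location with alpha_1 > alpha_2, so a tree should be more likely
# to be labeled category 1 (values are assumptions for the sketch).
alpha = np.array([0.5, -0.5])

# W_jp ~ N(alpha_p, 1) independently; each tree's category is the index
# of its maximal latent variable.
n_trees = 100_000
W = rng.normal(loc=alpha, scale=1.0, size=(n_trees, 2))
labels = W.argmax(axis=1)

prop_cat1 = (labels == 0).mean()
# Analytically, P(W_1 > W_2) = Phi((alpha_1 - alpha_2) / sqrt(2)).
print(prop_cat1 > 0.5)
```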
Model for township data. We developed an extension of the model described in previous sections to account for data at a different aggregation than our core 8 km grid. This extension introduces a new set of latent variables, one per tree, that indicate the grid cells in which the trees are located and that can be sampled within the MCMC as additional unknown parameters. Specifically, $c_{tj}$ is the latent "membership" variable for tree $j$ in township $t$, $t = 1, \ldots, T$. The prior for $c_{tj}$ is a discrete distribution that puts mass $\psi_{ti}$, $i = 1, \ldots, m$, proportional to the areal overlap between the township in which the tree is located and the $m$ grid cells, giving $c_{tj} \sim \text{Multinom}(1, \{\psi_{t1}, \ldots, \psi_{tm}\})$, independently across all trees. Because each township overlaps a limited number of grid cells, most of the $\psi_{t1}, \ldots, \psi_{tm}$ values are zero.
Using the latent variable representation, we have that $W_{tjp} \sim N(\alpha_p(s_{c_{tj}}), 1)$ for tree $j$ in township $t$. In updating the other parameters in the model during the MCMC (specifically the $\alpha$ values), we condition on the current values $\{c_{tj}\}$, which provides a "soft" (i.e., probabilistic) assignment of trees to grid cells that respects both the known township in which the trees occurred and the uncertainty about which grid cells the trees occurred in.
Note that this prior represents the location of each tree in a township as being independent of the other trees; this is somewhat unrealistic because it does not represent our knowledge that the trees in a township would be distributed somewhat regularly across the area of the township because the witness trees were used to indicate property boundaries.
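The membership prior for township trees can be sketched as follows (Python; the overlap weights ψ and the number of trees are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical areal-overlap weights psi_t for one township over m = 5
# grid cells; this township overlaps only three of them.
psi_t = np.array([0.0, 0.6, 0.3, 0.1, 0.0])

# Prior for the membership variable c_tj of each tree in the township:
# a single multinomial draw selects the cell, independently per tree.
n_trees = 200
c_t = np.array([rng.multinomial(1, psi_t).argmax() for _ in range(n_trees)])

# Trees can only be assigned to cells the township actually overlaps.
print(set(c_t) <= {1, 2, 3})
```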
Computation. The [27] representation is convenient for MCMC sampling, particularly in this high-dimensional spatial context, as it allows us to draw from the posterior conditional distributions of the W ijp variables (these distributions are truncated normal) in closed form and to draw the entire vector of latent process values for each taxon, α p , as a single sample that respects the spatial dependence structure for each taxon.
While the latent variable representation provides great advantages in the MCMC sampling for each $\alpha_p$ compared to joint Metropolis updates or updating each location individually, there is still strong dependence between the hyperparameters, $\{\sigma_p^2, \mu_p, \rho_p\}$, and the latent process values (as well as between the latent process values and the latent $W_{ijp}$ variables). To address the first dependence, we developed a "cross-level" joint updating strategy in which, for the CAR model, we propose $\phi_p = \sigma_p$, $p = 1, \ldots, P$ (and, for the SPDE model, $\phi_p \in \{\mu_p, (\sigma_p, \rho_p)\}$), via a Metropolis-style random walk and then, given the proposed value $\phi_p^*$, propose $\alpha_p$ from its full conditional distribution given $\phi_p^*$ and the latent $W_p$ variables, where $W_p$ is the vector of all $W_{ijp}$ values for taxon $p$: $W_p = \{W_{ijp}\}$, $i = 1, \ldots, m$; $j = 1, \ldots, n_i$. This is equivalent to sampling from the marginalized (with respect to $\alpha_p$) distribution of $\phi_p$ conditional on $W_p$. For these joint samples of hyperparameters and $\alpha_p$, we use adaptive Metropolis sampling [28]. The full description of the MCMC sampling steps is provided in the Appendix. In addition, in the latent variable representation, $\theta_p(s)$ never appears explicitly and cannot be calculated in closed form. Instead we use Monte Carlo integration over the $W_{ijp}$, $p = 1, \ldots, P$, to estimate $\theta_p(s_i)$, also described in the Appendix.
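The Monte Carlo integration for θ_p(s) can be sketched as follows (Python; the α values and draw count are invented, and this is a simplified stand-in for the Appendix procedure): draw W_p ~ N(α_p, 1) repeatedly and record how often each taxon attains the maximum.

```python
import numpy as np

rng = np.random.default_rng(4)

def theta_from_alpha(alpha, n_draws=10_000, rng=rng):
    """Monte Carlo estimate of theta_p(s) given the latent process
    values alpha_p(s): simulate W_p ~ N(alpha_p, 1) and compute the
    fraction of draws in which each taxon is the maximum."""
    W = rng.normal(loc=alpha, scale=1.0, size=(n_draws, len(alpha)))
    winners = W.argmax(axis=1)
    return np.bincount(winners, minlength=len(alpha)) / n_draws

theta = theta_from_alpha(np.array([1.0, 0.0, -1.0]))
# The estimated proportions sum to one and follow the ordering of alpha.
print(np.isclose(theta.sum(), 1.0), theta[0] > theta[1] > theta[2])
```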
The model is implemented in R [29] with core computational calculations coded in C++ using the Rcpp package [30]. We also make extensive use of sparse matrix representations and algorithms, using the spam package in R [31]. All code, including pre- and post-processing code, is available on GitHub at https://github.com/PalEON-Project/composition.

Model comparison
Design. We compared the CAR and SPDE models by holding out data from the fitting process and assessing the fit of the model on the held-out data. We used two experiments with held-out data: 1. The first experiment used a subregion containing most of Minnesota and a small amount of western Wisconsin, defined to be the cells whose x-coordinate was less than 300,000 m (this defines a north-south line that approximately goes through Duluth, Minnesota) and hereafter referred to as the "Minnesota subregion". We chose this subregion for evaluation because of its high data density, allowing us to experiment with the effects of increasing data sparsity on model performance. We held out all data from 95% of the cells in this Minnesota subregion, with cells selected at random. This was meant to assess the ability of the model to interpolate from a sparse set of cells/townships and mimics the limited data in Illinois and Indiana.
2. We held out 5% of the trees from all of the trees in the dataset for the midwestern subdomain (leaving aside the held-out Minnesota subregion cells). This was meant to assess the ability of the model to estimate the composition in cells in which data were available.
Finally, in a separate sensitivity analysis we instead left out 80% of the cells in Minnesota subregion at random. This variation on the first experiment above was meant to indicate whether our model comparison conclusions would be robust as the digitization process for Illinois and Indiana progresses and provides us with increasingly dense data.
There has been extensive work in the statistical literature on good metrics for comparing the predictive ability of models; these metrics are referred to as scoring rules. A general conclusion from this work is that predictive distributions should maximize sharpness subject to calibration. That is, the predictive distribution should be as narrow as possible while being calibrated such that the observations are consistent with the distribution [32]. When thinking in terms of prediction intervals as summaries of the predictive distribution, we seek intervals that are as narrow as possible while still covering the truth at the expected rate (e.g., 95% of the time for a 95% prediction interval).
Following the suggestions in [32], we considered the following metrics: Brier score, log predictive density, mean square prediction error, mean absolute error, and coverage and length of prediction intervals. Further details on each are given below. For experiment 1, we define $Y_i = \{Y_{i1}, \ldots, Y_{iP}\}$ as the count of all trees in held-out cell $i$, and for experiment 2, $Y_i$ is the count of held-out individual trees in the cell, while $y_{ijp}$ is an indicator variable taking value 0 or 1 depending on whether the $j$th held-out tree in the $i$th cell is of taxon $p$. $\hat{y}_{ip} = Y_{ip}/n_i$ is the empirical proportion in category $p$ for the $n_i$ held-out trees in cell $i$. We calculated each of the metrics in two ways. First, we used the posterior mean composition estimates (as an evaluation of our core predictions), with $\tilde{y}_p(s)$ being the posterior mean. Second, we averaged the metric over the posterior samples (as an evaluation of our full data product, including uncertainty), taking $\tilde{y}_p(s)$ to be an individual MCMC sample and then averaging the metric over all the posterior samples.
1. Brier score: [32] suggest this metric, which has been in use for decades. For multi-category as opposed to binary outcomes, it takes the form $\frac{1}{n} \sum_{i=1}^{m} \sum_{j=1}^{n_i} \sum_{p=1}^{P} (\tilde{y}_p(s_i) - y_{ijp})^2$, where $n = \sum_{i=1}^{m} n_i$ is the total number of held-out trees for a given experiment and $j$ indexes the held-out trees in cell $i$.
2. Log predictive density: while in principle this metric should be optimal [33], it is very sensitive to small predictions near zero [32]. Even worse, our Monte Carlo estimation of $\theta$ used 10000 samples, so in some cases $\tilde{y}_p(s) = 0$. When a tree of taxon $p$ is present in a cell but its corresponding predicted proportion is 0, this gives a log density of $-\infty$, preventing use of the metric. As an informal solution we set $\tilde{y}_p(s) = \frac{1}{100000}$ in such cases, but given these issues we treat the log predictive density as a secondary measure.
3. Mean square prediction error (MSPE) and mean absolute error (MAE): these metrics calculate the error of the estimated proportions relative to the empirical proportions based on the held-out trees, averaging over cells and taxa. We weight by the number of held-out trees in each cell to account for the greater variability in the empirical proportions in locations with few held-out trees.
4. (Experiment 1 only) Coverage and length of 95% prediction intervals for Y ip . We considered only cells with at least 50 trees to focus our assessment on cases where empirical proportions were reasonably certain and avoid being strongly influenced by predictive inference for cells where observational variability dominates.
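To make the metric definitions above concrete, here is a minimal numpy sketch of the multi-category Brier score and the count-weighted MAE, evaluated on simulated held-out data (the cell counts, taxon number, and "predictions" are all illustrative, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical held-out data: m cells, P taxa (sizes are illustrative only).
m, P = 40, 22
n_i = rng.integers(10, 200, size=m)          # held-out trees per cell
comp = rng.dirichlet(np.ones(P), size=m)     # underlying compositions
Y = np.array([rng.multinomial(k, p) for k, p in zip(n_i, comp)])
y_hat = Y / n_i[:, None]                     # empirical proportions
y_tilde = comp                               # stand-in for model predictions
n = n_i.sum()                                # total held-out trees

def brier(pred):
    # Multi-category Brier score over individual held-out trees, written in
    # count-aggregated form: sum_j sum_p (y_ijp - pred_ip)^2 equals
    # n_i * [ sum_p (yhat_ip - pred_ip)^2 + sum_p yhat_ip (1 - yhat_ip) ].
    per_cell = ((y_hat - pred) ** 2).sum(axis=1) + (y_hat * (1 - y_hat)).sum(axis=1)
    return float((n_i * per_cell).sum() / n)

def weighted_mae(pred):
    # MAE of predicted vs. empirical proportions, averaged over taxa and
    # weighted by the number of held-out trees in each cell.
    return float((n_i * np.abs(y_hat - pred).mean(axis=1)).sum() / n)
```

Because the Brier score is quadratic in the predictions, it is minimized when the predictions equal the empirical proportions, which provides a useful sanity check on the implementation.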
Note that all of the metrics except coverage and interval length can be applied to individual posterior samples. This allows us to estimate the posterior probability that one model has a lower (better) value of the metric than the other, simply by calculating the proportion of posterior samples for which that model has the lower value. Also note that, in addition to allowing comparison between models, the MAE and RMSPE metrics allow one to assess the absolute performance of each model in predicting composition.
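The per-sample comparison described above amounts to a one-line calculation; this sketch uses hypothetical per-sample metric values (the means and spreads are illustrative, not taken from the paper's tables):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-sample metric values for the two models: metric_car[k]
# would be, e.g., the MAE computed from the kth of 250 posterior samples.
metric_car = rng.normal(loc=0.030, scale=0.002, size=250)
metric_spde = rng.normal(loc=0.031, scale=0.002, size=250)

# Posterior probability that the CAR model attains the lower (better) value.
p_car_better = float(np.mean(metric_car < metric_spde))
```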
In our initial exploratory fitting, we noticed that the SPDE model produced boundary effects in the predicted composition near the edges of the convex hull of the observations. To alleviate this, we added a buffer zone with a width of six grid cells around our entire original domain, but note that the boundary effects were still evident even after inclusion of the buffer. For the model comparison, we included this buffer for both the SPDE and CAR models. We ran each model for 150,000 iterations. After discarding 25,000 iterations for burn-in, we retained a posterior sample of 250 subsampled iterations; we use a subsample instead of the full 125,000 post-burn-in iterations to reduce post-processing computation and storage needs.
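The sampling schedule just described can be sketched as follows; equal spacing of the 250 retained samples is an assumption here, since the exact thinning scheme is not specified in the text:

```python
import numpy as np

n_iter, burn_in, n_keep = 150_000, 25_000, 250

post = np.arange(burn_in, n_iter)   # the 125,000 post-burn-in iterations
# Hypothetical equal-spacing rule for selecting the 250 retained samples.
kept = post[np.linspace(0, len(post) - 1, n_keep).astype(int)]
```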
Results

Here we summarize the results of our analyses that inform the choice between the CAR and SPDE models.
For Experiment 1 (full cells held out), considering cells in the Minnesota subregion held out of the fitting process, the CAR model outperforms the SPDE model based on the posterior distribution over the predictive metric values (Table 1). For the posterior mean predictions, the SPDE model appears to outperform the CAR model, though to a lesser degree, and we do not have uncertainty estimates for this comparison. Coverage and interval lengths are similar between the two models (Table 2). From a practical perspective, based on the difference in mean absolute error, the differences between the models are small (Table 1).
The results for the variation on Experiment 1 in which the proportion of cells held out decreases from 95% to 80% show that the SPDE model generally outperforms the CAR model, but again the differences are limited from a practical perspective based on mean absolute error (Tables 3 and 4).
In Experiment 2 (individual trees held out), we have evidence (posterior probability of 0.93) that the SPDE model is better based on the Brier score, but the Brier score values for the two models are numerically almost the same (Table 5).
The differences between models are not consistent across the various comparisons, so there is not a clear choice. In our final data product we use the CAR model, for three reasons. First, the CAR model has modestly better performance when data are sparse, as is still the case for Illinois and Indiana. Second, the model is simpler and easier to explain, and computations can be done more quickly. Third, predictions from the SPDE model showed boundary effects, with some taxa showing non-negligible posterior mean values at the edges of the domain, well away from where the taxa were present in the empirical data. This included non-negligible values within (but near the edge of) the convex hull of locations with data.

Data Product
The final data product is a dataset that contains 250 posterior samples of the proportions of each of the 23 tree taxa at each grid cell in the states in our domain of the northeastern United States. For this final data product, we ran the model using the CAR specification with all of the data (including the data held out in the model comparison analyses) for 150,000 iterations, with the same burn-in and subsampling details as described in Model comparison. Based on graphical checks and calculation of effective sample size values, mixing was generally reasonable, though relatively slow for some of the hyperparameters, particularly those for less common taxa. Despite this, mixing for the quantities of substantive interest (the proportions) was good, with effective sample sizes for the final product generally near 250.
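Effective sample size for an MCMC chain can be computed with a standard autocorrelation-based estimator; the generic sketch below (not necessarily the specific diagnostic used for the product) contrasts a nearly independent chain with a strongly autocorrelated one:

```python
import numpy as np

def ess(x):
    # Crude effective sample size: n / (1 + 2 * sum over the initial run of
    # positive lag autocorrelations). A generic MCMC diagnostic sketch.
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    acf = np.correlate(xc, xc, mode="full")[n - 1:] / (xc ** 2).sum()
    s = 0.0
    for rho in acf[1:]:
        if rho <= 0.0:          # stop at the first non-positive lag
            break
        s += rho
    return n / (1.0 + 2.0 * s)

rng = np.random.default_rng(4)
white = rng.normal(size=2000)   # nearly independent draws
ar = np.zeros(2000)             # strongly autocorrelated AR(1) chain
eps = rng.normal(size=2000)
for t in range(1, 2000):
    ar[t] = 0.95 * ar[t - 1] + eps[t]

e_white, e_ar = ess(white), ess(ar)   # e_ar is far smaller than e_white
```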
Maps of estimated composition for the full domain for several taxa of substantive interest illustrate the results, contrasting the raw data proportions, the posterior means, and posterior standard deviations as pointwise estimates of uncertainty (Fig 2). We also present the posterior means for all 23 taxa (Fig 3).
The data product is publicly available at the NIS Data Portal under the CC BY 4.0 license as version 1.0 as of February 2016 [34]. The product is in the form of a netCDF-4 file, with dimensions x-coordinate, y-coordinate, and MCMC iteration, and one variable per taxon. In addition, dynamic visualizations of the product using the Shiny tool are available at https://www3.nd.edu/~paleolab/paleonproject/maps. The PalEON Project (in particular the first author) will continue to maintain this product, releasing new versions as additional data in Illinois, Indiana, and Ohio are digitized. Digitization of data from Illinois and Indiana is ongoing, and digitization of additional data from Ohio is planned as well; as a result, at some point we expect to have complete data for the midwestern half of the domain.
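Given that layout (one array per taxon, indexed by x, y, and MCMC iteration), pointwise summaries follow by reducing over the sample dimension. The real file would be opened with a netCDF reader such as the netCDF4 or xarray packages; the sketch below instead mimics one taxon's variable in memory, with hypothetical grid sizes and values:

```python
import numpy as np

# Mimic one taxon's variable from the product: an array with dimensions
# (x, y, MCMC sample). Sizes and values here are hypothetical stand-ins
# for what would be read from the netCDF-4 file.
rng = np.random.default_rng(5)
nx, ny, nsamp = 12, 10, 250
oak = rng.beta(2.0, 5.0, size=(nx, ny, nsamp))   # proportions in [0, 1]

post_mean = oak.mean(axis=2)   # pointwise posterior mean composition
post_sd = oak.std(axis=2)      # pointwise posterior standard deviation
```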

Discussion
In the parts of the modeled region with spatially complete data (in particular, Minnesota, Wisconsin, and Michigan), the statistical estimates of forest composition closely match the patterns apparent in the raw data (Fig 2), as expected. In these areas, the estimated tree composition from the model has the advantage of downweighting unusual or outlier values in the empirical proportions of individual grid cells, which are likely due to stochastic sampling variation within the grid cell (compare the first two columns in Fig 2). Some stochastic variation is expected given that, even in the most spatially complete regions, each grid cell contains an average of 124 trees (120-135 is typical) [6] and some cells contain many fewer trees. Hence, some smoothing of this stochastic variation is appropriate. This smoothing borrows information from data in nearby cells, and the estimates from the model reflect the smooth trends in forest composition across the spatial domain. A cost is that these maps can smooth out sharp ecotones or other forms of true spatial heterogeneity, particularly in areas with sparse data (including areas with low tree density). For example, the sharp increase in Elm along the Minnesota River (Fig 2, first column) likely represents a real ecotone in the settlement-era vegetation. Vegetation gradients and ecotones were sharper in the settlement-era forests of the upper Midwest than in contemporary forests [6], and the modeled estimates may partially obscure this contrast. Users interested in the original unsmoothed data are directed to the data product described earlier [17]. Additional investigation of other statistical representations to better capture sharp gradients is of interest, in particular nonstationary spatial models and the use of covariates. Potential environmental covariates include soils, firebreaks, and topography [35,36].
Here, however, we chose to limit our model to be exclusively a function of spatial distance, to avoid dependence on the environmental drivers of pre-settlement forest composition that might lead to circular reasoning in subsequent inferences based on this dataset. Use of covariates could also lead to predicting that a taxon is present well beyond its range boundary in places where data are sparse.
A key advance of this work over prior reconstructions of settlement-era vegetation lies in the estimates of uncertainty across the spatial domain. These estimates of uncertainty include the sampling uncertainty within grid cells (as do the within-grid cell estimates of uncertainty available from the raw proportions), but, because this is a spatial model, predictions and their associated uncertainty estimates are also informed by the information content of nearby cells. The maps of standard errors across species (Fig 2, third column) highlight the advantages of this approach in areas of high data coverage (Minnesota, Wisconsin, Michigan) and in areas of sparse coverage (e.g., Illinois, Indiana, parts of Ohio). Where there are not large gaps in the data, the model provides low and fairly smooth estimates of uncertainty. Uncertainty is generally higher in the eastern subdomain than in the areas of the midwestern subdomain with high data coverage because of missing townships and lower sampling density even in townships with data. In areas of sparse coverage and in areas with low tree density (e.g., southwestern Minnesota), the standard error of our estimates increases appropriately. Nevertheless, these uncertainties surround reasonable estimates of trends in composition. For example, the model does a good job of capturing the oak ecotone in Indiana and Illinois, representing a shift from oak savannas and woodlands to closed mesic forests (Fig 2). Experiment 1 showed that both models predicted composition at cells with no data reasonably well, mimicking the case of sparsely sampled data and giving confidence in the broad spatial patterns predicted in more poorly sampled regions, particularly those with regular but sparse sampling that mimics the experiment (Illinois and Indiana, but not Ohio). The apparent blockiness of uncertainty estimates in a few places such as Ohio is caused by spatial gaps and variations in sampling resolution.
Absolute uncertainty generally increases with abundance for all taxa (Fig 2, column 3).
The exploration of alternative approaches to spatial modeling of composition showed similar results for the SPDE and CAR models, both in terms of prediction accuracy and performance of prediction intervals. Small differences among the various metrics of goodness of fit favored each model in turn, but applied users of the models would find little pragmatic difference affecting scientific inference. Ultimately, we slightly favor the CAR model, because it avoids the boundary effects apparent in the SPDE model at the edges of the domain.
The models presented here estimate only the relative abundance of tree taxa, which does not directly tell us about tree density or other aspects of vegetation structure. This becomes a particular limitation for interpreting vegetation where trees become sparse at the prairie-forest transition from northern Minnesota through southern Illinois [37]. Our model (correctly) predicts that tree composition there is dominated by oak, but this is less useful considering the sparseness of trees. This limitation can be addressed by developing estimates of absolute abundance (e.g., biomass) rather than compositional estimates. A gridded dataset of biomass, stem density, and basal area is already available for Minnesota, Wisconsin, and northern Michigan [6], based on the PLS data. An extension to southern Michigan, Illinois, and Indiana is planned. We are currently developing statistical estimates of biomass for Minnesota, Wisconsin, and Michigan using a statistical model applied to the gridded biomass dataset, with extension to Illinois and Indiana planned. We also plan to estimate stem density and basal area using a similar approach to that used for biomass.

The composition estimates themselves are computed from the posterior samples by Monte Carlo, effectively sampling 10,000 hypothetical trees and estimating the probabilities of the different taxa in the population from the empirical proportions in this sample of trees. For each of the saved MCMC samples, $k = 1, \ldots, K$, we estimate $\tilde{y}_p^{(k)}(s_i)$ numerically. Specifically, for $t = 1, \ldots, 10000$ samples (i.e., hypothetical trees), we independently draw the latent variables $W_{itp}$, $p = 1, \ldots, P$, from their distributions given the $k$th posterior sample of the latent process at $s_i$, and set $\tilde{y}_p^{(k)}(s_i)$ to the fraction of draws for which taxon $p$ attains the maximum. In other words, we calculate the proportion of times that the maximum of $W_{itp}$, $p = 1, \ldots, P$, corresponds to taxon $p$. Considering $\tilde{y}_p^{(k)}(s_i)$, $k = 1, \ldots, K$, we have a sample from the posterior of $\theta_p(s_i)$.
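The Monte Carlo step just described can be sketched for a single grid cell and a single saved MCMC sample. The unit-variance normal draws and the latent values below are assumptions for illustration; in the fitted model the latent values would come from the posterior sample itself:

```python
import numpy as np

rng = np.random.default_rng(6)

# One grid cell, one saved MCMC sample k: hypothetical latent values for
# P taxa (in the real model these come from the fitted latent process).
P, T = 5, 10_000
latent = np.array([1.0, 0.0, -0.5, 0.2, -1.0])

# Draw T hypothetical trees: W[t, p] ~ N(latent[p], 1), independently
# (the unit variance is an illustrative assumption).
W = rng.normal(loc=latent, scale=1.0, size=(T, P))

# Estimated composition for this sample: the proportion of draws in which
# each taxon's latent variable is the maximum.
theta_hat = np.bincount(W.argmax(axis=1), minlength=P) / T
```

Repeating this for each of the K saved samples yields a posterior sample of the composition at that cell.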