
The authors have declared that no competing interests exist.

Conceived and designed the experiments: EL ECA. Performed the experiments: EL ECA. Analyzed the data: EL ECA. Contributed reagents/materials/analysis tools: EL ECA SEH. Wrote the paper: EL ECA SEH. Contributed data: EL SEH.

Current address: Rubenstein School of Environment and Natural Resources, University of Vermont, Burlington, Vermont, United States of America

Litter decomposition rate (k).

Litter decomposition strongly influences carbon and nutrient cycling within ecosystems.

Often, the variance σ^{2} is assumed to be constant across observations.

Mean mass remaining versus standard deviation of replicates at each time point for (A) Long-term Intersite Decomposition Experiment Team (LIDET) data; (B) Hobbie data; (C) EL data; and (D) HG data.

One solution could be to model the variance σ^{2} as a function of the mean.

The variance σ^{2} depends on the mean μ and the precision parameter φ: σ^{2} = μ(1 − μ)/(1 + φ).

Consistent with patterns often found in decomposition data, σ^{2} is smaller near the bounds (0 or 1): for example, with φ = 1, if μ = 0.99 then σ^{2} ≈ 0.005, whereas if μ = 0.5 then σ^{2} = 0.125. The denominator shows that higher precision φ results in lower σ^{2}.

In summary, the beta distribution may be better suited than the normal distribution to model proportional litter mass loss data because it is bounded between 0 and 1 and its variance σ^{2} is smaller near its bounds, as is typical of decomposition data.
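Assuming the mean-precision parameterization described above (variance μ(1 − μ)/(1 + φ)), the shrinking variance near the bounds can be checked numerically. This is an illustrative Python sketch, not the authors' code:

```python
def beta_variance(mu, phi):
    """Variance of a beta variable in the mean-precision
    parameterization: Var(y) = mu * (1 - mu) / (1 + phi)."""
    return mu * (1 - mu) / (1 + phi)

print(beta_variance(0.99, 1))  # ~0.005: small variance near the upper bound
print(beta_variance(0.5, 1))   # 0.125: largest variance at the midpoint
```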

Since the beta distribution is bounded between 0 and 1, proportional litter mass loss data must also be bounded between 0 and 1. However, litter mass loss data often contain values equal to 0 (no mass remaining) or ≥1 (no decomposition or sample contamination by soil), so the data must be transformed to lie strictly within (0, 1) before a beta model can be fit.

The goal of this paper is to compare the normal model (fit with NLS or ML estimation) and the beta model (fit with ML estimation) for estimating litter decomposition rates.

We hypothesized that nonlinear beta regression would provide better fits to proportional mass loss data and give more accurate estimates of the decomposition rate k.

We simulated mass loss data using four decomposition rates: k = 0.1, 0.01, 0.002, and 0.0005 d^{−1}. These rates correspond to the early through end stage time frames shown below.

Time (d) to reach each stage under the single pool model, by decomposition rate (d^{−1}):

Stage | Mass remaining | k = 0.0005 | k = 0.002 | k = 0.01 | k = 0.1
Early | 80% | 446 | 112 | 22 | 2
Mid | 50% | 1386 | 347 | 69 | 7
Late | 20% | 3219 | 805 | 161 | 16
End | 1% | 9210 | 2303 | 461 | 46
Years to end | | 25.2 | 6.3 | 1.3 | 0.1
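Under the single pool model m(t) = e^{−kt}, the times in the table follow from t = −ln(m)/k. A quick check (illustrative Python, not the paper's R code):

```python
import math

def days_to_fraction(mass_fraction, k):
    """Days until the single pool model exp(-k t) reaches a given mass fraction."""
    return -math.log(mass_fraction) / k

print(round(days_to_fraction(0.5, 0.0005)))   # 1386 (mid stage at the slowest rate)
print(round(days_to_fraction(0.01, 0.1)))     # 46 (end stage at the fastest rate)
```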

To investigate whether the number of mass loss measurements taken within a given study would affect a given regression type’s ability to accurately estimate k, we generated data sets with 2, 5, 7, or 10 measurements.

Finally, we used three different error structures that resembled those found in real data (options 1–3, below).

We took random samples from the normal distribution,

y_{t} ~ Normal(e^{−kt}, σ),

where σ increases from Var σ_{1} to Var σ_{3} (option 1).

We took random samples from the beta distribution, with mean μ = e^{−kt} and precision φ (option 2).

We sampled from the beta distribution and then added normally distributed error with σ = 0.0125 (option 3a) or σ = 0.05 (option 3b).
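The three error structures can be sketched as follows. This is an illustrative Python sketch (not the authors' R code); the rate, measurement times, normal σ of option 1, and the precision φ = 50 are example choices, while the option 3a σ comes from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 0.01                                  # example rate (d^-1)
t = np.linspace(30, 400, 7)               # example measurement days (t > 0 keeps mu < 1)
mu = np.exp(-k * t)                       # single pool mean curve

# Option 1: additive normal errors around the mean curve (example sigma)
y1 = rng.normal(mu, 0.05)

# Option 2: beta errors in the mean-precision parameterization (phi is an example)
phi = 50.0
y2 = rng.beta(mu * phi, (1 - mu) * phi)

# Option 3a: beta errors plus additive normal error with sigma = 0.0125
y3 = y2 + rng.normal(0.0, 0.0125, size=mu.size)
```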

In total, we ran 768 simulations (four decomposition rates crossed with the decomposition stages, numbers of measurements, and error structures described above).

k = 0.1 | k = 0.01 | k = 0.002 | k = 0.0005 | |||||||||||

Error | Stage | # meas | ||||||||||||

Beta only | ||||||||||||||

Early | 5 | 5690 | 9784 | 6906 | 7123 | 6942 | ||||||||

Early | 7 | 4325 | 8519 | 5168 | 9351 | 5367 | 9517 | 5281 | 9409 | |||||

Early | 10 | 2581 | 6638 | 3274 | 7454 | 3390 | 7802 | 3335 | 7641 |

Additionally, when using ML estimation with beta errors to estimate the lowest rate (k = 0.0005 d^{−1}), optimization algorithms often failed to converge. We therefore estimated the low rate as a yearly rate (this solved the convergence problems) and converted it back to a daily rate for analyses, figures and tables.
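The reparameterization is just a change of time units: fit the model with time measured in years, then divide the estimated yearly rate by 365 to recover a daily rate. A minimal sketch:

```python
def yearly_to_daily(k_yearly):
    """Convert a rate estimated on a yearly time axis (t in years) back to d^-1."""
    return k_yearly / 365.0

# A rate of 0.1825 yr^-1 corresponds to the problematic low rate of 0.0005 d^-1
print(yearly_to_daily(0.1825))
```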

For simulation runs that generated data sets with values ≤0 or ≥1, we applied the SV and REP transformations (defined below) before fitting the beta model.
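The two transformations can be sketched in Python: the SV squeeze uses the (y(n − 1) + 0.5)/n formula of Smithson and Verkuilen, and REP follows the rule given in the table notes (replace values ≥1 with 0.9999, treat zeros as missing). Clipping out-of-range values before the SV squeeze is an assumption of this sketch:

```python
import numpy as np

def sv_transform(y):
    """Smithson & Verkuilen squeeze: maps values in [0, 1] into the open
    interval (0, 1) via (y * (n - 1) + 0.5) / n, where n is the sample size."""
    y = np.clip(np.asarray(y, dtype=float), 0.0, 1.0)  # assumption: clip stray values first
    n = y.size
    return (y * (n - 1) + 0.5) / n

def rep_transform(y):
    """REP: replace values >= 1 with 0.9999 and drop zeros (treated as missing)."""
    y = np.asarray(y, dtype=float)
    y = np.where(y >= 1, 0.9999, y)
    return y[y > 0]
```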

Because the generated data sets had small sample sizes, we compared models using the small-sample corrected Akaike Information Criterion (AICc).
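AICc adds a small-sample penalty to AIC; a minimal helper (Python sketch, standard formula):

```python
def aicc(log_lik, n_params, n_obs):
    """Small-sample corrected AIC: AICc = -2 ln L + 2p + 2p(p + 1)/(n - p - 1).
    Lower values indicate better-supported models."""
    aic = -2.0 * log_lik + 2.0 * n_params
    return aic + (2.0 * n_params * (n_params + 1)) / (n_obs - n_params - 1)
```

Model comparison then uses ΔAICc, each model's AICc minus the smallest AICc in the candidate set.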

To determine how well the different approaches estimated the litter decomposition rate (k_{t} = true rate, k_{e} = estimated rate, in d^{−1}), we calculated the average percent (%) bias,

% bias = 100 × (k_{e} − k_{t}) / k_{t},

and the average percent relative error (% RE), which uses the absolute value of the same quantity.
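Assuming the usual sign convention (estimate minus truth), the two metrics can be sketched as:

```python
import numpy as np

def percent_bias(k_est, k_true):
    """Average % bias of estimated rates (sign convention: estimate - truth)."""
    k_est = np.asarray(k_est, dtype=float)
    return 100.0 * np.mean((k_est - k_true) / k_true)

def percent_relative_error(k_est, k_true):
    """Average % relative error (absolute deviation from the true rate)."""
    k_est = np.asarray(k_est, dtype=float)
    return 100.0 * np.mean(np.abs(k_est - k_true) / k_true)
```

With symmetric over- and underestimates, bias cancels while relative error does not, which is why the paper reports both.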

Because results (% bias, % RE, and average ΔAICc) were very similar among the four decomposition rates, we present detailed results for a single rate.

We used three real data sets that reflected the range of time frames used in the data simulation: early, mid, and late stage decomposition data, based on the proportion of initial litter mass still present at the end of each study.

For the early stage decomposition data set, we used the Hobbie and Gough data.

Mid stage decomposition data were provided by Laliberté and Tylianakis.

We used the Hobbie data set for late stage decomposition.

Because NLS and ML estimation using normal errors produced nearly identical results in the data simulations for early to late stage decomposition, we report a single set of normal-error results for the real data. As in the simulations, for very low rates (k = 0.0005 d^{−1}) optimization algorithms often failed to converge. In these cases, estimating the rate as a yearly rate and converting it back to a daily rate solved the problem.

(A) Percent bias, (B) percent relative error, and (C) average estimated k (d^{−1}).

(A) standard deviation (σ) = 0.0125 (option 3a) and SV transformation, (B) σ = 0.0125 (option 3a) and REP transformation, (C) σ = 0.05 (option 3b) and SV transformation, (D) σ = 0.05 (option 3b) and REP transformation. Early, mid, late and end are early, mid, late and end stage decomposition simulations. The numbers 2, 5, 7 and 10 are the numbers of measurements used in each simulation. Blue circles = NLS, red circles = normal ML, gray/black circles = beta ML. In most cases, NLS = normal ML, so that the red circles cover the blue circles. Gray lines show 0% bias.

(A) σ = 0.0125 (option 3a) and SV transformation, (B) σ = 0.0125 (option 3a) and REP transformation, (C) σ = 0.05 (option 3b) and SV transformation, (D) σ = 0.05 (option 3b) and REP transformation. Early, mid, late and end are early, mid, late and end stage decomposition simulations. The numbers 2, 5, 7 and 10 are the numbers of measurements used in each simulation. Blue circles = NLS, red circles = normal ML, gray/black circles = beta ML. In most cases, NLS = normal ML, so that the red circles cover the blue circles.

(A) σ = 0.0125 (option 3a) and SV transformation, (B) σ = 0.0125 (option 3a) and REP transformation, (C) σ = 0.05 (option 3b) and SV transformation, (D) σ = 0.05 (option 3b) and REP transformation. Early, mid, late and end are early, mid, late and end stage decomposition simulations. The numbers 2, 5, 7 and 10 are the numbers of measurements used in each simulation. Blue circles = NLS, red circles = normal ML, gray/black circles = beta ML. In most cases, NLS = normal ML, so that the red circles cover the blue circles. Gray lines show the true k (d^{−1}).

Percent bias using (A) SV and (B) REP transformations and relative error using (C) SV and (D) REP transformations. Early, mid, late and end are early, mid, late and end stage decomposition simulations. The numbers 2, 5, 7 and 10 are the numbers of measurements used in each simulation. Blue circles = NLS, red circles = normal ML, gray/black circles = beta ML. In most cases, NLS = normal ML, so that the red circles cover the blue circles. Gray lines in panels (A) and (B) show 0% bias.

(A) SV and (B) REP transformations. Early, mid, late and end are early, mid, late and end stage decomposition simulations. The numbers 2, 5, 7 and 10 are the numbers of measurements used in each simulation. Blue circles = NLS, red circles = normal ML, gray/black circles = beta ML. In most cases, NLS = normal ML, so that the red circles cover the blue circles. Gray lines in panels (A) and (B) show the true k (d^{−1}).

We fit single pool models (m(t) = e^{−kt}) to each real data set using NLS and ML estimation with normal and beta errors, and compared the models using AICc.
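The beta ML fit can be sketched in Python with scipy (the paper's supplement provides R code; this is an illustrative reimplementation in the mean-precision parameterization, and the starting values and clipping bounds are assumptions):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta as beta_dist

def fit_beta_single_pool(t, y, k0=0.01, phi0=20.0):
    """ML fit of the single pool model m(t) = exp(-k t) with beta errors
    (mean mu = exp(-k t), shapes mu*phi and (1 - mu)*phi). Starting values
    k0 and phi0 are assumptions; returns (k_hat, phi_hat)."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)

    def nll(params):
        k, phi = np.exp(params)                       # log scale keeps k, phi > 0
        mu = np.clip(np.exp(-k * t), 1e-6, 1 - 1e-6)  # keep the mean inside (0, 1)
        return -np.sum(beta_dist.logpdf(y, mu * phi, (1 - mu) * phi))

    res = minimize(nll, x0=np.log([k0, phi0]), method="Nelder-Mead")
    k_hat, phi_hat = np.exp(res.x)
    return k_hat, phi_hat
```

Optimizing on the log scale also mirrors the convergence fix used for very low rates: rescaling the parameter keeps the optimizer well conditioned.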

We also examined model fit to the untransformed data using fractional bias (FB) and relative bias (RB).
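Fractional bias is a normalized difference of means; sign conventions vary, so this Python helper (an assumption, not necessarily the authors' exact definition) uses observed minus predicted:

```python
import numpy as np

def fractional_bias(observed, predicted):
    """Fractional bias: difference of the means scaled by their average,
    FB = 2 * (mean(obs) - mean(pred)) / (mean(obs) + mean(pred))."""
    o, p = np.mean(observed), np.mean(predicted)
    return 2.0 * (o - p) / (o + p)
```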

For beta-distributed errors (option 2; no data transformations needed), the accuracy of k estimates increased from early to end stage decomposition and with the number of measurements.

Across all simulations, using ML regression with beta errors resulted in very similar or more accurate k estimates than NLS or ML estimation with normal errors.

In most cases, AICc identified ML estimation with beta errors as the best model.

Stage | # meas | Same | Beta ML | Norm ML | Same | Beta ML | Norm ML | Same | Beta ML | Norm ML

Early | 2 | 35.3 | 64.3 | 0.4 | 66.5 | 32.5 | 1.0 | 91.7 | 6.0 | 2.3 |

5 | 0.0 | 100.0 | 0.0 | 0.0 | 100.0 | 0.0 | 0.1 | 99.9 | 0.0 | |

7 | 0.0 | 100.0 | 0.0 | 0.0 | 100.0 | 0.0 | 0.0 | 100.0 | 0.0 | |

10 | 0.0 | 100.0 | 0.0 | 0.0 | 100.0 | 0.0 | 0.0 | 100.0 | 0.0 | |

Mid | 2 | 98.3 | 1.2 | 0.6 | 99.2 | 0.0 | 0.8 | 99.2 | 0.0 | 0.8 |

5 | 0.2 | 99.8 | 0.0 | 1.3 | 98.7 | 0.0 | 8.4 | 91.3 | 0.3 | |

7 | 0.1 | 99.9 | 0.0 | 0.6 | 99.4 | 0.0 | 4.2 | 95.6 | 0.2 | |

10 | 0.0 | 100.0 | 0.0 | 0.1 | 99.9 | 0.0 | 1.1 | 98.9 | 0.1 | |

Late | 2 | 87.6 | 11.6 | 0.8 | 97.0 | 1.6 | 1.4 | 97.8 | 0.0 | 2.2 |

5 | 6.3 | 93.6 | 0.1 | 24.3 | 75.3 | 0.4 | 62.5 | 35.8 | 1.8 | |

7 | 3.4 | 96.6 | 0.1 | 13.3 | 86.5 | 0.2 | 40.7 | 58.2 | 1.2 | |

10 | 0.4 | 99.6 | 0.0 | 3.8 | 96.2 | 0.1 | 18.6 | 80.9 | 0.5 | |

End | 2 | 0.0 | 100.0 | 0.0 | 0.2 | 99.8 | 0.0 | 1.0 | 98.9 | 0.1 |

5 | 0.0 | 100.0 | 0.0 | 0.0 | 100.0 | 0.0 | 0.1 | 99.9 | 0.0 | |

7 | 0.0 | 100.0 | 0.0 | 0.0 | 100.0 | 0.0 | 0.1 | 99.9 | 0.0 | |

10 | 0.0 | 100.0 | 0.0 | 0.0 | 100.0 | 0.0 | 0.0 | 100.0 | 0.0 |

Norm = normal.

meas = measurements.

For simulations with beta-distributed plus normal errors (option 3), the accuracy of k estimates again increased from early to late stage decomposition and with the number of measurements.

With few exceptions, estimating

Overall, the REP transformation resulted in less bias and RE than did the SV transformation. This was especially apparent in early, mid and late stage decomposition. The amount of bias and RE generated by the REP and SV transformations was similar during end stage decomposition.

Despite the fact that ML estimation using beta errors tended to generate less accurate k estimates under this error structure, AICc generally identified the beta model as the best model.

Tr | Stage | # meas | Same | Beta ML | Norm ML | Same | Beta ML | Norm ML | Same | Beta ML | Norm ML

SV |
Early | 2 | 50.6 | 49.0 | 0.4 | 74.6 | 24.4 | 0.9 | 91.6 | 5.7 | 2.7 |

5 | 4.6 | 94.6 | 0.8 | 8.7 | 90.3 | 1.0 | 16.4 | 81.5 | 2.0 | ||

7 | 1.1 | 98.9 | 0.1 | 3.3 | 96.6 | 0.2 | 10.4 | 88.5 | 1.1 | ||

10 | 0.3 | 99.7 | 0.0 | 1.5 | 98.4 | 0.1 | 6.1 | 93.0 | 0.9 | ||

Mid | 2 | 98.0 | 1.2 | 0.8 | 98.9 | 0.0 | 1.1 | 99.1 | 0.0 | 0.9 | |

5 | 2.3 | 97.6 | 0.1 | 6.2 | 93.7 | 0.1 | 16.8 | 82.7 | 0.5 | ||

7 | 1.0 | 99.0 | 0.0 | 3.5 | 96.4 | 0.1 | 10.9 | 88.6 | 0.5 | ||

10 | 0.5 | 99.5 | 0.0 | 1.6 | 98.4 | 0.1 | 5.5 | 94.2 | 0.3 | ||

Late | 2 | 89.2 | 10.1 | 0.7 | 96.9 | 1.5 | 1.6 | 97.8 | 0.0 | 2.2 | |

5 | 9.5 | 90.4 | 0.1 | 28.8 | 70.8 | 0.5 | 62.7 | 35.4 | 1.9 | ||

7 | 5.4 | 94.5 | 0.1 | 17.1 | 82.5 | 0.4 | 43.9 | 55.0 | 1.1 | ||

10 | 1.0 | 99.0 | 0.0 | 5.9 | 94.0 | 0.1 | 21.4 | 77.7 | 0.9 | ||

End | 2 | 34.3 | 43.6 | 22.1 | 41.8 | 35.5 | 22.6 | 49.0 | 25.6 | 25.5 | |

5 | 1.9 | 97.9 | 0.2 | 3.1 | 96.6 | 0.3 | 6.3 | 93.3 | 0.4 | ||

7 | 0.1 | 99.9 | 0.0 | 0.6 | 99.4 | 0.0 | 2.1 | 97.8 | 0.1 | ||

10 | 0.0 | 100.0 | 0.0 | 0.0 | 100.0 | 0.0 | 0.0 | 100.0 | 0.0 | ||

REP |
Early | 2 | 32.8 | 66.9 | 0.4 | 62.2 | 36.5 | 1.3 | 89.2 | 7.4 | 3.5 |

5 | 0.0 | 100.0 | 0.0 | 0.0 | 100.0 | 0.0 | 0.4 | 99.5 | 0.1 | ||

7 | 0.0 | 100.0 | 0.0 | 0.0 | 100.0 | 0.0 | 0.2 | 99.8 | 0.0 | ||

10 | 0.0 | 100.0 | 0.0 | 0.0 | 100.0 | 0.0 | 0.1 | 99.9 | 0.0 | ||

Mid | 2 | 97.7 | 1.6 | 0.8 | 99.2 | 0.1 | 0.8 | 99.1 | 0.0 | 0.9 | |

5 | 0.4 | 99.6 | 0.0 | 1.7 | 98.2 | 0.1 | 9.4 | 89.8 | 0.8 | ||

7 | 0.2 | 99.8 | 0.0 | 1.2 | 98.8 | 0.1 | 5.5 | 94.0 | 0.5 | ||

10 | 0.0 | 100.0 | 0.0 | 0.5 | 99.5 | 0.0 | 1.9 | 97.9 | 0.2 | ||

Late | 2 | 90.6 | 8.7 | 0.8 | 97.1 | 1.5 | 1.4 | 97.6 | 0.0 | 2.4 | |

5 | 6.9 | 93.0 | 0.2 | 24.0 | 75.3 | 0.7 | 62.3 | 34.8 | 2.9 | ||

7 | 3.9 | 96.0 | 0.1 | 13.6 | 85.9 | 0.5 | 43.4 | 54.9 | 1.7 | ||

10 | 0.7 | 99.3 | 0.0 | 4.4 | 95.5 | 0.2 | 19.5 | 79.2 | 1.4 | ||

End | 2 | 39.1 | 53.3 | 7.7 | 44.9 | 46.3 | 8.9 | 53.1 | 36.3 | 10.6 | |

5 | 8.2 | 91.1 | 0.8 | 11.0 | 88.2 | 0.8 | 14.5 | 84.4 | 1.1 | ||

7 | 1.2 | 98.8 | 0.0 | 3.1 | 96.7 | 0.1 | 7.2 | 92.4 | 0.4 | ||

10 | 0.1 | 100.0 | 0.0 | 0.2 | 99.8 | 0.0 | 1.1 | 98.8 | 0.2 |

Tr = transformation.

SV = Smithson and Verkuilen transformation.

REP = transformed by replacing values ≥1 with 0.9999 and treating zeros as missing data.

In SV transformed data with high normal error (σ = 0.05), AICc less consistently identified the beta model as best, more often finding no difference between the models or favoring the normal model.

Tr | Stage | # meas | Same | Beta ML | Norm ML | Same | Beta ML | Norm ML | Same | Beta ML | Norm ML

SV |
Early | 2 | 73.7 | 25.9 | 0.4 | 85.1 | 13.9 | 1.0 | 92.8 | 4.3 | 2.9 |

5 | 33.7 | 19.3 | 46.9 | 31.7 | 13.2 | 55.1 | 25.5 | 6.9 | 67.6 | ||

7 | 36.3 | 27.8 | 35.8 | 33.4 | 17.6 | 49.0 | 26.9 | 8.6 | 64.5 | ||

10 | 31.4 | 25.9 | 42.7 | 26.9 | 14.6 | 58.4 | 17.3 | 6.2 | 76.6 | ||

Mid | 2 | 97.8 | 1.3 | 0.9 | 98.4 | 0.2 | 1.5 | 98.2 | 0.0 | 1.9 | |

5 | 40.1 | 55.1 | 4.9 | 46.9 | 44.9 | 8.2 | 53.1 | 31.8 | 15.2 | ||

7 | 34.2 | 60.9 | 4.9 | 43.1 | 48.6 | 8.4 | 50.0 | 33.8 | 16.2 | ||

10 | 33.9 | 58.7 | 7.3 | 40.2 | 46.8 | 13.0 | 43.6 | 33.9 | 22.5 | ||

Late | 2 | 90.5 | 8.7 | 0.9 | 96.7 | 1.9 | 1.4 | 96.3 | 0.2 | 3.5 | |

5 | 26.4 | 73.2 | 0.4 | 46.8 | 51.1 | 2.1 | 71.4 | 22.4 | 6.2 | ||

7 | 20.3 | 79.0 | 0.8 | 38.8 | 58.8 | 2.4 | 60.4 | 32.9 | 6.7 | ||

10 | 9.1 | 90.5 | 0.4 | 25.1 | 72.5 | 2.4 | 47.8 | 43.8 | 8.4 | ||

End | 2 | 41.5 | 20.8 | 37.8 | 45.6 | 13.7 | 40.7 | 44.4 | 6.2 | 49.4 | |

5 | 12.2 | 85.4 | 2.4 | 18.0 | 78.6 | 3.4 | 28.7 | 66.0 | 5.3 | ||

7 | 1.9 | 97.9 | 0.3 | 6.1 | 93.2 | 0.7 | 15.5 | 81.8 | 2.7 | ||

10 | 0.1 | 99.9 | 0.0 | 0.5 | 99.4 | 0.1 | 2.8 | 96.7 | 0.5 | ||

REP |
Early | 2 | 27.1 | 72.1 | 0.8 | 47.2 | 50.7 | 2.1 | 72.5 | 21.8 | 5.7 |

5 | 0.1 | 99.8 | 0.0 | 0.4 | 99.5 | 0.1 | 1.2 | 98.4 | 0.4 | ||

7 | 0.0 | 100.0 | 0.0 | 0.1 | 99.9 | 0.0 | 0.9 | 98.9 | 0.3 | ||

10 | 0.0 | 100.0 | 0.0 | 0.0 | 100.0 | 0.0 | 0.4 | 99.5 | 0.1 | ||

Mid | 2 | 93.3 | 5.7 | 1.0 | 97.2 | 0.8 | 2.0 | 97.6 | 0.0 | 2.4 | |

5 | 0.8 | 99.1 | 0.1 | 3.2 | 96.5 | 0.3 | 10.6 | 87.3 | 2.1 | ||

7 | 0.6 | 99.4 | 0.1 | 2.7 | 96.8 | 0.5 | 9.5 | 87.5 | 3.0 | ||

10 | 0.2 | 99.7 | 0.1 | 1.8 | 97.8 | 0.5 | 5.9 | 91.9 | 2.3 | ||

Late | 2 | 93.1 | 5.8 | 1.0 | 97.1 | 1.4 | 1.6 | 96.4 | 0.1 | 3.5 | |

5 | 9.1 | 90.4 | 0.5 | 24.1 | 73.7 | 2.2 | 54.3 | 38.7 | 7.1 | ||

7 | 6.6 | 92.7 | 0.8 | 15.8 | 82.1 | 2.2 | 39.4 | 54.8 | 5.8 | ||

10 | 2.0 | 97.8 | 0.2 | 7.0 | 91.9 | 1.1 | 23.4 | 71.7 | 4.9 | ||

End | 2 | 61.9 | 17.0 | 21.1 | 63.1 | 12.9 | 24.0 | 61.3 | 7.5 | 31.2 | |

5 | 36.1 | 56.0 | 8.0 | 46.3 | 41.2 | 12.5 | 53.5 | 28.3 | 18.2 | ||

7 | 16.2 | 81.7 | 2.1 | 30.2 | 64.7 | 5.1 | 42.7 | 44.6 | 12.7 | ||

10 | 6.0 | 93.1 | 0.9 | 14.1 | 82.6 | 3.3 | 28.6 | 59.3 | 12.1 |

Tr = transformation.

SV = Smithson and Verkuilen transformation.

REP = transformed by replacing values ≥1 with 0.9999 and treating zeros as missing data.

Again, percent bias and RE declined from early to late stage decomposition, and RE declined with the number of measurements; bias and RE increased with the amount of added normal error (Var σ_{1} to Var σ_{3}).

Using NLS or ML estimation with normal errors on untransformed data yielded the most consistently accurate k estimates.

In general, using the REP transformation resulted in less bias and relative error than did using the SV transformation.

When the data were SV transformed, across all decomposition stages, numbers of measurements, and amounts of error used to create the simulated data, AICc generally identified ML estimation with beta errors as the best model or found no difference between ML estimation with beta or normal errors, although the normal model was favored more often at Var σ_{1} and Var σ_{2}.

Var σ_{3} | Var σ_{2} | Var σ_{1}
Tr | Stage | # meas | Same | Beta ML | Norm ML | Same | Beta ML | Norm ML | Same | Beta ML | Norm ML

SV |
Early | 2 | 75.1 | 24.1 | 0.8 | 90.1 | 7.1 | 2.8 | 90.7 | 0.1 | 9.2 |

5 | 39.3 | 32.5 | 28.1 | 39.3 | 21.7 | 39.0 | 39.7 | 22.6 | 37.8 | ||

7 | 37.3 | 44.2 | 18.5 | 39.3 | 21.1 | 39.6 | 35.5 | 16.7 | 47.8 | ||

10 | 31.8 | 51.3 | 16.9 | 33.1 | 20.7 | 46.2 | 26.8 | 14.9 | 58.3 | ||

Mid | 2 | 95.8 | 3.0 | 1.3 | 96.5 | 0.1 | 3.5 | 98.3 | 0.0 | 1.7 | |

5 | 36.7 | 57.3 | 6.0 | 45.7 | 47.7 | 6.6 | 48.1 | 41.4 | 10.5 | ||

7 | 29.4 | 66.6 | 3.9 | 44.6 | 43.6 | 11.8 | 43.7 | 38.9 | 17.4 | ||

10 | 30.7 | 60.1 | 9.3 | 39.9 | 41.7 | 18.5 | 37.1 | 47.9 | 15.0 | ||

Late | 2 | 91.5 | 7.6 | 0.9 | 95.8 | 0.9 | 3.3 | 89.5 | 0.0 | 10.5 | |

5 | 30.5 | 67.4 | 2.1 | 63.6 | 27.8 | 8.6 | 74.7 | 1.3 | 24.0 | ||

7 | 28.1 | 68.8 | 3.0 | 56.5 | 31.4 | 12.2 | 68.4 | 3.5 | 28.1 | ||

10 | 14.0 | 84.2 | 1.8 | 46.1 | 39.1 | 14.8 | 55.0 | 6.1 | 38.9 | ||

End | 2 | 54.5 | 8.3 | 37.2 | 49.1 | 5.2 | 45.8 | 56.7 | 5.8 | 37.5 | |

5 | 21.4 | 73.0 | 5.6 | 24.2 | 70.8 | 5.0 | 23.0 | 74.8 | 2.1 | ||

7 | 6.2 | 92.9 | 0.9 | 14.6 | 82.7 | 2.7 | 20.0 | 75.5 | 4.5 | ||

10 | 0.6 | 99.3 | 0.1 | 1.9 | 97.7 | 0.4 | 2.3 | 97.3 | 0.4 | ||

REP |
Early | 2 | 23.2 | 74.8 | 2.0 | 53.1 | 39.6 | 7.3 | 85.6 | 0.5 | 14.0 |

5 | 0.6 | 99.2 | 0.2 | 1.9 | 97.4 | 0.7 | 10.3 | 86.3 | 3.4 | ||

7 | 0.1 | 99.8 | 0.0 | 1.2 | 98.3 | 0.6 | 13.2 | 79.6 | 7.2 | ||

10 | 0.0 | 100.0 | 0.0 | 0.4 | 99.4 | 0.1 | 8.8 | 85.0 | 6.2 | ||

Mid | 2 | 77.9 | 19.3 | 2.8 | 92.7 | 1.1 | 6.3 | 98.4 | 0.0 | 1.7 | |

5 | 3.9 | 95.1 | 1.0 | 13.9 | 82.7 | 3.5 | 42.4 | 37.7 | 19.9 | ||

7 | 1.5 | 98.1 | 0.4 | 10.6 | 84.8 | 4.5 | 35.2 | 42.3 | 22.6 | ||

10 | 1.3 | 98.2 | 0.5 | 8.6 | 86.4 | 5.0 | 26.2 | 55.3 | 18.6 | ||

Late | 2 | 95.6 | 2.9 | 1.5 | 95.9 | 0.3 | 3.8 | 90.2 | 0.0 | 9.9 | |

5 | 11.7 | 86.0 | 2.4 | 39.4 | 49.4 | 11.2 | 66.2 | 1.3 | 32.5 | ||

7 | 10.3 | 86.4 | 3.3 | 32.3 | 56.1 | 11.7 | 55.0 | 3.5 | 41.6 | ||

10 | 6.0 | 91.8 | 2.2 | 21.3 | 69.5 | 9.3 | 42.4 | 4.9 | 52.8 | ||

End | 2 | 70.4 | 6.6 | 23.0 | 65.4 | 7.1 | 27.6 | 67.7 | 12.1 | 20.2 | |

5 | 37.3 | 48.7 | 14.0 | 48.7 | 35.4 | 16.0 | 35.5 | 60.3 | 4.3 | ||

7 | 26.2 | 68.1 | 5.8 | 39.0 | 49.5 | 11.6 | 30.9 | 60.8 | 8.4 | ||

10 | 9.2 | 88.4 | 2.4 | 16.5 | 78.3 | 5.2 | 7.4 | 86.7 | 5.9 |

Tr = transformation.

SV = Smithson and Verkuilen transformation.

REP = transformed by replacing values ≥1 with 0.9999 and treating zeros as missing data.

When using the REP transformation, AICc generally identified the beta model as best when the added normal error was moderate to high (Var σ_{2} and Var σ_{3}) and the number of measurements was more than two. At the lowest error level (Var σ_{1}), AICc more often found no difference between the models or favored the normal model.

Overall, normal and beta errors produced similar k estimates for the real data sets.

Data | Transformation | Error | Mean k (d^{−1}) | Mean FB | σ FB | Mean RB | σ RB

Hobbie & Gough | |||||||

None | Beta | 0.00055 | 0.0001 | 0.0020 | 0.0047 | 0.0190 | |

Normal | 0.00054 | 0.0026 | 0.0088 | 0.0308 | 0.1163 | ||

SV |
Beta | 0.00055 | −0.0042 | 0.0083 | −0.1394 | 0.2908 | |

Normal | 0.00054 | −0.0006 | 0.0121 | −0.0758 | 0.2923 | ||

REP |
Beta | 0.00055 | −0.0002 | 0.0018 | −0.0049 | 0.0276 | |

Normal | 0.00054 | 0.0024 | 0.0089 | 0.0263 | 0.1216 | ||

Laliberté & Tylianakis | |||||||

None | Beta | 0.00258 | 0.1090 | 0.0481 | 0.2778 | 0.0985 | |

Normal | 0.00363 | 0.1682 | 0.1082 | 0.3399 | 0.1219 | ||

SV |
Beta | 0.00349 | 0.1646 | 0.1438 | 0.3256 | 0.1979 | |

Normal | 0.00361 | 0.1693 | 0.1098 | 0.3416 | 0.1244 | ||

REP |
Beta | 0.00357 | 0.0192 | 0.0449 | 0.0279 | 0.0647 | |

Normal | 0.00356 | 0.0286 | 0.0303 | 0.0561 | 0.0495 | ||

Hobbie | |||||||

None | Beta | 0.00088 | 0.0123 | 0.0293 | 0.0249 | 0.0660 | |

Normal | 0.00091 | −0.0142 | 0.0227 | −0.0236 | 0.0574 | ||

SV |
Beta | 0.00090 | −0.0097 | 0.0604 | −0.0291 | 0.1247 | |

Normal | 0.00095 | −0.0343 | 0.0562 | −0.0636 | 0.1192 | ||

REP |
Beta | 0.00084 | 0.0220 | 0.0453 | 0.0330 | 0.0675 | |

Normal | 0.00091 | −0.0148 | 0.0232 | −0.0251 | 0.0588 |

SV = Smithson and Verkuilen transformation.

REP = data transformed by replacing values ≥1 with 0.9999 and treating zeros as missing data.

In 13 of 18 cases, the beta distribution could be used on untransformed data (all values >0 and <1). In these cases, the beta model was often identified as best by ΔAICc.

Using normal and beta errors generally produced very similar estimates of k (d^{−1}), and ΔAICc often indicated little difference between the models.

In general, using the SV data transformation resulted in similar or slightly more bias than the REP or no transformation.

In 40 of 64 cases, the data did not need to be transformed to use beta errors. Based on AICc, the beta model was often best in these cases.

Again, normal and beta distributed errors produced largely similar estimates of k (d^{−1}); in some cases, beta models produced slightly lower estimates. For both the SV and REP transformations, the beta model was best in 11 cases, the normal model was best in nine cases, and there was no difference between the models in one case.

Using the SV transformation resulted in predictions with similar or more bias than using no transformation or the REP transformation.

Of the 79 cases where the beta model could be used on untransformed data, it was best in 33, whereas the normal model was best in 22 cases. The models were indistinguishable in 24 cases. Using SV transformation, in 25 out of 128 cases there was no substantial difference between the models. In the majority of cases (69) the beta model was best. The normal model was best in 34 cases. Using the REP transformation, the beta model was best in 75 cases, the normal model was best in 29 cases, and the models were indistinguishable in 24 cases.

Proportional litter mass loss data generally show reduced variance near their bounds (i.e. 0 and 1), but researchers generally use single pool decomposition models that ignore such heteroscedasticity, potentially leading to biased estimates of k.

Contrary to our hypothesis, we found that standard nonlinear regression with constant, normal errors proved very robust to violations of homoscedasticity. In our simulations, NLS and normal ML produced k estimates that were as accurate as, or more accurate than, those from beta ML, even though AICc generally identified the beta model as best.

Our simulations also provided information for the design of decomposition experiments, suggesting that the accuracy of k estimates improves with the number of measurements and with study durations that capture later decomposition stages.

Obviously, with real proportional litter mass loss data we cannot evaluate how “biased” k estimates are, because the true rate is unknown; we therefore relied on AICc and measures of fit to the observed data.

While we have focused on the beta distribution because it suits bounded data especially well, it is not the only way to model bounded, heteroscedastic data.

The potential negative impacts of rapid increases in atmospheric CO_{2} require a better understanding of the critical role of litter decomposition in the global carbon cycle. This, in turn, requires accurate estimates of litter decomposition rates. Our results show that nonlinear beta regression is a useful method for estimating these rates. However, with the data explored to date, it did not often produce dramatically different results from standard nonlinear regression. Yet, given the type of heteroscedasticity found in most decomposition data, we suggest that the two methods should be considered alongside one another. Furthermore, our results suggest that regression method choice will have the smallest impacts during mid and late stage decomposition.

Minimum and maximum decomposition rates (k).

(DOCX)

Histograms of sampling times as a proportion of total time from the Adair et al. data.

(DOCX)

Percent bias for simulations using beta error only with

(DOCX)

Percent bias for simulations using beta error only with

(DOCX)

Average

(DOCX)

Percent bias for simulations using beta error with normal error (σ = 0.05) added with

(DOCX)

Percent relative error for simulations using beta error with normal error (σ = 0.05) added with

(DOCX)

Average

(DOCX)

Bias and ΔAICc.

(DOCX)

R code to perform nonlinear beta regression.

(R)

We thank B.M. Bolker for contributing the beta regression R code. We appreciate the thorough and thoughtful comments provided by Ben Bond-Lamberty and anonymous reviewers.