Head-to-Head Comparison of Three Methods of Quantifying Competitive Fitness in C. elegans

Organismal fitness is relevant in many contexts in biology. The most meaningful experimental measure of fitness is competitive fitness, in which two or more entities (e.g., genotypes) are allowed to compete directly. In theory, competitive fitness is simple to measure: an experimental population is initiated with the different types in known proportions and allowed to evolve under experimental conditions to a predefined endpoint. In practice, there are several obstacles to obtaining robust estimates of competitive fitness in multicellular organisms, the most pervasive of which is simply the time it takes to count many individuals of different types from many replicate populations. Methods by which counting can be automated in high throughput are desirable, but for automated methods to be useful, the bias and technical variance associated with the method must be (a) known, and (b) sufficiently small relative to other sources of bias and variance to make the effort worthwhile. The nematode Caenorhabditis elegans is an important model organism, and the fitness effects of genotype and environmental conditions are often of interest. We report a comparison of three experimental methods of quantifying competitive fitness, in which wild-type strains are competed against GFP-marked competitors under standard laboratory conditions. Population samples were split into three replicates and counted (1) “by eye” from a saved image, (2) from the same image using CellProfiler image analysis software, and (3) with a large-particle flow cytometer (a “worm sorter”). From 720 replicate samples, neither the frequency of wild-type worms nor the among-sample variance differed significantly among the three methods. CellProfiler and the worm sorter provide at least a tenfold increase in sample-handling speed with little (if any) bias or increase in variance.

analysis. Second, a large-particle flow cytometer (aka, a "worm sorter") can be employed. The latter two methods involve significant initial investment, especially the worm sorter. However, given that the relevant hardware is available, it is useful to know the time/accuracy trade-offs involved with the different methods.

Here we provide a head-to-head comparison of three methods of quantifying competitive fitness in C. elegans. Method 1 is our standard "by eye" competitive fitness assay, in which

The repeatability of the "by eye" and CellProfiler methods can be assessed by counting the same samples twice. The correlation between the two counts was >99.9% for both the total count and the GFP count. The mean absolute difference between the two counts, expressed as a fraction of the average of the two counts, was 0.73% for the total count and 0.46% for the GFP count. The correlation of the proportion of wild-type worms, p, between the two counts was 99.8%. We re-counted all 720 images counted by CellProfiler; the counts were exactly the same in every case.
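The repeatability statistics above are straightforward to compute. The sketch below shows the count–count correlation and the mean absolute difference (expressed as a fraction of the pairwise average) on a small set of hypothetical paired counts; the numbers are invented for illustration, not the study's data.

```python
# Sketch of the repeatability metrics described above, applied to
# hypothetical paired counts (the study's real data are 720 samples).
import numpy as np

def repeatability(count1, count2):
    """Correlation and mean absolute difference (as a fraction of the
    pairwise average) between two repeated counts of the same samples."""
    c1 = np.asarray(count1, dtype=float)
    c2 = np.asarray(count2, dtype=float)
    r = np.corrcoef(c1, c2)[0, 1]
    mad_frac = np.mean(np.abs(c1 - c2) / ((c1 + c2) / 2.0))
    return r, mad_frac

# Hypothetical example: two repeated total counts of five samples.
r, mad = repeatability([100, 152, 98, 210, 133],
                       [101, 150, 98, 212, 131])
```

With counts this close, the correlation is near 1 and the fractional difference is well under 2%, comparable in spirit to the 0.73%/0.46% figures reported above.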

(2) CellProfiler. We developed an image analysis pipeline using CellProfiler software to automatically quantify competitive fitness in a given well using the paired bright-field and GFP

likelihood (REML). The full linear model is: y_ijkl = µ + f_i + c_j + m_k + t_ij + u_ik + v_jk + w_ijk + ε_l|ijk, where y_ijkl is the estimate of SD log(CI) in block l, µ is the overall mean, f_i is the effect of focal strain i, c_j is the effect of competitor strain j, m_k is the effect of method k, t_ij is the effect of the interaction between focal strain i and competitor strain j, u_ik is the effect of the interaction between focal strain i and method k, v_jk is the effect of the interaction between competitor strain j and method k, w_ijk is the effect of the three-way interaction, and ε_l|ijk is the residual (among-block) variance. We initially estimated the residual variance separately for each focal/competitor/method combination, then pooled the residual variance over different combinations of groups, using the minimum corrected Akaike's Information Criterion (AICc) as the criterion for the best model. Similarly, competitor strain, focal strain, and their interactions were removed and the AICc calculated.

The smallest AICc was given by the model with only method included as a fixed effect and the residual variance estimated separately for each method, pooling residual variance over focal and competitor strains within a method. Significance of fixed effects was assessed by F-test on type III sums of squares.
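As a minimal illustration of the selection criterion only (not the authors' REML mixed-model analysis), the sketch below computes AICc for two hypothetical least-squares fits and keeps the model with the smaller value; the residual sums of squares and parameter counts are invented for the example.

```python
# Minimal sketch of AICc-based model comparison under a Gaussian
# least-squares fit. The study's actual models were fit by REML with
# heterogeneous residual variances; only the criterion is shown here.
import numpy as np

def aicc(rss, n, k):
    """Corrected AIC for a model with residual sum of squares `rss`,
    n observations, and k estimated parameters (counting the residual
    variance)."""
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Hypothetical nested fits on n = 60 observations: a method-only model
# (k = 4) versus a model that also includes focal strain and its
# interactions (k = 10) but barely improves the fit.
n = 60
aicc_method_only = aicc(rss=12.0, n=n, k=4)
aicc_full = aicc(rss=11.4, n=n, k=10)
best = "method-only" if aicc_method_only < aicc_full else "full"
```

Here the small reduction in residual sum of squares does not justify the six extra parameters, so AICc favors the method-only model, mirroring the model-reduction outcome described above.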

We repeated the above analysis for the fraction of focal worms, p, with block included as an additional random effect and replicate (nested within block) as the unit of observation. The block for which we did not collect "by eye" data was omitted from the analysis.
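The idea of treating block as a random effect, with replicates nested within blocks, can be sketched by partitioning variance in p into among-block and within-block (replicate) components. The example below uses a simple method-of-moments one-way ANOVA on simulated balanced data; the actual analysis used REML and also included method as a fixed effect.

```python
# Sketch: separating among-block from within-block (replicate) variance
# in the fraction of focal worms, p, on simulated balanced data.
# All numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_blocks, n_reps = 8, 3
block_effects = rng.normal(0.0, 0.03, n_blocks)      # among-block SD = 0.03
p = 0.5 + block_effects[:, None] + rng.normal(0.0, 0.05, (n_blocks, n_reps))

block_means = p.mean(axis=1)
ms_within = p.var(axis=1, ddof=1).mean()             # replicate mean square
ms_among = n_reps * block_means.var(ddof=1)          # block mean square
# Method-of-moments estimate of the among-block variance component,
# truncated at zero as is conventional.
var_block = max(0.0, (ms_among - ms_within) / n_reps)
```

In a REML fit the same two components (block and residual/replicate) are estimated jointly with the fixed effects rather than by moment matching, but the decomposition is the same.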

Acknowledgments

We thank Joanna Dembek and Asher Shoucair for assistance in the lab. Support was provided by NIH grants R01GM107227 to CFB and E. C. Andersen, and S1010OD012006 to CFB. SS was supported by a graduate fellowship from the Higher Committee for Education Development in Iraq.