Using birth-death processes to infer tumor subpopulation structure from live-cell imaging drug screening data

Tumor heterogeneity is a complex and widely recognized trait that poses significant challenges in developing effective cancer therapies. In particular, many tumors harbor a variety of subpopulations with distinct therapeutic response characteristics. Characterizing this heterogeneity by determining the subpopulation structure within a tumor enables more precise and successful treatment strategies. In our prior work, we developed PhenoPop, a computational framework for unravelling the drug-response subpopulation structure within a tumor from bulk high-throughput drug screening data. However, the deterministic nature of the underlying models driving PhenoPop restricts the model fit and the information it can extract from the data. As an advancement, we propose a stochastic model based on the linear birth-death process to address this limitation. Our model captures how the variance of the data evolves over the course of the experiment, allowing it to use more of the information in the data and thereby provide more robust estimates. In addition, the newly proposed model can be readily adapted to situations where the experimental data exhibits a positive time correlation. We test our model on simulated (in silico) and experimental (in vitro) data, and the results support our claims about its advantages.


Introduction
In recent years, the design of personalized anti-cancer therapies has been greatly aided by the use of high-throughput drug screens (HTDS) [21,9]. In these studies, a large panel of drugs is tested against a patient's tumor sample to identify the most effective treatment [22,23,18,3]. HTDS output observed cell viabilities after initial populations of tumor cells are exposed to each drug at a range of dose concentrations. The relative ease of performing and analyzing such large sets of simultaneous drug-response assays has been driven by technological advances in culturing patient tumor cells in vitro, as well as improvements in robotics and computer vision. In principle, this information can be used to guide the choice of therapy and dosage for cancer patients, facilitating more personalized treatment strategies.
However, due to the evolutionary process by which they develop, tumors often harbor many different subpopulations with distinct drug-response characteristics by the time of diagnosis [16]. This tumor heterogeneity can confound results from HTDS, since the combined signal from multiple tumor subpopulations results in a bulk drug sensitivity profile that may not reflect the true drug response characteristics of any individual cell in the tumor. Small clones of drug-resistant subpopulations may be difficult to detect in a bulk drug response profile, but these clones may be clinically significant and drive tumor recurrence after drug-sensitive populations are depleted. As a result of the complex heterogeneities present in most tumors, care must be taken in the analysis and design of HTDS to ensure that they yield beneficial treatments. In recent work, we developed a method, PhenoPop, that leverages HTDS data to probe tumor heterogeneity and population substructure with respect to drug sensitivity [15]. In particular, for each drug, PhenoPop characterizes i) the number of phenotypically distinct subpopulations present, ii) the relative abundance of those subpopulations and iii) each subpopulation's drug sensitivity. This method was validated on both experimental and simulated datasets, and applied to clinical samples from multiple myeloma patients.
In the current work, we develop novel theoretical results and computational strategies that improve PhenoPop by addressing important theoretical and practical limitations. The original PhenoPop framework was powered by an underlying deterministic population dynamic model of tumor cell growth and response to therapy. Here we introduce a more sophisticated version of PhenoPop that utilizes stochastic linear birth-death processes, which are widely used to model the dynamics of growing cellular populations [20,14,6], as the underlying population dynamic model powering the method. This new framework addresses several important practical limitations of the original approach. First, our original framework assumed two fixed levels of observational noise; here, the use of an underlying stochastic population dynamic model enables an improved model of observational noise that more accurately captures the characteristics of HTDS data, and reflects the observed dependence of noise amplitude on population size (see Figure 1). Second, this framework allows for natural correlations in observation noise that are tailored to fit specific experimental platforms. Rather than assuming that all HTDS observations are independent, we may consider data generated using live-cell imaging techniques where the same cellular population is studied at multiple time points, resulting in observational noise that is correlated in time. By using these stochastic processes to model the underlying populations, we obtain an improved variance and correlation structure that more accurately models the data and enables more accurate estimators with smaller confidence intervals.
The rest of the paper is organized as follows. In Section 2, we review the existing PhenoPop method and introduce the new estimation framework based on a stochastic birth-death process model of the underlying population dynamics. We propose two distinct statistical approaches in the new framework, aimed at analyzing data from endpoint vs. time series (e.g. live-cell imaging) experiments. In Section 3, we present numerical results on simulated and experimental data, and in Section 4 we conclude with a discussion.

Review of PhenoPop

The PhenoPop framework models the effect of a drug on a homogeneous cell population through a dose-response function of Hill type,

H(d; b, E, m) = b + (1 − b)/(1 + (d/E)^m),

where b ∈ (0, 1) and E, m > 0. A homogeneous cell population treated continuously with drug dose d is assumed to grow at exponential rate α + log(H(d; b, E, m)) per unit time. If the population has initial size C_0, the population size at time t is given by C_0 exp[t(α + log(H(d; b, E, m)))].
Note that H(0; b, E, m) = 1 and H(d; b, E, m) → b as d → ∞. Therefore, the population grows at exponential rate α in the absence of drug (d = 0) and at rate α + log(b) < α for an arbitrarily large drug dose (d → ∞). The parameter E represents the dose at which the drug has half the maximum effect, and m represents the steepness of the dose-response curve d → H(d; b, E, m).
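To make the model concrete, here is a small sketch of the dose-response function and the resulting deterministic growth law. The specific Hill-type form used for H is an assumption consistent with the stated properties H(0) = 1, H(d) → b as d → ∞, and E being the half-effect dose; the function names are ours.

```python
import math

def hill(d, b, E, m):
    # Assumed Hill-type form: H(0) = 1, H(d) -> b as d -> infinity,
    # and H(E) = (1 + b)/2, i.e. E is the half-effect dose.
    return b + (1.0 - b) / (1.0 + (d / E) ** m)

def population_size(t, d, C0, alpha, b, E, m):
    # Deterministic growth law: C0 * exp[t * (alpha + log H(d))].
    return C0 * math.exp(t * (alpha + math.log(hill(d, b, E, m))))
```

At d = 0 the population grows at rate α, while for very large d the growth rate approaches α + log(b), in line with the limits discussed above.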
For a heterogeneous cell population, each subpopulation is assumed to follow the aforementioned growth model with subpopulation-specific parameters α_i and (b_i, E_i, m_i). Assume there are S distinct subpopulations. Then, under drug dose d, the number of cells in population i at time t is given by

f_i(t, d) = f_i(0) exp[t(α_i + log(H(d; b_i, E_i, m_i)))].

To ease notation, the dose-response function H(·; b_i, E_i, m_i) for population i will be denoted by H_i(·) in what follows. The initial size of population i is f_i(0) = n p_i, where n is the known initial total population size and p_i is the unknown initial fraction of population i. The total population size at time t is then given by

f(t, d) = Σ_{i=1}^{S} n p_i exp[t(α_i + log(H_i(d)))].

A statistical model for the observed data x is obtained by adding independent Gaussian noise to the deterministic growth model prediction. The variance of the Gaussian noise is given by

σ²_hl(t, d) = σ²_H if t ≥ T_L and d ≤ D_L, and σ²_hl(t, d) = σ²_L otherwise.

The variance is allowed to depend on time and dose, since at large time points and low doses, a larger variance is expected due to larger cell counts [15]. Thus, the statistical model for the observation x_{t,d,r} of replicate r at time t under dose d is given by

x_{t,d,r} = f(t, d) + Z^{(r)}(t, d),

where {Z^{(r)}(t, d); r ∈ {1, . . ., N_R}} are independent random variables with the normal distribution N(0, σ²_hl(t, d)). This model has the parameter set

θ_PP(S) = {(p_i, α_i, b_i, E_i, m_i) : i ∈ {1, . . ., S}} ∪ {σ_H, σ_L, T_L, D_L}.

The initial fractions of the S subpopulations {p_i : i ∈ {1, . . ., S}} and the parameters {(α_i, b_i, E_i, m_i) : i ∈ {1, . . ., S}} governing the drug responses of the subpopulations are unknown. In addition, the variance levels σ²_H and σ²_L are unknown. In practice, the precise values of the thresholds T_L and D_L have minimal effect on the performance of PhenoPop. Therefore, T_L and D_L are treated as known.
The goal of the PhenoPop algorithm is to use the experimental data x to estimate the unknown parameters θ_PP(S) and the number of subpopulations S. The parameters θ_PP(S) are estimated via maximum likelihood estimation, where the likelihood function is given by

L_PP(θ_PP(S)) = ∏_{t∈T} ∏_{d∈D} ∏_{r=1}^{N_R} (2πσ²_hl(t, d))^{−1/2} exp(−(x_{t,d,r} − f(t, d))² / (2σ²_hl(t, d))).   (2)

The likelihood function describes the probability of observing the data x as a function of the parameter vector θ_PP(S) for a given number S of subpopulations. The number of subpopulations is then estimated by comparing the negative log-likelihood across candidate values of S via the elbow method or the Akaike/Bayesian information criteria. For further information, we refer to [15].
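As an illustration, a minimal sketch of the corresponding negative log-likelihood evaluation is given below. The two-level variance rule and all helper names are our own rendering of the setup; `subpops` holds one (p_i, α_i, b_i, E_i, m_i) tuple per subpopulation, and the Hill form of H is an assumption.

```python
import math

def hill(d, b, E, m):
    # Assumed Hill-type dose-response with H(0) = 1 and H(inf) = b.
    return b + (1.0 - b) / (1.0 + (d / E) ** m)

def total_size(t, d, n, subpops):
    # f(t, d) = sum_i n p_i exp[t (alpha_i + log H_i(d))]
    return sum(n * p * math.exp(t * (a + math.log(hill(d, b, E, m))))
               for (p, a, b, E, m) in subpops)

def phenopop_neg_log_lik(x, times, doses, n, subpops,
                         sigma_H, sigma_L, T_L, D_L):
    # x[ti][di] is the list of replicate observations at time ti, dose di.
    nll = 0.0
    for ti, t in enumerate(times):
        for di, d in enumerate(doses):
            # High-variance regime at large times / low doses (assumed rule).
            s2 = sigma_H ** 2 if (t >= T_L and d <= D_L) else sigma_L ** 2
            mu = total_size(t, d, n, subpops)
            for obs in x[ti][di]:
                nll += 0.5 * (math.log(2 * math.pi * s2) + (obs - mu) ** 2 / s2)
    return nll
```

Minimizing this quantity over the parameter set, for each candidate S, yields the maximum likelihood estimates.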

Limitations.
The assumption of the PhenoPop algorithm that the Gaussian observation noise has two levels of variance is made for methodological simplicity and does not reflect an observed bifurcation of experimental noise levels. It would be more natural to assume that the noise level is directly proportional to the cell count, as indicated by the experimental data shown in Figure 1. In addition, PhenoPop assumes that all observations are statistically independent. However, if cells are counted using techniques such as live-cell imaging (time-lapse microscopy), then observations of the same well at different time points will be positively correlated. Both of these limitations can be addressed by modeling the cellular populations with stochastic processes, as we will now show.

Linear birth-death process
A natural extension of PhenoPop [15] is to use a stochastic linear birth-death process to model the cell population dynamics. In the model, a cell in subpopulation i (type-i cell) divides into two cells at rate β_i ≥ 0 and dies at rate ν_i ≥ 0. This means that during a short time interval of length Δt > 0, a type-i cell divides with probability β_i Δt and dies with probability ν_i Δt. The death rate of type-i cells is assumed dose-dependent according to

ν_i(d) = ν_i − log(H(d; b_i, E_i, m_i)).

Using the substitution α_i = β_i − ν_i, we see that the drug affects the net birth rate β_i − ν_i(d) = α_i + log(H_i(d)) of the stochastic model the same way it affects the growth rate α_i of the deterministic population model of PhenoPop. Note however that here, the drug is assumed to act via a cytotoxic mechanism, that is, higher doses lead to higher death rates. Our framework can easily account for cytostatic effects, where higher doses lead to lower cell division rates, but we focus on cytotoxic therapies for simplicity.
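The birth-death dynamics just described can be simulated exactly with a standard Gillespie scheme. The sketch below is our own illustration: `beta` and `nu` are the division and death rates of a single subpopulation at the dose of interest (the dose enters only by increasing the death rate).

```python
import math
import random

def simulate_birth_death(n0, beta, nu, t_end, rng):
    """Exact (Gillespie) simulation of a linear birth-death process:
    each of the x living cells divides at rate beta and dies at rate nu."""
    t, x = 0.0, n0
    total = beta + nu
    while x > 0:
        t += rng.expovariate(x * total)  # time until the next event
        if t > t_end:
            break
        # The next event is a division with probability beta / (beta + nu).
        x += 1 if rng.random() < beta / total else -1
    return x

# Sanity check against the known mean n0 * exp((beta - nu) * t):
rng = random.Random(1)
runs = [simulate_birth_death(200, 1.0, 0.5, 1.0, rng) for _ in range(400)]
mean = sum(runs) / len(runs)
```

Averaging many runs reproduces the exponential mean growth of the deterministic model, while individual runs fluctuate around it.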
Let X_i(t, d) denote the number of cells in subpopulation i at time t under drug dose d. The mean and variance of the subpopulation size at time t are given by

E[X_i(t, d)] = n p_i e^{(β_i − ν_i(d)) t},
Var(X_i(t, d)) = n p_i ((β_i + ν_i(d))/(β_i − ν_i(d))) e^{(β_i − ν_i(d)) t} (e^{(β_i − ν_i(d)) t} − 1).

Next, denote the total population size at time t under drug dose d by

X(t, d) = Σ_{i=1}^{S} X_i(t, d),

with mean and variance

μ(t, d) = Σ_{i=1}^{S} E[X_i(t, d)],   n σ²(t, d) = Σ_{i=1}^{S} Var(X_i(t, d)),

where the subpopulations are assumed to evolve independently. Note that the mean size of the total population under the stochastic model equals the total population size under the deterministic model of PhenoPop, again with the substitution α_i = β_i − ν_i. However, the stochastic model introduces variability in the population dynamics at each time point, arising from the stochastic nature of cell division and cell death. To account for experimental measurement error, we add independent Gaussian noise to each observation of the stochastic model. As a result, the new statistical model for each observation is

x_{t,d,r} = X^{(r)}(t, d) + Z_{t,d,r},

where X^{(r)}(t, d) are independent copies of X(t, d) for r = 1, . . ., N_R, and {Z_{t,d,r}; d ∈ D, t ∈ T, r ∈ {1, . . ., N_R}} are i.i.d. random variables with the normal distribution N(0, c²), independent of the X^{(r)}(t, d)'s. The model parameter set is now

θ_BD(S) = {(p_i, β_i, ν_i, b_i, E_i, m_i) : i ∈ {1, . . ., S}} ∪ {c}.

In comparison with PhenoPop, on the one hand, the growth rate parameter α_i for each subpopulation has been replaced by the birth and death rates β_i and ν_i. On the other hand, there is only one parameter c for the observation noise, as opposed to four parameters {σ_H, σ_L, T_L, D_L} for PhenoPop.
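The per-subpopulation moments are the standard closed forms for a linear birth-death process; a small sketch follows (our own helper names, with the critical case β = ν handled separately to avoid division by zero).

```python
import math

def bd_moments(n0, beta, nu, t):
    """Mean and variance of a linear birth-death process started from n0
    cells, with division rate beta and death rate nu (standard formulas)."""
    r = beta - nu
    if abs(r) < 1e-12:
        # Critical case beta = nu: mean stays n0, variance grows linearly.
        return float(n0), n0 * 2.0 * beta * t
    ert = math.exp(r * t)
    mean = n0 * ert
    var = n0 * (beta + nu) / r * ert * (ert - 1.0)
    return mean, var
```

The total-population mean μ(t, d) and variance are then obtained by summing these quantities over subpopulations, with n0 = n p_i and the death rate evaluated at the dose of interest.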
Under the new statistical model, the likelihood function is

L(θ_BD(S)) = ∏_{d∈D} ∏_{r=1}^{N_R} P(X^{(r)}(t, d) + Z_{t,d,r} ∈ (x_{t,d,r}, x_{t,d,r} + Δx_{t,d,r}) for all t ∈ T),   (7)

where we assume that observations at different doses and from distinct replicates are independent, and (x_{t,d,r}, x_{t,d,r} + Δx_{t,d,r}) represents an infinitesimally small interval around x_{t,d,r}. We now discuss two different forms this likelihood function can take, depending on whether the data collected at different time points are correlated or not.

End-point experiments
For many common cell counting techniques, e.g. CellTiter-Glo [13], the experiment must be stopped to perform the viability assay. In this case, observations at different time points are actually observations of different cell populations exposed to drugs for different amounts of time, and can therefore be treated as independent. Thus, the likelihood function can be written as

L_EP(θ_BD(S)) = ∏_{t∈T} ∏_{d∈D} ∏_{r=1}^{N_R} P(X^{(r)}(t, d) + Z_{t,d,r} ∈ (x_{t,d,r}, x_{t,d,r} + Δx_{t,d,r})).

We note that the distribution of X^{(r)}(t, d) + Z_{t,d,r} can be computed exactly. However, for faster computation, one can approximate the distribution by a Gaussian distribution. To that end, consider the centered and normalized process

W_n(t, d) = (X(t, d) − μ(t, d))/√n.   (8)

A straightforward application of the central limit theorem gives the following result.

Proposition 1. For each t ∈ T and d ∈ D, W_n(t, d) ⇒ N(0, σ²(t, d)) as n → ∞.
Here, '⇒' denotes convergence in distribution. The proof of this result is omitted, since it is a consequence of the more general Proposition 2.
Based on Proposition 1, we obtain the approximate likelihood function

L_EP(θ_BD(S)) = ∏_{t∈T} ∏_{d∈D} ∏_{r=1}^{N_R} (2π(nσ²(t, d) + c²))^{−1/2} exp(−(x_{t,d,r} − μ(t, d))²/(2(nσ²(t, d) + c²))).   (9)

The right-hand side of (9) depends on the model parameters θ_BD(S) via the mean and variance functions μ(t, d) and σ²(t, d). As in [15], one can maximize this expression over the parameter set θ_BD(S) to obtain maximum likelihood estimates of the model parameters. The optimization problem for the new likelihood L_EP is more difficult to solve than the corresponding problem for the PhenoPop likelihood L_PP in (2), since the variance of the data now depends on the dose-response parameters for the subpopulations. However, the numerical optimization software we employ is able to deal with this more complex dependence on the model parameters; see Appendix 5.1.
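A sketch of evaluating this approximate likelihood, written as a negative log-likelihood to be minimized in practice, might look as follows. All function names are ours; `subpops` holds (p_i, β_i, ν_i, b_i, E_i, m_i) tuples, the Hill form of H is an assumption, and the critical case β_i = ν_i(d) would need a separate branch.

```python
import math

def hill(d, b, E, m):
    # Assumed Hill-type dose-response with H(0) = 1 and H(inf) = b.
    return b + (1.0 - b) / (1.0 + (d / E) ** m)

def total_moments(t, d, n, subpops):
    """Mean and variance of the total count X(t, d): sums of independent
    subpopulation birth-death moments with dose-dependent death rates."""
    mu, var = 0.0, 0.0
    for (p, beta, nu, b, E, m) in subpops:
        nu_d = nu - math.log(hill(d, b, E, m))  # dose raises the death rate
        r = beta - nu_d                         # (r = 0 not handled here)
        ert = math.exp(r * t)
        mu += n * p * ert
        var += n * p * (beta + nu_d) / r * ert * (ert - 1.0)
    return mu, var

def ep_neg_log_lik(x, times, doses, n, subpops, c):
    """Gaussian end-point negative log-likelihood: each observation
    x[ti][di][r] is modelled as N(mu(t, d), Var X(t, d) + c^2)."""
    nll = 0.0
    for ti, t in enumerate(times):
        for di, d in enumerate(doses):
            mu, var = total_moments(t, d, n, subpops)
            s2 = var + c * c
            for obs in x[ti][di]:
                nll += 0.5 * (math.log(2 * math.pi * s2) + (obs - mu) ** 2 / s2)
    return nll
```

Unlike in PhenoPop, the variance term `s2` itself depends on the dose-response parameters, which is what makes the optimization landscape more complex.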

Live-cell imaging techniques
Live-cell imaging techniques enable the experimenter to obtain cell counts for the same population across multiple different time points. For such datasets, observations of the same sample at different time points will be positively correlated. In this case, we must compute the joint distribution

P(X^{(r)}(t, d) + Z_{t,d,r} ∈ (x_{t,d,r}, x_{t,d,r} + Δx_{t,d,r}) for all t ∈ T)   (10)

for each d ∈ D. To ease notation, we will temporarily suppress dependence on the dose. We first note that (X^{(r)}(t))_{t≥0} is not a Markov process, since the total cell count at each time point does not include information on the sizes of the individual subpopulations. Computing (10) exactly requires summing over the possible sizes of the subpopulations at each time point, which is computationally intensive. It is possible to speed up the computation using tools from hidden Markov models, which reduces the computational complexity to Ω(min_{t∈T} x_t²). However, this is still computationally infeasible, since min_{t∈T} x_t ≈ 5000 in our data, resulting in a computation time of about one second to evaluate a single likelihood. The computational details are provided in Appendix 5.5.
A more efficient approach is to use a Gaussian approximation. For the centered and normalized process W_n(t) from (8), define the vector of observations across time points

W_n = (W_n(t_1), . . ., W_n(t_{N_T})), where T = {t_1 < · · · < t_{N_T}}.

By assuming that the set T, the number of subpopulations S and the initial proportion p_i of each subtype i are independent of the initial total cell count n, we derive the following approximation for W_n.

Proposition 2. W_n ⇒ N(0, Σ) as n → ∞, where for t_i ≤ t_j the (i, j) element of the covariance matrix Σ is given by

Σ_{ij} = Σ_{k=1}^{S} e^{(β_k − ν_k)(t_j − t_i)} Var(X_k(t_i))/n,

and Σ_{ji} = Σ_{ij} by symmetry.

The proof of this proposition is given in Appendix 5.3. In Appendix 5.4, we relax the assumption that the initial proportion p_i is independent of the initial total cell count, and a similar result still follows. In future work, we plan to relax the assumption that T is independent of n.
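Under independent subpopulations, the covariance matrix can be assembled from the standard linear birth-death identity Cov(X_k(s), X_k(t)) = e^{r_k (t − s)} Var(X_k(s)) for s ≤ t, where r_k is the net growth rate. The sketch below is our own construction of the (unnormalized) covariance of the total count across sorted time points, at a fixed dose.

```python
import math

def bd_cov_matrix(times, n, subpops):
    """Covariance matrix of the total count over sorted time points.
    `subpops` holds (p_k, beta_k, nu_k) with rates at the dose of interest;
    subpopulations are taken to be independent (supercritical case only)."""
    T = len(times)
    Sigma = [[0.0] * T for _ in range(T)]
    for p, beta, nu in subpops:
        r = beta - nu
        for i in range(T):
            s = times[i]
            ers = math.exp(r * s)
            var_s = n * p * (beta + nu) / r * ers * (ers - 1.0)
            for j in range(i, T):
                # Cov(X(s), X(t)) = e^{r (t - s)} Var(X(s)) for s <= t.
                cov = math.exp(r * (times[j] - s)) * var_s
                Sigma[i][j] += cov
                if j != i:
                    Sigma[j][i] += cov
    return Sigma
```

Dividing this matrix by n recovers the limiting covariance Σ of the normalized process W_n.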
We now reintroduce dose dependence. For each d ∈ D, define the N_T × 1 vectors

μ(d) = (μ(t, d))_{t∈T} and x_{•,d,r} = (x_{t,d,r})_{t∈T},

and the N_T × N_T identity matrix I. Based on Proposition 2, the following approximation is used to compute the likelihood in expression (7):

x_{•,d,r} ≈ N(μ(d), nΣ(d) + c²I).

The likelihood function is thus given by

L_LC(θ_BD(S)) = ∏_{d∈D} ∏_{r=1}^{N_R} (2π)^{−N_T/2} det(nΣ(d) + c²I)^{−1/2} exp(−(1/2)(x_{•,d,r} − μ(d))ᵀ (nΣ(d) + c²I)^{−1} (x_{•,d,r} − μ(d))).   (11)

Note that the computational complexity of evaluating the above likelihood is independent of min_{t∈T} x_t, alleviating the computational burden associated with an exact evaluation of the likelihood. The difference between the likelihood function L_EP for endpoint data and L_LC for live-cell imaging data lies in the structure of the covariance matrix for the observation vector x_{•,d,r}. For L_EP, observations made at different time points are assumed independent, meaning that the covariance matrix is diagonal. For live-cell imaging data, the covariance matrix is not diagonal. Accurately accounting for time correlations in the likelihood (11) can improve the accuracy of parameter estimates, as we will discuss in Section 3. However, this comes at a cost, since it is more computationally expensive to calculate the inverses and determinants present in L_LC. As a result, the optimization of L_LC can be more difficult than the optimization of L_EP.
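Evaluating L_LC amounts to computing multivariate-normal log-densities with covariance nΣ(d) + c²I. A Cholesky factorization avoids forming explicit inverses and gives the determinant essentially for free; the sketch below is a generic implementation of that step (the function name is ours).

```python
import numpy as np

def mvn_log_density(x_vec, mu_vec, Sigma, c):
    """log N(x; mu, Sigma + c^2 I) via a Cholesky factorization,
    avoiding an explicit matrix inverse."""
    x = np.asarray(x_vec, dtype=float)
    mu = np.asarray(mu_vec, dtype=float)
    K = np.asarray(Sigma, dtype=float) + (c ** 2) * np.eye(len(x))
    L = np.linalg.cholesky(K)            # K = L L^T
    z = np.linalg.solve(L, x - mu)       # whitened residuals
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (len(x) * np.log(2.0 * np.pi) + log_det + z @ z)
```

The full log-likelihood log L_LC is then the sum of this quantity over doses and replicates, with Sigma the birth-death covariance across time points at each dose.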

Accuracy of Gaussian approximation
Proposition 2 states that the centered and normalized process W_n is approximately Gaussian N(0, Σ(d)). However, in the derivation of the likelihood function (11), the distribution of the total cell number X(d) = {X(t, d); t ∈ T} is approximated with a Gaussian distribution N(μ(d), nΣ(d) + c²I), whose mean and variance increase linearly with n. To verify that the error in this approximation is reasonable for large n, we now compare the distributions of X(d) and N(μ(d), nΣ(d) + c²I) using a well-known measure of the distance between two distributions.
The energy distance, introduced in [24], is a measure of the distance between probability distributions which has previously been shown to be related to Cramér's distance [5,24]. The energy distance has been utilized in several statistical tests [2] and is easily computed for multivariate distributions. For probability distributions F and G on R^d, their energy distance is defined as

D_E(F, G) = 2E∥X − Y∥ − E∥X − X′∥ − E∥Y − Y′∥,   (12)

where all random variables are independent, X and X′, and Y and Y′, are distributed according to F and G respectively, and ∥·∥ denotes the Euclidean norm. Since it is infeasible to compute (12) directly, we approximate the true energy distance by the empirical energy distance. For two sets of i.i.d. samples {X_i}_{i=1}^{n} from F and {Y_j}_{j=1}^{m} from G, the empirical energy distance is

D̂_E(F, G) = (2/(nm)) Σ_{i=1}^{n} Σ_{j=1}^{m} ∥X_i − Y_j∥ − (1/n²) Σ_{i=1}^{n} Σ_{j=1}^{n} ∥X_i − X_j∥ − (1/m²) Σ_{i=1}^{m} Σ_{j=1}^{m} ∥Y_i − Y_j∥.   (13)

Denote the distribution of X(d) by F_BD and the normal distribution N(μ(d), nΣ(d) + c²I) by F_N. Let {X_i}_{i=1}^{n} be n i.i.d. samples from F_BD, and let {Y_j}_{j=1}^{m} be m i.i.d. samples from F_N. We can then compute D_E(F_BD, F_N) using (13). In Figure 2, we plot D_E(F_BD, F_N) for varying initial cell counts. The plot shows a monotonic decrease in the empirical energy distance as a function of the initial cell count, which indicates that the distribution of X(d) is reasonably approximated by a Gaussian distribution for large values of the initial cell count.
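The empirical energy distance is straightforward to compute from the two samples; below is a small sketch of the standard estimator (our own implementation, with samples stored row-wise).

```python
import numpy as np

def empirical_energy_distance(X, Y):
    """Empirical energy distance between samples X (n x d) and Y (m x d):
    2/(n m) sum_ij ||X_i - Y_j||  -  1/n^2 sum_ij ||X_i - X_j||
                                  -  1/m^2 sum_ij ||Y_i - Y_j||."""
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    if X.ndim == 1:          # allow 1-D samples: one scalar per observation
        X = X[:, None]
    if Y.ndim == 1:
        Y = Y[:, None]

    def mean_dist(A, B):
        diffs = A[:, None, :] - B[None, :, :]
        return np.mean(np.sqrt(np.sum(diffs ** 2, axis=-1)))

    return 2.0 * mean_dist(X, Y) - mean_dist(X, X) - mean_dist(Y, Y)
```

Identical samples yield a distance of exactly zero, and the distance grows as the two samples drift apart, which is the behavior probed in Figure 2.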

Numerical results
In this section, we use our new statistical methods to analyze both simulated (in silico) and experimental (in vitro) live cell imaging data. We apply both the simpler end-point estimation procedure ("end-points method"), based on the likelihood L_EP in (9), and the more complex live cell imaging procedure ("live cell image method"), based on the likelihood L_LC in (11). The performance of the new methods is compared with the existing PhenoPop algorithm. In all analyses, it is assumed that the observation at time t = 0 represents the known starting population size, i.e. x_{0,d,r} = n.

Application to simulated data
We first apply our estimation methods to simulated (in silico) data. In Appendix 5.1, we provide details of the data generation and the parameter estimation for these in silico experiments.

Examples with 2 subpopulations
For illustrative purposes, we begin with a case study involving an artificial tumor with two subpopulations. Data is generated using a parameter vector θ_BD(2) selected uniformly at random from the ranges in Table 1. We assume that one tumor subpopulation is drug-sensitive and the other is drug-resistant. These subpopulations are indicated by the subscripts s and r, respectively.
Table 1: Range for parameter generation of experiments with 2 subpopulations

As in [15], we focus on inferring the initial proportion p_s of sensitive cells, as well as the GR50 dose for each subpopulation. The GR50 is the dose at which the drug has half the maximal effect on the cell death rate, as further explained in Appendix 5.1. Informally, the GR50 dose for each subpopulation is a measure of the subpopulation's sensitivity to the drug. To assess the uncertainty in the parameter estimation, we compute maximum likelihood estimates for 100 bootstrapped datasets, as described in Appendix 5.1. The results of the case study are shown in Figure 3, where we see that both the live cell image method and the end-points method are able to accurately recover the initial proportion p_s of the sensitive population and the GR50 dose for each subpopulation.
We next evaluate the performance of the estimation methods across 30 simulated datasets, where each parameter vector θ^i_BD(2) for i = 1, . . ., 30 is sampled from the ranges in Table 1. We furthermore compare the performance of the two new methods with the performance of PhenoPop. The error in the estimation of each parameter {p_s, GR_s, GR_r} is measured by the absolute log ratio between the point estimate x̂ and the true value x of the parameter,

err(x̂, x) = |log(x̂/x)|.

This metric is chosen to address the logarithmic scale associated with the GR50 dose.
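The error metric is simple to state in code (a one-line sketch with our own function name):

```python
import math

def abs_log_ratio(estimate, truth):
    """|log(estimate / truth)|: zero for a perfect estimate, and symmetric
    in the sense that over- and under-estimating by the same factor
    give the same error."""
    return abs(math.log(estimate / truth))
```

For example, a two-fold error in a GR50 estimate gives an error of log 2 ≈ 0.69 regardless of direction, which is the desired behavior for a log-scale quantity.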
In Figure 4, a box plot of the estimation errors for the three methods across the 30 datasets is presented. Note that all three parameters {p_s, GR_s, GR_r} are estimated accurately using all three methods. In addition, the error in estimating the sensitive GR50 is larger than the error in estimating the resistant GR50 for all three methods. One possible reason is that the initial proportion of sensitive cells is p_s ∈ [0.3, 0.5], so the experimental data contains less information on the sensitive subpopulation. Later experimental results will lend further support to this hypothesis.
We next compare the estimation precision of the three methods. Specifically, we will compare the widths of the 95% confidence intervals for the three parameters between the three different methods. Since GR50 values vary significantly across the 30 generated datasets, we normalize the CI width for each method by dividing it by the sum of the CI widths of all three methods for the same dataset.
In Figure 5, the normalized CIs for {p_s, GR_s, GR_r} are compared between the three methods across the 30 datasets. First, note that for the initial proportion p_s, the live cell image method has significantly narrower CIs than the other two methods. Additionally, there is a small but statistically significant difference between the CI widths for the end-points method and the PhenoPop method. For the sensitive GR50 index, the live cell image method again has significantly narrower confidence intervals than the other two methods, and the end-points method has significantly narrower confidence intervals than the PhenoPop method. The results are similar for the resistant GR50 index. It is worth mentioning that for at least 28 out of the 30 datasets, the true parameters were located within the confidence intervals for all three methods.
In summary, the end-points and live cell image methods provide a significant improvement in estimator precision over the PhenoPop method for all three parameters, and furthermore, the live cell image method has the best precision out of all three methods.

Illustrative example with 3 subpopulations
In this section, we examine a case study involving an artificial tumor with 3 subpopulations. The subpopulations are assumed sensitive, moderate, and resistant with respect to the drug, and they are denoted using the subscripts s, m, and r, respectively. Data is generated using a parameter vector θ_BD(3) selected uniformly at random from the ranges in Table 2. Those parameters not listed in Table 2 are selected as in Table 1.
Table 2: Modified range of parameters for experiments with 3 subpopulations

Figure 6 shows estimation results for the initial proportions p_s, p_m, p_r and the GR50 doses of the three subpopulations. Note that the end-points and live cell image methods provide more accurate estimates of the initial proportion for each subpopulation than PhenoPop. Furthermore, when estimating the GR50 for each subpopulation, the inter-quartile range (IQR) of 100 bootstrapped estimates covers the true GR50 value for all three methods. However, the estimation for the GR50 of the moderate subpopulation with E_m = 0.3558 is less precise than for the other two subpopulations, i.e., the IQR is wider. This is likely due to confounding between the moderate subpopulation and the other two subpopulations.
It is worth noting that for the 3 subpopulation example, the number of datapoints is the same as for the 2 subpopulation examples, since only total cell counts are observed at each time point. Furthermore, when computing maximum likelihood estimates for 3 subpopulations, we solved each optimization problem the same number of times as for 2 subpopulations. Overall, our conclusion is that all three methods can provide reasonable estimates of the true initial proportion and the GR50 of each subpopulation for 3 subpopulations. However, achieving equivalent levels of accuracy and precision as for 2 subpopulations may require a greater computational effort or the collection of more data, given that the 3 subpopulation model is more complex and has more parameters.

Performance in challenging conditions
In the previous work [15], three conditions under which the performance of PhenoPop deteriorates were identified: a large observation noise, a small initial fraction of resistant cells, and similar drug-sensitivity of both subpopulations. We now investigate the performance of the end-points and live cell image methods in these conditions and compare to the performance of PhenoPop.

Large observation noise:
We first consider the case of large observation noise. Note that in the PhenoPop method, the only source of variability in the statistical model is the additive Gaussian noise. In the end-points and live cell image methods, however, there is an underlying stochastic process governing the population dynamics with an added Gaussian noise term. Thus, whereas PhenoPop deals with high levels of noise by adjusting the variance of the Gaussian term, the two new methods may also try to adjust the subpopulation growth and dose-response parameters. This can make estimation from data with high levels of noise more complicated for the two new methods than for PhenoPop.
We begin by considering a case study where the noise level is set to c = 500, and other parameters are chosen uniformly at random according to Table 1. The results are shown in Figure 7. For each method, the initial proportion p_s is estimated with good accuracy, and the IQR of 100 bootstrap estimates for GR_s covers the true value. However, compared with the estimation in Figure 3, the estimation precision of the end-points method and live cell image method has degraded. In addition, observe that the IQRs of the three methods have about the same width, which implies the precision advantage observed in Section 3.1.1 disappears under a very large observation noise.
We next evaluate estimation performance across 30 simulated datasets for each noise value c ∈ C = {100, 200, 300, 400, 500}. Figure 8 shows the mean absolute log ratio across the 30 datasets for each parameter {p_s, GR_s, GR_r}, each noise level and each estimation method. As expected, the estimation error increases for all three methods as a function of the observation noise. In fact, all three methods show a similar response to increasing levels of noise.
We next compare the widths of 95% confidence intervals for the three parameters under noise levels c = 100 and c = 500, using 30 datasets for each noise level. The results are shown in Figures 9 and 10. For c = 100 (Figure 9), the precision advantage of the live cell image method over the other two methods is less pronounced than in Figure 5, where c ∈ [0, 10], especially for the resistant GR50. For c = 500 (Figure 10), the advantage disappears for the sensitive GR50. Importantly, however, Figure 9 shows that the precision advantage of the live cell method is statistically significant for all three parameters {p_s, GR_s, GR_r} for an observation noise as large as 10% of the initial cell count. It should be noted that the standard deviation of observation noise reported from common automated and semi-automated cell counting techniques ranges from 1% to 15% [4,19].

Small resistant subpopulation:
For the datasets investigated in Section 3.1.1, the initial proportion p_s of sensitive cells was constrained to be in [0.3, 0.5]. We now consider the setting of a small resistant subpopulation. We begin with a case study in Figure 11, where p_s is set to 0.99, and other parameters are sampled according to Table 1. For both the sensitive and resistant subpopulations, the IQR for the GR50 dose under PhenoPop does not cover the true value, whereas the IQR for the live cell image method does. The IQR for the end-points method covers the true resistant GR50, but only barely covers the true sensitive GR50. In addition, the end-points and live cell image methods have significantly narrower IQRs than PhenoPop. Finally, note that the estimate of the initial proportion of resistant cells is much more accurate for the end-points and live cell image methods. Thus, while PhenoPop provides a reasonable estimate of the sensitive GR50, which corresponds to the dominant subpopulation in this scenario, inferring the population composition and the GR50 for the minority resistant subpopulation requires the use of the more powerful end-points and live cell image methods.
In Figure 12, we show the mean absolute log ratio for each parameter {p_s, GR_s, GR_r} across 100 datasets for each p_s ∈ {0.85, 0.9, 0.95, 0.99}. Note that both the end-points and live cell image methods have significantly smaller errors than PhenoPop, and that the difference becomes more pronounced as p_s increases. Also note that the error in estimating the sensitive GR50 is smaller than for the resistant GR50, opposite to the results of Figure 4, where p_s ∈ [0.3, 0.5]. This further reinforces the hypothesis stated in Section 3.1.1 that the initial proportion of a subpopulation impacts the precision of estimating the GR50 for that subpopulation.

Similar subpopulation sensitivity:
For the datasets investigated in Section 3.1.1, the GR50's for the two subpopulations were assumed to be significantly different. We now consider the case where the two GR50's are similar. Figure 13 shows the results of a case study where E_s ∈ [0.05, 0.1], E_r = 0.15, and other parameters are selected according to Table 1. Note that all three methods successfully recover the parameters {p_s, GR_s, GR_r}, where the IQRs for the live cell image method are significantly narrower than for PhenoPop. For brevity, we omit the plots that depict the statistical comparison of confidence interval widths. In Figure 14, we perform estimation across 80 datasets for each E_r ∈ {0.15, 0.3, 0.45, 0.85, 2.0}, with other parameters sampled from Table 1, including E_s ∈ [0.05, 0.1]. As expected, the accuracy in estimating the parameters {p_s, GR_s, GR_r} improves as the sensitive GR50 and resistant GR50 become more different. We note, however, that the live cell image method has the lowest mean error when estimating the parameters, with all three methods showing similar degradation as the two subpopulations become more phenotypically similar.

Application to in vitro data
We conclude by evaluating the performance of our two new methods on in vitro experimental data. The data consists of different mixtures of imatinib-sensitive and imatinib-resistant Ba/F3 cells. In the experiments, cells were exposed to 11 different concentrations of imatinib and observed at 14 different time points. For each drug concentration, 14 independent replicates were performed, starting with roughly 1000 cells. Cell counts were obtained using a live-cell imaging technique. Four datasets were produced with different starting ratios between sensitive and resistant cells: 1:1, 1:2, 2:1 and 4:1. These datasets are denoted BF11, BF12, BF21 and BF41, respectively. See [15] for further details on the experimental methods for generating the data.
In [15], we showed that the PhenoPop method can accurately identify the initial proportion of sensitive cells and both subpopulations' GR50 indices from the datasets. Here, we apply the two new estimation methods to the datasets and compare how well the models fit the data. Model fits are assessed using the Akaike Information Criterion (AIC), which for a statistical model with parameter vector θ and likelihood function L(θ|x) is given by AIC = 2|θ| − 2 ln L(θ*|x), where θ* is the maximum likelihood estimate and |θ| is the number of parameters. When comparing the three methods, the one with the lowest AIC is preferred.

Table 3: AIC scores of the three methods: PhenoPop method (PP), end-points method (EP), and live cell image method (LC), for the four experimental datasets BF11, BF12, BF21 and BF41.

DATA PP(AIC) EP(AIC) LC(AIC)
Results are shown in Table 3. The AIC values of the end-points method (EP) and live cell image method (LC) are clearly lower than those of the PhenoPop method (PP), indicating that the two new methods are superior for fitting the experimental datasets. As discussed in Section 2, the newly proposed methods have more sophisticated variance structures, which is likely why they are able to provide a better fit to the datasets. Finally, we note that the end-points method has better AIC scores than the live cell image method for three out of four of the experimental conditions. Therefore, it is not clear which of these two methods is more appropriate for fitting these datasets.
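As a sketch of how the AIC comparison works, consider the snippet below; the negative log-likelihood values and parameter counts are hypothetical placeholders, not the fitted values behind Table 3.

```python
def aic(neg_log_lik, num_params):
    """Akaike Information Criterion: AIC = 2*|theta| - 2*ln L(theta*|x)."""
    return 2 * num_params + 2 * neg_log_lik

# Hypothetical fitted negative log-likelihoods and parameter counts
fits = {"PP": (1520.3, 13), "EP": (1404.8, 13), "LC": (1398.1, 13)}
scores = {name: aic(nll, k) for name, (nll, k) in fits.items()}
preferred = min(scores, key=scores.get)  # the lowest AIC is preferred
```

Since the three models here share the same parameter count, the comparison reduces to the maximized likelihoods; the penalty term matters when models differ in |θ|.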

Discussion
In this work, we have proposed two methods for analyzing data from heterogeneous cell mixtures. In particular, we are interested in the setting where a mixture of at least two distinct cell subpopulations is exposed to a given drug at various concentrations. We then use the dose response curve of the composite population to learn about the subpopulations; specifically, we estimate each subpopulation's initial prevalence and its distinct dose response curve. The challenge of this problem is that we do not observe direct information about the subpopulations, but only the dose response of the composite population.
This work is an extension of our prior work in [15]. The novelty of the current work is that we introduce a more realistic variance structure to our statistical model. We create this new variance structure by building our model on linear birth-death processes. In particular, we model each subpopulation as a linear birth-death process with a unique birth rate and a unique dose-dependent death rate. The dose dependence of the death rate is captured using a 3-parameter Hill function. Our observed process is then a sum of independent birth-death processes. Our goal is to estimate the initial proportion of the subpopulations, as well as their birth rates and the parameters governing the dose response of their death rates.
Counting cells in in vitro experiments can generally be conducted in one of two fashions. In the first approach, cell numbers can only be estimated at the end of the experiment, because the mechanism for estimating cell numbers requires killing the cells. In the second approach, cells are counted via live imaging techniques and can be counted at multiple time points. When dealing with multiple-time-point data from cells collected via the first approach, we can assume that observations at different time points are independent, because they are the result of different experiments. However, when dealing with data from the second approach we can no longer make that assumption, because the cell counts at different time points come from the same population and there is a positive correlation between those measurements. As a result of this differing structure, we develop two methods: one that assumes independent observations at each time point, and one that assumes all the time points for a given dose are correlated. Evaluating the likelihood function under the second approach is not trivial at first glance, since it requires evaluating the likelihood function of a sample path of a non-Markovian process (the total cell count). We get around this difficulty by using a central limit theorem argument to approximate the exact likelihood function with a Gaussian likelihood.
In this work we compared three different methods: the PhenoPop method from [15], the end-points method (which assumes measurements are independent in time), and the live cell image method (which assumes time correlations). We first performed this comparison using simulated data. We generated our data by simulating linear birth-death processes and then adding independent Gaussian noise terms to the simulations. We mainly focused on a mixture of two subpopulations, and we were interested in estimating three features of the mixed population: the initial proportion of sensitive cells, the GR50 of the sensitive cells, and the GR50 of the resistant cells. Our first test on the simulated data was to look at confidence interval widths as a measure of estimator precision. In this study, we found that the live cell image method had significantly narrower confidence intervals than the other methods for estimating all three features. We next investigated the performance of our three methods in the setting of small resistant subpopulations, where less than 15% of initial cells are resistant. We found that in this small resistant fraction setting the live cell image method provides a significant improvement in accuracy over the original PhenoPop method. Furthermore, this improvement increases as the initial fraction of resistant cells goes to zero. We also compared the performance of the methods for simulations with increased levels of additive noise and for subpopulations with similar dose response curves. In the scenario of subpopulations with similar dose response curves, we found that the live cell image method has the lowest mean error among the three methods. For increasing additive noise, all three methods perform similarly in terms of estimation accuracy. However, the live cell image method maintains its precision advantage over the other two methods for an observation noise of 10% of the initial cell count, while the advantage disappears at a 50% noise level.
We finally compared the three methods using in vitro data. In particular, we used data from our previous work [15] that considered different seeding mixtures of imatinib-sensitive and imatinib-resistant tumor cells. We then used all three methods to fit this data and used AIC as a model selection tool. We found that the live cell image and end-points methods had significantly better scores than PhenoPop for all four initial mixtures studied. Interestingly, the end-points method had lower AIC scores for three out of the four mixtures studied, even though this data was generated using live-cell imaging techniques.
In our statistical model, there are several important features of cell biology that we have left out. For example, one type of cell may transition to another type of cell via a phenotypic switching mechanism (see e.g., [10,11]). We believe that our current methods should be able to handle this type of switching with little modification, since the underlying stochastic model will be very similar, i.e., a multi-type branching process. Another way the cell types can interact is via competition for scarce resources as the populations approach their carrying capacity. These types of interactions will require new statistical models, since the underlying stochastic processes will no longer be linear birth-death processes. Another interesting direction of future work is to quantify the limits of when we can identify distinct subpopulations. For example, if the resistant subpopulation is present at fraction ϵ, what observation set would allow us to identify the presence of this subpopulation? Finally, our stochastic model assumes that the time between cell divisions is exponential, but this is of course a great simplification. At the cost of a more complex model, it would be possible to incorporate states for the different stages of the cell cycle. We leave this as an open question for future investigation.

Details of the numerical experiments
We define some of the basic algorithms and formulas used in the numerical results.

Generation of simulated data
To simulate data, the parameter set θ_BD(S) (as defined in (6)) is selected uniformly at random from a subset of the parameter space given in Table 1. Note that one can obtain the generating parameter set θ_PP(S) (defined in (1)) from θ_BD(S) directly by setting α_i = β_i − ν_i for each subpopulation. Based on the parameter set θ_BD(S), simulated data is generated according to the statistical model specified in equation (5). Note that data is collected from the simulation continuously during the course of the experiment, to replicate the live-cell imaging experiments.
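A minimal sketch of the simulation step, using the standard Gillespie algorithm for a two-subpopulation mixture of linear birth-death processes; the rates, initial counts, and horizon below are illustrative, not the ranges from Table 1.

```python
import random

def simulate_bd(n0, birth, death, t_end, rng):
    """Gillespie simulation of a linear birth-death process up to time t_end."""
    t, n = 0.0, n0
    while n > 0:
        t += rng.expovariate(n * (birth + death))  # time to next event
        if t > t_end:
            break
        n += 1 if rng.random() < birth / (birth + death) else -1
    return n

rng = random.Random(1)
# Illustrative mixture: 60% sensitive, 40% resistant out of 1000 initial cells
n_s = simulate_bd(600, birth=0.12, death=0.08, t_end=24.0, rng=rng)
n_r = simulate_bd(400, birth=0.10, death=0.02, t_end=24.0, rng=rng)
total = n_s + n_r  # the observed composite count, before additive noise
```

In the actual experiments an independent Gaussian noise term is then added to the composite count at each observation time.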

Maximum likelihood estimation (MLE)
The maximum likelihood estimation was conducted by minimizing the negative log-likelihood, subject to constraints placed on the range of each parameter. The optimization was based on the MATLAB Optimization Toolbox [17] function fmincon with the sequential quadratic programming (sqp) solver. Due to the non-convexity of the negative log-likelihood function, we performed the optimization starting from 100 uniformly sampled initial points within a feasible region. The feasible region places limits on the parameters based on prior knowledge about them. For simulation studies, the feasible region is given by Table 4, and for the in vitro data the feasible region is specified by Table 5. Among all the resulting local optima, the parameter set with the lowest negative log-likelihood was selected as the estimated result.
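The multistart pattern can be sketched as follows; here a toy non-convex 1D objective stands in for the negative log-likelihood, and a crude projected gradient descent stands in for fmincon's sqp solver.

```python
import random

def neg_log_lik(x):
    """Toy non-convex objective with two local minima (stand-in only)."""
    return (x * x - 1.0) ** 2 + 0.3 * x

def local_minimize(f, x0, lo, hi, lr=0.01, steps=2000, h=1e-6):
    """Projected gradient descent with a central-difference derivative."""
    x = x0
    for _ in range(steps):
        g = (f(x + h) - f(x - h)) / (2 * h)
        x = min(hi, max(lo, x - lr * g))  # project back into [lo, hi]
    return x, f(x)

rng = random.Random(0)
lo, hi = -2.0, 2.0
# Multistart: optimize from 100 uniformly sampled initial points in the
# feasible region, then keep the local optimum with the lowest objective.
starts = [rng.uniform(lo, hi) for _ in range(100)]
best_x, best_f = min((local_minimize(neg_log_lik, s, lo, hi) for s in starts),
                     key=lambda pair: pair[1])
```

The global minimum near x ≈ −1.04 is found even though many starts fall into the basin of the inferior local minimum near x ≈ 0.96; the same logic motivates the 100 restarts above.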

Bootstrapping
In the simulated experiments, bootstrapping is used to quantify the uncertainty in the MLE estimator. In particular, 20 independent replicates of data, measured at 11 concentration values D and 13 time points T, are generated from the parameters θ_BD(S) at the beginning of the experiment. Bootstrapping is then employed to randomly re-sample 13 replicates from those 20 replicates with replacement, 100 times. From each set of 13 resampled replicates an MLE for the parameter set θ_BD is computed. With 100 MLEs for θ_BD, confidence intervals can then be constructed using the empirical quantiles of the estimators.
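The resampling scheme can be sketched as below; for brevity the per-resample "estimator" is a simple sample mean rather than a full MLE refit, and the 20 per-replicate estimates are hypothetical.

```python
import random

def bootstrap_ci(replicates, n_boot=100, k=13, level=0.95, seed=0):
    """Percentile bootstrap CI: resample k replicates with replacement n_boot
    times, compute the statistic on each resample, take empirical quantiles."""
    rng = random.Random(seed)
    stats = sorted(
        sum(rng.choice(replicates) for _ in range(k)) / k
        for _ in range(n_boot)
    )
    lo_idx = int((1 - level) / 2 * n_boot)
    hi_idx = int((1 + level) / 2 * n_boot) - 1
    return stats[lo_idx], stats[hi_idx]

# 20 hypothetical per-replicate estimates of the sensitive proportion p_s
reps = [0.48, 0.51, 0.47, 0.52, 0.50, 0.49, 0.53, 0.46, 0.50, 0.51,
        0.49, 0.48, 0.52, 0.50, 0.47, 0.51, 0.49, 0.50, 0.52, 0.48]
ci_lo, ci_hi = bootstrap_ci(reps)
```

In the paper's setting each resample would instead feed the full MLE pipeline, yielding 100 parameter estimates whose empirical quantiles define the intervals.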

GR 50
Our goal is to estimate the number of subpopulations, the initial mixture proportions p_i, and the drug sensitivity of each cellular subpopulation. The GR50, introduced in [12], is a summary metric of drug sensitivity. It is defined as the concentration at which a drug's effect on cell growth is half of the maximum observed effect. Note that at the maximum concentration level, the drug may not reach its theoretical maximum effect.
In the context of the model, the GR50 can be defined as follows. Denote the maximum dosage applied as d_m, and let the dose effect for subpopulation i be the 3-parameter Hill function H_i(d) = b_i d^{m_i} / (E_i^{m_i} + d^{m_i}), so that the half-maximum effect for subpopulation i is H_i(d_m)/2. Solving H_i(GR50_i) = H_i(d_m)/2 then gives the explicit formula for subpopulation i: GR50_i = (E_i^{m_i} d_m^{m_i} / (2 E_i^{m_i} + d_m^{m_i}))^{1/m_i}. When S = 2, we will denote the higher GR50 as either the resistant GR50 or GR_r, and the lower GR50 as either the sensitive GR50 or GR_s. In this setting, parameters for the sensitive subpopulation and the resistant subpopulation, respectively, are denoted by subscripts s and r, e.g., E_s and E_r.
For these specific concentration levels and time points, we have chosen the threshold values of T_L = 21 and D_L = 1 in the PhenoPop model.
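Assuming the dose effect follows the 3-parameter Hill form H(d) = b·d^m/(E^m + d^m) (our reading of the model; the amplitude b cancels in the calculation), the GR50 can be evaluated as the dose where the effect equals half the effect at the maximum dose; the Hill parameters below are taken from the illustrative example of Figure 3, and the maximum dose d_max = 10 is an assumption.

```python
def hill(d, b, E, m):
    """3-parameter Hill drug effect: b * d^m / (E^m + d^m)."""
    return b * d ** m / (E ** m + d ** m)

def gr50(E, m, d_max):
    """Dose g solving H(g) = H(d_max) / 2; b cancels, giving
    g = (E^m * d_max^m / (2*E^m + d_max^m))^(1/m)."""
    return (E ** m * d_max ** m / (2 * E ** m + d_max ** m)) ** (1.0 / m)

# Sensitive-subpopulation Hill parameters from the Figure 3 example
g = gr50(E=0.0674, m=4.5404, d_max=10.0)
```

When d_max ≫ E the half-maximum effect is close to b/2 and GR50 ≈ E, which is why the estimated GR50's in the figures sit near the corresponding E values.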

Optimization feasible region
When performing numerical optimization, the parameters are restricted to a physically realistic region. Unless otherwise noted, the optimization was performed using 100 initial points sampled uniformly from the region given in Table 4.
Table 4: Feasible interval for each parameter.

Estimation on in vitro experimental data
When solving the maximum likelihood optimization problems for the in vitro data of Section 3.2, the optimization feasible region was chosen to be the same as the feasible region used in [15] for the Ba/F3 data, i.e.,

Table 5: Optimization Feasible region
We solved each optimization problem 500 times starting from randomly chosen initial points.

Proof of Proposition 2
Given a set of time points T = {t_1, . . . , t_k}, we first show for any t_j, 1 ≤ j ≤ k, that: Here σ²_g(t) is the variance of the linear birth-death process for subpopulation g, defined in equation (4). Following an argument from Ethier and Kurtz [7], we have the decomposition: By assuming that the initial proportion p_g for subtype g is independent of n and using the Law of Large Numbers, as n → ∞, we have: By assuming that the maximum number of time points N_T and the length of the time intervals t_i − t_j for any i ≥ j are both bounded and do not depend on n, the Law of Large Numbers also ensures that for any ℓ ∈ {1, . . . , N_T}, X_g(t_{ℓ−1}) diverges to infinity as n → ∞. Therefore, we may apply the Central Limit Theorem to the following term: Thus, we conclude that: Next we show that the random vector W converges to the random vector Y, which has a multivariate normal distribution. We can obtain the distribution of Y from the independence between the subpopulations, where the covariance involves terms of the form p_g e^{λ_g(t_i − t_ℓ)} e^{λ_g(t_j − t_ℓ)}: Then, using the Cramér–Wold device with a constant vector a ∈ R^k, we have: Thus, we have shown that W ⇒ Y.

Proof of Proposition 2 when the initial proportions can go to 0 with n
In proving Proposition 2, we made the assumption that the initial proportions p_i for i = 1, . . . , S do not depend on the initial cell count n. In this subsection, we relax this assumption and demonstrate a similar result. Note that we still assume all other inputs, i.e., S and T, are independent of n.
In particular, we will allow p_i to depend on n, and allow for lim sup_n p_i n < ∞. We denote two sets of subpopulations, F = {g : lim sup_n p_g n < ∞} and I = {g : p_g n → ∞ as n → ∞}. Since the proportions sum to one and S is fixed with n, the set I must not be an empty set. Then, following a similar pattern as the proof of Proposition 2, we derive: Next, we may consider these two double sums separately. For g ∈ I, because p_g n diverges to infinity as n → ∞, the first double sum converges to the same limit we established in Proposition 2. For g ∈ F, p_g n stays bounded. Therefore, in the second double sum, p_g n / n converges to 0 as n → ∞, which makes the second double sum vanish. In conclusion, we have: Note that this convergence result leads to the same realization in practice, i.e., we define the covariance by summing over all subtypes. This is because in practice we only have n < ∞, and we cannot actually assume that any subpopulation has zero contribution to the covariance.

Exact path likelihood computation
Here we show how to calculate the exact likelihood of a sample path observation of the total cell count of multiple heterogeneous birth-death processes. While we do not use this approach for likelihood evaluation in the current manuscript, we report it here to demonstrate why it is not computationally feasible.
We first consider the joint probability of a sample path of a homogeneous linear birth-death process. For ease of notation, define the transition probability p_{i,j}(t_k − t_{k−1}) = P(X(t_k) = j | X(t_{k−1}) = i, θ_BD(2)). It is important to note that evaluating p_{i,j}(t_k − t_{k−1}) is the most computationally demanding task when evaluating the joint probability. As a result, we mainly consider the number of evaluations of this transition probability. The analytical form for this transition probability was derived in [1] and is given by equation (15). Note that numerical evaluation of (15) will be computationally expensive due to the presence of multiple factorial terms. We use a Gosper-refined version of the Stirling formula [8] to approximate these factorials, and we find that this approximation leads to good performance in our examples. It is thus straightforward to evaluate the path likelihood for the case of a homogeneous linear birth-death process.
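Gosper's refinement replaces Stirling's √(2πn) with √((2n + 1/3)π), i.e. n! ≈ √((2n + 1/3)π)·(n/e)^n, which is accurate enough for the factorial terms in (15); a log-space sketch:

```python
import math

def log_factorial_gosper(n):
    """Gosper's refinement of Stirling:
    ln n! ≈ 0.5*ln((2n + 1/3)*pi) + n*ln n - n."""
    if n == 0:
        return 0.0
    return 0.5 * math.log((2 * n + 1.0 / 3.0) * math.pi) + n * math.log(n) - n

exact = math.lgamma(51)              # ln(50!) via the log-gamma function
approx = log_factorial_gosper(50)
rel_err = abs(exact - approx) / exact
```

Working in log space also avoids the overflow that evaluating the raw factorials in (15) would cause for realistic cell counts.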
If we instead have observations of a sum of birth-death processes, the evaluation of the path likelihood is much more difficult. In particular, the sum of the birth-death processes is no longer a Markov process, and we must therefore sum over the possible values of our unobserved subpopulations. Specifically, if we have two subpopulations we can reformulate equation (10) as: Note that the last equality is due to the assumption that the subpopulations grow independently, allowing us to decompose the joint probability of the mixture cell count into a summation of joint probabilities of homogeneous linear birth-death processes. It is not hard to see that if we naively evaluate the above sum, the number of computations of the homogeneous joint probability is around Ω((min_{t∈T} x_t)^{N_T}). Since many examples have N_T ≈ 10 and x_t ≈ 100, this is clearly an infeasible approach.
In order to avoid the exponential dependence on the number of time points, one option is to use techniques from Hidden Markov Models (HMMs). The main assumptions of an HMM are the Markov property of the hidden process and that the hidden process relates to the observable process according to a specified distribution B. Recall that the time series of observed total cell counts is given by {X(t_i); i ∈ T}, and denote the time series of the subpopulations as {(X_1(t_i), . . . , X_S(t_i)); i ∈ T}. Then {X(t_i); i ∈ T} is the observable process, and {(X_1(t_i), . . . , X_S(t_i)); i ∈ T} is the hidden Markov process, due to the Markov property of the linear birth-death process. Notice that the relationship between the hidden process and the observable process can be defined as X(t_i) = X_1(t_i) + · · · + X_S(t_i). We can thus translate the live-cell imaging experiment into an HMM and significantly improve the computational complexity of evaluating the exact likelihood function (7). In particular, we can use standard HMM techniques, such as the forward-backward procedure, to reduce the total number of transition probability computations, i.e., evaluations of equation (15), to Θ(H² N_T) for one replicate at one dosage level, where H is the number of hidden states. In particular, we need to calculate the H-by-H transition matrix for every time point, and if we assume the time intervals have identical lengths, we can reduce the bound to Θ(H²). However, the number of hidden states depends on both the maximum total number of cells observed at each time point, x, and the number of subpopulations S.
As we will now show, this is unfortunately not a sufficient reduction in computational complexity. In particular, assume we observe x total cells. In that case, the number of hidden states is given by C(x−1, S−1), the number of ways to split x cells into S positive subpopulation counts, and assuming that x ≫ S, we have C(x−1, S−1) = Θ(x^{S−1}) as x → ∞. With only two subpopulations this results in a computational complexity of Θ(x²), and in many experiments we might have x ≈ 10^5, leading to an extremely high computational burden. If S = 3, we end up with the far worse computational complexity of Θ(x⁴).
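The hidden-state count C(x−1, S−1) and the resulting Θ(H²) forward-backward cost per time step can be checked directly; the total cell count x below is an assumed illustrative value.

```python
from math import comb

def num_hidden_states(x, S):
    """Ways to split x observed cells into S positive subpopulation counts."""
    return comb(x - 1, S - 1)

x = 10_000                      # assumed total cell count at one time point
h2 = num_hidden_states(x, 2)    # Theta(x) hidden states for S = 2
h3 = num_hidden_states(x, 3)    # Theta(x^2) hidden states for S = 3
cost2 = h2 ** 2                 # forward-backward cost per time step, S = 2
```

Even for S = 2 the per-step cost is roughly 10^8 transition-probability evaluations, which illustrates why the HMM route remains impractical.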
In conclusion, using a naive approach to compute the joint probability of the mixture cell count would require Ω((min_{t∈T} x_t)^{N_T}) computations of the transition probability, which is clearly infeasible when many cell counts are in the thousands with over 10 observed time points. We also discussed an HMM-based approach to evaluating the likelihood that results in a significant reduction in the computational burden. In particular, with this approach we can reduce the number of computations of the transition probability to Ω((min_{t∈T} x_t)²). Unfortunately, we might have min_{t∈T} x_t ≈ 5000. Note that this is the complexity for evaluating the likelihood of a single replicate at a single dose; taking into account that we can have more than 10 different doses, with more than 10 replicates at each dose, we see that, unfortunately, this HMM approach is also computationally infeasible. In addition, the HMM approach has significantly worse computational complexity once we include observation noise terms and/or more than 2 subpopulations.

Data and code availability
All data and code used for running experiments, model fitting, and plotting are available in a GitHub repository at https://github.com/chenyuwu233/PhenoPop_stochastic. The required Matlab version is Matlab R2022a or newer.
Figure 4: Absolute log ratio accuracy of the three estimators {p̂_s, ĜR_s, ĜR_r} using the PhenoPop, end-points and live cell image methods. The results are summarized over 30 different simulated datasets. This figure demonstrates that there are no significant differences in estimation accuracy among these three methods when the true parameters fall within the range described in Table 1.

Figure 1: One-to-one mixtures of imatinib-sensitive and imatinib-resistant Ba/F3 cells are counted at 14 different time points under 11 different concentrations of imatinib. Error bars, based on 14 replicates with outliers removed, depict the sample standard deviation, which increases with larger cell counts.

Figure 3: Estimation of the initial proportion and GR50 for 2 subpopulations using the end-points method and the live cell image method on simulated data. The parameter vector θ_BD(2) and observation noise c used in this example are p_s = 0.4856, β_s = 0.1163, ν_s = 0.0176, b_s = 0.8262, E_s = 0.0674, m_s = 4.5404, p_r = 0.5144, β_r = 0.4624, ν_r = 0.3978, b_r = 0.8062, E_r = 1.5776, m_r = 4.2002, c = 1.2103. The pie chart illustrates the average of all bootstrap estimates for the initial proportion, while the box plot summarizes the distribution of the estimates for the GR50's. The vertical dashed lines in the box plot correspond to the true GR50 values employed to generate the data, while the vertical solid lines indicate the concentration levels at which the data were collected. Each color in the plot represents a distinct subpopulation: orange for sensitive and blue for resistant. The shaded areas in the box plot indicate the concentration intervals where the true GR50's are located, and the colored dots mark outliers in the estimation of the GR50 for each subpopulation, with red for sensitive and blue for resistant. This example demonstrates that our newly proposed models can accurately recover the initial proportion and GR50 values with high precision.

Figure 5: Comparison of the normalized CI widths of the three estimators {p̂_s, ĜR_s, ĜR_r} estimated with the three different methods. The y-axis represents the normalized CI width. The box plot summarizes the results across 30 different simulated datasets. The significance bars indicate the p-values derived from the Wilcoxon rank-sum test, with significance levels denoted as *** : p ≤ 0.001, ** : p ≤ 0.01, * : p ≤ 0.05. This figure demonstrates that the newly proposed models exhibit significant advantages in estimation precision, with the live cell image method demonstrating the highest level of precision.

Figure 6: Estimation of the initial proportion and GR50 for 3 subpopulations using the three estimation methods. The parameter vector θ_BD(3) and the observation noise in this example are p_s = 0.2135, β_s = 0.3214, ν_s = 0.2773, b_s = 0.8782, E_s = 0.0344, m_s = 2.5998, p_m = 0.2718, β_m = 0.7334, ν_m = 0.6776, b_m = 0.8506, E_m = 0.3558, m_m = 4.6055, p_r = 0.5147, β_r = 0.0683, ν_r = 0.0253, b_r = 0.8614, E_r = 1.5764, m_r = 4.4706, c = 9.5209. The pie chart illustrates the average of all bootstrap estimates for the initial proportion, while the box plot summarizes the distribution of all estimates for the GR50's. The vertical dashed lines in the box plot correspond to the true GR50 values employed to generate the data, while the vertical solid lines indicate the concentration levels at which the data were collected. Each color in the plot represents a distinct subpopulation: orange for sensitive, blue for moderate, and yellow for resistant. The shaded areas in the box plot indicate the concentration intervals where the true GR50's are located, and the colored dots mark outliers in the estimation of the GR50 for each subpopulation, with red for sensitive, blue for moderate, and yellow for resistant.

Figure 7: An illustrative example under the high observation noise scenario, i.e., c = 500. The parameter vector θ_BD(2) and the observation noise in this example are p_s = 0.3690, β_s = 0.4380, ν_s = 0.3422, b_s = 0.8398, E_s = 0.0813, m_s = 3.9647, p_r = 0.6310, β_r = 0.5320, ν_r = 0.4767, b_r = 0.8674, E_r = 1.9793, m_r = 4.8357, c = 500. Results are presented as in Figure 3. This example demonstrates that all three methods are capable of recovering the initial proportion and GR50 even under the high observation noise scenario.

Figure 8: Estimation error of {p̂_s, ĜR_s, ĜR_r} with respect to the standard deviation of the observation noise. The metric of estimation error is the mean absolute log ratio of estimates across 30 simulated datasets, each generated from a distinct parameter set. The value of the observation noise parameter c in these 30 generating parameter sets was assigned each of the 5 values in the set C = {100, 200, 300, 400, 500} to generate the line plot. The three line plots correspond to the three methods, as indicated by the figure legend. This figure demonstrates that the estimates of all three methods deteriorate as the level of observation noise increases.

Figure 9: Comparison of the normalized CI widths of the three estimators {p̂_s, ĜR_s, ĜR_r} using the three different estimation methods, when the observation noise parameter is set to c = 100. The y-axis represents the normalized CI width. The box plot summarizes the results across 30 different datasets. The significance bars indicate the p-values derived from the Wilcoxon rank-sum test, with significance levels denoted as *** : p ≤ 0.001, ** : p ≤ 0.01, * : p ≤ 0.05. This figure demonstrates that the advantages of the live cell image method in estimation precision are preserved even when the standard deviation of the observation noise is 10% of the initial cell count.

Figure 10: Comparison of the normalized CI widths of the three estimators {p̂_s, ĜR_s, ĜR_r} using the three different estimation methods, when the observation noise parameter is set to c = 500. Results are presented as in Figure 9. This figure demonstrates that the advantages of the live cell image method in estimation precision become less significant as the standard deviation of the observation noise increases to 50% of the initial cell count.

Figure 11: An illustrative example under the unbalanced initial proportion scenario, i.e., p_s = 0.99. The parameter vector θ_BD(2) and the observation noise in this example are p_s = 0.9900, β_s = 0.4301, ν_s = 0.4199, b_s = 0.8644, E_s = 0.0768, m_s = 4.3186, p_r = 0.0100, β_r = 0.1458, ν_r = 0.1258, b_r = 0.8565, E_r = 0.5348, m_r = 3.7518, c = 4.8400. Results are presented as in Figure 3. This example demonstrates that our newly proposed model can accurately estimate the parameters even when the initial proportion of the resistant subpopulation is negligible, while the PhenoPop method fails to estimate the parameters accurately.

Figure 12: Estimation error of {p̂_s, ĜR_s, ĜR_r} with respect to varying resistant initial proportions. The metric of estimation error is the mean absolute log ratio across 100 simulated datasets, each generated from a distinct parameter set. The value of p_s in these 100 generating parameter sets was assigned each of the 4 values in the set P = {0.85, 0.90, 0.95, 0.99} to generate the line plot. The three line plots correspond to the three methods, as indicated by the figure legend. This figure demonstrates the estimation accuracy advantage of the newly proposed methods as the initial proportion of the resistant subpopulation decreases toward 0.

Figure 13: An illustrative example under the similar subpopulation sensitivity scenario. The parameter vector θ_BD(2) and the observation noise in this example are p_s = 0.3263, β_s = 0.8896, ν_s = 0.8215, b_s = 0.8820, E_s = 0.0654, m_s = 3.8539, p_r = 0.6737, β_r = 0.0925, ν_r = 0.0661, b_r = 0.8171, E_r = 0.1500, m_r = 3.6015, c = 7.6660. Results are presented as in Figure 3. This example demonstrates that all three methods are capable of recovering the initial proportion and GR50 even when the two subpopulations have similar drug sensitivity, and that the newly proposed methods exhibit superior estimation precision compared to the PhenoPop method.

Figure 14: Estimation error of {p̂_s, ĜR_s, ĜR_r} with respect to varying similarity between subpopulation drug sensitivities. The metric of estimation error is the mean absolute log ratio across 80 simulated datasets, each generated from a distinct parameter set. The value of E_r in these 80 generating parameter sets was assigned each of the 5 values in the set E = {0.15, 0.3, 0.45, 0.85, 2} to generate the line plot. The three line plots correspond to the three methods, as indicated by the figure legend. This figure demonstrates that the estimation accuracy of all three methods improves as the discrepancy in drug sensitivity between the two subpopulations increases, with the live cell image method exhibiting the smallest average error among the three methods.