Uncoupled Analysis of Stochastic Reaction Networks in Fluctuating Environments

The dynamics of stochastic reaction networks within cells are inevitably modulated by factors extrinsic to the network, such as fluctuations in ribosome copy numbers for a gene regulatory network. While several recent studies demonstrate the importance of accounting for such extrinsic components, the resulting models are typically hard to analyze. In this work we develop a general mathematical framework that uncouples the network from its dynamic environment by incorporating only the environment's effect on the network into a new model. More technically, we show how such fluctuating extrinsic components (e.g., chemical species) can be marginalized in order to obtain this uncoupled model. We derive its corresponding process- and master equations and show how stochastic simulations can be performed. Using several case studies, we demonstrate the significance of the approach.

The hazard function of the uncoupled reaction network corresponding to the environmental process Z is obtained through the innovation theorem for counting processes ([1,2] and ref. [17] in the main text). For static environments, such reaction hazards have been derived previously (ref. [9] in the main text and [9]), but an equivalent calculation is possible in the case of fluctuating environments. For completeness, we provide a simple derivation in the following. Conceptually, the construction of such a process can be understood as a marginalization of the process dynamics with respect to Z. The critical point is to anticipate that the marginal process will depend on the full process history $X_t$. We can then write the marginal jump probability of the k-th reaction as

$$
\begin{aligned}
P(X(t+\Delta t) = x + \nu_k \mid X_t)
&= \int P(X(t+\Delta t) = x + \nu_k \mid Z(t) = z, X(t) = x)\, p(Z(t) = z \mid X_t)\,\mathrm{d}z \\
&= \int \Delta t\, c_k(z)\, g_k(x)\, p(Z(t) = z \mid X_t)\,\mathrm{d}z \\
&= \Delta t\, \mathrm{E}\left[c_k(Z(t)) \mid X_t\right] g_k(x) = \lambda_k(X_t)\,\Delta t. \qquad (1)
\end{aligned}
$$
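The marginalization in (1) can be checked by Monte Carlo: averaging the jump probability over Z must match the uncoupled hazard built from the conditional expectation. The sketch below uses a made-up linear rate $c_k(z) = 0.3z$, a constant state part $g_k(x)$, and a Gamma distribution standing in for $p(z \mid X_t)$; none of these values come from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ingredients: linear rate c_k(z) = 0.3*z, state part g_k(x) = 2.0,
# and a Gamma "posterior" p(z | X_t) standing in for the conditional distribution.
c_k = lambda z: 0.3 * z
g_k = 2.0
alpha, beta = 4.0, 2.0           # shape / inverse scale of p(z | X_t)
dt = 1e-3
n = 1_000_000

z = rng.gamma(alpha, 1.0 / beta, size=n)     # z ~ p(z | X_t)
jump = rng.random(n) < c_k(z) * g_k * dt     # fire with prob c_k(z) g_k(x) dt

# Marginal jump probability vs. the uncoupled hazard lambda_k = E[c_k(Z)|X_t] g_k(x)
p_marginal = jump.mean() / dt
lam = 0.3 * (alpha / beta) * g_k
print(p_marginal, lam)
```

With a mean of $\alpha/\beta = 2$, both quantities should agree up to sampling error.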
For small $\Delta t$, we can write the one-step prior distribution as

$$ p(z, t+\Delta t \mid X_t) = p(z \mid X_t) + \Delta t\, \mathcal{A}\, p(z \mid X_t) + O(\Delta t^2), $$

with $\mathcal{A}$ as the forward operator of the environmental dynamics. Hence, in case no reaction happens in $[t, t+\Delta t)$, we obtain

$$ p(z \mid X_{t+\Delta t}) = \frac{\left(1 - \Delta t\, c_k(z)\, g_k(x)\right) p(z, t+\Delta t \mid X_t)}{1 - \Delta t\, \mathrm{E}\left[c_k(Z(t)) \mid X_t\right] g_k(x)}. $$

Taking the limit $\Delta t \to 0$ yields the temporal change in p. Using a simpler notation (with $c_k(z) = c_k z$), we can write the differential change in case no reaction happens as

$$ \dot{p}(z \mid X_t) = \mathcal{A}\, p(z \mid X_t) - c_k g_k(x) \left(z - M_1(t)\right) p(z \mid X_t), $$

with $M_1(t) = \mathrm{E}[Z(t) \mid X_t]$ as the posterior expectation of Z. If one reaction happens, we can write the one-step posterior distribution as

$$ p(z \mid X_{t+\Delta t}) = \frac{c_k(z)\, p(z, t+\Delta t \mid X_t)}{\mathrm{E}\left[c_k(Z(t)) \mid X_t\right]}. $$

Since the posterior jumps instantaneously when the reaction happens, the derivatives are not defined. Instead, we compute the increments as $\Delta t$ approaches zero. This yields

$$ \Delta p(z \mid X_t) = \left( \frac{z}{M_1(t)} - 1 \right) p(z \mid X_t). $$

Finally, the overall time-evolution of p is given by the stochastic equation

$$ \mathrm{d}p(z \mid X_t) = \left[ \mathcal{A}\, p(z \mid X_t) - c_k g_k(X(t)) \left(z - M_1(t)\right) p(z \mid X_t) \right] \mathrm{d}t + \left( \frac{z}{M_1(t)} - 1 \right) p(z \mid X_t)\, \mathrm{d}R_k(t). $$

Note that this equation depends on the conditional mean of Z, such that it is tedious to solve directly. In contrast, the unnormalized density provided above can be solved and normalized in a subsequent step. Nevertheless, the normalized distribution allows a simple computation of the posterior moments of Z.
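For intuition, the filtering updates take a particularly simple conjugate form when the environment is static and Gamma-distributed and drives a unit-rate zero-order channel: each firing adds one to the shape parameter and the elapsed time adds to the inverse scale. This can be verified by simulation; all parameter values below are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch: for a *static* environment Z ~ Gamma(alpha, beta) driving a unit
# zero-order channel (firing intensity = z), the conditional distribution
# Z | X_t stays Gamma: each firing adds 1 to the shape, elapsed time adds to
# the inverse scale. We verify E[Z | N firings by time T] = (alpha+N)/(beta+T).
alpha, beta, T = 3.0, 1.5, 2.0
z = rng.gamma(alpha, 1.0 / beta, size=500_000)
firings = rng.poisson(z * T)                 # number of k-reactions in [0, T]

n = 4                                        # condition on observing 4 firings
emp = z[firings == n].mean()
filt = (alpha + n) / (beta + T)
print(emp, filt)
```

The empirical conditional mean and the conjugate filter prediction should agree up to Monte Carlo error.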

S.3 Approximate moment dynamics
In cases where the conditional process Z(t) | Xt cannot be fully characterized by a finite number of moments, one can employ suitable approximations to obtain a closed moment system (e.g., ref. [24] in the main text). Such schemes, commonly referred to as moment-closure, aim to approximate the moment dynamics of a system using certain distributional assumptions. It has been shown that moment-closure techniques may yield reasonable approximation performance in several practical scenarios (see e.g., ref. [8] in the main text). In other cases, however, they might not correctly capture the desired moments, and moreover, there is no principled way of assessing the quality of a particular closure "beforehand", i.e., without performing extensive stochastic simulations. Those problems are inherent if one aims to approximate Z(t) without including further information. In contrast, the conditional process Z(t) | Xt turns out to be more straightforward to approximate. This stems from the fact that due to the conditioning, the distribution over Z(t) will generally be more informative than the unconditional distribution. This can be understood via Bayes' theorem: a complicated and broad prior distribution is significantly harder to approximate by a simple distribution (e.g., a Gaussian) than a posterior distribution that is obtained after observing data. The more data (i.e., information) is included, the tighter and more symmetric it becomes. While such arguments appear largely qualitative, they can be rigorously formulated using concepts from asymptotic theory, such as large-sample properties of Bayesian estimators [4]. In fact, our simulations indicated that the approximation accuracy of the uncoupled dynamics often shows little sensitivity with respect to the particular closure function. For univariate environments, we consistently used the Gamma-type of closure described in the main text.
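As an illustration of the Gamma-type closure, the third central moment of a Gamma distribution is fully determined by its mean M1 and variance S2 as $m_3 = 2 S_2^2 / M_1$; the minimal check below (with arbitrary parameters) compares this closure against the exact Gamma moment.

```python
import numpy as np

# Gamma-type closure (sketch): for a Gamma distribution the third central
# moment is determined by mean M1 and variance S2 as m3 = 2*S2**2/M1.
def gamma_closure_m3(M1, S2):
    return 2.0 * S2**2 / M1

alpha, beta = 5.0, 2.0               # arbitrary shape / inverse scale
M1, S2 = alpha / beta, alpha / beta**2
m3_exact = 2.0 * alpha / beta**3     # third central moment of Gamma(alpha, beta)
print(gamma_closure_m3(M1, S2), m3_exact)
```

For a Gamma distribution the closure is exact, so the two values coincide.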
For the multivariate case, we applied the second-order zero-cumulant closure, in which the third-order moments are approximated by the first- and second-order moments as

$$ \mathrm{E}[Z_i Z_j Z_l] = \mathrm{E}[Z_i Z_j]\,\mathrm{E}[Z_l] + \mathrm{E}[Z_i Z_l]\,\mathrm{E}[Z_j] + \mathrm{E}[Z_j Z_l]\,\mathrm{E}[Z_i] - 2\,\mathrm{E}[Z_i]\,\mathrm{E}[Z_j]\,\mathrm{E}[Z_l]. $$
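A sketch of this closure function, verified on a multivariate Gaussian, for which all third cumulants vanish so the closure is exact; the mean vector and covariance matrix are made up.

```python
import numpy as np

rng = np.random.default_rng(3)

# Second-order zero-cumulant closure (sketch): setting third cumulants to zero
# expresses E[Zi Zj Zl] through first- and second-order moments.
def zc_closure(m_i, m_j, m_l, m_ij, m_il, m_jl):
    return m_ij * m_l + m_il * m_j + m_jl * m_i - 2.0 * m_i * m_j * m_l

# For a multivariate Gaussian all third cumulants vanish, so the closure is exact.
mean = np.array([1.0, 2.0, 0.5])
cov = np.array([[1.0, 0.3, 0.1],
                [0.3, 2.0, 0.4],
                [0.1, 0.4, 1.5]])
Z = rng.multivariate_normal(mean, cov, size=1_000_000)

emp = (Z[:, 0] * Z[:, 1] * Z[:, 2]).mean()
closed = zc_closure(mean[0], mean[1], mean[2],
                    cov[0, 1] + mean[0] * mean[1],     # non-central second moments
                    cov[0, 2] + mean[0] * mean[2],
                    cov[1, 2] + mean[1] * mean[2])
print(emp, closed)
```

The empirical third-order moment and the closed expression should agree up to sampling error.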

S.4 Derivation of the effective noise
We assume now that Z(t) modulates X(t) through a zero-order reaction with index k. We have demonstrated in the main text that at any time t the total variance of Z(t) splits up into two terms: (i) the suppressed noise and (ii) the effective noise, i.e.,

$$ \mathrm{Var}[Z(t)] = \mathrm{E}\left[\mathrm{Var}[Z(t) \mid X_t]\right] + \mathrm{Var}\left[\mathrm{E}[Z(t) \mid X_t]\right]. $$

In order to quantify the former, we rewrite the conditional moments in terms of central instead of non-central moments. In particular, we obtain for the mean and variance

$$ \mathrm{d}M_1(t) = \bar{D}_1(t)\,\mathrm{d}t - c_k S_2(t)\,\mathrm{d}t + \frac{S_2(t)}{M_1(t)}\,\mathrm{d}R_k(t), $$
$$ \mathrm{d}S_2(t) = \left( \bar{D}_2(t) - c_k \bar{M}_3(t) \right) \mathrm{d}t + \left( \frac{\bar{M}_3(t)}{M_1(t)} - \frac{S_2^2(t)}{M_1^2(t)} \right) \mathrm{d}R_k(t), \qquad (13) $$

with $S_2(t)$ as the conditional variance $\mathrm{Var}[Z(t) \mid X_t]$, $\bar{M}_3(t)$ as the conditional third central moment and $\bar{D}_2(t)$ as the unconditional central moment dynamics of order two. We next need to compute the expected value of $S_2(t)$. Decomposing $\mathrm{d}R_k(t)$ into a predictable part and a martingale, i.e., $\mathrm{d}R_k(t) = c_k M_1(t)\,\mathrm{d}t + \mathrm{d}Q_k(t)$, we can rewrite Eq. (13) as

$$ \mathrm{d}S_2(t) = \left( \bar{D}_2(t) - c_k \frac{S_2^2(t)}{M_1(t)} \right) \mathrm{d}t + \left( \frac{\bar{M}_3(t)}{M_1(t)} - \frac{S_2^2(t)}{M_1^2(t)} \right) \mathrm{d}Q_k(t). \qquad (14) $$

Taking the expectation of Eq. (14), all terms involving $\mathrm{d}Q_k(t)$ become zero and we obtain

$$ \frac{\mathrm{d}}{\mathrm{d}t}\,\mathrm{E}[S_2(t)] = \mathrm{E}[\bar{D}_2(t)] - c_k\, \mathrm{E}\left[ \frac{S_2^2(t)}{M_1(t)} \right]. \qquad (15) $$

Although Eq. (15) is fully general, it might be hard to evaluate the expectation $\mathrm{E}\left[S_2^2(t)/M_1(t)\right]$ (and possibly further terms stemming from the prior dynamics $\bar{D}_2(t)$).
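The split into suppressed and effective noise is an instance of the law of total variance and can be checked by Monte Carlo. The sketch below uses the conjugate Gamma-Poisson pair from Section S.4.1 as a stand-in for the conditional moments; the parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sketch of the noise decomposition: the total variance of Z splits into the
# suppressed part E[Var[Z | X]] and the effective part Var[E[Z | X]]
# (law of total variance), illustrated with Z ~ Gamma observed through Poisson.
alpha, beta = 4.0, 2.0
z = rng.gamma(alpha, 1.0 / beta, size=1_000_000)
x = rng.poisson(z)

M1 = (alpha + x) / (beta + 1.0)        # conditional mean     E[Z | X = x]
S2 = (alpha + x) / (beta + 1.0)**2     # conditional variance Var[Z | X = x]

total = z.var()
suppressed = S2.mean()                 # noise removed by conditioning on X
effective = M1.var()                   # noise that propagates to the network
print(total, suppressed + effective)
```

The two parts should sum to the total variance up to sampling error.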

S.4.1 Effective noise of a Cox-Ingersoll-Ross process
Let us consider the case where Z(t) follows a CIR process as defined in the main text, i.e., $\mathrm{d}Z(t) = \theta(\mu - Z(t))\,\mathrm{d}t + \sigma\sqrt{Z(t)}\,\mathrm{d}W(t)$. The expected central moments are then governed by

$$ \frac{\mathrm{d}}{\mathrm{d}t}\,\mathrm{E}[M_1(t)] = \theta\left(\mu - \mathrm{E}[M_1(t)]\right), \qquad \frac{\mathrm{d}}{\mathrm{d}t}\,\mathrm{E}[S_2(t)] = -2\theta\,\mathrm{E}[S_2(t)] + \sigma^2\,\mathrm{E}[M_1(t)] - c_k\,\mathrm{E}\left[\frac{S_2^2(t)}{M_1(t)}\right]. $$

The only term that remains to be specified is the expectation $\mathrm{E}\left[S_2^2(t)/M_1(t)\right]$. Fortunately, it turns out that for a Gamma-type conditional distribution, this expectation simplifies to $\mathrm{E}[S_2(t)]^2/\mathrm{E}[M_1(t)]$. A derivation of that fact can be performed using the extension of Ito's lemma for counting processes. However, since it involves a multitude of technicalities that are beyond the scope of this study, we skip the individual steps. Instead, we provide a heuristic but substantially simpler explanation based on the fact that the CIR process is conjugate to the Poissonian reaction channel. In particular, we consider the case of a Gamma-distributed random variable Z ∼ G(α, β), with α and β as shape and inverse-scale parameters. The random variable is observed through a Poissonian measurement X | Z = z ∼ Poiss(z). After observing X, the conditional distribution over Z is given by

$$ Z \mid X = x \sim G(\alpha + x, \beta + 1). $$

Furthermore, the conditional mean and variance are

$$ \mathrm{E}[Z \mid X = x] = \frac{\alpha + x}{\beta + 1}, \qquad \mathrm{Var}[Z \mid X = x] = \frac{\alpha + x}{(\beta + 1)^2}, $$

and the ratio thereof becomes

$$ \frac{\mathrm{Var}[Z \mid X = x]}{\mathrm{E}[Z \mid X = x]} = \frac{1}{\beta + 1}. $$

Taking the expectation with respect to x then yields

$$ \mathrm{E}\left[\frac{\mathrm{Var}[Z \mid X]}{\mathrm{E}[Z \mid X]}\right] = \frac{1}{\beta + 1}. $$

We now compare this expression to $\mathrm{E}[S_2]^2/\mathrm{E}[M_1]$. In particular, we obtain for the two expectations

$$ \mathrm{E}[S_2] = \frac{\alpha + \mathrm{E}[X]}{(\beta + 1)^2} = \frac{\alpha}{\beta(\beta + 1)}, \qquad \mathrm{E}[M_1] = \frac{\alpha + \mathrm{E}[X]}{\beta + 1} = \frac{\alpha}{\beta}, $$

such that $\mathrm{E}[S_2]/\mathrm{E}[M_1] = 1/(\beta + 1)$ and therefore both expressions coincide. The expected moments, and hence the suppressed noise, can then be found by solving the above moment equations.
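The coincidence $\mathrm{E}[S_2^2/M_1] = \mathrm{E}[S_2]^2/\mathrm{E}[M_1]$ for the Gamma-Poisson pair is easy to confirm numerically, since the ratio $S_2/M_1 = 1/(\beta+1)$ is constant in x; the α, β values below are made up.

```python
import numpy as np

rng = np.random.default_rng(5)

# Heuristic check: for the Gamma-Poisson pair the ratio S2/M1 = 1/(beta+1) is
# constant in x, so E[S2^2/M1] = E[S2]^2/E[M1] holds exactly.
alpha, beta = 3.0, 1.0
x = rng.poisson(rng.gamma(alpha, 1.0 / beta, size=1_000_000))

M1 = (alpha + x) / (beta + 1.0)        # posterior mean
S2 = (alpha + x) / (beta + 1.0)**2     # posterior variance

lhs = (S2**2 / M1).mean()
rhs = S2.mean()**2 / M1.mean()
print(lhs, rhs)
```

Because the ratio is constant for every sample, the agreement here is exact up to floating-point rounding, not just statistical.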
In order to find an expression at stationarity, we set the l.h.s. to zero and solve for M1 and S2.
Hence, the suppressed and effective noise terms of Z(t) at stationarity follow directly from these stationary moments. Furthermore, the relative effective noise is found by dividing the effective noise by the total noise, with v = θ/σZ as the normalized timescale of Z(t).

S.5 Generalized master equations
Since the uncoupled process is non-Markovian, it does not satisfy a conventional master equation. Nevertheless, it can be described by a non-Markovian modification thereof, giving rise to a generalized master equation (GME). Typically, a GME is given in the form of an integro-differential equation, where the integral part stems from a time-convolution representing the memory effects of the system¹. In some cases, such master equations may be given in a time-convolutionless form [8], which is typically more convenient to handle in practice. While GMEs are rarely used in the context of biology, they are frequently applied in the fields of quantum and statistical mechanics [8,10]. In the following we will use several concepts from those fields to formulate a GME for the uncoupled dynamics. While a fully general description is yet to be developed, we consider the case where non-Markovian effects arise from non-exponential and possibly time-dependent waiting-time distributions that do not explicitly depend on the entire process history Xt. This covers several relevant scenarios, such as the master equation associated with the SNA.
Let us consider waiting-times $W_1, \ldots, W_N$, each of which is associated with a particular reaction channel. We assume that only the k-th reaction is modulated by the environmental network Z. Accordingly, only the k-th reaction will be associated with a non-exponential waiting-time distribution $P(W_k < w \mid X_t) = P(W_k < w \mid X(t) = x)$ and corresponding density $p_k(w \mid x, t)$. For all other reaction channels we assume exponential waiting-time distributions $P(W_i < w \mid X(t) = x) = 1 - \exp(-h_i(x, c_i) w)$ and associated densities $p_i(w \mid x) = h_i(x, c_i) \exp(-h_i(x, c_i) w)$ for $i \neq k$. In the general case, the k-th waiting-time distribution follows

$$ P(W_k < w \mid X_t) = 1 - \exp\left( -\int_0^w \lambda(X^-_{t+s})\,\mathrm{d}s \right), $$

with $\lambda(X^-_{t+w}) = c_k \mathrm{E}\left[Z(t+w) \mid X^-_{t+w}\right]$, where $X^-_{t+w}$ extends $X_t$ by a time-interval w assuming that no reaction of type k happens in w. The corresponding density is given by

$$ p_k(w \mid X_t) = \lambda(X^-_{t+w}) \exp\left( -\int_0^w \lambda(X^-_{t+s})\,\mathrm{d}s \right). $$

Restricting the waiting-time distribution to depend only on the current state X(t) = x and time t, we further have that

$$ P(W_k < w \mid X(t) = x, t) = 1 - \exp\left( -\int_0^w \lambda(x, t+s)\,\mathrm{d}s \right). \qquad (28) $$

Assuming that the process is in state x at time t, the next reaction and its firing time are determined by picking the minimum of the randomly drawn waiting-times $W_1, \ldots, W_N$, i.e., $W = \min\{W_1, \ldots, W_N\}$,

¹ In contrast, Markovian dynamics are known to be memoryless.
with W as the waiting-time until the next reaction fires. Accordingly, we can define the probability that the next reaction will fire between t and t + w as $P(W < w \mid X(t) = x, t)$, and the corresponding density will be denoted by $p(w \mid x, t)$. We remark that if all N waiting-times stem from an exponential distribution (i.e., if X is Markovian), their minimum will also be exponentially distributed. The probability of the next reaction at time w to be of type i is given by

$$ Q_i(x, t, w) = p_i(w \mid x, t) \prod_{j \neq i} \left( 1 - P(W_j < w \mid X(t) = x, t) \right), $$

where $p_j(w \mid x, t) = p_j(w \mid x)$ if $j \neq k$. Plugging in the waiting-time distribution from (28), we obtain transition probabilities

$$ Q_k(x, t, w) = \lambda(x, t+w)\, S(w \mid x, t), \qquad Q_i(x, t, w) = h_i(x, c_i)\, S(w \mid x, t) \quad \text{for } i \neq k, $$

with $h(x, t) = \lambda(x, t) + \sum_{l \neq k} h_l(x, c_l)$, and the waiting-time density becomes

$$ p(w \mid x, t) = h(x, t+w)\, S(w \mid x, t). \qquad (33) $$

The exponential term $e^{-\int_0^w h(x, t+T)\,\mathrm{d}T} = 1 - P(W < w \mid X(t) = x, t) =: S(w \mid x, t)$ in (33) corresponds to the probability that the system remains in state x until t + w. Note that the transition probabilities depend on t and w only through their sum, such that $Q_i(x, t, w) = Q_i(x, t+w)$. Now, as similarly described in [5] or [6], we can use the above definitions to formulate a generalization of the Chapman-Kolmogorov equation for jump processes with non-exponential waiting-times. The probability of finding the system in a particular state x at time t is determined by considering two cases. If $x \neq X(0) = x_0$, we know that we have reached x from another state $x - \nu_i$ via a reaction of type i. The corresponding probability is obtained by summing over all possible reactions that may have moved the system to state x and integrating over all possible times at which the reaction may have fired; the latter is reflected by the aforementioned time-integral. A special case arises if $x = x_0$, since we need an additional term accounting for the case where X(t) has never changed from its initial condition. In summary, one can show that the probability P(x, t) satisfies a generalized Chapman-Kolmogorov equation of this form, Eq. (34), in which the term $\delta_{x, x_0} S(t \mid x_0, 0)$ accounts for the case of no reaction until t.
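Sampling the next reaction when one channel has a time-dependent hazard can be done with a standard thinning scheme, using a constant upper bound on that hazard. The sketch below uses made-up hazards (one decaying hazard and two exponential channels) and is not the simulation algorithm of the main text.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical channels: two exponential ones and one modulated channel with
# time-dependent hazard lam(t), bounded above by lam_max.
h_other = np.array([0.5, 0.8])                  # exponential channel hazards
lam = lambda t: 0.4 + 1.0 * np.exp(-2.0 * t)    # hazard of the modulated channel
lam_max = 1.4                                   # upper bound on lam(t)

def next_reaction(t0):
    """Return (waiting time, channel index); index len(h_other) is the modulated one."""
    rate_bound = lam_max + h_other.sum()
    t = t0
    while True:
        t += rng.exponential(1.0 / rate_bound)  # candidate event from the bound
        u = rng.random() * rate_bound
        if u < lam(t):                          # modulated channel fires
            return t - t0, len(h_other)
        r = u - lam(t)
        if r < h_other.sum():                   # one of the exponential channels
            return t - t0, int(np.searchsorted(np.cumsum(h_other), r, side="right"))
        # otherwise the candidate point is thinned (rejected); keep going

w, idx = next_reaction(0.0)
print(w, idx)
```

The empirical mean waiting time can be compared against $\int_0^\infty S(w \mid x, 0)\,\mathrm{d}w$ evaluated numerically.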
Eq. (34) can be rewritten in differential form as

$$ \frac{\partial}{\partial t} P(x, t) = \int_0^t \sum_i \left[ \phi_i(w \mid x - \nu_i, t)\, P(x - \nu_i, t - w) - \phi_i(w \mid x, t)\, P(x, t - w) \right] \mathrm{d}w, $$

where the memory function $\phi(w \mid x, t)$ can in principle be derived from the waiting-time distribution $p(w \mid x, t)$. However, we are not aware of a fully general relation, but if $p(w \mid x, t) = p(w \mid x)$, i.e., it does not explicitly depend on time, $\phi(w \mid x, t) = \phi(w \mid x)$ is given by the Montroll-Weiss equation (ref. [27] in the main text)

$$ \hat{\phi}(u) = \frac{u\, \hat{p}(u)}{1 - \hat{p}(u)}, $$

where $\hat{p}(u)$ and $\hat{\phi}(u)$ denote the Laplace transforms of $p(w \mid x)$ and $\phi(w \mid x)$, respectively. From this relation, it is straightforward to verify that the master equation becomes memoryless in case of exponential waiting-time distributions (i.e., the memory function is given by a Dirac delta function) [5,6].
For the case considered here, we derive a master equation by direct differentiation of the Chapman-Kolmogorov equation (i.e., similar to [7]). Inserting the above definitions into (34), taking the time-derivative and using $h(x, t) = \lambda(x, t) + \sum_{l \neq k} h_l(x, c_l)$, we arrive at

$$ \frac{\mathrm{d}}{\mathrm{d}t} P(x, t) = \lambda(x - \nu_k, t)\, P(x - \nu_k, t) + \sum_{l \neq k} h_l(x - \nu_l, c_l)\, P(x - \nu_l, t) - h(x, t)\, P(x, t). \qquad (39) $$

Eq. (39) illustrates that under the given assumptions, the uncoupled dynamics can be described by a non-stationary and convolution-less master equation.

S.5.1 Derivation of the slow noise approximation
We apply the time-convolutionless master equation approach in order to approximate the transient probability distribution of a birth-death process in a fluctuating environment, i.e.,

$$ \emptyset \xrightarrow{Z(t)} X, \qquad X \xrightarrow{c_d} \emptyset, $$

where the environment Z enters the model through the birth-rate and $c_d$ is the rate constant of the death-reaction. The birth-hazard of the uncoupled process X is then given by

$$ \lambda(X_t) = \mathrm{E}\left[Z(t) \mid X_t\right] = M_1(t). \qquad (40) $$

The slow noise approximation is based on two critical assumptions:
1. The conditional process Z(t) | Xt can be well represented by a Gamma distribution with time-varying parameters. We highlight that this assumption does not mean that the unconditional process Z(t) needs to be approximately Gamma-distributed.
2. The impact of a fluctuating environment on a system can be well "mimicked" by a static environment $\tilde{Z}$ with a suitably chosen variance (e.g., the effective noise).
Under the Gamma-assumption, the conditional expectation from (40) is governed by the differential equations

$$ \mathrm{d}M_1(t) = \bar{D}_1(t)\,\mathrm{d}t - S_2(t)\,\mathrm{d}t + \frac{S_2(t)}{M_1(t)}\,\mathrm{d}R_b(t), \qquad \mathrm{d}S_2(t) = \bar{D}_2(t)\,\mathrm{d}t - \frac{2 S_2^2(t)}{M_1(t)}\,\mathrm{d}t + \frac{S_2^2(t)}{M_1^2(t)}\,\mathrm{d}R_b(t). $$

Assuming very slow environmental dynamics, $\bar{D}_1$ and $\bar{D}_2$ become zero and hence,

$$ \mathrm{d}M_1(t) = - S_2(t)\,\mathrm{d}t + \frac{S_2(t)}{M_1(t)}\,\mathrm{d}R_b(t), \qquad \mathrm{d}S_2(t) = - \frac{2 S_2^2(t)}{M_1(t)}\,\mathrm{d}t + \frac{S_2^2(t)}{M_1^2(t)}\,\mathrm{d}R_b(t). \qquad (42) $$

The solution of (42) immediately before the next jump at time t is given by

$$ M_1(t) = \frac{\mu^2/\sigma^2 + R_b(t)}{\mu/\sigma^2 + t}, $$

with $\mu = \mathrm{E}[\tilde{Z}]$ and $\sigma^2 = \mathrm{Var}[\tilde{Z}]$ as mean and variance of the approximate environment $\tilde{Z}$. Importantly, we find that the conditional mean, and therefore the hazard function, depends only on the number of birth reactions but not on when those reactions happened. The uncoupled dynamics can therefore be described by Eq. (39). Using the fact that $X(t) = R_b(t) - R_d(t)$ for X(0) = 0, we obtain the time-convolutionless master equation

$$ \frac{\mathrm{d}}{\mathrm{d}t} P(r_b, r_d, t) = \lambda(r_b - 1, t)\, P(r_b - 1, r_d, t) + c_d (r_b - r_d + 1)\, P(r_b, r_d - 1, t) - \left( \lambda(r_b, t) + c_d (r_b - r_d) \right) P(r_b, r_d, t), \qquad (46) $$

with $\lambda(r_b, t) = (\mu^2/\sigma^2 + r_b)/(\mu/\sigma^2 + t)$. In the following, we derive a solution of this equation using the concept of generating functions [3]. In particular, we employ certain properties of the probability generating function, which allow us to transform the difference-differential equation (46) into a PDE. Solving this PDE and inverting the generating function, we obtain the distribution in X as a negative binomial distribution. Furthermore, it is straightforward to show that marginally, both $r_b$ and $r_d$ have negative binomial distributions. We remark that the slow noise approximation is exact in the case of infinitely slow as well as infinitely fast fluctuations.
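The negative binomial limit can be checked by direct simulation of an infinitely slow (static) environment: given Z = z, the birth-death process is Poisson(z/c_d) at stationarity, and mixing over a Gamma-distributed Z yields a negative binomial. All parameters below are made up.

```python
import math
import numpy as np

rng = np.random.default_rng(7)

# Sketch: for a static environment Z ~ Gamma(alpha, beta) the birth-death
# process X (birth rate Z, death rate c_d per molecule) is Poisson(Z/c_d) at
# stationarity given Z; mixing over Z gives a negative binomial distribution.
alpha, beta, c_d = 3.0, 0.5, 2.0
z = rng.gamma(alpha, 1.0 / beta, size=1_000_000)
x = rng.poisson(z / c_d)                     # stationary copy number given Z

def nb_pmf(k, r, b):
    """Negative binomial pmf of a Poisson mixed with Gamma(shape r, rate b)."""
    return math.exp(math.lgamma(r + k) - math.lgamma(r) - math.lgamma(k + 1)
                    + r * math.log(b / (1.0 + b)) - k * math.log(1.0 + b))

emp = (x == 2).mean()
pmf = nb_pmf(2, alpha, beta * c_d)           # Z/c_d ~ Gamma(alpha, beta*c_d)
print(emp, pmf)
```

The empirical frequency of any copy number should match the negative binomial pmf up to sampling error.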