Adjusting Phenotypes by Noise Control

Genetically identical cells can show phenotypic variability. This is often caused by stochastic events that originate from randomness in biochemical processes involved in gene expression and other extrinsic cellular processes. From an engineering perspective, there have been efforts, both theoretical and experimental, to control noise levels by perturbing and replacing gene network components. However, systematic methods for noise control are lacking, mainly due to the intractable mathematical structure of noise propagation through reaction networks. Here, we provide a numerical analysis method that quantifies the parametric sensitivity of noise characteristics at the level of the linear noise approximation. Our analysis is readily applicable to various types of noise control and to different types of systems; for example, we can orthogonally control the mean and noise levels and can control system dynamics such as noisy oscillations. As an illustration, we applied our method to HIV and yeast gene expression systems and to metabolic networks. The oscillatory signal control was applied to p53 oscillations from DNA damage. Furthermore, we showed that the efficiency of orthogonal control can be enhanced by applying extrinsic noise and feedback. Our noise control analysis can be applied to any stochastic model belonging to continuous-time Markovian systems, such as biological and chemical reaction systems, and even computer and social networks. We anticipate the proposed analysis to be a useful tool for designing and controlling synthetic gene networks.

For the case of control where the mean level is changed while the noise level is held fixed, the tolerance level (tol) is applied to the noise level. The control schemes related to α_m, γ_m, and α_p depended significantly on the tolerance level for the wild-type. This is because the control coefficients of the noise level with respect to these parameters were all small (C^{CV_s}_{α_m} = −0.19, C^{CV_s}_{γ_m} = 0.13, and C^{CV_s}_{α_p} = −0.01).
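Such scaled sensitivities can be estimated by central finite differences whenever the noise level is available as a function of the parameters. The sketch below is a minimal illustration using a simple birth-death process (a stand-in model, not the system analyzed above), whose stationary coefficient of variation is known analytically, so that the numerical control coefficients can be compared with the exact values ±1/2.

```python
import math

def cv(k, gamma):
    # Stationary coefficient of variation of a birth-death process
    # (birth rate k, degradation rate gamma): the copy-number
    # distribution is Poisson with mean k/gamma, so CV = sqrt(gamma/k).
    mean = k / gamma
    return 1.0 / math.sqrt(mean)

def control_coefficient(f, params, name, rel_step=1e-6):
    # Scaled (logarithmic) sensitivity d ln f / d ln p by central differences.
    up = dict(params); up[name] *= (1.0 + rel_step)
    dn = dict(params); dn[name] *= (1.0 - rel_step)
    return (math.log(f(**up)) - math.log(f(**dn))) / (2.0 * rel_step)

params = {"k": 20.0, "gamma": 1.0}
Ck = control_coefficient(cv, params, "k")       # analytically -1/2
Cg = control_coefficient(cv, params, "gamma")   # analytically +1/2
print(Ck, Cg)
```

The same finite-difference scheme applies to any noise statistic that can be evaluated numerically, e.g., a CV obtained by solving the Lyapunov equation.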
Model for ATM-p53-mdm2 oscillations

ATM-p53-mdm2 systems show sustained noisy oscillations, which were shown to be successfully modeled by the following linear Langevin equations [1]:

da/dt = −γ_a a + k_{p→a} p + ξ_a,
dp/dt = k_{a→p} a − γ_p p + k_{m→p} m + ξ_p,    (S1)
dm/dt = k_{p→m} p − γ_m m + ξ_m,

where a, p, and m denote the oscillation components of the concentrations of ATM, p53, and mdm2, respectively, and ξ_a, ξ_p, and ξ_m are independent Gaussian white noises satisfying ⟨ξ_a(t)ξ_a(t′)⟩ = W_a δ(t − t′), with the same relationships for the other noise terms ξ_p and ξ_m with different noise strengths W_p and W_m.
The parameter values are shown in Table S1.
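For readers who wish to reproduce the noisy oscillations, the sketch below integrates linear Langevin equations of this form by the Euler-Maruyama method, using the Table S1 values. The coupling convention assumed here (k_{x→y} multiplies x in the equation for y) is our assumption for illustration, not taken from [1]; the integration scheme itself is standard.

```python
import math, random

# Parameter values from Table S1 (units: 1/hour).
g_a, g_p, g_m = 60*math.log(2)/100, 60*math.log(2)/75, 60*math.log(2)/144
k_ap, k_pa = -0.65, 1.0      # a -> p and p -> a couplings (assumed convention)
k_mp, k_pm = -0.55, 0.29     # m -> p and p -> m couplings (assumed convention)
W_a, W_p, W_m = 1400.0, 5000.0, 10000.0

def simulate(T=200.0, dt=1e-3, seed=1):
    # Euler-Maruyama integration of the linear Langevin equations for (a, p, m);
    # each Gaussian white-noise increment has variance W*dt.
    rng = random.Random(seed)
    a = p = m = 0.0
    traj = []
    for _ in range(round(T / dt)):
        da = (-g_a*a + k_pa*p)*dt + math.sqrt(W_a*dt)*rng.gauss(0, 1)
        dp = (k_ap*a - g_p*p + k_mp*m)*dt + math.sqrt(W_p*dt)*rng.gauss(0, 1)
        dm = (k_pm*p - g_m*m)*dt + math.sqrt(W_m*dt)*rng.gauss(0, 1)
        a, p, m = a + da, p + dp, m + dm
        traj.append(p)  # record the p53 component
    return traj

traj = simulate()
```

Because the deterministic part is a stable damped oscillator, the noise terms sustain irregular oscillations around zero rather than letting them decay.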

Summation theorem for auto-correlation functions
We derived summation theorems among control coefficients for concentration mean and noise levels (coefficients of variation and even higher-order moments) [2] that are analogous to those found in metabolic control analysis. In this section, we derive another summation theorem, for control coefficients of auto-correlation functions.
We consider a system that is described by the master equation. The reaction rate functions can be arbitrary, and global proportionality constants are chosen as the perturbation parameters. For example, in a Michaelis-Menten-type rate law, v_max is our choice of parameter, but the Michaelis-Menten constant K_M is not. For a more detailed discussion, we refer to [2]. The set of the parameters will be denoted by p. If there are n reactions, the number of the parameters (the dimension of p) is n.

Table S1. Parameter values for the ATM-p53-mdm2 system. The unit of all the parameters is hour^{-1}. The values are taken from [1].

  γ_a     = 60 ln(2)/100    k_{a→p} = −0.65    k_{p→a} = 1
  γ_p     = 60 ln(2)/75     k_{m→p} = −0.55    k_{p→m} = 0.29
  γ_m     = 60 ln(2)/144    W_a     = 1400     W_p     = 5000     W_m = 10000
Before deriving the summation theorem, we note that rescaling all the parameters p to αp means that all the reaction rates are increased by a factor of α. This can alternatively be viewed as decreasing the unit of time by a factor of α [2][3][4][5]. This equivalence leads to the following relationship:

G(τ, αp) = G(ατ, p).    (S2)

Specifically, we can obtain the above relationship as follows. The auto-correlation function G(τ, p) is defined as

G(τ, p) = ⟨X(t + τ)X(t)⟩ − ⟨X⟩²,

where the angle brackets denote the average over time at the stationary state. With the rescaling in p, the above equation becomes

G(τ, αp) = ⟨X(α(t + τ))X(αt)⟩ − ⟨X⟩² = ⟨X(αt + ατ)X(αt)⟩ − ⟨X⟩² = G(ατ, p).
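The rescaling relation can be checked concretely on an Ornstein-Uhlenbeck process dx = −γx dt + √W dB (an assumed stand-in model, not from the text), whose stationary auto-correlation G(τ) = (W/2γ) e^{−γτ} is known in closed form. Rescaling both parameters (γ, W) → (αγ, αW) speeds up the dynamics by α, and the auto-correlation at lag τ equals the unscaled one at lag ατ:

```python
import math

def G(tau, gamma, W):
    # Stationary auto-correlation of the OU process dx = -gamma*x dt + sqrt(W) dB:
    # stationary variance W/(2*gamma) times an exponential decay at rate gamma.
    return (W / (2.0 * gamma)) * math.exp(-gamma * tau)

gamma, W, tau, alpha = 0.7, 2.5, 1.3, 3.0
lhs = G(tau, alpha * gamma, alpha * W)   # all parameters rescaled by alpha
rhs = G(alpha * tau, gamma, W)           # time lag rescaled instead
print(lhs, rhs)                          # equal, as in G(tau, alpha*p) = G(alpha*tau, p)
```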
By using this relationship, we can obtain the summation theorem as follows. First, an infinitesimal change in the auto-correlation function can be described by the sum of the changes due to each individual parameter change by applying the chain rule:

δG(τ, p) = Σ_j (∂G(τ, p)/∂p_j) δp_j = (α − 1) Σ_j p_j ∂G(τ, p)/∂p_j,    (S3)

where δp_j = (α − 1)p_j. Second, the infinitesimal change in the auto-correlation function can also be described, by using Eq. (S2), as

δG(τ, p) = G(ατ, p) − G(τ, p) = (∂G(τ, p)/∂τ) δτ,    (S4)

where δτ = ατ − τ = (α − 1)τ. From Eqs. (S3) and (S4), we obtain the summation theorem:

Σ_j p_j ∂G(τ, p)/∂p_j = τ ∂G(τ, p)/∂τ.
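The summation theorem can be verified numerically on the same assumed Ornstein-Uhlenbeck example, where G(τ) = (W/2γ) e^{−γτ} and the parameter set is p = (γ, W). A finite-difference check that the parameter-weighted sensitivities sum to τ ∂G/∂τ:

```python
import math

def G(tau, gamma, W):
    # Stationary auto-correlation of the OU process dx = -gamma*x dt + sqrt(W) dB.
    return (W / (2.0 * gamma)) * math.exp(-gamma * tau)

def d(f, x, h=1e-6):
    # Central finite-difference derivative df/dx evaluated at x.
    return (f(x * (1 + h)) - f(x * (1 - h))) / (2 * x * h)

gamma, W, tau = 0.7, 2.5, 1.3
# Left side: sum_j p_j dG/dp_j over the two parameters gamma and W.
lhs = gamma * d(lambda g: G(tau, g, W), gamma) + W * d(lambda w: G(tau, gamma, w), W)
# Right side: tau * dG/dtau.
rhs = tau * d(lambda t: G(t, gamma, W), tau)
print(lhs, rhs)
```

For this G both sides equal −γτ G(τ) analytically, which the finite differences reproduce to high accuracy.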

Jacobian and diffusion matrices
Our model system - a continuous-time Markov process - is assumed to show stochastic fluctuations in the state variables (here, concentrations) small enough that the fluctuations occur in the linear portions of the nonlinear reaction rates. Under this assumption, the fluctuation statistics can be approximately described by linearizing the nonlinear reaction rates. This approximation is called the linear noise approximation [6]. The noise strength (concentration covariance) at the stationary state was shown to be computed by solving the Lyapunov equation [6] (also known as the fluctuation-dissipation relationship [7,8]):

Jσ + σJ^T + D = 0,    (S5)

where σ is the covariance matrix, J the Jacobian matrix, and D the diffusion matrix.
When a system is governed by the master equation, J and D can be computed from the system's linearized reaction rates. However, when the master equation is not known, e.g., due to an unknown source of noise, the system is often described by a phenomenological Langevin equation with Gaussian white noise, where the noise strength is estimated via parameter-fitting procedures. When fluctuations in the concentrations are small enough that the Langevin equation can be linearized, the concentration covariance is described by an ordinary differential equation with the same mathematical structure as the one derived from the master equation [9].
At the stationary state, the equation becomes the Lyapunov equation Eq. (S5) and the diffusion matrix is given by the estimated noise strength.
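Equation (S5) can be solved numerically by rewriting it as a linear system in the entries of σ. The sketch below is a dependency-free illustration (pure Python; in practice one would typically call a dedicated routine such as SciPy's `solve_continuous_lyapunov`), checked against the one-species Ornstein-Uhlenbeck case, where σ = W/(2γ):

```python
def solve_linear(A, b):
    # Gaussian elimination with partial pivoting (pure Python, small systems).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def lyapunov(J, D):
    # Solve J sigma + sigma J^T + D = 0 by stacking sigma row-major into a
    # vector and assembling the corresponding n^2 x n^2 linear system.
    n = len(J)
    A = [[0.0] * (n * n) for _ in range(n * n)]
    b = [0.0] * (n * n)
    for i in range(n):
        for j in range(n):
            row = i * n + j
            b[row] = -D[i][j]
            for k in range(n):
                A[row][i * n + k] += J[j][k]   # (sigma J^T)_{ij} = sum_k sigma_{ik} J_{jk}
                A[row][k * n + j] += J[i][k]   # (J sigma)_{ij}   = sum_k J_{ik} sigma_{kj}
    x = solve_linear(A, b)
    return [x[i * n:(i + 1) * n] for i in range(n)]

# Sanity check on a one-species OU process: sigma = W / (2 * gamma).
gamma, W = 0.8, 3.0
sigma = lyapunov([[-gamma]], [[W]])[0][0]
print(sigma)  # 1.875
```

The same `lyapunov` call applies unchanged to the 3x3 gene expression and p53 models once their J and D matrices are assembled.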
Consider the two-state gene expression model shown in Fig. 2b. This model system is fully described by the master equation. Under the linear noise approximation (the approximation is in fact exact here, since the model is described by linear reaction rate laws), J and D are given by

J = [ −(k_on + k_off)    0      0   ]
    [       α_m        −γ_m     0   ]
    [        0          α_p   −γ_p  ],

D = [ k_on(1 − ⟨P_a⟩) + k_off ⟨P_a⟩          0                      0                ]
    [            0                  α_m ⟨P_a⟩ + γ_m ⟨mRNA⟩          0                ]
    [            0                           0            α_p ⟨mRNA⟩ + γ_p ⟨Protein⟩ ],

where k_on and k_off denote the promoter activation and deactivation rates, α_m and γ_m the transcription and mRNA degradation rates, α_p and γ_p the translation and protein degradation rates, and ⟨·⟩ the stationary mean. The matrix index (i, j) is defined such that i, j = 1 corresponds to P_a; i, j = 2 to mRNA; and i, j = 3 to Protein. For example, both σ_12 and σ_21 represent the covariance between P_a and mRNA.
For the p53 oscillation model shown in Fig. 6a, the linear phenomenological Langevin equation (S1) was used. J and D were given by

J = [ −γ_a     k_{p→a}     0     ]
    [ k_{a→p}   −γ_p     k_{m→p} ]
    [   0      k_{p→m}    −γ_m   ],

D = diag(W_a, W_p, W_m),

with the matrix index (i, j) defined such that i, j = 1 corresponds to ATM; i, j = 2 to p53; and i, j = 3 to mdm2. The noise strengths W_a, W_p, and W_m were estimated by parameter fitting [1].
Network C in Fig. 7 is not weakly reversible.
The network can be converted as follows:

ø → E,   S + E → X_1 + E,   X_1 → X_2,   X_2 → P,

where S is a constant; since S is held constant, the reaction S + E → X_1 + E connects the complex E to the complex X_1 + E. This system is not weakly reversible; there is no directed arrow path, e.g., from the complex X_1 + E to the complex E, from X_2 to X_1, and so on. The deficiency is, however, zero, since there are six complexes {ø, E, X_1 + E, X_1, X_2, P}, two linkage classes [10], and the rank of the network (the column rank of the stoichiometric matrix [10]) is 4.
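The deficiency bookkeeping (number of complexes, linkage classes, and stoichiometric rank) can be checked mechanically. The sketch below assumes the irreversible chain ø → E, E → X1 + E, X1 → X2, X2 → P (with S held constant and absorbed into the rate constant), which is consistent with the complexes listed above; for any other network, only the `reactions` list changes.

```python
from fractions import Fraction

species = ["E", "X1", "X2", "P"]  # S is held constant and omitted
# Each reaction as a (source complex, product complex) pair; "0" is the empty complex.
reactions = [("0", "E"), ("E", "X1+E"), ("X1", "X2"), ("X2", "P")]

def composition(cplx):
    # Parse a complex like "X1+E" into a species-count vector.
    v = [0] * len(species)
    if cplx != "0":
        for s in cplx.split("+"):
            v[species.index(s)] += 1
    return v

def rank(rows):
    # Row-reduction rank over the rationals (exact arithmetic).
    rows = [list(map(Fraction, r)) for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

complexes = sorted({c for rxn in reactions for c in rxn})
stoich = [[pd - sd for sd, pd in zip(composition(src), composition(dst))]
          for src, dst in reactions]

# Linkage classes: connected components of the undirected complex graph.
parent = {c: c for c in complexes}
def find(c):
    while parent[c] != c:
        c = parent[c]
    return c
for src, dst in reactions:
    parent[find(src)] = find(dst)
n_linkage = len({find(c) for c in complexes})

deficiency = len(complexes) - n_linkage - rank(stoich)
print(len(complexes), n_linkage, rank(stoich), deficiency)  # 6 2 4 0
```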

Relationship between infinitesimal and finite perturbations
Consider an infinitesimal perturbation in a parameter p such that p → p(1 + ε), with ε ≪ 1. The relative change in p can be expressed as δp/p = ε. The infinitesimal perturbation is applied iteratively N times to perturb the parameter by a finite amount:

p_N = p_0 (1 + ε)^N,

where p_0 and p_N are the values of the parameter before any iteration is performed and after the N-th iteration, respectively. Therefore, for small ε, we obtain p_N = p_0 exp(Nε).

When two parameters, p and q, are perturbed infinitesimally with the ratio 1:2, i.e., p → p(1 + ε) and q → q(1 + 2ε) with ε ≪ 1, the parameter values after the N-th iteration become p_N = p_0 exp(Nε) and q_N = q_0 exp(2Nε). Therefore, we obtain

q_N/q_0 = (p_N/p_0)².
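This bookkeeping is easy to verify numerically; the sketch below iterates the two perturbations and compares the accumulated finite changes with the exponential expressions (the agreement is approximate, to first order in ε):

```python
import math

eps, N = 1e-4, 5000   # small per-step perturbation, many iterations
p0 = q0 = 1.0
p, q = p0, q0
for _ in range(N):
    p *= (1 + eps)       # perturb p by eps each iteration
    q *= (1 + 2 * eps)   # perturb q by 2*eps each iteration

print(p / p0, math.exp(N * eps))   # both close to e^0.5
print(q / q0, (p / p0) ** 2)       # q_N/q_0 is approximately (p_N/p_0)^2
```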