The Fidelity of Dynamic Signaling by Noisy Biomolecular Networks

Cells live in changing, dynamic environments. To understand cellular decision-making, we must therefore understand how fluctuating inputs are processed by noisy biomolecular networks. Here we present a general methodology for analyzing the fidelity with which different statistics of a fluctuating input are represented, or encoded, in the output of a signaling system over time. We identify two orthogonal sources of error that corrupt perfect representation of the signal: dynamical error, which occurs when the network responds on average to other features of the input trajectory as well as to the signal of interest, and mechanistic error, which occurs because biochemical reactions comprising the signaling mechanism are stochastic. Trade-offs between these two errors can determine the system's fidelity. By developing mathematical approaches to derive dynamics conditional on input trajectories we can show, for example, that increased biochemical noise (mechanistic error) can improve fidelity and that both negative and positive feedback degrade fidelity, for standard models of genetic autoregulation. For a group of cells, the fidelity of the collective output exceeds that of an individual cell and negative feedback then typically becomes beneficial. We can also predict the dynamic signal for which a given system has highest fidelity and, conversely, how to modify the network design to maximize fidelity for a given dynamic signal. Our approach is general, has applications to both systems and synthetic biology, and will help underpin studies of cellular behavior in natural, dynamic environments.


Introduction
Cells are continuously challenged by extra- and intracellular fluctuations, or 'noise' [1-3]. We are only starting to unravel how fluctuating inputs and dynamic interactions with other stochastic, intracellular systems affect the behavior of biomolecular networks [4-9]. Such knowledge is, however, essential for studying the fidelity of signal transduction [10,11] and therefore for understanding and controlling cellular decision-making [12]. Indeed, successful synthetic biology requires quantitative predictions of the effects of fluctuations at the single-cell level, in both static and dynamic environments [13]. Furthermore, sophisticated responses to signals that change over time are needed for therapeutics that involve targeted delivery of molecules by microbes [14,15] or the reprogramming of immune cells [16]. Here we begin to address these challenges by developing a general framework for analyzing the fidelity with which dynamic signals are represented by, or 'encoded' in, the output of noisy biomolecular networks.

Two types of fidelity loss in dynamic signaling
For cellular signaling to be effective, it should maintain sufficient fidelity. We wish to quantify the extent to which the current output of an intracellular biochemical network, Z(t), can represent a particular feature of a fluctuating input (Fig. 1). This signal of interest, s(t), is generally a function of the history of the input, denoted u_t^H. By its history, we mean the value of the input u at time t and at all previous times. The signal s(t) could be, for example, the level of the input at time t or a time average of the input over a time window in the most recent past. The output of the signaling network, Z(t), is able to perfectly represent the signal s(t) if s(t) can be inferred exactly from Z(t) at all times, t. The system then has zero fidelity error. However, for a stochastic biochemical mechanism, a given value of s(t) will map to multiple possible values of the output, Z(t).
We will assume that the conditional mean, E[Z(t) | s(t)], is an invertible function of s(t): it takes different values for any two values of s(t). It is then a perfect representation of s(t). The output Z(t) will, however, usually be different from E[Z(t) | s(t)] and have a fidelity error, defined as the difference between Z(t) and E[Z(t) | s(t)]. The notation Z(t) | s(t) is read as Z(t) conditioned on, or given, the value of the variable s at time t. We use E, as for example in E[Z(t) | s(t)], to denote averaging over all random variables except those given in the conditioning. Therefore E[Z(t) | s(t)] is itself a random variable: it is a function of the random variable s(t) (we give a summary of the properties of conditional expectations in the SI).
Many response functions, E[Z(t) | s(t)], in biochemistry and physiology (for example, Hill functions) satisfy the requirement of invertibility or can be made to do so by defining s(t) appropriately: for example, when a response exactly saturates for all input values above a threshold, those values can be grouped to form a single input state. Furthermore, we know from the properties of conditional expectations that Z(t) is closer to E[Z(t) | s(t)] in terms of mean squared fidelity error than to any other representation (function) of s(t) (SI).
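This optimality property is straightforward to check numerically. The sketch below uses a hypothetical two-state toy (not one of the models analyzed in this paper): it estimates E[Z | s] from samples and confirms that any other function of s incurs a larger mean squared error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy: a two-state signal s and a conditionally Poisson output Z.
n = 200_000
s = rng.integers(0, 2, size=n)            # signal of interest, values {0, 1}
z = rng.poisson(5 + 10 * s)               # stochastic output given s

# Estimate the conditional mean E[Z|s] for each signal state.
cond_mean = np.array([z[s == k].mean() for k in (0, 1)])

# Mean squared error of the conditional-mean representation...
mse_cond = np.mean((z - cond_mean[s]) ** 2)
# ...versus any other representation f(s), here a shifted map f(s) = E[Z|s] + 1.
mse_other = np.mean((z - (cond_mean[s] + 1.0)) ** 2)

assert mse_cond < mse_other    # E[Z|s] is the closest representation of s
```

Shifting the representation by a constant c increases the mean squared error by exactly c², which makes the gap easy to verify.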
The difference between the conditional expectations E[Z(t) | u_t^H] and, for example, E[Z(t) | u(t)] is important. The former, E[Z(t) | u_t^H], is the average value of the output at time t given a particular history of the input u. It will often coincide with the deterministic (macroscopic) solution when the same input trajectory is applied to the network. The output Z(t) shows random variation around this average, E[Z(t) | u_t^H], for identical realizations of the trajectory of u. By contrast, E[Z(t) | u(t)] is the average value of Z(t) given that the trajectory of u up to time t ends at the value u(t). By the properties of conditional expectations, this is also the average value of E[Z(t) | u_t^H] over all trajectories ending in the value u(t): that is, E{E[Z(t) | u_t^H] | u(t)} = E[Z(t) | u(t)]. These mathematical definitions are illustrated diagrammatically in Fig. 2.
We distinguish between two types of error that reduce fidelity between Z(t) and s(t).
Dynamical error becomes significant when the response time of the signaling network is comparable to or longer than the timescale on which the signal of interest, s(t), fluctuates. On average, the output Z(t) then responds to other features of the input history as well as to s(t). We define the dynamical error therefore as the difference between the average level of the output given a particular history of the input, u_t^H, and the average level of the output given the signal of interest (a function of u_t^H):

e_d(t) = E[Z(t) | u_t^H] − E[Z(t) | s(t)].  (1)

The magnitude (variance) of the dynamical error is equal to E{V[E[Z(t) | u_t^H] | s(t)]} [7]. For example, if the signal of interest is the current value of the input, u(t), then e_d(t) records a catch-up error if the network still 'remembers' (is still responding to) previous values of the input (Fig. 3). Since E[Z(t) | u_t^H] will generally be different for different input trajectories, it will generally differ from E[Z(t) | u(t)] (which is an average over all input trajectories that end at u(t), Fig. 2).
We can write the dynamical error as

e_d(t) = (E[Z(t) | s_t^H] − E[Z(t) | s(t)]) + (E[Z(t) | u_t^H] − E[Z(t) | s_t^H]).  (2)

If fluctuations in s(t) are slower than the response time of the system, then s(t) will be effectively constant over the 'portion' of its history detected by the output and the first term becomes zero because E[Z(t) | s_t^H] ≈ E[Z(t) | s(t)]. We note that the magnitude (variance) of e_d(t) is always non-zero if the magnitude of this first term is non-zero because the two terms in Eq. 2 are uncorrelated (Methods). The second term quantifies the difference between the average effect on the output, Z(t), exerted by the history of the signal of interest and the average effect on the output exerted by the history of the input. This term would be non-zero, for example, if the input u consists of multiple ligands that influence Z, perhaps because of cross-talk between signaling pathways, but the signal of interest is a function of the history of only one of those ligands. This second term is zero, however, for the systems we will consider.
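The timescale dependence of the dynamical error can be made concrete with a minimal sketch. The toy below is an illustrative linear sensor with a deterministic conditional mean (not one of the models analyzed here): the ratio of the dynamical-error magnitude to the variance of the faithfully transformed signal grows as the input fluctuates faster relative to the response time.

```python
import numpy as np

rng = np.random.default_rng(1)

def dyn_error_ratio(switch_rate, n_traj=2000, n_steps=3000, dt=0.01, d_z=1.0):
    """E[e_d^2] / V{E[Z|u(t)]} for a linear sensor dZbar/dt = u - d_z*Zbar
    driven by a telegraph input u(t) in {0, 1} (Euler discretization)."""
    u = rng.integers(0, 2, size=n_traj).astype(float)   # stationary start
    zbar = np.zeros(n_traj)                             # E[Z(t)|u_t^H] per history
    for _ in range(n_steps):
        flip = rng.random(n_traj) < switch_rate * dt
        u = np.where(flip, 1.0 - u, u)
        zbar += (u - d_z * zbar) * dt
    # faithfully transformed signal E[Z|u(t)]: average over histories ending at u(t)
    m = np.array([zbar[u == k].mean() for k in (0.0, 1.0)])
    e_d = zbar - m[u.astype(int)]                       # dynamical error
    return np.mean(e_d ** 2) / np.var(m[u.astype(int)])

slow, fast = dyn_error_ratio(0.05), dyn_error_ratio(5.0)
assert slow < fast    # faster input -> larger relative dynamical error
```

For this linear filter the ratio can be computed exactly as (autocorrelation rate of u)/(response rate d_z), so the two cases above differ by roughly two orders of magnitude.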
Mechanistic error is generated by the inherent stochasticity of the biochemical reactions that comprise the signaling network. We define mechanistic error as the deviation of the current value of the output from its average value given a particular history of the input:

e_m(t) = Z(t) − E[Z(t) | u_t^H].  (3)

(Figure 1 caption: We consider a 2-stage model of gene expression [22]. The extracellular environment or input, u(t), gives the current rate of transcription and the signal of interest is s(t) = u(t). We model u(t) either as a 2-state Markov chain with equal switching rates between the states, each state having unconditional probability 1/2 (A and C), or as proportional to a Poissonian birth-death process for a transcriptional activator, with proportionality constant 0.025 (B and D). The transformed signals E[Z(t) | u(t)] (in red, lower panels) are a perfect representation of u(t), although protein levels Z(t) (in blue) are not.)

Author Summary
Cells do not live in constant conditions, but in environments that change over time. To adapt to their surroundings, cells must therefore sense fluctuating concentrations and 'interpret' the state of their environment to see whether, for example, a change in the pattern of gene expression is needed. This task is achieved via the noisy computations of biomolecular networks. But what levels of signaling fidelity can be achieved and how are dynamic signals encoded in the network's outputs? Here we present a general technique for analyzing such questions. We identify two sources of signaling error: dynamic error, which occurs when the network responds to features of the input other than the signal of interest; and mechanistic error, which arises because of the inevitable stochasticity of biochemical reactions. We show analytically that increased biochemical noise can sometimes improve fidelity and that, for genetic autoregulation, feedback can be deleterious. Our approach also allows us to predict the dynamic signal for which a given signaling network has highest fidelity and to design networks to maximize fidelity for a given signal. We thus propose a new way to analyze the flow of information in signaling networks, particularly for the dynamic environments expected in nature.
Z(t) departs from its average (given the realized input history) because of biochemical stochasticity (Fig. 2). The magnitude of the mechanistic error is given by E[e_m(t)²], which equals E{V[Z(t) | u_t^H]}.
Mechanistic error is related to intrinsic noise. Intrinsic variation measures the expected variation in Z(t) given the history of all the extrinsic variables [7,8]. Extrinsic variables describe the influence of the rest of the cell and of the extracellular environment on, say, expression of a gene of interest [17] and would include, for example, levels of ATP and ribosomes as well as extracellular signals such as the input u. The magnitude of the mechanistic error measures, however, the expected variation in Z(t) given the history of just one extrinsic variable, the input u. Mechanistic variation therefore also includes the effects of fluctuations in the levels of ATP and ribosomes on the signaling mechanism and is always greater than or equal to the intrinsic variation.
We then define the fidelity error, e_f(t), to be the sum of these two errors:

e_f(t) = e_d(t) + e_m(t),  (4)

which has zero mean, as do e_d(t) and e_m(t). Fig. 1 shows fluctuating protein output levels, Z(t), for a network that has high fidelity (small errors) for the signal of interest, here the current state of the environment, u(t).

Orthogonal signal and error components
We can decompose the output Z(t) into the sum of the faithfully transformed or transmitted signal, E[Z(t) | s(t)], the dynamical error, and the mechanistic error:

Z(t) = E[Z(t) | s(t)] + e_d(t) + e_m(t),  (5)

for all times t ≥ 0. Eq. 5 is an orthogonal decomposition of the random variable Z(t): each pair of random variables on the right-hand side has zero correlation (Methods). The variance of Z(t) therefore satisfies

V[Z(t)] = V{E[Z(t) | s(t)]} + E[e_d(t)²] + E[e_m(t)²],  (6)

where the magnitude of the fidelity error is given by E[e_f(t)²], which is E[e_d(t)²] + E[e_m(t)²] because of the orthogonality. This magnitude of the fidelity error is also equal to the expected conditional variance of the output, E{V[Z(t) | s(t)]}. We note that we can generalize this decomposition, and thus extend our approach, for example, to study different components of the mechanistic error (Methods).
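The decomposition can be verified directly on a toy joint distribution (hypothetical numbers, chosen only to make all three components visible): the component variances sum to the output variance, and the pairwise correlations vanish.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical discrete toy: the input 'history' is (u1, u2), the signal of
# interest is the current value s = u2, and the output responds to the whole
# history plus mechanistic noise.
n = 100_000
u1 = rng.integers(0, 2, n).astype(float)    # earlier input value
u2 = rng.integers(0, 2, n).astype(float)    # current input value, s(t) = u2
noise = rng.normal(0.0, 0.5, n)             # mechanistic stochasticity
z = u1 + u2 + noise                         # output

signal = u1.mean() + u2     # faithfully transformed signal E[Z|s]
e_d = u1 - u1.mean()        # dynamical error   E[Z|history] - E[Z|s]
e_m = noise                 # mechanistic error Z - E[Z|history]

# Eq. 6: the three variance components sum to V[Z] ...
assert abs(np.var(z) - (np.var(signal) + np.mean(e_d**2) + np.mean(e_m**2))) < 0.02
# ... because the terms of Eq. 5 are pairwise uncorrelated.
assert abs(np.corrcoef(signal, e_d)[0, 1]) < 0.02
assert abs(np.corrcoef(e_d, e_m)[0, 1]) < 0.02
```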
To compare signaling by different biochemical mechanisms, we normalize Z(t) by the square root of its variance, writing Z̃(t), and define the fidelity as a signal-to-noise ratio:

fidelity = V{E[Z̃(t) | s(t)]} / E[ẽ_f(t)²] = (1 − E[ẽ_f(t)²]) / E[ẽ_f(t)²],  (7)

for some signal of interest, s(t). Eq. 7 is dimensionless and a monotonically decreasing function of E[ẽ_f(t)²]. Indeed, we have shown that the maximal mutual information between Z(t) and s(t) across all possible signal distributions is bounded below by a decreasing function of E[ẽ_f(t)²] (and so an increasing function of our fidelity), for a suitable choice of distribution of the signal s(t) and when E[Z(t) | s(t)] is an invertible function of s(t) [7].
Comparing biochemical systems using the fidelity measure is equivalent to comparison based on the magnitude of the fidelity error, E[ẽ_f(t)²], where ẽ_f(t) = Z̃(t) − E[Z̃(t) | s(t)] and the error is measured in units of the standard deviation of the output. Eq. 7 is maximized when E[ẽ_f(t)²] is minimized. One minus the magnitude of the fidelity error is the fraction of the variance in the output that is generated by the signal of interest. In information-theoretic approaches, normalizing the output by its standard deviation is also important, because the normalization allows determination of the number of 'unique' levels of output that can be distinguished from one another despite the stochasticity of the output, at least for Gaussian fluctuations [18].
When s(t) and Z(t) have a bivariate Gaussian distribution, the instantaneous mutual information, I[s(t); Z(t)], is monotonically related to the fidelity and exactly equal to −(1/2) log(1 − Corr[s(t), Z(t)]²), where Corr denotes the correlation coefficient. Also in this Gaussian case, E[ẽ_f(t)²] is equal to the minimum mean squared error (normalized by V[s(t)]) between s(t) and the linear, optimal estimate, E[s(t) | Z(t)]. (This is the optimal 'filter' when only the current output Z(t) is available, although typically a filter such as the Wiener filter would employ the entire history of Z up to time t.) Gaussian models of this sort for biochemical signaling motifs were considered in [19], with instantaneous mutual information expressed in terms of a signal-to-noise ratio equivalent (for their models) to the fidelity of Eq. 7. Such Gaussian models (if taken literally, rather than used to provide a lower bound on the information capacity [19]) would imply that the input-output relation, E[Z(t) | s(t)], is linear and that V[Z(t) | s(t)] does not depend on s(t) (by the properties of the multivariate normal distribution). Our approach requires neither assumption.
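In this Gaussian case the relation between information and fidelity can be written in closed form. The sketch below assumes that the fidelity of Eq. 7 equals the fraction of output variance explained by the signal divided by the unexplained fraction, so that the mutual information is (1/2) ln(1 + fidelity):

```python
import numpy as np

def gaussian_mi(rho):
    """Instantaneous mutual information (in nats) between jointly Gaussian
    s(t) and Z(t) with correlation coefficient rho."""
    return -0.5 * np.log(1.0 - rho**2)

def fidelity(rho):
    """Signal-to-noise fidelity, assuming the explained variance fraction of
    the normalized output is rho^2 (so the fidelity error is 1 - rho^2)."""
    return rho**2 / (1.0 - rho**2)

# The two measures are monotonically related: I = (1/2) ln(1 + fidelity).
for rho in (0.1, 0.5, 0.9, 0.99):
    assert abs(gaussian_mi(rho) - 0.5 * np.log(1.0 + fidelity(rho))) < 1e-12
```

This makes the monotonic relation in the text explicit: any change that raises the fidelity raises the instantaneous mutual information, and vice versa, in the Gaussian case.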
Whenever E[Z(t) | s(t)] is a linear function of s(t), that is E[Z(t) | s(t)] = c + g s(t) for constants c and g, we consider g to be the gain for the signal of interest s(t) [19]. The fidelity then depends on the ratio of the squared gain to the fidelity error and is given by

fidelity = g² V[s(t)] / (E[e_d(t)²] + E[e_m(t)²]).  (8)

The dynamic signal with maximum fidelity for a given input process. Suppose that the input process u(t) is given and we want to choose, from among all functions or statistics of the input history, that 'signal of interest', s(t), for which the network achieves the highest fidelity. An immediate implication of Eq. 7 is that it identifies the signal of interest with the highest fidelity. Since E{Z(t) | E[Z(t) | u_t^H]} = E[Z(t) | u_t^H], the dynamical error is zero when s(t) = E[Z(t) | u_t^H] (Eq. 1). This choice of s(t) therefore maximizes fidelity for all signaling networks: it minimizes the magnitude of the fidelity error (Eq. 6), because E[e_m(t)²] = E{V[Z(t) | u_t^H]} and V[Z(t)] do not depend on s(t). The variance of Z only changes with the biochemistry of the network and the input process. We will give an example of such a signal of interest that maximizes fidelity in Eq. 9.

(Figure 3 caption: In B, the relative protein lifetime, t_Zu = d_u/d_Z, is higher than optimal (t_Zu = 0.37) and fidelity is 2.2; in C, t_Zu is optimal (t_Zu = 0.02) and fidelity is 10.1; and in D, t_Zu is lower than optimal (t_Zu = 0.003) and fidelity is 5.3. Dynamical error, e_d(t), is the difference between E[Z(t) | u_t^H] (black) and the faithfully transformed signal E[Z(t) | u(t)] (red), and decreases from B to D, while mechanistic error increases. The lower row shows the magnitudes of the relative dynamical error (black) and relative mechanistic error (orange). All rate parameters are as in Fig. 1.)

Analyzing networks with fluctuating inputs
Methods of analysis of stochastic systems with dynamic inputs are still being developed. We argue that deriving expectations of network components conditional upon the histories of stochastic inputs is a powerful approach. We have developed three methods to determine components of Eqs. 5 and 6 (SI): (i) an exact analytical method, applicable to linear cascades and feedforward loops, based on the observation that moments calculated from a chemical master equation with propensities that are the appropriate functions of time are conditional moments, where the conditioning is on the history of the inputs at time t and on the initial conditions; (ii) a Langevin method that can include non-linearities, requires stationary dynamics, and whose accuracy as an approximation improves as typical numbers of molecules grow; and (iii) a numerical method, applicable to arbitrary biomolecular networks and signals of interest, based on a modification of the Gillespie algorithm allowing time-varying, stochastic propensities, that uses a 'conjugate' reporter to estimate the mechanistic error [7] and a simulated sample from the distribution of the signal-output pair, [s(t), Z(t)], to estimate the conditional means, E[Z(t) | s(t)].
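As a sketch related to method (iii): one convenient way to simulate a time-varying stochastic propensity is to embed the input process itself in the simulation. When the input is a telegraph (two-state Markov) process, its switching can be included as an extra reaction channel, so that all propensities are constant between events and the standard Gillespie algorithm remains exact. All rate values below are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def ssa_two_stage(t_end, switch=(0.5, 0.5), u_levels=(1.0, 10.0),
                  v=5.0, d_m=1.0, d_z=0.2):
    """Exact SSA for the two-stage model driven by a telegraph transcription
    rate u(t). The input's switching is itself a reaction channel, so all
    propensities are constant between events."""
    t, state, m, z = 0.0, int(rng.integers(0, 2)), 0, 0
    while True:
        a = np.array([switch[state],     # input switches its state
                      u_levels[state],   # transcription at rate u(t)
                      d_m * m,           # mRNA degradation
                      v * m,             # translation
                      d_z * z])          # protein degradation
        cum = np.cumsum(a)
        t += rng.exponential(1.0 / cum[-1])
        if t > t_end:
            return state, m, z
        r = int(np.searchsorted(cum, cum[-1] * rng.random(), side='right'))
        if r == 0:
            state = 1 - state
        elif r == 1:
            m += 1
        elif r == 2:
            m -= 1
        elif r == 3:
            z += 1
        else:
            z -= 1

z_vals = np.array([ssa_two_stage(40.0)[2] for _ in range(150)])
# sanity check against the stationary mean E[Z] = v E[u] / (d_m d_z) = 137.5
assert abs(z_vals.mean() - 137.5) / 137.5 < 0.2
```

Sampling Z(t) at a fixed time across many such runs, together with the input state at that time, gives the simulated signal-output sample from which the conditional means E[Z(t) | s(t)] can be estimated.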
We note that our methods require that the inputs can be modeled as exogenous processes that are unaffected by interactions with the biochemistry of the signaling network (a distinction emphasized in [20]). By an exogenous process we mean one whose future trajectory is independent, given its own history, of the history of the biochemical system. This model for an input is reasonable, for example, when the input is the level of a regulatory molecule, such as a transcription factor, that has relatively few binding sites in the cell.

Analyzing signal representation by gene expression
Transcriptional regulation is a primary means by which cells alter gene expression in response to signals [21]. We now provide an exact, in-depth analysis of a two-stage model of gene expression [22] where the fluctuating input, u, is the rate (or propensity) of transcription and the signal of interest, s(t), equals the current value of the input, u(t). For example, u(t) may be proportional to the extracellular level of a nutrient or the cytosolic level of a hormone regulating a nuclear hormone receptor.
The cellular response should account for not only the current biological state of u but also future fluctuations. If we consider an input that is a Markov process, future fluctuations depend solely on the current value u(t), and the cell would need only to 'track' the current state as effectively as possible and then use the representation in protein levels to control downstream effectors.
These ideas are related to those underlying predictive information [23,24].
Our analysis requires only the stationary mean and variance of the input u(t) and that u(t) has exponentially declining 'memory' (SI). Consequently, the autocorrelation function of u is a single exponential with autocorrelation time 1/d_u (the lifetime of fluctuations in u). Examples include a birth-death process or a two-state Markov chain. We can generalize using, for example, weighted sums of exponentials to flexibly model the autocorrelation function of u.
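For instance, a two-state Markov chain with switching rate r in each direction has autocorrelation exp(−2rτ), so that d_u = 2r. A quick numerical check (hypothetical rate values):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-state Markov chain, switching rate r in each direction, on a fine grid.
r, dt, n = 1.0, 0.01, 400_000
u = np.empty(n)
state = 0
for i in range(n):
    if rng.random() < r * dt:
        state = 1 - state
    u[i] = state

tau = 0.5
lag = int(tau / dt)
uc = u - u.mean()
acf = np.mean(uc[:-lag] * uc[lag:]) / uc.var()

# exponentially declining memory: autocorrelation exp(-d_u * tau) with d_u = 2r
assert abs(acf - np.exp(-2 * r * tau)) < 0.05
```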
Solving the 'conditional' master equation with a time-varying rate of transcription, we find that the conditionally expected protein level is a double weighted 'sum' of past levels of the signal u (SI):

E[Z(t) | u_t^H] = v ∫_0^t e^{−d_Z(t−t')} ∫_0^{t'} e^{−d_M(t'−t'')} u(t'') dt'' dt',  (9)

where for simplicity the equation is stated for the case of zero initial mRNA and protein. We denote the rate of translation per molecule of mRNA by v, the rate of mRNA degradation per molecule by d_M, and the rate of degradation of protein per molecule by d_Z. The most recent history of the input u exerts the greatest impact on the current expected output, with the memory of protein levels for the history of the input determined by the lifetimes of mRNA and protein molecules. Eq. 9 gives the signal of interest, s(t) (a function of the history of the fluctuating transcription rate), that gene expression transmits with the highest fidelity to protein levels (see Eq. 8). Notice that the current value of the input, u(t), cannot be recovered exactly from E[Z(t) | u_t^H], which is therefore not a perfect representation of u(t).
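The structure of Eq. 9, the input history filtered first through the mRNA lifetime and then through the protein lifetime, can be checked numerically: integrating the conditional-mean ODEs dm/dt = u(t) − d_M m and dz/dt = v m − d_Z z reproduces the nested exponentially weighted sum (all parameter values below are hypothetical):

```python
import numpy as np

# Conditional mean of the two-stage model for a given input history:
# dm/dt = u(t) - d_M m,  dz/dt = v m - d_Z z, zero initial mRNA and protein.
dt, T = 0.002, 10.0
t = np.arange(0.0, T, dt)
u = 1.0 + 0.5 * np.sign(np.sin(2 * np.pi * t / 3.0))   # hypothetical input trajectory
v, d_M, d_Z = 2.0, 1.0, 0.5

# (a) direct Euler integration of the conditional-mean ODEs
m = np.zeros_like(t)
z = np.zeros_like(t)
for i in range(1, len(t)):
    m[i] = m[i-1] + (u[i-1] - d_M * m[i-1]) * dt
    z[i] = z[i-1] + (v * m[i-1] - d_Z * z[i-1]) * dt

# (b) the double weighted sum over the input history:
# z(t) = v * int_0^t e^{-d_Z (t-t')} int_0^{t'} e^{-d_M (t'-t'')} u(t'') dt'' dt'
inner = np.array([np.sum(np.exp(-d_M * (ti - t[:i])) * u[:i]) * dt
                  for i, ti in enumerate(t)])
outer = v * np.array([np.sum(np.exp(-d_Z * (ti - t[:i])) * inner[:i]) * dt
                      for i, ti in enumerate(t)])

assert np.max(np.abs(z - outer)) < 0.1   # the two computations agree
```

Because the kernels are exponentials, the most recent input values carry the largest weights, which is the 'memory' property described in the text.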
We find, by contrast, that E[Z(t) | u(t)] is an invertible, linear function of u(t):

E[Z(t) | u(t)] = E[Z] + [v / ((d_u + d_M)(d_u + d_Z))] (u(t) − E[u]),  (10)

when the dynamics reach stationarity, and that the stationary unconditional mean is E[Z] = v E[u]/(d_M d_Z) (SI). Notice that E[Z(t) | u(t)] does not converge for large t to the average 'steady-state' solution for a static u, but depends on d_u. The discrepancy between Eqs. 9 and 10 results in dynamical error with non-zero magnitude (Fig. 3B). Using our solutions for the conditional moments, we can calculate the variance components of Eq. 6 (SI). For the faithfully transformed signal, when s(t) = u(t), we obtain its variance in Eq. 11 (SI), where t_Mu = d_u/d_M is the ratio of the lifetime of mRNA to the lifetime of fluctuations in u, and t_Zu = d_u/d_Z is the ratio of the lifetime of protein to the lifetime of fluctuations in u. The magnitude of the dynamical error (Eq. 12, SI) is in this case proportional to Eq. 11, and the magnitude of the mechanistic error satisfies Eq. 13 (SI). When the autocorrelation time of u(t) becomes large (t_Mu and t_Zu tending to zero), the dynamical error e_d(t) therefore vanishes (Eq. 12). In this limit, the output effectively experiences a constant input u(t) during the time 'remembered' by the system. To gain intuition about the effect of relative lifetimes on the fidelity of signaling, we first suppose the mechanistic error is small relative to V[Z]. Eq. 7 then becomes simply 1/t_Zu if the protein lifetime is large relative to the mRNA lifetime, t_Mu/t_Zu → 0 (as expected for many genes in budding yeast [25]). The fidelity thus improves as the protein lifetime decreases relative to the lifetime of fluctuations in u, and the output is able to follow more short-lived fluctuations in the signal. This observation is only true, however, for negligible mechanistic error.

Tradeoffs between errors can determine signaling fidelity
It is the aggregate behavior of dynamical and mechanistic errors as a fraction of the total variance of the output that determines signaling fidelity, Eq. 7. Effective network designs must sometimes balance trade-offs between the two types of error.
Increasing biochemical noise can enhance signaling fidelity. Predicting changes in fidelity requires predicting whether changes in the magnitude of the dynamical error relative to V[Z], denoted E[ẽ_d(t)²], dominate or are dominated by changes in the magnitude of the mechanistic error relative to V[Z], denoted E[ẽ_m(t)²]. For example, shorter protein lifetimes can decrease the absolute value of both the dynamical error and the mechanistic error (the output has a lower mean; Eq. 13). We calculated for all of parameter space the sensitivities of the magnitudes of the two (relative) errors with respect to changes in the protein lifetime, 1/d_Z (using Eqs. 11, 12, and 13). We found that although the relative magnitude of the dynamical error decreases with shorter protein lifetime, the relative magnitude of the mechanistic error increases. The sign of the overall effect on the relative fidelity error can therefore be positive or negative (Fig. 3A), and consequently fidelity is maximized by a particular protein lifetime, 1/d_Z (Fig. 3B-D).
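The trade-off can be made explicit in a reduced setting. The sketch below is an illustrative one-stage toy (protein born directly at rate u(t), telegraph input, conditionally Poisson output), a simplification of the paper's two-stage model, with the error components derived in the comments under those stated assumptions: the fidelity for s(t) = u(t) vanishes for both very long and very short protein lifetimes and is maximized in between.

```python
import numpy as np

# One-stage toy: birth rate u(t) (telegraph, autocorrelation rate lam = d_u),
# protein degradation rate d. Components for s(t) = u(t), assuming a linear
# gain 1/(d + lam) and a conditionally Poisson output:
#   signal      V{E[Z|u(t)]} = V[u] / (d + lam)^2
#   dynamical   E[e_d^2]     = V[u] * lam / (d * (d + lam)^2)
#   mechanistic E[e_m^2]     = E[u] / d        (equals the mean, Poisson)
lam, Eu, Vu = 1.0, 5.0, 4.0   # hypothetical input statistics

def fidelity(d):
    signal = Vu / (d + lam) ** 2
    dyn = Vu * lam / (d * (d + lam) ** 2)
    mech = Eu / d
    return signal / (dyn + mech)

d_grid = np.logspace(-2, 2, 201)
f = np.array([fidelity(d) for d in d_grid])
best = int(np.argmax(f))

# fidelity is maximized at an intermediate degradation rate (protein lifetime):
# slow decay -> dynamical error dominates; fast decay -> mechanistic error dominates
assert 0 < best < len(d_grid) - 1
assert f[best] > 2 * f[0] and f[best] > 2 * f[-1]
```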
Similar trade-offs have been observed before in signal transduction. For example, tuning the protein's degradation rate can also maximize the instantaneous mutual information, at least for Gaussian models [19]. As the protein degradation rate increases, although the fidelity error E{V[Z(t) | u(t)]} decreases, there is a trade-off because the gain also decreases. In our model the gain, v/((d_u + d_M)(d_u + d_Z)) (Eq. 10), is decreasing in d_Z and we observe the same trade-off.
Further, the trade-off between the two relative errors has some similarities with trade-offs that occur with Wiener filtering [26]. There, however, the entire output history is used to optimally estimate (or reconstruct) the signal of interest. In contrast, we consider representation of s(t) only by the current output Z(t).
The rule-of-thumb that increasing stochasticity or noise in signaling mechanisms reduces signaling fidelity is broken in this example. Such statements typically ignore the effect of dynamical error, but here reductions in relative dynamical error can more than compensate for gains in relative mechanistic error. Both errors should be included in the analysis.
Feedback can harm signaling fidelity. Intuitively we might expect that feedback can improve signaling fidelity because feedback affects response times. For example, autoregulation affects the mean time to initiate transcription: it is reduced by negative autoregulation [27] and increased by positive autoregulation [28]. We introduce autoregulation into our model of gene expression, interpreting again u(t) as proportional to the fluctuating level of a transcriptional activator and allowing the protein Z to bind to its own promoter. For negative feedback, the rate of transcription becomes u(t)/[1 + K_1 Z(t)]; for positive feedback, it becomes [w K_1 Z(t) + u(t)]/[1 + (K_1 + K_2) Z(t)], with w the rate of transcription from the active promoter (SI). We impose u(t) < w K_1/(K_1 + K_2) so that the transcription rate increases with Z(t) for a given u(t). Increasing K_1 increases the strength of the feedback in both cases. We note that other models of autoregulation may give different conclusions, and that the transcription rate depends linearly on u(t) in our models.
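These two propensities can be written down directly; a quick check (with hypothetical parameter values) confirms that the negative-feedback rate decreases with Z, that the positive-feedback rate increases with Z precisely under the imposed condition u < w K_1/(K_1 + K_2), and the strong-feedback limits discussed below.

```python
def neg_feedback_rate(u, z, K1):
    """Transcription propensity with negative autoregulation."""
    return u / (1.0 + K1 * z)

def pos_feedback_rate(u, z, w, K1, K2):
    """Transcription propensity with positive autoregulation."""
    return (w * K1 * z + u) / (1.0 + (K1 + K2) * z)

# hypothetical parameters
u, w, K1, K2 = 2.0, 10.0, 0.5, 0.2
assert u < w * K1 / (K1 + K2)      # the imposed condition

zs = [0.0, 1.0, 5.0, 50.0]
neg = [neg_feedback_rate(u, z, K1) for z in zs]
pos = [pos_feedback_rate(u, z, w, K1, K2) for z in zs]
assert all(a > b for a, b in zip(neg, neg[1:]))   # decreasing in Z
assert all(a < b for a, b in zip(pos, pos[1:]))   # increasing in Z

# strong-feedback limits (fixed positive Z): the rates lose their u-dependence,
# tending to zero (negative) or to the constant w (positive)
assert neg_feedback_rate(u, 5.0, 1e9) < 1e-8
assert abs(pos_feedback_rate(u, 5.0, w, 1e9, K2) - w) < 1e-6
```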
We let the signal of interest s(t) again be u(t). To proceed, we calculate the sensitivities of the magnitudes of the fidelity errors using our Langevin method, with the input an Ornstein-Uhlenbeck process. We determine their signs with respect to changes in feedback strength by randomly sampling a biophysically plausible parameter space (SI). As we sample, the parameter space governing fluctuations of u(t) is also explored. We find excellent agreement between our Langevin and numerical, simulation-based approaches (SI). Since we calculate sensitivities, we are examining the effect of changing the feedback strength, K_1, while holding other network parameters constant. This process imitates both the incremental change often expected during evolution and the way that network properties tend to be manipulated experimentally. When comparing the fidelity error of the signal representations for different K_1 using Eq. 7, we implicitly normalize the variance of the output to one in order to ensure a fair comparison.
Consider first the static case where the fluctuations in u(t) are sufficiently slow relative to the timescales of the transduction mechanism that the input is effectively constant (d_u → 0 with fixed V[u]). As expected (Eq. 1), e_d converges to zero as d_u → 0. With a static input, negative autoregulation is expected to reduce the variance of the response, Z(t), for each value of the input [29]. The mechanistic variance is therefore expected to decrease, and does so in all models sampled as K_1 increases. We can show analytically (SI) that the suppression of mean levels also decreases the variance of the conditional mean, the 'signal' variance V{E[Z(t) | u]}, and so the total variance of the output decreases. We find that the decrease in mechanistic variance cannot outweigh the decreased signal variance, and the fidelity always decreases with increasing feedback (increasing K_1). Such a reduction in information transfer through negative feedback has recently been observed experimentally [10]. For positive autoregulation, the mechanistic variance increases with K_1, and this increase dominates any increase in the signal variance observed at low values of K_1. The relative mechanistic error again rises and fidelity therefore decreases.
For a static u, therefore, neither negative nor positive autoregulation improves signaling fidelity. As the strength of feedback becomes large, the transcriptional propensity tends to zero for negative feedback and to the constant w for positive feedback (with fixed positive Z), and the propensities for different u become indistinguishable as functions of Z (SI). Signaling is correspondingly compromised in both cases.
These findings essentially still hold when the input is dynamic. For negative autoregulation, all three components of the output variance decrease with K_1. The relative dynamical error decreases with K_1, but this decrease is typically outweighed by an increase in the relative mechanistic error, and the overall fidelity deteriorates (>85% of cases sampled and Fig. 4). Any reduction in fidelity error, E[ẽ_f(t)²], was negligible (the difference from the fidelity error when K_1 = 0 was always less than 0.001). We note that this conclusion is in contradistinction to the finding (using a linear Gaussian model) that negative feedback does not affect information transfer between entire input and output trajectories [30]. For positive feedback, both the mechanistic variance and the relative mechanistic error increase with K_1 (for all models sampled). This mechanistic effect dominates the relative dynamical error, which can change non-monotonically with K_1, and fidelity again deteriorates.
Our results are consistent with the intuition that, although negative feedback reduces the absolute mechanistic error (fewer molecules) and absolute dynamical error (faster response times), negative feedback also decreases the dynamic range of the output. The fidelity therefore does not improve because the output distributions corresponding to each value of u(t), despite being tighter, are also located closer together (Fig. 4). Positive feedback acts in the opposite way, with increasing variance in the (conditional) output distributions overwhelming any increase in the dynamic range of the output.
To explore what happens when the effect of feedback on the dynamic range is directly controlled, we investigated the effect of varying K_1 in our negative feedback model while simultaneously altering the translation rate (v) to hold the system's 'gain' constant (SI). In our model, the faithfully transformed signal is a linear function of u(t): E[Z(t) | u(t)] = c + g u(t), where g is the gain. If only K_1 is varied and the translation rate kept fixed, then the gain is always less than the gain when K_1 is zero. The signal variance or 'dynamic range', V{E[Z(t) | u(t)]}, is equal to g² V[u(t)], which is therefore also held constant as we vary K_1 at constant gain. The fidelity is g² V[u(t)]/(E[e_d(t)²] + E[e_m(t)²]). For static signals, we again find the fidelity almost always decreases with increasing negative feedback strength, K_1: the absolute mechanistic error now increases with increasing K_1, presumably because of the decreased rate of translation. For dynamic signals we find, for the vast majority of cases, an optimal feedback strength, K_1, above and below which fidelity deteriorates.
With increased K_1, although the absolute mechanistic error increases, the absolute dynamical error decreases, when we compare randomized initial parameterizations with the K_1 that maximizes fidelity. When K_1 decreases compared to its initial value, these errors show the opposite behavior. At constant gain, the trade-off between dynamical and mechanistic error is thus still observed, as is the harmful effect of too strong a negative feedback.
Combining outputs from multiple cells improves fidelity. When a physiological response corresponds to the average output of multiple cells, the magnitude of the mechanistic error is that for a single cell divided by the number of cells in the group (for identical and independent cells receiving the same input). This reduction arises because the magnitude of the mechanistic error is now the variance of the average mechanistic error of the cells in the group. The dynamical error, Eq. 1, however, is the same as the dynamical error of each individual cell: expectations of the average response equal the expectations of the response of each single cell when the cells are identical. Therefore the fidelity for any signal of interest, s(t), increases if the average or aggregate output of a group of cells is used (SI). Measuring the collective response of small groups of cells, Cheong et al. indeed found that information capacity increased significantly compared to that of a single cell [10], and averaging of individual cellular responses is believed to increase the precision of gene expression during embryonic development [31].
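The variance arithmetic behind this claim can be checked with a toy simulation (a sketch, not the paper's biochemical model): each cell's output is written as a shared conditional mean given the input history (carrying the signal plus the dynamical error) plus independent mechanistic noise, so averaging N cells divides the mechanistic variance by N while leaving the dynamical error untouched. All rates and variances below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

T, N = 200_000, 100          # time points, cells in the group
sig_d2, sig_m2 = 1.0, 4.0    # illustrative dynamical / mechanistic variances

signal = rng.normal(size=T)                        # faithfully transformed signal
m = signal + rng.normal(0, np.sqrt(sig_d2), T)     # conditional mean, shared by all cells
cells = m[:, None] + rng.normal(0, np.sqrt(sig_m2), (T, N))  # independent mechanistic noise

avg = cells.mean(axis=1)     # collective output of the group

# The dynamical error is common to every cell; only the mechanistic
# error averages down, by a factor of N.
err_single = np.mean((cells[:, 0] - signal) ** 2)  # ~ sig_d2 + sig_m2
err_group = np.mean((avg - signal) ** 2)           # ~ sig_d2 + sig_m2 / N
```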
Although negative feedback reduces relative dynamical error, it increases relative mechanistic error in individual cells. At the level of the collective response of multiple cells, the deleterious effect on mechanistic error is attenuated (Fig. 5). Using a population of 100 independent and identical cells, we find that adding negative feedback now improves fidelity in the majority of cases, with moderate reductions in (relative) fidelity error (≤ 0.10) for our parameter space. Adding positive feedback never significantly improves overall fidelity (all observed reductions ≤ 0.02): as the strength of feedback increases, the underlying state of the input becomes more difficult to infer (the conditional distributions overlap more) because the increasing (relative) mechanistic error dominates the decreasing (relative) dynamical error. Note, however, the decrease in the (relative) dynamical error when u(t) is in its high state (the yellow conditional distribution in Fig. 5), because stronger negative feedback gives faster initiation of transcription; transcription propensities are given by u(t)/[1 + K_1 Z(t)], and all parameters except K_1 are as in Fig. 3B. Furthermore, negative feedback can often significantly reduce the number of cells needed to achieve the same fidelity as, say, 100 cells that lack feedback (fewer than 10 cells are needed 22.5% of the time, and fewer than 50 cells 48% of the time, when sampling from our parameter space).

Designing dynamic networks in synthetic biology
Our framework naturally adapts to the scenario of controlling a network output to approach a desired 'target' response when, for example, the cell's environment changes. Combined with model search procedures for synthetic design [32], it is a promising approach to the design of synthetic biomolecular networks. If the target response is given by r(t), which is a function of the input history, then to guide the design process, we can decompose the error Z(t) − r(t) analogously to Eq. 5 and find an equivalent to Eq. 6, a dissection of the network performance into orthogonal components (SI).

Discussion
Cells use the information conveyed by signaling networks to regulate their behavior and make decisions. Not all features of the input trajectory will, however, be relevant for a particular decision, and we define the fidelity between the output of the network and a signal of interest, s(t), which is a function of the input trajectory. Information encoded in upstream fluctuations must eventually either be lost or encoded in current levels of cellular constituents. We have therefore focused on the fidelity with which s(t) is represented by the current output, Z(t).
Using an orthogonal decomposition of the network's output into the faithfully transformed signal and error terms, we are able to identify two sources of error: dynamical and mechanistic. We assume the transformed signal, E[Z(t)|s(t)], to be an invertible function of s(t). The aggregate behavior of the two types of error determines the signaling fidelity, and ignoring either may lead to erroneous conclusions. We interpret Z(t) as the current cellular estimate or 'readout' of the faithfully transformed signal. The magnitude of the fidelity error relative to the variance in Z, Eq. 7, is a dimensionless measure of the quality of that estimate since E[ẽ_f²(t)] = E{(Z(t) − E[Z(t)|s(t)])²}/V[Z(t)]. Furthermore, we have shown that E[ẽ_f²(t)] is related to the mutual information between the input and output [7].
To apply our approach experimentally, we can use microfluidic technology to expose cells to the same controlled but time-varying input in the medium [33], and a fluorescent reporter to monitor the network output, Z(t). This reporter could measure, for example, a level of gene expression or the extent of translocation of a transcription factor. The transformed signal, E[Z(t)|s(t)], and its variance (for a given probability distribution of the input process) can then be estimated with sufficient amounts of data by monitoring Z(t) in each cell and s(t) in the microfluidic medium. We can determine the mechanistic error by measuring the average squared difference between the output of one cell and that of another, because the outputs of two cells are conditionally independent given the history of the input [7], and hence determine the dynamical error by applying Eq. 6.
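The estimation procedure just described can be sketched numerically (a toy model with arbitrary noise variances, not data from the proposed experiment): because the outputs of two cells share the same conditional mean given the input history but have independent mechanistic noise, half the mean squared difference between the two cells estimates the mechanistic error, and subtracting it from the total fidelity error recovers the dynamical error.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200_000

signal = rng.normal(size=T)            # transformed signal (illustrative)
m = signal + rng.normal(0, 1.0, T)     # conditional mean given the input history
z1 = m + rng.normal(0, 2.0, T)         # cell 1: independent mechanistic noise
z2 = m + rng.normal(0, 2.0, T)         # cell 2: independent mechanistic noise

# Half the mean squared cell-to-cell difference estimates the mechanistic
# error, since the shared conditional mean cancels in z1 - z2.
e_m2_hat = 0.5 * np.mean((z1 - z2) ** 2)
e_f2_hat = np.mean((z1 - signal) ** 2)    # total fidelity error of one cell
e_d2_hat = e_f2_hat - e_m2_hat            # dynamical error by subtraction
```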
Our analysis is complementary to one based on information theory and the entire distribution of input and output [7]. Without making strong assumptions about the network and the input, calculation of mutual information is challenging for dynamic inputs. Previous work has considered either the mutual information between entire input and output trajectories with a Gaussian joint distribution of input and output [19,34], or the 'instantaneous' mutual information between input and output at time t [19] (applicable in principle to non-Gaussian settings). Our approach, however, depends only on conditional moments and avoids the need to fully specify the distribution of the input process, which is often poorly characterized.
The environments in which cells live are inherently dynamic and noisy. Here we have developed mathematical techniques to quantify how cells interpret and respond to fluctuating signals given their stochastic biochemistry. Our approach is general and will help underpin studies of cellular behavior in natural, dynamic environments.

Orthogonality of transformed signal, dynamical error and mechanistic error
Define e_s(t) = E[Z(t)|s(t)] − E[Z(t)], the transformed signal with zero mean. Then the signal and error components of Eq. 5 are pairwise uncorrelated: E[e_s(t)e_d(t)] = E{e_s(t)E[e_d(t)|s(t)]} = 0 and E[e_s(t)e_m(t)] = E{e_s(t)E[e_m(t)|u_{H_t}]} = 0, where u_{H_t} denotes the history of the input up to time t.

Orthogonal decomposition of a random variable based on a filtration
Eq. 5 is a special case of the following general decomposition for any random variable (with finite expectation), here denoted Z. Consider a filtration, or increasing sequence of conditioning 'information sets', {H_0, H_1, ..., H_k}, where k ≥ 1 and H_0 = {∅, Ω}. Let e_i = E[Z|H_i] − E[Z|H_{i−1}] for i = 1, ..., k, and let e_{k+1} = Z − E[Z|H_k]. Then the decomposition satisfies E[e_i e_j] = 0 for all i ≠ j, since the sequence {e_i; i = 1, ..., k+1} is a martingale difference sequence with respect to the filtration (SI). Therefore, V[Z] = Σ_{i=1}^{k+1} E[e_i²].
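The filtration decomposition can be verified numerically in a simple special case (a sketch under strong assumptions: Z = X + Y + W with X, Y, W independent, so the conditional expectations E[Z|H_1] and E[Z|H_2], for H_1 = σ(X) and H_2 = σ(X, Y), have closed forms and the martingale-difference terms reduce to the centred summands).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
X = rng.normal(0, 1, n)
Y = rng.normal(0, 2, n)
W = rng.normal(0, 3, n)
Z = X + Y + W

# With independent summands, E[Z|H_1] - E[Z] = X - E[X], and so on;
# sample means stand in for the exact expectations.
e1 = X - X.mean()      # E[Z|H_1] - E[Z],     H_1 = sigma(X)
e2 = Y - Y.mean()      # E[Z|H_2] - E[Z|H_1], H_2 = sigma(X, Y)
e3 = W - W.mean()      # Z - E[Z|H_2]

# Pairwise orthogonality and the variance decomposition V[Z] = sum E[e_i^2]
cross = max(abs(np.mean(e1 * e2)), abs(np.mean(e1 * e3)), abs(np.mean(e2 * e3)))
total = np.mean(e1**2) + np.mean(e2**2) + np.mean(e3**2)
```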

Supporting Information
Text S1 The complete supporting information is provided as Text S1. (PDF)