Motifs, Control, and Stability

The interactions of networks of transcription factors and signaling molecules can be understood, in part, through concepts from control theory and engineering.

A key starting point in developing a conceptual and theoretical bridge between engineering and biology is robustness, the preservation of particular characteristics despite uncertainty in components or the environment (1,2). Biologists and biophysicists new to studying complex networks often express surprise at a biological network's apparent robustness (3). They find that "perfect adaptation" and homeostatic regulation are robust properties of networks (4,5), despite "exploratory mechanisms" that can seem gratuitously uncertain (6,7,8). Some even conclude that these mechanisms and their resulting features seem absent in engineering (8,9). Ironically, however, it is in the nature of robustness and complexity that biology and advanced engineering are most alike (10). Good design in both cases (e.g. cells and bodies, cars and planes) means users are largely unaware of hidden complexities, except through system failures. Furthermore, the robustness and fragility features of complex systems are shared and necessary. Although the need for universal principles of complexity and corresponding mathematical tools is widely recognized (11), sharp differences arise as to what is fundamental about complexity and what mathematics is needed (12). This tutorial is based on (2) and presents the most elementary aspects of well-known results in control theory.
Protocols are the most important aspect of modularity, and the most complex and critical protocols are those for feedback control and for the sensing, computing, communication, and actuation that implement it. Feedback control is both a powerful and a dangerous strategy for creating robustness to external disturbances and internal component variations. Properly balanced, it delivers such a huge benefit that both engineers and evolution capitalize extensively on feedback to build and support complex systems. Detailed elaboration of the nature of regulatory feedback is well beyond the scope of this tutorial, but an elementary "toy" model illustrates both the necessity of feedback to the function of complex systems and its "conservation of fragility" law. This is arguably the most critical and rigorously established robustness tradeoff in complex systems.
In most technologies as well as biochemistry it is relatively easy to build either uncertain, high-gain components or precise, low-gain ones, but the precise, high-gain systems essential to both biology and technology are impossible or prohibitively expensive to make, except using feedback strategies such as in Fig. 1. The simplest case to analyze is steady-state gain, where after some transient, r and d are held constant, and y too approaches a constant y = Rr + Sd (13). Solving y = d + ACy + Ar gives y = (A/(1 − AC))r + (1/(1 − AC))d, so that R = A/(1 − AC) and S = 1/(1 − AC). Ideally, perfect control would have |S| = 0, since that gives y = −r/C (R = −1/C), completely independent of arbitrary variations in A and d. This can be achieved asymptotically: if A → ∞ and −1/C >> 1, then |S| → 0, F → −∞, and y → −r/C. Then R amplifies r and is perfectly robust to the external disturbance d and to variations in A (14). Choosing C small and precise, with A sufficiently large and even sloppy, is one effective, efficient, and robust way to make y a high-gain function of r. |S| measures the deviation from perfect control, and feedback can either attenuate or greatly amplify the effects of uncertainties. Defining fragility as F = log|S|, note that F < 0 iff |S| < 1 iff log|S| < 0. F > 0 (|S| > 1) amplifies d and uncertainty in A, and |S| → ∞ makes F → ∞ (15).

Dynamics
This story is incomplete and even misleading without dynamics. The simplest possibility is for A and C to be 1st-order differential equations:

(1)   C: x′ = k1y − k2x,  u = r − x;   A: a′ = gu,  y = a + d

C is a low-pass filter with internal state x and parameters k1 and k2. A is a pure integrator with state a and gain g (16), and we'll assume that g > 0. This system can be written in "state space" form as

(2)   x′ = −k2x + k1a + k1d
      a′ = −gx + gr

The eigenvalues λj of this linear system all satisfy Re λj < 0 iff the characteristic polynomial λ² + k2λ + gk1 has positive coefficients, that is, iff gk1 > 0 and k2 > 0, or equivalently (given g > 0) k1 > 0 and k2 > 0. Stability here means the solution in the states x and a converges to the origin for all initial conditions, and any bounded input in r and/or d gives bounded state. For this system these two notions are equivalent, but in general there are a variety of notions of stability.
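The eigenvalue test above is easy to check numerically. A minimal sketch, assuming the reconstructed unforced dynamics x′ = k1y − k2x, a′ = g(r − x) with r = d = 0 (function names are ours):

```python
import numpy as np

# State matrix of the unforced toy loop, states (x, a); this assumes the
# reconstructed dynamics x' = k1*y - k2*x, a' = g*(r - x) with r = d = 0.
def state_matrix(g, k1, k2):
    return np.array([[-k2, k1],
                     [-g, 0.0]])

def stable(g, k1, k2):
    # Stable iff every eigenvalue has strictly negative real part
    return bool(np.all(np.linalg.eigvals(state_matrix(g, k1, k2)).real < 0))

print(stable(1.0, 0.01, 1.0))   # k1 > 0, k2 > 0: stable
print(stable(1.0, -0.01, 1.0))  # flipping the sign of k1 destabilizes
```

Note that the eigenvalues depend on the magnitudes of g, k1, and k2, but their signs (and hence stability) depend only on the parameter signs.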
Thus, given g > 0, stability depends only on the signs of k1 and k2, and holds if and only if both are positive. Since there are only two constants, this means that ¼ of the space of values is stable, but within that quadrant it is stable for all positive values. Thus stability is not robust to sign changes, but with fixed signs it is very robust to magnitude changes. This is a typical situation and has apparently been a large source of confusion regarding the robustness of biological networks. For n constants, the number of different sign combinations grows exponentially as 2^n, and thus one (fine-tuned) choice of signs becomes a vanishingly small fraction of the total number of possibilities in any sufficiently large network. For example, in the characteristic polynomial λ² + k2λ + gk1 of the system here, flipping the sign of either k1 or k2 produces a root with positive real part. In other words, if signs are important, and they are in control systems, the resulting network cannot be structurally stable. It is also true that in both technology and biology it is much easier to manufacture components with robustly fixed signs than with precise absolute values, so this is not necessarily a stringent constraint on control systems. Much more important is that this type of stability is not as important as the more severe constraint of robust transient response.
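A quick enumeration (with illustrative magnitudes of our choosing) confirms that exactly one of the 2^n sign patterns is stable here:

```python
import numpy as np
from itertools import product

# Check closed-loop stability for every sign pattern of (k1, k2) at fixed
# magnitudes; the state matrix assumes the reconstructed toy dynamics.
def is_stable(g, k1, k2):
    M = np.array([[-k2, k1], [-g, 0.0]])
    return bool(np.all(np.linalg.eigvals(M).real < 0))

g = 1.0
stable_patterns = [(s1, s2) for s1, s2 in product((+1, -1), repeat=2)
                   if is_stable(g, s1 * 0.5, s2 * 2.0)]
print(stable_patterns)   # only the (+1, +1) quadrant survives
```

Repeating the enumeration with other positive magnitudes gives the same answer, which is the point: signs must be fine-tuned, magnitudes need not be.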
This type of control is called "integral feedback." The parameters g, k1, and k2 might typically be functions of underlying physical quantities such as temperature, binding affinities, concentrations, etc., and thus might vary widely. The response y(t) to steps in r and d is shown in Figure 2 over two orders of magnitude in g, with k1 > 0. Note the extreme divergence (k1 = 0) vs. convergence (k1 = 0.01) as t → ∞. This simple protocol of integral feedback produces extremely robust external behavior even from wildly varying components (the blue solid versus red dashed lines in Figure 2b) and converges to the steady state y = (k2/k1)r independently of arbitrarily large variations in gain g and disturbance d (17). If k2 >> k1, y = (k2/k1)r is a high-gain amplifier as well (18). The individual values of g, k1, and k2 influence the rate of convergence to steady state, but only the ratio (k2/k1) determines its value. Thus robust, high steady-state gain can be achieved with uncertain and small parameters, given the right feedback protocol. Figure 3 shows that variations in both g and k2 of orders of magnitude have modest impact, and only on early transient behavior.
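The robustness of the steady state y = (k2/k1)r to variations in g can be reproduced in simulation. A minimal sketch, with parameter values that are illustrative and equations that are our reconstruction of the toy model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integral-feedback toy model: C: x' = k1*y - k2*x with u = r - x,
# A: a' = g*u, output y = a + d.  Steady state should be (k2/k1)*r = 100.
k1, k2, r, d = 0.01, 1.0, 1.0, 5.0

def rhs(t, s, g):
    x, a = s
    y = a + d
    return [k1 * y - k2 * x, g * (r - x)]

for g in (0.1, 1.0, 10.0):   # two orders of magnitude in the sloppy gain g
    sol = solve_ivp(rhs, (0, 10000), [0.0, 0.0], args=(g,), max_step=2.0)
    print(g, sol.y[1, -1] + d)   # final y: each run lands near 100
```

The transient (speed, overshoot) changes markedly with g, but the final value depends only on the ratio k2/k1 and is unaffected by the step disturbance d.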
The protocol here is the structure of the equations, including the integral feedback and the signs of the parameters. Modules are the implementations of the actuator and controller. As with the Lego example in (2), this protocol must be "fine-tuned" (since rewiring components or flipping signs typically creates exponentially growing instabilities), but this allows the modules to vary widely with minimal effect (19). Integral feedback is used ubiquitously in engineering (20) and is likely to be ubiquitous in biology as well, achieving everything from homeostatic regulation to "perfect adaptation," and preliminary investigations confirm this impression (21,22,23). One reason is that integral feedback is both sufficient and necessary for perfect and robust steady-state tracking. Intuitively, necessity follows from the fact that in steady state, a = y − d must perfectly cancel any constant (step in) d, while the input u to A cannot depend on this d, since y does not. Thus, A (or C) must contain an internal model of the dynamics of d, which for step changes is a pure integrator (24); an integrator produces unbounded outputs to constant inputs. Thus, open-loop hypersensitivity is necessary for closed-loop robustness, and the behavior in Figures 2 and 3 is not an accident.
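The necessity claim can be illustrated with a toy comparison (the plant and gains are our own illustrative choices, not from the text): proportional feedback leaves a steady-state offset under a step disturbance, while integral feedback drives the error to zero.

```python
from scipy.integrate import solve_ivp

# Illustrative first-order plant y' = -y + u + d with a step disturbance d.
d, r = 2.0, 0.0
kp, ki = 5.0, 5.0

def prop(t, s):        # proportional: u = kp*(r - y), no internal model of d
    (y,) = s
    return [-y + kp * (r - y) + d]

def integ(t, s):       # integral: u = z with z' = ki*(r - y)
    y, z = s
    return [-y + z + d, ki * (r - y)]

yp = solve_ivp(prop, (0, 50), [0.0]).y[0, -1]
yi = solve_ivp(integ, (0, 50), [0.0, 0.0]).y[0, -1]
print(yp)   # offset d/(1 + kp) = 1/3 persists
print(yi)   # ~0: the integrator supplies the internal model of the step
```

Raising kp only shrinks the proportional offset; any ki > 0 removes it entirely, which is the internal-model argument in miniature.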

Conservation laws
Fragility also enters in the transient response. When g is increased, the response is faster but oscillatory (Figure 2). Thus there are always nonconstant (e.g. sinusoidal) d(t) that would be amplified in y(t). Indeed, for loops like the one here, the net fragility obeys the conservation law

(4)   ∫₀^∞ log|S(jω)| dω = 0

so attenuation of d at some frequencies is necessarily paid for by amplification at others. Such d could be perfectly rejected too, but only by adding internal models as complex as the external environment that generates d. While such modeling is only possible for simple, idealized laboratory environments, even approximate attempts can drive an extreme complexity spiral in real systems, and any controller is still subject to the constraint in eq. (4). The key to good control design, then, is ensuring that this fragility is tolerable and occurs where uncertainties are relatively small.
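The "conservation of fragility" constraint in eq. (4) can be checked numerically for the toy loop. A sketch, assuming the sensitivity S(s) = s(s + k2)/(s² + k2·s + g·k1), which is our derivation from the reconstructed dynamics:

```python
import numpy as np
from scipy.integrate import quad

g, k1, k2 = 1.0, 0.5, 1.0   # illustrative stable parameter values

def log_abs_S(w):
    # log|S(jw)| for S(s) = s(s + k2) / (s^2 + k2*s + g*k1)
    s = 1j * w
    return np.log(abs(s * (s + k2) / (s**2 + k2 * s + g * k1)))

# Split the frequency axis to handle the log singularity at w = 0
val = quad(log_abs_S, 0.0, 10.0, limit=300)[0] \
    + quad(log_abs_S, 10.0, np.inf, limit=300)[0]
print(val)   # ~0: low-frequency attenuation balances amplification elsewhere
```

Plotting log|S(jω)| shows the "waterbed" shape directly: negative (attenuation) at low frequency, positive (amplification) over a band at higher frequency, with the two areas canceling.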
Even these simple toy examples show the robust yet fragile features of complex regulatory networks. Their outward signatures are, ironically, extremely constant regulated variables yet occasional cryptic fluctuations. They have extraordinary robustness to component variations yet rare but catastrophic cascading failures. These apparently paradoxical combinations can easily be a source of confusion to experimentalists, clinicians, and theoreticians alike (28), but are intrinsic features of highly optimized feedback regulation.
Since net robustness and fragility are constrained quantities, they must be manipulated and controlled with and within complex networks, even more than energy and materials. Figure 2b shows how extreme open vs. closed loop behavior can be, and thus how dangerous loss of control is to a system relying on it. The tradeoff in equation (4) shows that even when working perfectly, net fragility is constrained, and thus some transient amplification is unavoidable. The necessity of integral feedback and the fragility constraint in equation (4) thus describe laws, not protocols, perhaps the two simplest such laws from control theory. Controllers that are more complex, with additional dynamics and multiple sensors and actuators, offer more refinement in performing robustness-fragility tradeoffs. Adding to regulatory complexity is also evolutionarily relatively easy. Faster components allow for faster closed-loop responses.
All are used in both biology and engineering but all are still ultimately subject to equation (4). Control engineers must contend with this tradeoff, and its generalizations to more complex structures dominate control system design. It may be that such tradeoffs dominate and constrain evolution and biology as well.

The cost of instability
The simplest change that introduces plant instability is for A to be changed to

a′ = σa + gu,  y = a + d

where for simplicity r has been eliminated and C is unchanged, and we'll continue to assume that g > 0 and also that σ ≥ 0. With g = 0 there is no feedback, and the system is "open loop" with an unstable pole at σ. The state space form is

x′ = −k2x + k1a + k1d
a′ = −gx + σa
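The cost of instability appears directly in the conservation law: with the unstable open-loop pole at σ, Bode's integral shifts from 0 to πσ. A numerical sketch, assuming the loop transfer L(s) = gk1/((s − σ)(s + k2)) implied by keeping C unchanged (values are illustrative):

```python
import numpy as np
from scipy.integrate import quad

g, k1, k2, sigma = 4.0, 1.0, 1.0, 0.5   # closed loop stable: s^2 + 0.5s + 3.5

def log_abs_S(w):
    # Sensitivity S = 1/(1 + L) with L(s) = g*k1/((s - sigma)(s + k2))
    s = 1j * w
    L = g * k1 / ((s - sigma) * (s + k2))
    return np.log(abs(1.0 / (1.0 + L)))

val = quad(log_abs_S, 0.0, 20.0, limit=300)[0] \
    + quad(log_abs_S, 20.0, np.inf, limit=300)[0]
print(val, np.pi * sigma)   # the integral now equals pi*sigma, not 0
```

Stabilizing an unstable plant thus guarantees net amplification of disturbances somewhere in frequency, and the guaranteed fragility grows with σ.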

Implications for biology and engineering
Success of systems biology will certainly require modeling and simulation tools from engineering (29,30), where experience shows that brute-force computational approaches are hopeless for complex systems involving protocols and feedback. Highly fragile features require highly sophisticated modeling, whereas robust features often have adequate models that are greatly simplified. For example, if Fig. 1 were a module in a larger system, the steady-state gain y = (k2/k1)r depends only on (k2/k1) and no other parameters, potentially simplifying experiments and modeling. If transient dynamics or component failure were of interest, more details would be needed, determined more by the rest of the system than by the internal components.
Many challenges of post-genomic biology are converging to those facing engineers building complex networks and "systems of systems." Engineering theory and practice are now undergoing a revolution as radical as biology's. The simple ideas here only hint at the possibilities. For example, more complex control protocols than Figure 1, used in both engineering and biology, can ameliorate though not eliminate the constraint in (4), but sophisticated theory is needed to elucidate the issues. Realistic models of biological networks will not be simple, with multiple feedback signals, nonlinear component dynamics, numerous uncertain parameters, stochastic noise models (31), parasitic dynamics, and other uncertainty models. Scaling to deal with large networks will be a major challenge. Fortunately, researchers in robust control theory, dynamical systems, and related areas have been vigorously pursuing mathematics and software tools to address exactly these issues and apply them to complex engineering systems (32,33). Biological applications are new, but progress so far is encouraging.
Experiments, modeling and simulation, and theory all have fragilities, but they are complementary, and through the right protocols they have the potential to create a robust "closed-loop" systems biology. Biologists' frustrating experience with theory has been primarily in an open-loop mode, where simple and attractive ideas can be wrong but receive enormous attention. Biology is the only science where feedback control and protocols play a dominant role, so it should not be surprising that there would be popular theories, coming from within science, that did not emphasize these issues. Biologists and engineers now have enough examples of complex systems that they can close the loop and eliminate specious theories (33). For example, Internet technology is rich in protocols and feedback, and a deep, rigorous, and practically relevant theory has recently been developing. Even though it is poorly understood by nonexperts and has become a focus of many specious theories, details and enormous data sets are available, and it makes an attractive example to compare with biological networks (see additional references).

Notes

13. Steady state means all quantities (r, d, y, A, C, etc.) approach constants, which can be solved for algebraically.
14. ">>" means "very much greater than."
15. An important use of positive feedback is to deliberately destabilize equilibria and amplify small differences to create switches and to break symmetries and homogeneities. This can create patterns that are then maintained using negative feedback. Positive feedback is also critical to autocatalysis in growth and metabolism.
16. a′ = gu means a (the output of A) is a time integral of gu, where u is the input to A.
17. Stability is easily shown using standard methods of linear systems. Steady-state values can be found (in a stable system) by setting all time derivatives to 0, yielding gk1y = gk2r, or y = (k2/k1)r.
18. Mechanisms often exist that allow controller parameters (e.g. k1 and k2) to be much less uncertain than g and d.
It is often even easier to make ratios such as (k2/k1) largely invariant to variations in the underlying physical quantities affecting the individual k1 and k2.
H. W. Bode, Network Analysis and Feedback Amplifier Design (Van Nostrand, 1945). Relatively rare circumstances can involve an inequality (≥). This is worse, but means (4) is an inequality constraint rather than a pure "conservation" law.
28. The robust yet fragile nature of highly optimized complex regulatory networks can be mistakenly attributed to various kinds of bifurcations and "order-disorder" transitions (e.g. phase transitions, critical phenomena, "edge-of-chaos," pattern formation, scale-free, etc.).

Additional references