
Beyond the French Flag Model: Exploiting Spatial and Gene Regulatory Interactions for Positional Information

  • Patrick Hillenbrand,

    Affiliation Physics of Complex Biosystems, Physics Department, Technical University of Munich, James-Franck-Str. 1, D-85748 Garching, Germany

  • Ulrich Gerland,

    Affiliation Physics of Complex Biosystems, Physics Department, Technical University of Munich, James-Franck-Str. 1, D-85748 Garching, Germany

  • Gašper Tkačik

    gtkacik@ist.ac.at

    Affiliation Institute of Science and Technology Austria, Am Campus 1, A-3400 Klosterneuburg, Austria

Abstract

A crucial step in the early development of multicellular organisms involves the establishment of spatial patterns of gene expression which later direct proliferating cells to take on different cell fates. These patterns enable the cells to infer their global position within a tissue or an organism by reading out local gene expression levels. The patterning system is thus said to encode positional information, a concept that was formalized recently in the framework of information theory. Here we introduce a toy model of patterning in one spatial dimension, which can be seen as an extension of Wolpert’s paradigmatic “French Flag” model, to patterning by several interacting, spatially coupled genes subject to intrinsic and extrinsic noise. Our model, a variant of an Ising spin system, allows us to systematically explore expression patterns that optimally encode positional information. We find that optimal patterning systems use positional cues, as in the French Flag model, together with gene-gene interactions to generate combinatorial codes for position which we call “Counter” patterns. Counter patterns can also be stabilized against noise and variations in system size or morphogen dosage by longer-range spatial interactions of the type invoked in the Turing model. The simple setup proposed here qualitatively captures many of the experimentally observed properties of biological patterning systems and allows them to be studied in a single, theoretically consistent framework.

Introduction

Shape and size are global properties of organisms and of their constituent parts. Yet organisms develop and grow by processes that are intrinsically local: cell division, fate commitment and differentiation, migration, and death. To coordinate these processes appropriately, cells must reproducibly activate different gene expression programs in a manner that is positionally specified, or patterned, within a tissue or a whole organism. Here we are interested in the essential conditions for pattern-forming systems to support rich and robust positional specification of cells. Rather than focusing on any particular organism, we analyze a minimal tractable model that can qualitatively reproduce patterning features across a diverse range of biological examples. In the process, we illustrate and formalize a number of concepts pertaining to developmental pattern formation.

One possibility for cells to acquire their position-dependent fates is to establish a field of developmental cues that the cells can “read out” to learn about their individual locations in an organism and hence to appropriately coordinate their behaviors. Often, these cues are gradients of patterning molecules, called morphogens. A morphogen gradient is a chemical “coordinate system” in which the organism’s body plan is drawn [1]; in this analogy, reproducibility of developmental outcomes is limited by the reliability with which physical locations map into morphogen concentrations [2]. A canonical embodiment of this idea is the French Flag model, where a smooth spatial morphogen gradient activates downstream cell-fate-determining genes at different thresholds, creating stripes in a previously unpatterned tissue [3, 4].

Patterning strategies need not rely on the existence of signals distributed throughout the organism. Organism or tissue boundaries are intrinsically different from the bulk, and one can envision local biophysical mechanisms that propagate information from the boundary into the bulk to set up a global pattern. The Turing model is an example of this kind, where local rules are expressed as a set of reaction-diffusion equations and the steady-state pattern is determined by the boundary and initial conditions [5]. A Turing mechanism in concert with a globally acting morphogen gradient has, for instance, been found to control vertebrate digit formation [6–9].

Changing focus from systems continuous in space, time, and concentration to discrete setups, one can think of cellular-automata-like models that, starting with a defined initial state at the boundary of a finite domain, propagate that state into the bulk, creating a discrete pattern in a lattice of cells. Similarity to such a local, rule-based mechanism can be found, for instance, in developmental Notch signaling [10, 11].

Ultimately, patterning dynamics, whether continuous or discrete, need not even lead to a well-defined steady state; local transients (as opposed to instantaneous or average values) of patterning cues could mark the position, much like during the signaling that leads up to the aggregation of Dictyostelium cells [12], or in the clock-and-wavefront model for the generation of somites [13–15].

While such models typically represent a gross simplification of reality, they do capture a fundamental property of biological pattern formation: local patterning cues, either in the stationary state or during a readout period, carry information about position relative to a global reference frame [16]. This property was introduced as “positional information” by Wolpert in his landmark paper almost fifty years ago [3]. Despite intense study in the last decades [2, 17–24], it has been difficult to come up with a formal definition of positional information and a corresponding measure that would quantify the regulatory power of a patterning system by, in essence, counting the maximal number of distinct cell fates that the system can reliably specify, irrespective of mechanistic detail. This is because the mapping between position and local cue values can be noisy or even ambiguous, and can be established by a diverse range of biophysical mechanisms. Additionally, it would be attractive to build such a quantity on a strong theoretical foundation on the one hand, while on the other ensuring that it can be computed in various models of patterning or tractably estimated from data.

A candidate formalization of “positional information” that satisfies the above criteria, based on the application of information-theoretic ideas, has recently been proposed [25, 26]. Positional information can be seen as a generic measure of correlation, i.e., a mutual information [27], between position and local patterning cue values (e.g., morphogen expression levels). It is also closely related to information transmission through genetic regulatory networks, which has been the subject of recent theoretical [28–31] and data-driven investigations [32–35]. Looking at anterior-posterior patterning in the early Drosophila embryo, the four primary gap genes were estimated to carry 4.2 ± 0.05 bits of positional information, sufficient for each nucleus to determine its location with roughly 1% relative precision and consistent with the measured precision of downstream positional markers [25]. Furthermore, this patterning system exhibited signatures suggesting that the positional “gap gene code” might be optimally organized. This suggests an interesting theoretical program: look for regulatory network architectures that maximize encoded positional information [36–40] and compare these ab initio predictions to Drosophila gap gene data.

Taking a step back from concrete systems that necessarily involve an overwhelming amount of biological detail, there are a number of basic yet still unresolved questions about patterning systems and positional information: What do optimal patterns (i.e., patterns that maximize positional information) look like, and what determines their shape? How do efficient patterning strategies differ when patterning cues are distributed throughout the domain versus present solely at domain boundaries? In systems where multiple outputs are simultaneously driven by the same patterning cues, how should these outputs be coupled amongst themselves and across space? Can reliable patterns emerge from very noisy patterning cues, that is, can the readout network actually “create” positional information? And finally, what is the interplay between positional information and various aspects of robustness—to noise, to systematic changes in patterning cue levels, or to small variations in system size—that have been extensively discussed in particular biological systems [41–43]?

To address these questions as clearly as possible in a rigorous information-theoretic framework, we follow the methodological approach taken by Wolpert in describing his French Flag model. We start with the simplest toy model of patterning, where smoothly varying patterning cues, e.g., morphogens, drive the expression of “binary genes” that can take only two states, ON or OFF. Clearly, this is not an appropriate assumption for many real patterning systems that rely on intermediate levels of gene expression. Conceptually, however, this assumption has three major advantages: first, it will provide us with basic theoretical insights that generalize to more complex setups; second, we will be able to easily visualize binary gene expression patterns; and third, we will be able to count the number of distinguishable gene expression states. The latter property is essential to gain an intuitive interpretation of positional information, which is generally measured in an abstract “currency” of bits.

In the following, we start by introducing our minimal 1D model of patterning, which is closely linked to Ising models in statistical physics, where magnetic spins (analogous to our binary ON/OFF genes) respond to a spatially inhomogeneous magnetic field (analogous to our smoothly varying morphogen profile). We briefly review the information-theoretic foundations of positional information relevant to the proposed model. We then systematically explore optimal patterns and how they depend on the shape of the input gradient, the noise level, the strength of gene-gene and spatial interactions, etc.; as a result, we will be able to give a full account of how these factors affect positional information within our model class.

Results

A minimal model for a pattern-forming system

We look for a model patterning system in which we can systematically explore the effects of gene-gene interactions, spatial interactions, and noise. To keep the task conceptually clean and computationally tractable, we sought the simplest possible model: we focused on binary (ON/OFF) patterning genes in the established framework of Ising-like spin models. These paradigmatic systems of statistical physics are minimal, i.e., able to generate the relevant phenomenology while having the smallest number of parameters [44, 45]. Our use of such models does not imply that all patterning genes should be viewed as binary (e.g., previous analysis suggests otherwise for Drosophila gap genes [25]), or that patterning happens at thermal equilibrium. Nevertheless, conclusions obtained in the simple Ising model framework often do generalize qualitatively to more complex systems and focus attention on the important quantities. At the same time, the Ising framework we introduce below can be seen as a direct extension of Wolpert’s original model of binary genes responding to smooth morphogen gradients.

We start with a discrete one-dimensional lattice of N sites, x = 1, …, N. At every site x, the expression pattern is described by a binary variable σ(x), with σ(x) = 1 denoting that the patterning gene at location x is ON, and σ(x) = −1 denoting that the gene is OFF (see Fig 1A). Central to our analysis is the fact that the patterning system is noisy, due to, e.g., intrinsic stochasticity in gene regulation or extrinsic variability in system parameters. To capture the probabilistic nature of the patterning outcomes, we think of the patterning system as generating different spatial patterns with different probabilities, and in the Ising model framework the probability of each pattern can be written as follows:

(1)  Qθ({σ(x)}) = (1/Z) exp[−H({σ(x)})/η]

Here, Z simply ensures that the distribution Q is normalized, η sets the intrinsic noise in the system, and the “energy,” H({σ(x)}), describes the effect of morphogens and gene-gene interactions on the resulting gene expression pattern. This distribution over all spatial patterns, Qθ({σ(x)}), is parameterized by θ, a set of parameters that we will explicitly identify for our proposed model later. For the “energy” H, we write:

(2)  H({σ(x)}) = −∑x h(x) σ(x) − J ∑x σ(x) σ(x+1)

Here, spatial interaction is modeled by a coupling of nearest-neighbor sites with coupling strength J. Positive J favors neighboring genes having the same expression state, whereas negative J favors neighboring genes having opposing expression states. Biologically, positive J could be realized by diffusion or active transport of gene products between neighboring cells or nuclei; negative J, corresponding to repressive spatial interactions, could be mediated by cell-cell signaling networks, e.g., the Delta-Notch pathway [46, 47].
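
To make the model concrete, the short Python sketch below evaluates Eqs (1) and (2) by brute-force enumeration on a small lattice; the lattice size, noise level, coupling, and linear morphogen signal used here are illustrative choices for this sketch, not parameter values taken from the paper.

    import itertools
    import numpy as np

    # Illustrative parameters (not the paper's values)
    N = 10                            # lattice sites
    eta = 1.0                         # intrinsic noise
    J = 0.5                           # nearest-neighbor coupling
    n, E = 1.0, 0.0                   # bias parameters, h(x) = n*(m(x) - E)
    m = np.linspace(1.0, -1.0, N)     # linearly decaying morphogen signal
    h = n * (m - E)

    def energy(sigma):
        """Energy of one pattern sigma (array of +/-1), Eq (2)."""
        bias_term = -np.sum(h * sigma)
        coupling_term = -J * np.sum(sigma[:-1] * sigma[1:])
        return bias_term + coupling_term

    # Boltzmann distribution over all 2^N patterns, Eq (1)
    patterns = np.array(list(itertools.product([-1, 1], repeat=N)))
    weights = np.exp(-np.array([energy(s) for s in patterns]) / eta)
    Q = weights / weights.sum()

    # Mean expression profile <sigma(x)>, cf. Fig 1A
    print(np.round(Q @ patterns, 2))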

Fig 1. A schematic diagram of the model system.

(A) At each lattice site, x = 1, …, N, we assume a morphogen signal m(x) (black), with extrinsic Gaussian fluctuations of constant variance, var(m) = ν (black error bars). Indicated in gray is the corresponding exponential concentration gradient, c(x), when m(x) is interpreted in the context of thermodynamic models of gene regulation (see text). The morphogen signal m(x) regulates a binary patterning gene σ(x), whose expression state also depends on spatial interaction with strength J. Levels of gray denote the mean value 〈σ(x)〉. Hence, 〈σ〉 = +1 (white) and 〈σ〉 = −1 (black) mark positions where the gene expression state is deterministic, while levels of gray correspond to noise-induced fluctuations in the expression state, with 〈σ〉 = 0 (medium gray) marking positions at which the two expression states are equiprobable. (B) For two or more genes per lattice site the model is extended with pairwise local interactions between genes at each lattice site. Spatial interactions are considered only between the same genes at different lattice sites.

https://doi.org/10.1371/journal.pone.0163628.g001

The first term in Eq (2) contains the “bias,” h. The bias favors each individual gene to be either ON, whenever h(x) > 0 at that gene’s location, or OFF, whenever h(x) < 0. This term depends explicitly on the coordinate, x, and thus models the effect of a morphogen at each location. We can gain biological realism and allow later extensions of the model to multiple patterning genes if we assume that at every location there is a particular value of an abstract morphogen “signal,” m(x), which determines the bias in a linear fashion, h(x) = n(m(x) − E). If the signal, m(x), is interpreted as the logarithm of the morphogen concentration, m(x) = log(c(x)), there is an exact mathematical relation between the probability that the patterning gene is ON, P(σ(x) = 1), and the Hill-type thermodynamic model of regulation for the gene σ. Suppose that the gene σ is regulated in a strongly cooperative manner by binding sites with equal affinities, Kd. Then

(3)  P(σ(x) = 1) = c(x)^nH / (c(x)^nH + Kd^nH)

and it is easy to show that the parameters n and E relating the bias h(x) to the morphogen signal m(x) in our model correspond, up to a multiplicative factor, to the Hill coefficient nH and to log Kd in the thermodynamic model of regulation. Furthermore, observed morphogen concentration gradients that commonly have an exponential profile, c(x) = c0 exp(−λx), map to linear morphogen signals, m(x) = −λx + log(c0), in our framework. Linear m(x) is thus our baseline profile, although we will subsequently also define and explore more localized morphogen signals.
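
As a sketch of where this correspondence comes from (our algebra, not reproduced from the paper), consider the non-interacting limit, J = 0, in which each site responds independently to its local bias h(x). Eqs (1) and (2) then give

    P(σ(x) = 1) = exp[h(x)/η] / (exp[h(x)/η] + exp[−h(x)/η]) = 1 / (1 + exp[−2h(x)/η]),

and substituting h(x) = n(m(x) − E) with m(x) = log c(x) yields

    P(σ(x) = 1) = c(x)^(2n/η) / (c(x)^(2n/η) + exp(2nE/η)),

a Hill function of the morphogen concentration with effective Hill coefficient 2n/η and half-maximal concentration exp(E), which is the sense in which n and E map, up to multiplicative factors, onto the Hill coefficient and the logarithm of the dissociation constant.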

In our model, η controls the intrinsic fluctuations in the system. For η → 0, the gene σ responds deterministically to the morphogen and to the expression state at neighboring locations; for example, if there were no spatial interactions (J = 0), the gene would be ON whenever the local morphogen signal exceeds the threshold, m(x) > E, and OFF otherwise, following Wolpert’s original idea. In contrast, for η → ∞ the genes respond completely randomly, with equal probability of being ON or OFF, irrespective of the relevant signals. In addition to this intrinsic noise, we also consider extrinsic noise [48]. To that end, we assume that there can be fluctuations in the morphogen signal, m, that are additive and Gaussian with constant variance ν, i.e., var(m) = ν. This maps to fluctuations in morphogen concentration whose standard deviation is proportional to the mean concentration, std(c) ∝ 〈c〉, a dependence that is biophysically plausible and has previously been discussed in the literature [48, 49].

An extension of the model from a single patterning gene to multiple genes is straightforward (see Fig 1B). Let there be K distinct patterning genes, such that at each location σ(x) ≡ {σα(x)}, for α = 1, …, K. To write down the energy function and compute the probability of every pattern, we simply reproduce the “bias” and spatial interaction terms for each one of the K genes and sum them up in the energy function. Next, we add a qualitatively new term that couples the K genes amongst themselves at every location x, which models activating (Jαγ > 0) or repressive (Jαγ < 0) interactions between the genes α and γ (where α, γ ∈ {1, …, K} and α ≠ γ). The complete energy function reads:

(4)  H({σ(x)}) = −∑α ∑x hα(x) σα(x) − ∑α Jαα ∑x σα(x) σα(x+1) − ∑α<γ Jαγ ∑x σα(x) σγ(x)

This model of K genes responding to a morphogen signal is thus fully specified by 3K + K(K + 1)/2 interaction parameters θ = {nα, Eα, Jαα, Jαγ}, the intrinsic noise η, the extrinsic noise ν, and the shape of the morphogen gradient, m(x). These parameters, together with Eqs (1 and 4), fully specify the resulting distribution over gene expression patterns, Qθ({σ(x)}). Next, we formally define positional information and discuss its computation and behavior within our model.

Positional information for genes with two possible expression states

We use a previously introduced information-theoretic definition of positional information [25, 26]. This definition is based on the observation that a cell can only “know” as much about its location x as it can infer from the local and generally noisy expression levels, σ(x), of patterning genes. Again, σ(x) denotes the vector of all considered gene expression states at location x. Let the distribution of σ at location x be P(σ|x). Positional information is then defined as the mutual information between the location x and the local expression level σ:

(5)  I(σ; x) = S[Pσ(σ)] − 〈S[P(σ|x)]〉x

The first term in Eq (5) is the entropy of the distribution of expression states across the lattice, with Pσ(σ) = (1/N) ∑x P(σ|x), which favors diverse use of expression states across the spatial pattern. The second term is a penalty term that quantifies the average variability in gene expression which is uncorrelated with position. Because it can induce confusion in the expression-position mapping, this “noise entropy” can only reduce the information and is zero in a noiseless system.
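
As a hedged numerical sketch (not code from the paper), Eq (5) can be transcribed directly: given a table of P(σ|x) with positions as rows, the positional information in bits is the entropy of the position-averaged state distribution minus the average noise entropy.

    import numpy as np

    def positional_information(P_sigma_given_x):
        """PI from Eq (5); rows index positions x (assumed uniform), columns index states sigma."""
        P = np.asarray(P_sigma_given_x, dtype=float)
        N = P.shape[0]
        P_sigma = P.mean(axis=0)                        # P_sigma(sigma) = (1/N) sum_x P(sigma|x)
        entropy = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
        total_entropy = entropy(P_sigma)                # S[P_sigma(sigma)]
        noise_entropy = np.mean([entropy(P[x]) for x in range(N)])   # <S[P(sigma|x)]>_x
        return total_entropy - noise_entropy

    # Toy example: a noisy single-gene boundary pattern on N = 6 sites (columns: sigma = +1, -1)
    P = np.array([[0.95, 0.05]] * 3 + [[0.05, 0.95]] * 3)
    print(positional_information(P))    # about 0.71 bits, below 1 bit because of readout noise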

Considering a single patterning gene without noise, gene expression achieves maximum positional information by partitioning all N locations into two equally sized sets, one where the gene is OFF and one where the gene is ON. Without noise, the second term in Eq (5) vanishes and the positional information is given by the entropy of expression states. For a balanced assignment of expression states to positions we have Pσ(σ = +1) = Pσ(σ = −1) = 1/2 and thus S[Pσ(σ)] = 1. Such a pattern is said to convey “one bit” of positional information, an amount sufficient to reliably separate the anterior from the posterior, or odd from even rows of cells; generally, one bit corresponds to the amount gained by an unambiguous answer to an optimally posed yes/no question. In the case of multiple patterning genes, a combination of gene expression states, e.g., {ON, OFF, ON, ON} or {+1, −1, +1, +1}, can be seen as a “codeword” for some particular position. The capacity of this code, independently of how the gene expression patterns are set up or read out, is again given by the first term in Eq (5). For four binary genes this cannot exceed 4 bits; as in the case of a single gene, the bound is achieved when each of the 2^4 = 16 distinct gene expression combinations is used equally often across all locations x in the tissue.

Mutual information is symmetric in its arguments, so that we can rewrite Eq (5) as

(6)  I(σ; x) = S[Px(x)] − 〈S[P(x|σ)]〉σ

where S[Px(x)] = log2 N for cells that are uniformly distributed over the coordinate x, as we assume in our model. This way of writing positional information emphasizes that log2(N) bits is the upper bound on the positional information in a patterning system, which would correspond to an unambiguous identification of every location x based on the expression pattern σ. Positional information is decreased from this bound by the second term in Eq (6), which measures the uncertainty in position that remains even when one knows the gene expression levels.

One can be more explicit about the error made in estimating the position, x, from the expression levels σ. It can be shown that the expected error of any estimator for position is bounded by positional information. To find that bound, we think of Px(x) as the prior distribution over position available to an estimator. The best choice of estimator of position is the expectation value computed from a distribution that encodes all of, and no more than, the information about position that the estimator can access; therefore, the mean squared error of any estimator is bounded from below by the variance of that distribution. Without further information the estimator has to rely on the prior distribution Px(x). For the uniform Px(x), the variance of x can be expressed through the entropy S[Px(x)]:

(7)  〈(x̂ − x)²〉 ≥ (2^(2 S[Px(x)]) − 1)/12 ≥ (2^(2〈S[P(x|σ)]〉σ) − 1)/12

The second inequality holds since the additional information about the expression state σ can only decrease the entropy (S[Px(x)] ≥ 〈S[P(x|σ)]〉σ). We assume that P(x|σ) = P(σ|x)Px(x)/P(σ) is the Bayesian inversion of the correct expression state distribution of the system, P(σ|x). From Eq (6) we can see that 〈S[P(x|σ)]〉σ = log2(N) − I(σ; x), so that we can finally write

(8)  〈(x̂ − x)²〉 ≥ (2^(2(log2 N − I(σ; x))) − 1)/12 = (N² 2^(−2 I(σ; x)) − 1)/12

This bound on the error can be made vanishingly small if positional information approaches its upper bound, I → log2 N bits. This relationship is analogous to the relationship between positional error and positional information for continuous systems [25, 26]. Importantly, when cells need to make decisions appropriate to their position within an organism, they are also subject to the estimation limits of Eq (8), irrespective of how complex the molecular readout mechanism for σ is.

How is the pattern-forming process related to P(σ|x), which determines positional information? While the pattern-forming process Q yields a global (joint) distribution over all possible patterns, positional information is local and is thus only concerned with the expression state at a given location x; P(σ|x) is therefore obtained by summing (marginalizing) the joint distribution over the gene expression states everywhere but at x:

(9)  P(σ|x) = ∑{σ(x′), x′ ≠ x} Qθ({σ(x)})

Therefore, we can use Eqs (1, 5 and 9) to compute positional information, I(σ; x), which we will refer to as “PI” in the text, for any patterning system with parameters θ. Using standard approaches from statistical physics (transfer matrices), these computations can be carried out exactly when the extrinsic noise, ν, is zero. When ν ≠ 0, analytical techniques coupled with tractable Monte Carlo sampling can be used to compute PI as a function of parameters; see S2 Appendix for details.
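
For the nearest-neighbor model at ν = 0, the marginals P(σ(x)) can be obtained exactly with a forward-backward (transfer-matrix) recursion; the sketch below is a minimal single-gene illustration of that idea, with illustrative parameter values, and is not the authors' implementation (which is described in S2 Appendix).

    import numpy as np

    def marginals_transfer_matrix(h, J, eta):
        """Exact P(sigma(x) = +1) for a 1D chain with site-dependent bias h(x),
        nearest-neighbor coupling J, intrinsic noise eta, and zero extrinsic noise."""
        N = len(h)
        states = np.array([-1.0, 1.0])
        T = np.exp(J * np.outer(states, states) / eta)        # transfer matrix for one bond
        fwd = np.zeros((N, 2))
        bwd = np.ones((N, 2))
        fwd[0] = np.exp(h[0] * states / eta)
        for x in range(1, N):                                  # sum out sites to the left of x
            fwd[x] = (fwd[x - 1] @ T) * np.exp(h[x] * states / eta)
        for x in range(N - 2, -1, -1):                         # sum out sites to the right of x
            bwd[x] = T @ (np.exp(h[x + 1] * states / eta) * bwd[x + 1])
        joint = fwd * bwd                                      # unnormalized P(sigma(x) = s)
        return joint[:, 1] / joint.sum(axis=1)

    # Illustrative use: a linear morphogen signal on N = 50 sites
    N, eta, J = 50, 2.0, 1.0
    m = np.linspace(1.0, -1.0, N)
    print(np.round(marginals_transfer_matrix(m, J, eta), 2))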

Our framework clearly separates the pattern-forming process, whose outcome is described by Qθ({σ(x)}) and which depends on the mechanistic parameters θ, from the resulting pattern, which carries a certain amount of positional information I(σ; x). Such a distinction is important and clarifies a number of conceptual issues with positional information. Consider, for example, the case of a single, noiseless gene responding to a smoothly varying morphogen gradient. As explained above, the maximal positional information achievable in this system is 1 bit. On the other hand, one could argue that by changing the threshold at which the readout gene is activated, the system can position the pattern boundary in any of N + 1 possible locations (including the system boundaries), generating N + 1 possible patterns: shouldn’t the information then be I = log2(N + 1) ≥ 1 bits? The apparent contradiction is resolved by noting that log2(N + 1) bits is not positional information; rather, it is a measure of how many different joint patterns, {σ(x)}, can be generated by changing the parameters θ of the pattern-forming system (in this case the readout threshold), i.e., the mutual information I({σ(x)}; θ), which is an intrinsic property of the pattern-forming process, Q. This quantity certainly affects positional information, I(σ; x), yet it is not identical to it. In particular, positional information can be different for each separate value of the parameters θ. As we will see in our toy model, one can find patterning systems that have high positional information while simultaneously having either a high or a low value of I({σ(x)}; θ), and vice versa. The two information-theoretic quantities therefore describe independent aspects of the system and should not be confused with each other.

Optimal patterning with a single gene depends on noise level and morphogen profile

We start by studying a system with one patterning gene, σ, on a lattice of N = 50 sites. We choose a linearly decaying morphogen signal, m(x), which favors the ON state of the patterning gene in the anterior and the OFF state in the posterior. In the absence of any noise and spatial interactions, we have a clear expectation for the pattern that yields the maximally achievable PI of 1 bit: a symmetric partition of the lattice into equally sized anterior (ON) and posterior (OFF) halves, with a single sharp boundary at x = N/2. In terms of the parameters of our energy function, Eq (2), this corresponds to choosing E = J = 0, n = 1 and η, ν → 0.

How does the spatial interaction J affect the ability of a single gene σ to encode positional information when noise is not zero? Fig 2 shows I(σ; x) as a function of the coupling strength J at fixed levels of intrinsic (η) and extrinsic (ν) noise. In the absence of spatial interactions (J = 0) the response of the gene σ is uncoordinated across positions and gets largely destroyed by random fluctuations (Fig 2B(i)). Consequently, PI is low, in this case below 0.1 bits.

Fig 2. Effect of spatial interactions on one patterning gene.

(A) PI as a function of spatial interaction strength J with fixed intrinsic noise η = 2 and two levels of extrinsic noise (legend). The arrow indicates the excess PI available to an optimally spatially coupled system, relative to an uncoupled (J = 0) system. At nonzero extrinsic noise (ν = 0.1) PI is strongly suppressed at J < 0. (B) The average spatial pattern for three regions denoted in (A): (i) no spatial interaction; (ii) positive J stabilizes a pattern with a boundary against noise; (iii) negative J results in an alternating pattern. In region (iv) indicated in (A) the strength of spatial interactions forces the pattern into a uniform, all-ON or all-OFF state that carries 0 bits of positional information. (C) Comparison between PI with (dotted line) and without (solid line) spatial interaction as a function of intrinsic noise, η. For the spatially coupled system an optimal Jopt has been found separately for each value of intrinsic noise η. The corresponding values of Jopt, scaled by the respective noise levels η, are plotted in the inset.

https://doi.org/10.1371/journal.pone.0163628.g002

As the spatial interaction J is increased to positive values, PI increases steeply to a maximum. The emergent pattern is qualitatively consistent with the expected optimal pattern: it contains a single boundary that divides the lattice into anterior and posterior halves (Fig 2B(ii)). The effects of noise on the patterning gene are restricted to the area around the boundary. Therefore, the maximally achievable PI is determined by the accuracy with which the boundary is positioned and depends on the noise level. For this optimal boundary pattern, additional extrinsic noise, ν, diminishes PI only mildly.

The large increase in PI can be ascribed to the well-studied effect of spatial averaging [39, 50]. With increasing coupling strength J, different lattice sites no longer respond independently but become spatially correlated. Consequently, fluctuations at individual lattice sites are overcome by a concerted response to the input field integrated over a range, which greatly sharpens the resulting pattern. As J is increased further, PI decreases again and eventually drops to zero because the lattice is forced into a spatially uniform (all ON or all OFF) configuration. Fig 2C summarizes the benefit of positive spatial interactions. The solid curve is the PI without spatial interactions plotted as a function of intrinsic noise η, whereas for the dotted curve J has been optimized separately for each value of η. The values of the optimized J as a function of η are shown in the inset. The spatially coupled system is therefore capable of retaining PI above 0.5 bits at intrinsic noise levels more than an order of magnitude higher than the uncoupled system can tolerate. In sum, positive spatial coupling J can stabilize the near-optimal pattern against the effects of intrinsic and extrinsic noise.

A qualitatively different pattern is formed if J is decreased to negative values. Initially, for small negative J, PI decreases almost to zero as negative spatial interaction disturbs the readout of the morphogen signal. Beyond the minimum, PI increases again and the system generates a pattern in which neighboring lattice sites alternate between ON and OFF states (Fig 2B(iii)). This is an alternative strategy for encoding PI: the system now distinguishes between even and odd lattice positions instead of an anterior and a posterior segment. While in principle both patterns can encode a full bit of PI, we will see that the alternating pattern is less robust against noise.

For the alternating pattern to form correctly it is necessary that the ON/OFF sequence is faithfully propagated through the bulk and that the morphogen signal can reliably break the symmetry between the two possible alternating patterns (if both patterns are equally likely, PI goes to zero). The condition for robust propagation depends on intrinsic noise. Breaking the symmetry is, on the other hand, highly susceptible to extrinsic noise, i.e., fluctuations in the morphogen signal. Consider, for example, the case of the linear gradient, which exerts the strongest bias at the anterior- and posterior-most sites. The ability to reliably break the symmetry depends on the anterior site consistently experiencing a stronger bias towards ON than the second site (which in an alternating pattern should be OFF), i.e., the mean difference in the morphogen signal between the first two sites needs to be larger than the typical strength of extrinsic fluctuations in the signal, 〈m(x = 1) − m(x = 2)〉 > ν, otherwise PI of the alternating pattern will be severely impaired. A similar argument can be made for the influence of intrinsic noise. Even if spatial interaction is strong enough to allow only strictly alternating patterns, it is still possible that the entire pattern flips to its inverse due to intrinsic fluctuations. Again, the ability of the system to select between the two possible patterns depends on the difference of the mean values at neighboring lattice sites compared to the intrinsic noise level.

How does the optimal strategy for a single patterning gene depend on the shape of the morphogen signal? To investigate this, we consider a set of exponential shapes for the morphogen signal m(x), parametrized by a decay parameter, χ. For χ ≪ 1, we recover the linear morphogen signal discussed above, while for larger χ the morphogen signal is increasingly concentrated at the anterior, as shown in Fig 3A. Fig 3B shows PI carried by a system with one gene as a function of χ and coupling strength J. Because localized morphogen signals can reliably break the symmetry of an alternating pattern, high PI can be achieved with a combination of negative J and large χ. In contrast, patterns that form a boundary (with J > 0) are efficient if the morphogen signal extends throughout the system. Consistent with our first observation, a systematic survey in Fig 3C shows that small additions of extrinsic noise severely lower PI carried by the alternating pattern while leaving the boundary pattern almost unaffected.

Fig 3. Effect of gradient shape on PI carried by one patterning gene.

(A) Morphogen signal profiles, mχ(x), for different shape parameters, χ, interpolate between a linear profile and a profile strongly concentrated at the anterior boundary. (B) PI carried by one patterning gene as a function of spatial interaction strength J and shape parameter χ. The intrinsic noise is set to η = 1.25, with zero extrinsic noise. (C) Same as (B), with extrinsic noise added. While PI in the regime of positive spatial interactions is almost unchanged, PI for negative spatial interactions is greatly diminished. (D) Phase diagram (at ν = 0) depicting for which values of intrinsic noise and gradient shape negative or positive coupling (and thus the resulting boundary or alternating pattern) is optimal.

https://doi.org/10.1371/journal.pone.0163628.g003

These observations can be summarized in a phase diagram, shown in Fig 3D. The diagram, constructed for zero extrinsic noise (ν = 0), divides the plane spanned by intrinsic noise η and morphogen shape parameter χ into a region where negative spatial interaction is optimal and a region where positive spatial interaction is optimal. For very low intrinsic noise, the alternating pattern generally outperforms the boundary pattern. As intrinsic noise increases, the optimal patterning strategy depends on the form of the gradient: for a spatially extended gradient the boundary pattern is optimal, whereas for a gradient concentrated at the boundary the alternating pattern is optimal. For sufficiently high intrinsic noise, the boundary pattern always outperforms the alternating pattern, even for gradients concentrated at the boundary. Adding extrinsic noise generally shifts the boundary in the phase diagram to favor positive spatial interactions.

Optimal patterning with multiple interacting genes can establish a stable combinatorial code for position

Which patterns optimally encode positional information in systems with multiple patterning genes? The French Flag model proposes a cascaded activation of the genes in response to the morphogen signal. For two binary genes this would lead to a pattern with three separate states (the “Tricolore” of the French Flag) and a maximal PI of I(σ; x) = log2(3) ≈ 1.59 bits. In contrast, we know from the arguments made above that an optimal pattern with two binary genes should have PI of 2 bits, at least with vanishing intrinsic and extrinsic noise. In such a pattern all possible expression states would be realized and evenly distributed throughout the lattice. For two binary genes, (σ1, σ2), there are four different possible states: (ON,ON), (ON,OFF), (OFF,ON) and (OFF,OFF); note that one of the mixed states is missing in the French Flag, which is why it has lower PI. We will refer to PI-maximizing binary patterns for K genes, where all 2^K states occur with equal probability in the pattern, as “Counter” patterns. A Counter pattern is an example of a combinatorial code, where position can only be decoded properly when the readout mechanism has simultaneous and complete access to the local expression states of all K genes.

We can ask two fundamental questions about Counter patterns. First, can such patterns be generated in a model where genes interact locally in a pairwise fashion and are spatially coupled, as assumed by Eq (4)? Second, Counter patterns are clearly optimal when noise is vanishing; are they optimal also when noise is present? If not, what are the optimal patterns in that case?

To investigate these questions we optimize the parameters of our model for two and three genes to find patterns that maximize PI for different levels of noise. Specifically, we vary all interactions in the system, both spatial (Jαα) and regulatory (Jαγ), as well as the parameters that prescribe how each gene couples to the morphogen signal ({nα, Eα}). For the case of two (three) patterning genes, this amounts to a total of 9 (15) parameters; we use stochastic optimization to carry out PI maximization (see S2 Appendix for details). As with the single gene case, we assume a linear morphogen signal, m(x). The only remaining dependence of our results is thus on the strength of the intrinsic (η) and extrinsic (ν) noise.
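
The optimization itself is described in S2 Appendix; purely as an illustration of what “stochastic optimization over θ” can look like, the sketch below runs a simple random-search hill climb over a 9-dimensional parameter vector, with a stand-in objective in place of the actual PI computation.

    import numpy as np

    rng = np.random.default_rng(0)

    def objective(theta):
        """Stand-in for the positional information I(sigma; x) of the pattern generated
        by parameters theta; a toy quadratic surrogate is used so the sketch runs on its
        own (the real objective would come from the model and Eq (5))."""
        return -np.sum((theta - 1.0) ** 2)

    def random_search(theta0, n_steps=5000, step=0.1):
        """Stochastic hill-climbing: propose a Gaussian perturbation of the parameters
        and accept it whenever the objective improves."""
        theta, best = np.array(theta0, dtype=float), objective(theta0)
        for _ in range(n_steps):
            proposal = theta + step * rng.normal(size=theta.shape)
            value = objective(proposal)
            if value > best:
                theta, best = proposal, value
        return theta, best

    # 9 parameters, matching the count quoted above for the two-gene case
    theta_opt, best_value = random_search(np.zeros(9))
    print(np.round(theta_opt, 2), round(best_value, 3))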

When the noise level is low enough, we find that one of the genes always takes on strongly negative spatial interaction, Jαα < 0, to generate an alternating pattern. This gene does not interact with the others, and contributes one bit (in the low noise limit) to the total PI. There can only be one such gene in the pattern, as any subsequent alternating gene (with the same or opposite polarity) would be redundant with the first one and thus provide no further increase in PI. As this strategy to increase PI by one bit is trivially available to any patterning system at low noise and does not occur at high noise, we restricted our subsequent search to cases where spatial interactions are positive, Jαα > 0, mimicking spatial averaging induced by diffusion or active transport of patterning gene products.

Fig 4A shows the maximal PI carried by two binary genes as a function of intrinsic noise, η. Characteristic examples of corresponding output patterns are shown in Fig 4B. As expected, the optimal pattern at low noise is a Counter, realizing each of the four possible expression states in an equally sized spatial fraction. A network schematic illustrating the optimized parameters is depicted in Fig 4C. As noise increases, the values of optimal parameters undergo an abrupt change and the optimal pattern changes from a Counter to a French Flag, encoding only three states (region (ii)). At the boundary between the two regions there is a visible kink in the PI curve. Continuations of the two coding strategies into the respective other noise regime are depicted as dotted curves in Fig 4A. If the noise level is increased even further, the optimal network changes its coding strategy again and generates a pattern in which both genes redundantly form a boundary in the center (region (iii)).

Fig 4. Positional information carried by two and three patterning genes with a linear morphogen signal.

(A) PI as a function of intrinsic noise level. For each noise level η, all parameters of the network have been optimized. The depicted regions (i)-(iii) indicate different numbers of states encoded by the resulting patterns. At each boundary (non-optimal) continuations of the coding strategy in the neighboring region are depicted as dotted curves. (B) Characteristic patterns of the different regions in (A). (C) Schematics of the network parameters for the different coding strategies. Pointed arrows denote positive interaction, blunted arrows denote negative interaction. The thickness of the arrows indicates their strength. (D) Comparison of PI carried by systems without spatial interactions and systems without local gene-gene interactions as a function of intrinsic noise. For each noise level the parameters of each system have been optimized separately. (E) PI carried by three patterning genes as a function of intrinsic noise. (F) Characteristic three-gene patterns for the different regions in (E).

https://doi.org/10.1371/journal.pone.0163628.g004

What are the respective influences of spatial and local gene-gene interactions on the formation of patterns and the encoding of positional information? To study this question, Fig 4D compares the curve of Fig 4A with curves for PI carried by systems which are optimized without spatial interactions (dashed curve) and without local interactions (dotted curve). Without local interaction between genes the maximally achievable PI is about 1.59 bits, corresponding to the three-state French Flag pattern. This observation can be generalized: if a monotonic input gradient acts on the target genes independently via a monotonic response function (e.g., a sigmoid, as here), then each gene can form only a single transition or boundary. In that case, the French Flag pattern indeed encodes the maximally achievable PI. If, however, the genes interact with each other, their response functions receive multiple inputs and complex patterns are possible. For instance, the system generating the Counter pattern (Fig 4C(i)) has strong mutual repression between the two genes, which switches the lower gene to the inverse of the upper gene where the strength of the morphogen signal is low. Systems without spatial interactions, in contrast, can and do carry as much PI as the fully interacting system when the noise is vanishing. As the noise level increases, however, the fully interacting system always outperforms the system without spatial coupling, demonstrating the important role of spatial noise averaging. At high noise, the optimal boundary pattern is identical for both genes, providing two redundant read-outs of the morphogen signal. In this case, the local gene-gene interaction is positive and strong, providing further noise averaging (across the two readout genes), beyond that due to spatial interactions.

Maximization of PI in a system with three genes corroborates our results for two genes. Again, as intrinsic noise increases, the optimal strategy switches abruptly at particular noise values, marking a transition to a code that specifies one less distinct expression state. Between transitions, the number of distinct states that the network can generate is held constant, but information nevertheless decreases smoothly as increasing noise leads to more ambiguity in the mapping between position and gene expression state (Fig 4E). Three binary genes can generate a maximum of eight distinct expression states, which is achieved by a three gene Counter pattern (Fig 4F(i)). Between the optimal Counter pattern and the fully “redundant” pattern encoding two states, we find a variety of other strategies, including French Flag, that specify an intermediate number of states (Fig 4F(ii)–4F(iv)). The Ising-model-based framework for binary interacting genes is clearly sufficiently rich to generate this variety, including the theoretically optimal Counter. It is possible that even richer models, e.g., models allowing three-way interactions between genes in addition to pairwise interactions, would lead to higher PI values, possibly by being more robust to noise, or by allowing the Counter pattern to be easily generated in systems with K > 3 genes.

These results are not changed significantly by the addition of extrinsic noise. Generally, increasing extrinsic noise decreases the maximum achievable PI and can, analogously to intrinsic noise, lead to a change of the optimal patterning strategy. The effects of extrinsic noise can be effectively attenuated by spatial interaction of the patterning genes. This raises the question of how much positional information loss due to fluctuations in the input morphogen can be avoided by a spatially interacting network of patterning genes.

Positional information of spatially interacting patterning genes can exceed that of the morphogen signal

Can patterning genes encode more positional information than the morphogen signal itself? In other words, can a network generate PI starting with a noisy signal? Intuitively, both Turing patterns and cellular automata would suggest that the answer to this question is affirmative. In a Turing model at steady state, spatial locations are assignable to (at least) two expression states, so that there is positional information where initially there was none. Similarly, consider the simplest cellular automaton that proceeds along one discrete spatial dimension with a simple rule: “Read the value in the current position, increment the value by one, move one cell to the right, write the value.” Such a cellular automaton would generate a separate cell fate (i.e., a unique numerical value) in each position, providing maximal PI. In both cases, PI before patterning appears to be zero, and after patterning has some nonzero value.

More careful thought reveals, however, that the PI was established by transforming information that must have been present already at the beginning of the patterning process. Turing patterning is a deterministic mechanism defined by a set of partial differential equations, whose steady state solution therefore depends on the initial condition and the shape of the boundary. While the mechanism will generically produce domains with a typical lengthscale separated by sharp borders, the positions of the borders will shift with the initial and boundary conditions. Specifying initial and boundary conditions, however, requires information: more bits for more precise specification. Although the formal link between this information and the resulting PI depends on the system, it is clear that an ensemble of Turing systems with arbitrary initial / boundary conditions will not yield an ensemble of patterns with appreciable PI. This is even more obvious in the cellular automaton example. To apply the proposed rule and generate the pattern, one needs to specify the initial condition: the numerical value in the cell at position one. Suppose that there are N possible lattice positions. Specifying the initial value, θ0, for position one then amounts to providing I0 = log2 N bits of information for the automaton to start working. The PI of this initial pattern (with the first cell specified, and all others unspecified) is low. After the automaton finishes, all N positions are uniquely specified, yielding PI of I(σ; x) = log2 N bits. The automaton has therefore taken the initial information I0 and “spread it over space” to generate PI of equal amount. If, however, the initial value is specified poorly (i.e., probabilistically, with close to uniform distribution), the resulting ensemble of patterns will have very small PI.
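
The counting argument can be made concrete in a few lines of Python (a toy illustration constructed for this sketch, not taken from the paper): with the initial value specified exactly, the increment-and-propagate rule labels every position uniquely and PI equals log2 N bits; with the initial value drawn uniformly, the ensemble of patterns carries no PI.

    import numpy as np

    N = 16                                    # lattice positions

    def automaton_pattern(theta0):
        """Values written at positions x = 1..N by the 'increment and move right' rule,
        starting from initial value theta0 (values taken mod N for simplicity)."""
        return (theta0 + np.arange(N)) % N

    def positional_information(prior_over_theta0):
        """I(sigma; x) of the pattern ensemble when theta0 is drawn from the given prior."""
        P = np.zeros((N, N))                  # P[x, value]
        for theta0, p in enumerate(prior_over_theta0):
            for x, v in enumerate(automaton_pattern(theta0)):
                P[x, v] += p
        H = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))
        return H(P.mean(axis=0)) - np.mean([H(P[x]) for x in range(N)])

    exact = np.zeros(N); exact[0] = 1.0       # initial value specified exactly
    vague = np.ones(N) / N                    # initial value essentially unspecified
    print(positional_information(exact))      # log2(16) = 4 bits
    print(positional_information(vague))      # 0 bits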

These restrictions trace back to the Data Processing Inequality (DPI) [27], a central result in information theory. If initial/boundary conditions for a deterministic patterning process are specified with finite precision (e.g., they are noisy across repetitions of the same pattern generation), then the resulting patterns can again be seen as draws from the distribution Qθ({σ(x)}), where θ are now interpreted as the true boundary / initial conditions, which, however, enter the patterning dynamics with some noise. This is much like the extrinsic noise that corrupts the morphogen signal in our Ising-like model. In the case of the simple cellular automaton, it is easy to convince oneself that the final PI must be equal to the initial information, I0. The corresponding patterning can be seen as a Markov chain, θ0 → θ̃0 → {σ(x)}, where θ0 is the “true” initial condition, θ̃0 is its corrupted version, which corresponds to the initial condition not being specified with perfect precision, and {σ(x)} is, as before, the resulting pattern. In this case, DPI states that I({σ(x)}; θ0) ≤ I(θ̃0; θ0)—the precision with which the initial state is specified limits the reproducibility of the resulting pattern. Since in this example the entire pattern is determined by the initial value, PI is limited by the specification of the initial state. The same type of argument applies generally, although the bounds for more complex systems may be difficult to derive.

What limits does the Data Processing Inequality imply for our model system? First, unlike the cellular automaton and the Turing mechanism examples above, our patterning takes place at equilibrium, so that the dependence on initial conditions is lost. When spatial interactions in our model are set to zero, the expression states at individual locations of the lattice become independent of each other. The patterning can then be seen as a Markov chain, xmσ, and DPI requires that I(σ; x)≤I(m; x), i.e., that PI of the patterning genes must be smaller or equal to PI of the morphogen signal. In this case, the network, no matter how complicated, cannot provide more PI than the morphogen signal already has.

How does this picture change when we allow spatial interactions? Fig 5 compares PI carried directly by the morphogen signal with PI carried by an optimized network of three patterning genes responding to that morphogen signal. For steep gradients or high extrinsic noise, PI of the patterning genes can indeed exceed PI in the morphogen signal itself, which appears to be in violation of DPI.

Fig 5. Comparison of PI in input gradient and output pattern.

PI in m (solid) and σ (dashed) are shown for three different input noise levels ((A)-(C)) as a function of χ, which parametrizes the gradient shape (cf. Fig 3A). The computation of I(m; x) is described in S3 Appendix.

https://doi.org/10.1371/journal.pone.0163628.g005

To explain this observation, we turn to the theoretical question of whether a patterning network downstream of the morphogen signal can contribute to the encoding of PI beyond the information already present in the signal. More specifically, given a morphogen signal m and several patterning genes σ responding to it, is it possible that I(σ, m; x) > I(m; x)? Here, I(σ, m; x) is the PI jointly carried by the simultaneous state of both σ and m about position x. The joint information can be split up as follows:

(10)  I(σ, m; x) = I(m; x) + I(σ; x|m)

where the last term is the conditional mutual information,

(11)  I(σ; x|m) = 〈∑σ,x P(σ, x|m) log2 [P(σ, x|m) / (P(σ|m) P(x|m))]〉m

In this context, I(σ; x|m) is the PI carried by the pattern σ in addition to that in the morphogen signal m. It is zero if and only if P(σ|m, x) = P(σ|m). It is easy to see that this condition is met precisely when spatial interactions are zero. Then, σ(x) depends only on its local input, m(x), and not directly on its location within the lattice. In other words, position, the morphogen signal, and the patterning genes form a Markov (dependency) chain, x → m → σ. If, in contrast, patterning genes are spatially coupled, σ(x) additionally depends on the state of σ at neighboring lattice sites, which in turn also respond to their respective local morphogen signals. In that case, position, the morphogen signal, and the patterning genes do not form a Markov chain and DPI does not apply.
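
As a quick, self-contained sanity check of the decomposition in Eq (10) (a generic identity of mutual information, not a computation on the paper's model), the snippet below builds an arbitrary small joint distribution P(x, m, σ) and verifies the chain rule numerically.

    import numpy as np

    rng = np.random.default_rng(1)
    P = rng.random((4, 3, 2))                 # arbitrary joint P[x, m, sigma] on a toy alphabet
    P /= P.sum()

    def MI(joint_2d):
        """Mutual information (bits) between the row and the column variable of a joint table."""
        joint_2d = joint_2d / joint_2d.sum()
        px = joint_2d.sum(axis=1, keepdims=True)
        py = joint_2d.sum(axis=0, keepdims=True)
        mask = joint_2d > 0
        return np.sum(joint_2d[mask] * np.log2(joint_2d[mask] / (px * py)[mask]))

    I_sm_x = MI(P.reshape(4, -1))             # I(sigma, m; x), with (m, sigma) lumped together
    I_m_x = MI(P.sum(axis=2))                 # I(m; x)
    I_s_x_given_m = sum(P[:, j, :].sum() * MI(P[:, j, :])   # I(sigma; x | m), Eq (11)
                        for j in range(P.shape[1]))
    print(round(I_sm_x, 6), round(I_m_x + I_s_x_given_m, 6))   # the two agree, Eq (10)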

Our derivation and numerical results show that a spatially coupled network of genes downstream of the morphogen signal can extract PI in excess of that carried by the morphogen signal itself. Without spatial interactions, this is impossible in our setup, regardless of the complexity of local gene-gene interactions. Note that spatial interactions do not necessarily lead to an increase in PI, but can do so if optimally chosen, in the regime where they permit spatial noise averaging.

Long-range interactions enable Turing-like pattern formation, and make patterns robust to system size and morphogen signal variations

Is there a role for spatial interactions beyond noise averaging? To address this question, we start by studying a seemingly unrelated problem: whether a Turing-like pattern-generating mechanism also exists in our discrete setup. After establishing that this is indeed the case, we proceed to show that, in an appropriate limit, Turing-like pattern-generating capability also confers two biologically desirable properties onto our system: the ability of the resulting patterns to automatically scale with the system size, and robustness to systematic perturbations of the morphogen signal, a special case of “canalization” that has been discussed in the biological literature [51–54].

A distinguishing characteristic of Turing patterning is the emergence of patterns with an intrinsic length scale, which is set by the diffusion ranges of the two reacting species of the model. Until now, our model did not exhibit this property: the spatial scale was either slaved to the external morphogen signal, as in the French Flag model, giving rise to boundary patterns; or the scale was one lattice spacing, as in the alternating pattern, where genes switched from ON to OFF at neighboring sites. Is it possible to generate patterns in which blocks of sites with a controllable intrinsic length scale alternate between ON and OFF states? To test if this behavior can arise in our model, we introduce long range spatial interactions. In particular, we assume that the spatial interaction strength between a gene α at lattice site i and at lattice site j decreases exponentially with distance, Jα(i, j) = J̃α exp(−|i − j|/r), as in Fig 6A. Here, J̃α defines the amplitude of the interaction, which can also be negative. The parameter r defines the interaction range, such that r → 0 leads to nearest-neighbor interactions and r → ∞ leads to a uniform, all-to-all coupling.
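
To make the long-range coupling concrete, the sketch below builds the distance-dependent interaction matrix for one gene and evaluates its contribution to the energy of a blockwise pattern; the amplitude, range, and block size are illustrative choices, and the 1/2 factor simply avoids double-counting pairs under the sign convention of Eq (2).

    import numpy as np

    N = 60
    amplitude, r = -0.05, 5.0                 # illustrative long-range repression (not the paper's values)
    i = np.arange(N)
    J_long = amplitude * np.exp(-np.abs(i[:, None] - i[None, :]) / r)   # decays with |i - j|
    np.fill_diagonal(J_long, 0.0)             # no self-coupling

    def spatial_energy(sigma, J_matrix):
        """Long-range spatial energy, -(1/2) * sum_{i,j} J(i,j) sigma_i sigma_j."""
        return -0.5 * sigma @ J_matrix @ sigma

    # A blockwise alternating pattern, qualitatively like Fig 6C
    sigma = np.where((i // 10) % 2 == 0, 1.0, -1.0)
    print(spatial_energy(sigma, J_long))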

Fig 6. Long range spatial interactions can generate Turing-like patterns with an intrinsic length scale.

(A) Spatial interaction strength of the first gene as a function of the distance |i−j| between lattice sites, for different interaction ranges, r. (B) Schematic diagram of interactions in an extended model of patterning that permits long-range interactions. Interaction strength is indicated by the thickness of the arrows. The two genes in the model interact through a strong locally repressive interaction. The first gene has weak long-range repressive spatial interactions; the second gene has strong nearest-neighbor positive interactions. (C) Patterns generated by the two genes for three different interaction ranges, r. The expression state of the first gene is pinned to ON at the anterior boundary by a strong morphogen signal, which is zero everywhere else. The length scale of the resulting pattern depends on the interaction range, r, of the first gene.

https://doi.org/10.1371/journal.pone.0163628.g006

To find a Turing-like pattern, we considered two mutually repressing genes, one of which interacts in a strong positive nearest-neighbor fashion, whereas the other has weak long range repression (Fig 6B). The resulting patterns for several interaction ranges r are shown in Fig 6C. The system is clearly able to generate a blockwise alternating pattern with a length scale that is controlled by the interaction range, r. We emphasize that the only morphogen signal in this case is a strong positive bias at the anterior boundary, experienced by the first lattice site, provided to break the symmetry between the resulting pattern and its inverse. This explicitly demonstrates that the block length scale is not inherited from the shape of the morphogen signal, but is intrinsic to the interactions between the patterning genes. The same effect can be generated with a single gene, if we allow spatial interactions to change sign with distance (e.g., positive interaction at the nearest-neighbor range, and negative interaction with lattice sites at greater distance).

Can we combine the ability of the Turing-like mechanism to generate intrinsic patterns with the network architecture that yields high-PI Counter patterns? The Turing mechanism has an attractive property: it makes use of the morphogen signal only to break the symmetry between two possible intrinsically stable patterns that are inverses of each other; in all other respects, the pattern is invariant, or robust, to changes in the morphogen signal magnitude or shape. The notion that external signals simply serve to select one of the few stable patterns (attractors) of the system, while the properties of the patterns are generated by intrinsic interactions, is known as “canalization” in the biological literature, but also has a very long history in neuroscience and statistical physics. Another feature of patterning that appears beneficial in a biological context is the ability of the system to translate small variations in the overall system size into proportional variations in the resulting gene expression pattern, i.e., to scale the pattern with system size. If the system exhibits such “scaling,” gene expression features, for instance boundaries between ON and OFF states, will occur at constant fractional coordinates in the system, rather than at constant absolute coordinates. Here we test whether elements of Turing-like patterning can provide “scaling” and “canalization” properties to our Counter networks.

Before proceeding, we give an intuitive account of why negative long-range spatial interactions can be beneficial. A defining property of Counter patterns is their balanced use of ON and OFF expression states. Upon perturbations to the morphogen signal, or shifts of the pattern caused by changes in system size, this balance will be broken. We are thus looking for a modification of the energy function, Eq (2) (alternatively, Eq (4) for multiple genes), that penalizes any deviation from such balance. The simplest way to implement this is to add a term of the form $\Delta E = -J_{\rm LR}\left(\sum_i \sigma_i\right)^2$ (or an equivalent term for each gene in the multi-gene case). The squared sum vanishes when the ON and OFF states are exactly balanced and is positive for any deviation from balance; when $J_{\rm LR}$ is chosen to be negative, such deviations are energetically disfavored. If one expands the square and takes into account that $\sigma_i = \pm 1$, one finds that the term corresponds to a global, all-to-all (long-range) negative interaction between any pair of lattice sites. Crucially, such a term can simply be added on top of an optimized Counter network: because the optimized Counter already exhibits the ON/OFF balance, the new term does not change the Counter state itself, but only makes states deviating from it less likely. Realistically, the strength and range of such an interaction cannot be made infinite; the relevant question is therefore whether canalization and scaling can be obtained with a $J_{\rm LR}$ of finite magnitude and range.
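
For completeness, the expansion reads as follows (using $\sigma_i = \pm 1$ and our placeholder symbol $J_{\rm LR}$ for the long-range amplitude):

\[
-J_{\rm LR}\Big(\sum_{i=1}^{N}\sigma_i\Big)^{2}
= -J_{\rm LR}\sum_{i=1}^{N}\sigma_i^{2} \;-\; J_{\rm LR}\sum_{i\neq j}\sigma_i\sigma_j
= -J_{\rm LR}\,N \;-\; J_{\rm LR}\sum_{i\neq j}\sigma_i\sigma_j .
\]

The first term is a constant energy offset; the second is a uniform pairwise coupling of strength $J_{\rm LR}$ between every pair of lattice sites, which penalizes globally unbalanced configurations when $J_{\rm LR} < 0$.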

We took the optimized two-gene Counter network and equipped it with long-range negative spatial interactions, as shown in Fig 7D. We first tested whether this addition confers the scaling property on our Counter network. To this end, we consider a morphogen signal m(x) which spans its dynamic range over a default system size of N = 60 lattice sites. We model variations of the system size without scaling of the gradient by cutting the system and the gradient short by 10 lattice sites, or by extending both by 10 lattice sites at the constant posterior value of the morphogen signal (Fig 7A). The corresponding gene expression patterns of the two-gene Counter network are depicted in Fig 7B. Without the new long-range spatial interactions, variations in system size do not result in scaling of the pattern, as expected; instead, the boundaries remain fixed at their absolute positions. In contrast, when we add negative long-range interactions to each gene in our network, the pattern scales approximately, as shown in Fig 7C. Specifically, the central boundary is preserved with high precision, while the boundaries at 1/4 and 3/4 of the system length shift only marginally.
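
The size-variation protocol can be sketched as follows; this is our own illustration, and the exponential shape of the gradient is an assumption made purely for concreteness.

import numpy as np

def morphogen(N, N_default=60, decay=12.0):
    # Morphogen profile spanning its dynamic range over N_default sites (assumed exponential shape);
    # for N < N_default the system and the gradient are cut short, for N > N_default the system is
    # extended at the constant posterior value of the signal.
    m_default = np.exp(-np.arange(N_default) / decay)
    if N <= N_default:
        return m_default[:N]
    return np.concatenate([m_default, np.full(N - N_default, m_default[-1])])

m50, m60, m70 = morphogen(50), morphogen(60), morphogen(70)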

Fig 7. Long-range interactions can stabilize Counter patterns against variations in system size and morphogen signal.

(A) Morphogen signal for three different system sizes, N = 50, 60, 70. The posterior boundary of the system is depicted as a dashed line. We consider how a two-gene Counter network, optimized for N = 60, changes upon variations in system size. (B) Resulting patterns of the Counter network for the three system sizes depicted in (A). The pattern does not scale with the system size; instead, boundaries form at the same absolute location. (C) Resulting patterns for the same network as in (B) with additional negative long-range interactions. Pattern shifts with system size are largely suppressed. (D) Spatial interaction strength as a function of distance. Nearest neighbors interact positively, while sites further away are coupled negatively with exponentially decaying strength of maximal amplitude $J_{\rm LR}$. (E) We perturb the morphogen signal with a uniform offset ϵ, $m_{\epsilon}(x) = m(x) + \epsilon$. Example patterns generated by the optimal two-gene Counter network, at different strengths of the long-range interactions, $J_{\rm LR}$, and different morphogen signal perturbation magnitudes, ϵ. At ϵ = 0, irrespective of $J_{\rm LR}$, the system generates the optimal pattern. When |ϵ| increases, the patterns shift (providing less PI) for weak $J_{\rm LR}$, but when $|J_{\rm LR}|$ is large, the pattern is robust to such perturbations. (F) To quantify the robustness to ϵ perturbations, we compute the overlap, q, of the resulting pattern with the optimal Counter pattern. The overlap is shown as a function of ϵ and $J_{\rm LR}$; for strongly negative $J_{\rm LR}$, the overlap is high irrespective of the perturbation strength, ϵ. (G) Susceptibility to small perturbations ϵ as a function of $J_{\rm LR}$ shows a transition into a robust regime, $\chi_m \to 0$, as the magnitude of $J_{\rm LR}$ increases.

https://doi.org/10.1371/journal.pone.0163628.g007

Next, we examine in detail the robustness to systematic perturbations of the morphogen signal. We introduce an additive offset ϵ to the morphogen signal, $m_{\epsilon}(x) = m(x) + \epsilon$, and ask about the resulting gene expression patterns with or without negative long-range spatial interactions. Note that additive perturbations of the morphogen signal map, in the thermodynamic model of gene regulation, to multiplicative perturbations of the morphogen concentration, c(x). Such perturbations can be interpreted as variations in morphogen dosage, and robustness to such perturbations has been a focus of several experimental studies. Fig 7E demonstrates that strong long-range spatial interactions indeed make the pattern robust to (large) changes in ϵ, while in the absence of such interactions the patterns shift substantially with ϵ.

To quantify the stability of the pattern, we compute the overlap, q, between the perturbed and the unperturbed pattern as a function of the perturbation strength. An overlap of 1 means that under the perturbation the resulting pattern is identical to the (Counter) pattern without perturbation, and therefore fully robust; an overlap of 0 means that on average half the expression states are inverted; an overlap of −1 indicates that the perturbed pattern is the exact inverse of the unperturbed one. Mapping out the overlap as a function of the perturbation strength, ϵ, and the long-range interaction amplitude, $J_{\rm LR}$, in Fig 7F, we see that with a sufficiently strong, but still finite, $J_{\rm LR}$, the patterns can be made almost completely robust to large morphogen signal perturbations. A similar differential analysis in Fig 7G shows how the susceptibility of the gene expression pattern to small morphogen signal perturbations vanishes as the strength of the long-range interactions increases.
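
A minimal implementation of this overlap measure is shown below; the averaging over genes and lattice sites is our assumed normalization, introduced here for illustration.

import numpy as np

def overlap(sigma_perturbed, sigma_reference):
    # Both arrays hold +/-1 expression states, shape (num_genes, N).
    # Returns q = 1 for identical patterns, q = 0 when half the states are flipped on average,
    # and q = -1 for the exact inverse pattern.
    return float(np.mean(sigma_perturbed * sigma_reference))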

In sum, we have shown that “canalization” and “scaling” in our toy model system can be provided by the addition of long-range negative spatial interactions. Curiously, these interactions can be added to a previously optimized Counter network without any need to change the optimized parameters, because they do not affect the energy or the identity of the Counter pattern ground state. As in the Turing model, these interactions stabilize the ground state. Unlike in the Turing model, however, the Counter pattern itself is generated solely by the joint action of a French-Flag-like morphogen signal readout and local gene-gene interactions.

Should the long-range negative interactions that confer robustness also follow from an information optimization principle? Recalling our introductory remarks, robustness to, e.g., morphogen dosage ϵ could be measured directly as a low value of the mutual information $I(\epsilon; \{\sigma\})$ between the perturbation and the expression pattern, implying that changes in ϵ would not affect the expression patterns. The question is whether to gain robustness one needs to explicitly maximize PI while jointly minimizing $I(\epsilon; \{\sigma\})$, or whether it is sufficient to maximize PI alone and obtain robustness as an automatic consequence. Due to the numerical complexity of the problem we did not perform such a large-scale joint optimization here; we note, however, that at non-zero intrinsic or extrinsic noise, long-range interactions stabilize the Counter pattern and thus maintain high PI beyond what could be achieved by local interactions alone. In the PI-maximization framework proposed here, it thus seems that maximization of PI alone will also yield robustness, so long as the noise statistics under which the optimization is carried out contain the kinds of perturbations to which the system should be robust. As a conjecture, we suggest that, were we able to carry out the large-scale optimization numerically, we would find optimal solutions with long-range negative couplings that yield canalization, provided that our extrinsic noise also included correlated additive fluctuations (mimicking the overall additive ϵ shifts in the morphogen signal considered above).

Discussion

We introduced a tractable toy model of patterning that extends Wolpert’s French Flag model, in which several two-state genes respond to a morphogen signal based on a fixed set of thresholds. Our extension allows these genes to cross-regulate each other, to interact spatially (e.g., due to diffusion or cell-cell signaling), and to include the effects of both intrinsic noise (e.g., due to stochasticity in gene expression) and extrinsic noise (e.g., due to variability in the morphogen signal). A physics equivalent of our model is a set of coupled 1D Ising chains responding to an inhomogeneous external field; this link to statistical physics allows us to perform most of our computations exactly. In this well-defined setup we ask which patterns of gene expression encode the maximal amount of positional information. Our enquiry is set in an optimization framework: for each choice of noise magnitude and morphogen signal profile, we look for the optimal set of gene-gene and spatial interactions and examine the resulting spatial patterns of gene expression.
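
As an illustration of the kind of exact computation this link enables, the sketch below (ours, not the authors' code) uses standard transfer-matrix recursions to obtain exact single-site marginals of a single 1D Ising chain with nearest-neighbor coupling J and a site-dependent field h, where h plays the role of a noise-free morphogen signal.

import numpy as np

def ising_marginals(h, J, beta=1.0):
    # Exact P(sigma_i = +1) for a 1D Ising chain with nearest-neighbor coupling J and
    # site-dependent fields h, computed by forward-backward transfer-matrix recursions.
    N = len(h)
    s = np.array([+1.0, -1.0])
    fwd = np.zeros((N, 2))
    bwd = np.zeros((N, 2))
    fwd[0] = np.exp(beta * h[0] * s)
    for i in range(1, N):
        T = np.exp(beta * (J * np.outer(s, s) + h[i] * s[None, :]))
        fwd[i] = fwd[i - 1] @ T
        fwd[i] /= fwd[i].sum()          # rescale to avoid overflow; marginals are unaffected
    bwd[N - 1] = 1.0
    for i in range(N - 2, -1, -1):
        T = np.exp(beta * (J * np.outer(s, s) + h[i + 1] * s[None, :]))
        bwd[i] = T @ bwd[i + 1]
        bwd[i] /= bwd[i].sum()
    w = fwd * bwd
    return w[:, 0] / w.sum(axis=1)      # probability of the ON (+1) state at each lattice site

# Example: a linearly graded field (a stand-in for the morphogen signal) with weak positive coupling
p_on = ising_marginals(h=np.linspace(1.0, -1.0, 60), J=0.3)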

We find that at vanishing noise magnitude, the optimal gene expression pattern is the so-called Counter, where each of the 2^K binary patterns that can be realized by K genes appears equally often along the spatial coordinate; this combinatorial code for position is, on information-theoretic grounds, the best achievable solution, independently of the patterning model. The Counter pattern can be realized if each patterning gene can be activated at a different threshold and if the local gene-gene interactions can be adjusted; the optimal interactions are predominantly repressive. Increasing the noise quickly perturbs the Counter pattern, until the optimal pattern switches from the Counter to a French-Flag-like set of stripes and, finally, to a set of redundant genes that are all activated at the same threshold. The addition of short-range positive spatial interactions of intermediate strength makes the optimal patterns substantially more stable against increasing noise; qualitatively, this effect is analogous to noise averaging due to diffusive coupling in continuous systems. Strong short-range negative interactions generate an alternating pattern of gene expression which robustly separates odd from even rows of cells, a strategy that can yield one additional bit of PI when noise is low.
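
For concreteness, an idealized Counter pattern can be written down directly; in the sketch below (our illustration), which gene plays the role of which “bit” of the combinatorial code is an arbitrary choice.

import numpy as np

def counter_pattern(K, N):
    # Idealized Counter: K binary genes over N lattice sites such that each of the 2**K
    # combinations of ON/OFF states occupies an equally long block of positions.
    # Gene k flips every N // 2**(k + 1) sites, i.e., it acts as bit k of a binary counter.
    # Returns +/-1 states of shape (K, N); assumes N is divisible by 2**K.
    states = np.zeros((K, N), dtype=int)
    for k in range(K):
        block = N // 2 ** (k + 1)
        states[k] = np.where((np.arange(N) // block) % 2 == 0, 1, -1)
    return states

# Example: K = 3 genes on N = 24 sites yield 8 blocks of 3 sites, each with a distinct code word
print(counter_pattern(3, 24))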

Qualitatively new effects emerge if the spatial interactions can be long-ranged. We observe that our discrete, Ising-based model exhibits Turing-like patterns whose spatial scale is set by the range of the repressive interactions. Surprisingly, we find that the energy function, optimized to yield Counter patterns with high positional information, can be modified by the addition of strong long-range repressive interactions. This modification does not perturb the Counter pattern, but makes it robust to changes in morphogen dosage and to changes in system size, producing patterns that approximately scale with system size.

Taken together, our analysis allows us to identify elements—the basic building blocks—of positional information: adjustable thresholds, as in the French Flag model, to differentially drive gene activation; local repressive interactions to generate combinatorial codes for position; short-range positive spatial couplings to enable noise averaging and stabilize the patterns; and long-range negative spatial couplings to provide scaling and robustness via canalization. The optimal patterning system thus combines elements of the French Flag model with elements inherent to the Turing mechanism. Furthermore, these elements need not be identified and combined by hand, but emerge from a single information-theoretic optimization principle, and their contribution towards the encoding of positional information can be individually quantified.

The explicit purpose of this work has been to provide conceptual clarity and computational tractability rather than a detailed model of any particular patterning system. To achieve these goals, we had to sacrifice several crucial aspects of biological realism. First, real patterning does not happen at equilibrium, but is rather a driven dynamical process evolving from some initial state. Second, as real patterning mechanisms are dynamic and might involve multiple timescales, noise mitigation mechanisms beyond spatial averaging, e.g., temporal averaging, should be available. Third, the assumption that expression states of patterning genes are binary might be a poor one. For example, in the gap gene system in Drosophila, intermediate levels of expression are of crucial importance [25]. On the other hand, regulatory circuits in which individual genes have strong positive self-interactions and thus exhibit bistability could be well captured by our model. Fourth, the Ising framework assumes a particular (Boltzmann) distribution over expression states and thus leaves no degree of freedom to describe intrinsic stochasticity beyond its magnitude; in contrast, gene expression noise in real regulatory networks has a complicated relation to the mean expression [55]. Furthermore, because our model is a statistical physics model at equilibrium, interactions between the genes are necessarily symmetric, which need not be the case in realistic gene regulatory networks. Last but not least, we here follow previous models of embryonic development in that we consider patterns along a single (one-dimensional) axis of the embryo, assuming independence from the patterns along the other axes [2, 16]. An interesting direction for future research would be to study patterns with high positional information in more than one dimension and in different geometries.

Despite these approximations, our model qualitatively recapitulates many aspects of the optimal patterning solutions that have been reported previously in more realistic setups, where the required computations are substantially more complicated [36–40, 50]. This is in line with our previous observations that many mechanistic details defining the pattern-forming system Q do not matter, so long as the system has the ability to access patterns that achieve high positional information (which can be found by optimization). Pattern-forming processes can be complicated and can include spatial interactions between nearby cells or even long-range or global interactions. Ultimately, however, these systems generate patterns whose power is quantified by a local quantity, the positional information I, which is sufficient to summarize the limits on readout error. For optimal solutions to the patterning problem, this locality of the positional code is crucial: there are many possible pattern-forming systems, but only in a restricted subset do we expect high information about global position to be available locally.

Supporting Information

S1 Appendix. Error bound on an estimator of position.

A lower bound on the error of a positional estimator with limited positional information is derived.

https://doi.org/10.1371/journal.pone.0163628.s001

(PDF)

S2 Appendix. Computation of positional information in an Ising model.

The effect of noise in the input field on an Ising model is approximated. Furthermore, methods to compute positional information in an Ising model by transfer matrices and Monte Carlo sampling are outlined.

https://doi.org/10.1371/journal.pone.0163628.s002

(PDF)

S3 Appendix. Computation of positional information in a discrete morphogen field.

The positional information in a discrete morphogen field with Gaussian noise is computed.

https://doi.org/10.1371/journal.pone.0163628.s003

(PDF)

Acknowledgments

The authors would like to thank Thomas Sokolowski and Filipe Tostevin for helpful discussions. This research was supported by the German Excellence Initiative via the program “Nanosystems Initiative Munich”, the German Research Foundation via the SFB 1032 “Nanoagents for Spatiotemporal Control of Molecular and Cellular Reactions” and by the Austrian Science Fund (FWF P 28844).

Author Contributions

  1. Conceived and designed the experiments: GT UG.
  2. Performed the experiments: PH.
  3. Analyzed the data: PH.
  4. Wrote the paper: PH UG GT.

References

  1. Driever W, Nüsslein-Volhard C. The bicoid protein determines position in the Drosophila embryo in a concentration-dependent manner. Cell. 1988;54(1):95–104. pmid:3383245
  2. Gregor T, Tank D, Wieschaus E, Bialek W. Probing the limits to positional information. Cell. 2007;130(1):153–164. pmid:17632062
  3. Wolpert L. Positional information and the spatial pattern of cellular differentiation. J Theor Biol. 1969;25(1):1–47. pmid:4390734
  4. Dahmann C, Oates AC, Brand M. Boundary formation and maintenance in tissue development. Nat Rev Genet. 2011;12(1):43–55. pmid:21164524
  5. Turing AM. The chemical basis of morphogenesis. Phil Trans R Soc B. 1952;237(641):37–72.
  6. Drossopoulou G, Lewis KE, Sanz-Ezquerro JJ, Nikbakht N, McMahon AP, Hofmann C, et al. A model for anteroposterior patterning of the vertebrate limb based on sequential long- and short-range Shh signalling and Bmp signalling. Development. 2000;127(7):1337–1348. pmid:10704381
  7. Tickle C. Making digit patterns in the vertebrate limb. Nat Rev Mol Cell Biol. 2006;7(1):45–53. pmid:16493412
  8. Sheth R, Marcon L, Bastida MF, Junco M, Quintana L, Dahn R, et al. Hox genes regulate digit patterning by controlling the wavelength of a Turing-type mechanism. Science. 2012;338(6113):1476–1480. pmid:23239739
  9. Raspopovic J, Marcon L, Russo L, Sharpe J. Digit patterning is controlled by a Bmp-Sox9-Wnt Turing network modulated by morphogen gradients. Science. 2014;345(6196):566–570. pmid:25082703
  10. Bray SJ. Notch signalling: a simple pathway becomes complex. Nat Rev Mol Cell Biol. 2006;7(9):678–689. pmid:16921404
  11. Hamada H, Watanabe M, Lau HE, Nishida T, Hasegawa T, Parichy DM, et al. Involvement of Delta/Notch signaling in zebrafish adult pigment stripe patterning. Development. 2014;141(2):318–324. pmid:24306107
  12. Gregor T, Fujimoto K, Masaki N, Sawai S. The onset of collective behavior in social amoebae. Science. 2010;328(5981):1021–1025. pmid:20413456
  13. Cooke J, Zeeman EC. A clock and wavefront model for control of the number of repeated structures during animal morphogenesis. J Theor Biol. 1976;58(2):455–476. pmid:940335
  14. Palmeirim I, Henrique D, Ish-Horowicz D, Pourquié O. Avian hairy gene expression identifies a molecular clock linked to vertebrate segmentation and somitogenesis. Cell. 1997;91(5):639–648. pmid:9393857
  15. Oates AC, Morelli LG, Ares S. Patterning embryos with oscillations: structure, function and dynamics of the vertebrate segmentation clock. Development. 2012;139:625–639. pmid:22274695
  16. Jaeger J, Surkova S, Blagov M, Janssens H, Kosman D, Kozlov KN, et al. Dynamic control of positional information in the early Drosophila embryo. Nature. 2004;430(6997):368–371. pmid:15254541
  17. Wolpert L. Positional information revisited. Development. 1989;107:3–12. pmid:2699855
  18. Crauk O, Dostatni N. Bicoid determines sharp and precise target gene expression in the Drosophila embryo. Curr Biol. 2005;15(21):1888–1898. pmid:16271865
  19. Jaeger J, Reinitz J. On the dynamic nature of positional information. BioEssays. 2006;28(11):1102–1111. pmid:17041900
  20. Kerszberg M, Wolpert L. Specifying positional information in the embryo: looking beyond morphogens. Cell. 2007;130(2):205–209. pmid:17662932
  21. Jaeger J, Irons D, Monk N. Regulative feedback in pattern formation: towards a general relativistic theory of positional information. Development. 2008;135(19):3175–3183. pmid:18776142
  22. Bollenbach T, Pantazis P, Kicheva A, Bökel C, González-Gaitán M, Jülicher F. Precision of the Dpp gradient. Development. 2008;135(6):1137–1146. pmid:18296653
  23. Jaeger J. Modelling the Drosophila embryo. Mol Biosyst. 2009;5(12):1549–1568. pmid:20023719
  24. He F, Wen Y, Cheung D, Deng J, Lu LJ, Jiao R, et al. Distance measurements via the morphogen gradient of Bicoid in Drosophila embryos. BMC Dev Biol. 2010;10(1):80. pmid:20678215
  25. Dubuis JO, Tkačik G, Wieschaus EF, Gregor T, Bialek W. Positional information, in bits. Proc Natl Acad Sci USA. 2013;110(41):16301–16308. pmid:24089448
  26. Tkačik G, Dubuis JO, Petkova MD, Gregor T. Positional Information, Positional Error, and Readout Precision in Morphogenesis: A Mathematical Framework. Genetics. 2015;199(1):39–59. pmid:25361898
  27. Cover TM, Thomas JA. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). Wiley-Interscience; 2006.
  28. Ziv E, Nemenman I, Wiggins CH. Optimal signal processing in small stochastic biochemical networks. PLoS ONE. 2007;2(10):e1077. pmid:17957259
  29. Tkačik G, Callan CG, Bialek W. Information capacity of genetic regulatory elements. Phys Rev E. 2008;78:011910.
  30. Tostevin F, ten Wolde PR. Mutual information between input and output trajectories of biochemical networks. Phys Rev Lett. 2009;102:218101. pmid:19519137
  31. Bowsher CG, Swain PS. Environmental sensing, information transfer, and cellular decision-making. Curr Opin Biotech. 2014;28:149–155. pmid:24846821
  32. Tkačik G, Callan CG, Bialek W. Information flow and optimization in transcriptional regulation. Proc Natl Acad Sci USA. 2008;105(34):12265–12270. pmid:18719112
  33. Cheong R, Rhee A, Wang CJ, Nemenman I, Levchenko A. Information transduction capacity of noisy biochemical networks. Science. 2011;334:354–358. pmid:21921160
  34. Selimkhanov J, Taylor B, Yao J, Pilko A, Albeck J, Hoffmann A, et al. Accurate information transmission through dynamic biochemical signaling networks. Science. 2014;346:1370–1373. pmid:25504722
  35. Hansen AS, O’Shea EK. Limits on information transduction through amplitude and frequency regulation of transcription factor activity. eLife. 2015;4:e06559.
  36. Tkačik G, Walczak AM, Bialek W. Optimizing information flow in small genetic networks. Phys Rev E. 2009;80:031920.
  37. Walczak AM, Tkačik G, Bialek W. Optimizing information flow in small genetic networks. II. Feed-forward interactions. Phys Rev E. 2010;81:041905.
  38. Tkačik G, Walczak AM, Bialek W. Optimizing information flow in small genetic networks. III. A self-interacting gene. Phys Rev E. 2012;85:041903.
  39. Sokolowski TR, Tkačik G. Optimizing information flow in small genetic networks. IV. Spatial coupling. Phys Rev E. 2015;91:062710.
  40. Sokolowski TR, Walczak AM, Bialek W, Tkačik G. Extending the dynamic range of transcription factor action by translational regulation. Phys Rev E. 2016;93(2):022404. pmid:26986359
  41. Holloway DM, Harrison LG, Kosman D, Vanario-Alonso CE, Spirov AV. Analysis of pattern precision shows that Drosophila segmentation develops substantial independence from gradients of maternal gene products. Dev Dyn. 2006;235(11):2949–2960. pmid:16960857
  42. Liu F, Morrison AH, Gregor T. Dynamic interpretation of maternal inputs by the Drosophila segmentation gene network. Proc Natl Acad Sci USA. 2013;110(17):6724–6729. pmid:23580621
  43. Staller MV, Fowlkes CC, Bragdon MDJ, Wunderlich Z, Estrada J, DePace AH. A gene expression atlas of a bicoid-depleted Drosophila embryo reveals early canalization of cell fate. Development. 2015;142(3):587–596. pmid:25605785
  44. Schneidman E, Berry MJ, Segev R, Bialek W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature. 2006;440(7087):1007–1012. pmid:16625187
  45. Tkačik G, Marre O, Mora T, Amodei D, Berry MJ II, Bialek W. The simplest maximum entropy model for collective behavior in a neural network. J Stat Mech Theor Exp. 2013;2013(03):P03011.
  46. Heitzler P, Simpson P. The choice of cell fate in the epidermis of Drosophila. Cell. 1991;64(6):1083–1092. pmid:2004417
  47. Henrique D, Adam J, Myat A, Chitnis A, Lewis J, Ish-Horowicz D. Expression of a Delta homologue in prospective neurons in the chick. Nature. 1995;375(6534):787–790. pmid:7596411
  48. Swain PS, Elowitz MB, Siggia ED. Intrinsic and extrinsic contributions to stochasticity in gene expression. Proc Natl Acad Sci USA. 2002;99(20):12795–12800. pmid:12237400
  49. Paulsson J. Summing up the noise in gene networks. Nature. 2004;427(6973):415–418. pmid:14749823
  50. Erdmann T, Howard M, Ten Wolde PR. Role of spatial averaging in the precision of gene expression patterns. Phys Rev Lett. 2009;103(25):258101. pmid:20366291
  51. Lott SE, Kreitman M, Palsson A, Alekseeva E, Ludwig MZ. Canalization of segmentation and its evolution in Drosophila. Proc Natl Acad Sci USA. 2007;104(26):10926–10931. pmid:17569783
  52. Manu, Surkova S, Spirov AV, Gursky VV, Janssens H, Kim A, et al. Canalization of gene expression in the Drosophila blastoderm by gap gene cross regulation. PLoS Biol. 2009;7(3):e1000049. pmid:19750121
  53. Manu, Surkova S, Spirov AV, Gursky VV, Janssens H, Kim A, et al. Canalization of gene expression and domain shifts in the Drosophila blastoderm by dynamical attractors. PLoS Comput Biol. 2009;5(3):e1000303. pmid:19282965
  54. Jaeger J, Monk N. Bioattractors: dynamical systems theory and the evolution of regulatory processes. J Physiol. 2014;592(11):2267–2281. pmid:24882812
  55. Tkačik G, Gregor T, Bialek W. The role of input noise in transcriptional regulation. PLoS ONE. 2008;3(7):e2774. pmid:18648612