Size and structure of the sequence space of repeat proteins

The coding space of protein sequences is shaped by evolutionary constraints set by requirements of function and stability. We show that the coding space of a given protein family, i.e. the total number of sequences in that family, can be estimated using maximum-entropy models trained on multiple sequence alignments of naturally occurring amino-acid sequences. We analyzed three abundant repeat-protein families, whose members are large proteins made of many repetitions of conserved portions of ∼30 amino acids, and calculated the size of their coding spaces. While amino-acid conservation at each position of the alignment explains most of the reduction of diversity relative to completely random sequences, we found that correlations between amino-acid usage at different positions significantly impact that diversity. We quantified the impact of different types of correlations, functional and evolutionary, on sequence diversity. Analysis of the detailed structure of the coding space of the families revealed a rugged landscape, with many local energy minima of varying sizes organised hierarchically, reminiscent of the frustrated energy landscapes of spin glasses in physics. This clustered structure indicates a multiplicity of subtypes within each family, and suggests new strategies for protein design.

Introduction

Natural proteins contain a record of their evolutionary history, as selective pressure constrains their amino-acid sequences to perform certain functions. Yet if we take all proteins found in nature, their sequences appear random, without apparent rules distinguishing them from arbitrary polypeptides. Nonetheless, the volume of sequence space taken up by existing proteins is very small compared to all possible polypeptide strings of a given length [1], even more so when specializing to a given structure [2]. Clearly, not all variants are equally likely to survive [3][4][5]. To better understand the structure of the space of natural proteins, it is useful to group them into families of proteins with similar fold, function, and sequence, believed to be under a common selective pressure. Assuming that the ensemble of protein families is equilibrated, there should exist a relationship between the conserved features of their amino-acid sequences and their function. This relation can be extracted by examining statistics of amino-acid composition, starting with single sites in multiple alignments (as provided e.g. by PFAM [6,7]). More interesting information can be extracted from the covariation of amino-acid usage at pairs of positions [8][9][10] or using machine-learning techniques [11]. Models of protein sequences based on pairwise covariations have been shown to successfully predict pairwise amino-acid contacts in three-dimensional structures [12][13][14][15][16][17], aid protein-folding algorithms [18,19], and predict the effects of point mutations [17,20,21,22]. However, little is known about how these identified amino-acid constraints affect the global size, shape, and structure of the sequence space. Answering these questions is a first step towards mapping out the possible and the realized evolutionary trajectories of protein sequences [23,24].
We use tools and concepts from the statistical mechanics of disordered systems to study collective, protein-wide effects and to understand how evolutionary constraints shape the landscape of protein families. We go beyond previous work, which focused on local effects (pairwise contacts between residues, effects of single amino-acid mutations), to ask how amino-acid conservation and covariation restrict and shape the landscape of sequences in a family. Specifically, we characterize the size of the ensemble, defined as the effective number of sequences in a family, as well as its detailed structure: is it made of a single block, or divided into clusters or "basins"? These are intrinsically collective properties that cannot be assessed locally.
Repeat proteins are excellent systems in which to quantify these collective effects, as they combine local and global interactions. Repeat proteins are found as domains or subdomains in a very large number of functionally important proteins, in particular signaling proteins (e.g. NF-κB, p16, Notch [25]). They are usually composed of tandem repetitions of ∼30 amino acids that fold into elongated architectures. Repeat proteins have been divided into families based on their structural similarity. Here we consider three abundant repeat-protein families, ankyrin repeats (ANK), tetratricopeptide repeats (TPR), and leucine-rich repeats (LRR), which fold into repetitive structures (see Fig 1). In addition to interactions between residues within one repeat, repeat-protein evolution is constrained by inter-repeat interactions, which lead to the characteristic accordion-like folds. Thanks to these separable types of constraints, as well as the possibility of intra- and inter-family comparisons, repeat proteins are perfect candidates for asking questions about the origins and effects of the constraints that globally shape their sequences.
A recent study [26] addressed the question of the total number of sequences within a given protein family, focusing on ten single-domain families. It took a thermodynamic approach similar to the one followed here, but had to estimate experimentally the free-energy threshold ΔG below which sequences would no longer fold properly. Here we overcome this limitation by forgoing the threshold entirely: we determine the sequence entropy directly, which we argue is equivalent to using a threshold free energy by virtue of the equivalence of ensembles. We precisely quantify the sequence entropy of three repeat-protein families for which detailed evolutionary energetic fields are known [27]. We explore the properties of the evolutionary landscape shaped by amino-acid frequency constraints and correlations. We ask whether the energy landscape defined in the sequence space of repeat proteins is made of a single basin, or rather of a multitude of basins connected by ridges and passes ("metastable states"), as would be expected from spin-glass theory. The specific example of repeat proteins makes it possible to analyze the sources of potential landscape ruggedness, and to identify which repeat-protein families can be well separated into subfamilies. The rich metastable-state structure that we find demonstrates the importance of interactions in shaping the protein-family ensemble.

Statistical models of repeat-protein families
We start by building statistical models for the three repeat-protein families presented in Fig 1 (ANK, TPR, LRR). These models give the probability P(σ) of finding in the family of interest a particular sequence σ = (σ_1, . . ., σ_2L) of two consecutive repeats of size L. The model is designed to be as random as possible, while agreeing with key statistics of variation and co-variation in a multiple sequence alignment of the protein family. Specifically, P(σ) is obtained as the distribution of maximum entropy [28] which has the same amino-acid frequencies at each position as in the alignment, as well as the same joint frequencies of amino-acid usage at each pair of positions. Additionally, repeat proteins share many amino acids between consecutive repeats, both because the repeats descend from a common ancestor and because of evolutionary selection acting on the protein. To account for this special property of repeat proteins, we require that the model reproduce the distribution of overlaps ID(σ) = Σ_{i=1}^{L} δ_{σ_i, σ_{i+L}} between consecutive repeats. Using the technique of Lagrange multipliers, the distribution can be shown to take the form [17]:

P(σ) = (1/Z) e^{−E(σ)},  (1)

with

E(σ) = −Σ_i h_i(σ_i) − Σ_{i<j} J_ij(σ_i, σ_j) − λ_{ID(σ)},  (2)

where h_i(σ_i), J_ij(σ_i, σ_j), and {λ_ID}, ID = 0, 1, . . ., L, are adjustable Lagrange multipliers that are fit to the data to reproduce the empirically observed site-dependent amino-acid frequencies f_i(σ_i), the joint probabilities of pairs of positions f_ij(σ_i, σ_j), and the distribution of Hamming distances between consecutive repeats, P(ID(σ)); this fitting is equivalent to maximizing the likelihood of the data under the model. We fit these parameters using a gradient-ascent algorithm: we start from an initial guess of the parameters, generate sequences via Monte-Carlo simulations, and update the parameters proportionally to the difference between the empirical observables and those estimated from the model-generated sequences.
We repeat the previous steps until the model reproduces the empirical observables defined above, with a target precision set by the finite size of our original dataset, as in Ref. [17]. See Methods for more details. We tested the convergence of the learning procedure by generating synthetic datasets and re-learning the model (see Methods). By analogy with Boltzmann's law, we call E(σ) a statistical energy, which is in general distinct from any physical energy. The particular form of the energy (Eq 2) resembles that of a disordered Potts model. This mathematical equivalence makes it possible to study effects that are characteristic of disordered systems, such as frustration or the existence of an energy landscape with multiple valleys, as we will discuss in the next sections.
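The moment-matching loop described above can be sketched in a few lines. The toy example below is a hypothetical illustration, not the production fitting code: it uses a reduced alphabet (q = 3 instead of 21), short sequences, synthetic independent-site "data", a crude Metropolis sampler, and omits the λ_ID term; all parameter values are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
q, L, n_seq = 3, 5, 300          # toy alphabet size, length, dataset size

# Toy "natural alignment": independent sites with position-specific biases.
bias = rng.dirichlet(np.full(q, 2.0), size=L)
data = np.stack([rng.choice(q, size=n_seq, p=bias[i]) for i in range(L)], axis=1)

def f1(seqs):
    """Site-wise frequencies f_i(a)."""
    return np.stack([np.bincount(seqs[:, i], minlength=q) for i in range(L)]) / len(seqs)

def f2(seqs):
    """Pairwise frequencies f_ij(a, b)."""
    ff = np.zeros((L, L, q, q))
    ii, jj = np.arange(L)[:, None], np.arange(L)[None, :]
    for s in seqs:
        ff[ii, jj, s[:, None], s[None, :]] += 1.0
    return ff / len(seqs)

def mc_sample(h, J, n=n_seq, sweeps=15):
    """Metropolis sampling from P(σ) ∝ exp(Σ_i h_i(σ_i) + Σ_{i<j} J_ij(σ_i, σ_j))."""
    seqs = rng.integers(q, size=(n, L))
    for s in seqs:
        for _ in range(sweeps * L):
            i, a = rng.integers(L), rng.integers(q)
            dlogp = h[i, a] - h[i, s[i]] + sum(
                J[i, j, a, s[j]] - J[i, j, s[i], s[j]] for j in range(L) if j != i)
            if dlogp >= 0 or rng.random() < np.exp(dlogp):
                s[i] = a
    return seqs

# Gradient ascent on the likelihood = matching empirical and model moments.
h, J = np.log(f1(data) + 1e-3), np.zeros((L, L, q, q))
fd1, fd2 = f1(data), f2(data)
for _ in range(15):
    model = mc_sample(h, J)
    h += 0.5 * (fd1 - f1(model))
    J += 0.5 * (fd2 - f2(model))
    J = 0.5 * (J + J.transpose(1, 0, 3, 2))   # keep J_ij(a,b) = J_ji(b,a)
    J[np.arange(L), np.arange(L)] = 0.0       # no self-couplings

dev = np.abs(fd1 - f1(mc_sample(h, J))).max()  # residual moment mismatch
```

The update direction (empirical minus model moments) is exactly the gradient of the log-likelihood, which is why moment matching and maximum likelihood coincide for maximum-entropy models.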
Eq 2 is the most constrained form of the model, which we will denote by E_full(σ). One can explore the impact of each constraint on the energy landscape by removing constraints from the model. For instance, to study the role of inter-repeat sequence similarity due to a common evolutionary origin, one can fit the model without the constraint on the repeat overlap ID, i.e. without the λ_ID term in Eq 2; we call the corresponding energy function E_2. One can further remove constraints on pairs of positions that are not part of the same repeat, making the two consecutive repeats statistically independent and imposing h_i = h_{i+L} (E_ir), or linked only through phylogenic conservation via λ_ID (E_ir,λ). Finally, one can remove all interaction constraints to make all positions independent of each other (E_1), or even remove all constraints (E_rand ≡ 0).

Statistical energy vs unfolding energy
The evolutionary information contained in multiple sequence alignments of protein families is summarized in our model by the energy function E(σ). Since this information is often much easier to access than structural or functional information, there is great interest in extracting functional or structural properties from multiple sequence alignments, provided that there exists a clear quantitative relationship between statistical energy and physical energy.
Such a relationship was determined experimentally for repeat proteins by using E(σ) to predict the effect of point mutations on folding stability, measured by the free-energy difference between the folded and unfolded states, ΔG, called the unfolding free energy [17,20]. Synthetic sequences with low E(σ) have also been shown to reproduce the fold and function of natural sequences [29]. Here, extending an argument developed in previous work [30][31][32][33], we show how this correspondence between statistical likelihood and folding stability arises in a simple model of evolution.
Evolutionary theory predicts that the prevalence of a particular genotype σ, i.e. the probability of finding it in a population, is related to its fitness F(σ). In the limit where mutations affecting the protein are rare compared to the time they take to spread through the population, Kimura [34] showed that a mutation conferring a fitness difference ΔF relative to its ancestor (an advantage or a disadvantage, depending on the sign) will fix in the population with probability 2ΔF/(1 − e^{−2NΔF}), where N is the effective population size. The resulting dynamics of successive substitutions satisfies detailed balance [35], with the steady-state probability

P(σ) = (1/Z) e^{2NF(σ)}.  (3)

Again, one may recognize a formal analogy with Boltzmann's distribution, where F plays the role of a negative energy and 2N of an inverse temperature. If we now assume that fitness is determined by the unfolding free energy ΔG, F(σ) = f(ΔG(σ)), then the distribution of genotypes we expect to observe in a population is

P(σ) = (1/Z) e^{2N f(ΔG(σ))}.  (4)

Note that a similar relation should hold even if we relax the hypotheses of the evolutionary model. In more general contexts (e.g. high mutation rate, recombination) the relation between ln P(σ) and F(σ) may not be linear, but such nonlinearities can be subsumed into the function f.
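This detailed-balance argument can be checked numerically. The sketch below is a toy origin-fixation chain (the fitness values, number of genotypes, and population size are all hypothetical choices for illustration): mutations are proposed uniformly and accepted with the Kimura fixation probability, and the chain's occupancy statistics are compared to the Boltzmann-like prediction P(σ) ∝ e^{2NF(σ)}.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5                                    # toy effective population size
F = np.array([0.0, 0.2, -0.1, 0.15])    # toy fitnesses of 4 genotypes

def p_fix(dF):
    """Kimura fixation probability 2ΔF / (1 - exp(-2NΔF)); limit 1/N when ΔF -> 0."""
    if abs(dF) < 1e-12:
        return 1.0 / N
    return 2 * dF / (1 - np.exp(-2 * N * dF))

# Origin-fixation Markov chain: propose a random other genotype, fix with p_fix.
g, counts = 0, np.zeros(len(F))
for _ in range(200_000):
    g_new = rng.choice([x for x in range(len(F)) if x != g])
    if rng.random() < p_fix(F[g_new] - F[g]):
        g = g_new
    counts[g] += 1

emp = counts / counts.sum()
theory = np.exp(2 * N * F) / np.exp(2 * N * F).sum()   # Boltzmann-like steady state
```

The key identity is p_fix(ΔF)/p_fix(−ΔF) = e^{2NΔF}, which together with a symmetric proposal gives detailed balance with respect to P ∝ e^{2NF}.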
Identifying terms in the two expressions (1) and (3), we obtain a relation between the statistical energy E and the unfolding free energy ΔG:

E(σ) = −2N f(ΔG(σ)) + const.  (5)

For instance, if we assume a linear relation between fitness and ΔG, f(ΔG) = A + BΔG, then we obtain a linear relationship between the statistical energy and ΔG, as was found empirically for repeat proteins [17].
Strikingly, the relationship f does not have to be linear or even smooth for this correspondence to work. Imagine a more stringent selection model, where f(ΔG) is a threshold function: f(ΔG) = 0 for ΔG > ΔG_sel, and −∞ otherwise (lethal). In that case the probability distribution is P(σ) = (1/Z)Θ(ΔG(σ) − ΔG_sel), where Θ(x) is Heaviside's function. Using a saddle-point approximation, one can show that in the thermodynamic limit (long proteins, or large L) the distribution concentrates at the border ΔG_sel, and is equivalent to a "canonical" description [30,31,33]:

P(σ) = (1/Z) e^{ΔG(σ)/T_sel},  (6)

where the "temperature" T_sel is set so that the mean ΔG matches between the two descriptions:

⟨ΔG⟩ = ΔG_sel.  (7)

This correspondence is mathematically similar to the equivalence between the microcanonical and canonical ensembles in statistical mechanics.
Equating (Eq 1) and (Eq 6) then shows that the statistical energy and the unfolding free energy are linearly related,

E(σ) = E_0 − ΔG(σ)/T_sel,  (8)

despite f being nonlinear. Eq 8 is in fact very general and should hold for any f in the thermodynamic limit, in the vicinity of ⟨E⟩.

Equivalence between two definitions of entropies
There are several ways to define the diversity of a protein family. The most intuitive one, followed by [26], is to count the total number N(ΔG_sel) of amino-acid sequences whose unfolding free energy ΔG lies above a threshold ΔG_sel [2]. This number naturally defines a Boltzmann entropy,

S_B = ln N(ΔG_sel).  (9)

Alternatively, starting from a statistical model P(σ), one can calculate its Shannon entropy, defined as

S = −Σ_σ P(σ) ln P(σ),  (10)

as was done in Ref. [27]. What is the relation between these two definitions? By the same saddle-point approximation as in the previous section, the two are identical in the thermodynamic limit (large L), provided that condition (Eq 7) is satisfied. We can thus reconcile the two definitions of the entropy in that limit.
To calculate the Boltzmann entropy (Eq 9), one needs to first evaluate the threshold E sel in terms of statistical energy. This threshold is given by E sel = E 0 − ΔG sel /T sel , where E 0 and T sel can be obtained directly by fitting (Eq 8) to single-mutant experiments. E sel can also be obtained as a discrimination threshold separating sequences that are known to fold properly versus sequences that do not [26]. In that case, assuming that the linear relationship (Eq 8) was evaluated empirically using single mutants, this relationship can be inverted to get ΔG sel in physical units.
Calculating the Shannon entropy (Eq 10), on the other hand, does not require defining any threshold. The threshold of the equivalent Boltzmann entropy can nevertheless be obtained from Eqs 7 and 8, i.e. E_sel = ⟨E⟩, where the average is performed under the distribution defined by Eqs 1 and 2.

Entropy of repeat protein families
To compare how the different elements of the energy function affect diversity, we calculate the entropy of ensembles of two consecutive repeats for each protein family, for the different kinds of models described earlier, from least to most constrained: E_rand, E_1, E_ir, E_ir,λ, E_2, E_full. For models with interactions, calculating the entropy directly from the definition (Eq 10) is infeasible, as the sum runs over an exponential number of sequences. A previous study of the entropies of protein families used an approximate mean-field algorithm, the Adaptive Cluster Expansion [27], for both parameter fitting and entropy estimation. Here we instead estimated entropies by thermodynamic integration of Monte-Carlo simulations, as detailed in Methods. This method is asymptotically unbiased and accurate in the limit of large Monte-Carlo samples.
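The thermodynamic-integration estimate can be illustrated on a Potts model small enough to enumerate exactly (the random fields and couplings below are hypothetical; the real computation uses the fitted model of Eq 2 and far longer Monte-Carlo runs). Writing Z(β) for the partition function at inverse temperature β, one has d ln Z/dβ = −⟨E⟩_β, so ln Z(1) = L ln q − ∫_0^1 ⟨E⟩_β dβ, and the entropy (in nats) is S = ⟨E⟩_1 + ln Z(1).

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
q, L = 3, 4                                  # toy model: 3^4 = 81 states, enumerable

h = rng.normal(0, 0.5, size=(L, q))
Jr = rng.normal(0, 0.3, size=(L, L, q, q))
J = 0.5 * (Jr + Jr.transpose(1, 0, 3, 2))    # symmetric couplings, no self-couplings
J[np.arange(L), np.arange(L)] = 0.0

def energy(s):
    s = np.asarray(s)
    ii, jj = np.arange(L)[:, None], np.arange(L)[None, :]
    return -h[np.arange(L), s].sum() - 0.5 * J[ii, jj, s[:, None], s[None, :]].sum()

def mean_E(beta, sweeps=3000, burn=500):
    """Metropolis estimate of <E> at inverse temperature beta."""
    s, acc = rng.integers(q, size=L), []
    for t in range(sweeps):
        for _ in range(L):
            i, a = rng.integers(L), rng.integers(q)
            dE = h[i, s[i]] - h[i, a] + sum(
                J[i, j, s[i], s[j]] - J[i, j, a, s[j]] for j in range(L) if j != i)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i] = a
        if t >= burn:
            acc.append(energy(s))
    return np.mean(acc)

# Thermodynamic integration over a grid of inverse temperatures (trapezoid rule).
betas = np.linspace(0.0, 1.0, 11)
Es = np.array([mean_E(b) for b in betas])
lnZ1 = L * np.log(q) - float(np.sum(0.5 * (Es[1:] + Es[:-1]) * np.diff(betas)))
S_ti = Es[-1] + lnZ1

# Exact reference by brute-force enumeration of all q^L sequences.
E_all = np.array([energy(s) for s in itertools.product(range(q), repeat=L)])
p = np.exp(-E_all); p /= p.sum()
S_exact = -(p * np.log(p)).sum()
```

In the toy case the exact answer is available for comparison; for the real models (21^66 states) only the Monte-Carlo estimates along the β grid are accessible, and the accuracy is controlled by the grid spacing and the sample size at each β.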
The resulting entropies and their differences are reported in Table 1 and Fig 2. All three families (ankyrins (ANK), leucine-rich repeats (LRR), and tetratricopeptides (TPR)) show a large reduction in entropy (∼40-50%) compared to random polypeptide strings of the same length 2L (of entropy S_rand = 2L ln 21). Interactions and phylogenic similarity between repeats generally have a noticeable effect on family diversity, although the magnitude of this effect depends on the family: (S_1 − S_full)/S_full = 7% for ANK, versus 13% for LRR and 16% for TPR. Thus, although interactions are essential for correctly predicting folding properties, they seem to have only a modest effect on constraining the space of accessible proteins, compared to that of single amino-acid frequencies. However, when converted to numbers of sequences, this reduction is substantial: from e^{S_1} ≈ 3 × 10^54 to e^{S_full} ≈ 2 × 10^50 for ANK, from 10^39 to 10^34 for LRR, and from 7 × 10^50 to 4 × 10^42 for TPR.
By considering models with more and more constraints, and thus lower and lower entropy, we can examine more finely the contribution of each type of correlation to the entropy reduction, going from E_1 to E_ir to E_ir,λ to E_full. This division allows us to quantify the importance of phylogenic similarity between consecutive repeats (λ_ID) relative to that of functional interactions (J_ij), as well as the relative weights of repeat-repeat versus within-repeat interactions (Fig 2). We find that phylogenic similarity contributes substantially to the entropy reduction, as measured by S_ir − S_ir,λ = 4.5 bits for ANK, 4.3 bits for LRR, and 10.7 bits for TPR. The contribution of repeat-repeat interactions (S_ir,λ − S_full ≈ 5 bits for all three families) is of the same order of magnitude as that of within-repeat interactions (S_1 − S_ir = 4.3 bits for ANK, 6.9 bits for LRR, and 11.4 bits for TPR). This result emphasizes the importance of physical interactions between neighboring repeats in the whole protein.
On a technical note, we also find that pairwise interactions encode constraints that are largely redundant with the constraint of phylogenic similarity between consecutive repeats, as measured by the double difference S_ir − S_ir,λ − S_2 + S_full > 0 (Fig 2, orange bars). This redundancy comes from the fact that, in the absence of an explicit constraint on P(ID) in E_2, the interaction couplings J_{i,i+L}(σ, σ) between homologous positions in the two repeats are expected to favor pairs of identical residues, mimicking the effect of λ_ID. This redundancy motivates the need to correct for phylogenic bias before estimating repeat-repeat interactions.
Comparing the three families, ANK has little phylogenic bias between consecutive repeats and relatively weak interactions; by contrast, TPR has a strong phylogenic bias and strong within-repeat interactions.

[Caption: Entropies are calculated for models of different complexity: random amino acids (S_rand = 2L ln 21, divided by ln 2 when expressed in bits); independent-site model (S_1); pairwise interaction model (S_2); pairwise interaction model with constraints due to repeat similarity λ_ID (S_full); pairwise interaction model of two non-interacting repeats learned without (S_ir) and with (S_ir,λ) constraints on repeat similarity.]

Effect of interaction range
We wondered whether the interactions constraining the space of accessible proteins had a characteristic length scale. To answer this question, for each protein family in Fig 1 we learned a sequence of models of the form Eq 2 in which J_ij was allowed to be non-zero only within a certain interaction range, d(i, j) ≤ W, where the distance d(i, j) between sites i and j can be defined in two ways: either the linear distance |i − j| expressed in number of amino-acid sites, or the three-dimensional distance between the closest heavy atoms of the residues in the reference structure. Details about the learning procedure and error estimation are given in the Methods; see also S1 Fig for an alternative error estimate. The entropy of all families decreases with interaction range W, in both linear and three-dimensional distance, as more constraints are added to reduce diversity (Fig 3 for ANK, and S2 Fig for LRR and TPR). The initial drop as a function of linear distance (Fig 3A) is explained by the many local interactions between nearby residues in the sequence. The entropy then plateaus until interactions between same-position residues in consecutive repeats are included in the range W, which leads to a sharp entropy drop at W = L. This suggests that long-range interactions along the sequence generally do not constrain the protein ensemble diversity, except for interactions at exactly the scale of the repeat, and that the repeat structure is an important constraint limiting protein sequence exploration. These observations hold for all three repeat-protein families. The importance of 3D structure in reducing the entropy can also be appreciated in the entropy decay as a function of physical distance (Fig 3B for ANK), where most of the entropy drop happens within the first 10 angstroms, indicating that above this characteristic distance interactions are not crucial in constraining the space of accessible sequences.

[Fig 2 caption: Contributions of within-repeat interactions (S_1 − S_ir, green), repeat-repeat interactions (S_ir,λ − S_full, purple), and phylogenic bias between consecutive repeats (S_ir − S_ir,λ, blue) to the entropy reduction from an independent-site model. All three contributions are comparable, with a larger effect of within-repeat interactions and phylogenic bias in TPR. The fourth bar (orange) quantifies the redundancy between two constraints with overlapping scopes: the constraint on consecutive-repeat similarity and the constraint on repeat-repeat correlations. This redundancy is naturally measured within information theory by the difference in impact (i.e. entropy reduction) of a constraint depending on whether or not the other constraint is already enforced.]
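The range-restriction construction can be mimicked on a toy model: fit maximum-entropy models whose couplings are restricted to |i − j| ≤ W and check that the entropy decreases monotonically as W grows. The sketch below is a hypothetical illustration (synthetic nearest-neighbour "data", exact enumeration instead of Monte-Carlo, and arbitrary learning-rate and step-count choices).

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
q, L = 3, 4
states = np.array(list(itertools.product(range(q), repeat=L)))   # all 81 sequences
onehot = np.eye(q)[states]                                       # (n_states, L, q)

# Toy "data" distribution with genuine pair correlations (nearest-neighbour chain).
Jtrue = rng.normal(0, 1.0, size=(L - 1, q, q))
logw = np.array([sum(Jtrue[i, s[i], s[i + 1]] for i in range(L - 1)) for s in states])
p_data = np.exp(logw); p_data /= p_data.sum()

def marginals(p):
    """Single-site and pair marginals of a distribution over `states`."""
    m1 = np.einsum('s,sia->ia', p, onehot)
    m2 = np.einsum('s,sia,sjb->ijab', p, onehot, onehot)
    return m1, m2

def fit_entropy(W, steps=500, lr=0.8):
    """Max-ent fit with couplings only for 0 < |i-j| <= W; returns entropy in nats."""
    fd1, fd2 = marginals(p_data)
    h, J = np.zeros((L, q)), np.zeros((L, L, q, q))
    idx = np.arange(L)
    mask = (np.abs(idx[:, None] - idx[None, :]) <= W) & (idx[:, None] != idx[None, :])
    for _ in range(steps):
        logp = (np.einsum('sia,ia->s', onehot, h)
                + 0.5 * np.einsum('sia,sjb,ijab->s', onehot, onehot, J))
        p = np.exp(logp - logp.max()); p /= p.sum()
        fm1, fm2 = marginals(p)
        h += lr * (fd1 - fm1)                                     # match site marginals
        J += lr * np.where(mask[:, :, None, None], fd2 - fm2, 0)  # match allowed pairs
    return -(p * np.log(p)).sum()

S_W = [fit_entropy(W) for W in range(L)]    # W = 0 is the independent-site model
```

Because the constraint sets are nested (each W includes all constraints of smaller W), the fitted maximum-entropy values S_W can only decrease as W grows, which is the mechanism behind the monotone entropy curves in Fig 3.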

Multi-basin structure of the energy landscape
The energy function of Eq (2) takes the same mathematical form as a disordered Potts model. Such models, in particular in cases where σ_i can take only two values, have been extensively studied in the context of spin glasses [36]. In these systems, the interaction terms −J_ij(σ_i, σ_j) impose contradictory energy requirements, meaning that not all of them can be minimized at the same time, a phenomenon called frustration. Because of frustration, natural dynamics aimed at minimizing the energy are expected to get stuck in local, non-global energy minima (Fig 4), significantly slowing down thermalization. This phenomenon is similar to what happens in structural glasses in physics, where the energy landscape is "rugged", with many local minima that hinder the dynamics. Incidentally, concepts from glasses and spin glasses have been very important for understanding protein folding dynamics [37].
We asked whether the energy landscape of Eq (2) is rugged with multiple minima, and investigated its structure. To find local minima, we performed a local energy minimization of E_full (learned with all constraints, including on P(ID), but taken with λ_ID = 0 to focus on the functional energy terms). By analogy with glasses, such a minimization is sometimes called a zero-temperature Monte-Carlo simulation, or a "quench". The minimization procedure was started from many initial conditions corresponding to naturally occurring sequences of consecutive repeat pairs. At each step of the minimization, a random beneficial (energy-decreasing) single mutation is picked; double mutations are allowed if they correspond to the same single mutation applied to each of the two repeats. Minimization stops when there are no more beneficial mutations. This stopping condition defines a local energy minimum, for which any mutation increases the energy. The set of sequences which, when chosen as initial conditions, lead to a given local minimum defines the basin of attraction of that energy minimum (Fig 4). The size of a basin corresponds to the number of natural proteins belonging to it.
Performing this procedure on natural sequences of consecutive repeat pairs from all three families yielded a large number of local minima (Fig 5). To control for the phylogenic bias that links natural sequences, we repeated the analysis on sequences generated synthetically from the model E_full, and obtained very similar results (see S6 Fig for ANK). When ranked from largest to smallest, the distribution of basin sizes follows a power law (Fig 5A for ANK, and S3A and S4A Figs for LRR and TPR). The energy of the minimum of each basin generally increases with the rank, meaning that the largest basins are often also the lowest. Despite this multiplicity of local minima, the Monte-Carlo dynamics used in the previous sections for learning the model parameters and estimating the entropy did not get stuck in these minima, suggesting relatively low energy barriers between them.
The partition of sequences into basins allows for the definition of a new kind of entropy, the configurational entropy S_conf = −Σ_b P(b) ln P(b), based on the distribution of basin sizes, P(b) = Σ_{σ∈b} P(σ), where σ ∈ b means that energy minimization starting from sequence σ leads to basin b. This configurational entropy measures the effective diversity of basins, and is thus much lower than the sequence entropy S_full, while the difference S_full − S_conf measures the average diversity of sequences within each basin. We find S_conf = 5.1 bits for ANK, 6.0 bits for LRR, and 10.4 bits for TPR. As each basin corresponds to a distinct sub-family within each family [32], this entropy quantifies the effective number of such subgroups.
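A minimal sketch of the quench and of the resulting configurational entropy is given below, on a toy frustrated Potts energy with hypothetical random couplings (the real analysis uses E_full, natural sequences as starting points, and also allows the paired double mutations described above; here basin sizes are estimated from the empirical distribution of quench endpoints rather than from P(σ)).

```python
import numpy as np

rng = np.random.default_rng(4)
q, L = 3, 8

# Toy frustrated energy: random symmetric couplings and fields.
h = rng.normal(0, 0.5, size=(L, q))
Jr = rng.normal(0, 1.0, size=(L, L, q, q))
J = 0.5 * (Jr + Jr.transpose(1, 0, 3, 2))
J[np.arange(L), np.arange(L)] = 0.0

def energy(s):
    s = np.asarray(s)
    ii, jj = np.arange(L)[:, None], np.arange(L)[None, :]
    return -h[np.arange(L), s].sum() - 0.5 * J[ii, jj, s[:, None], s[None, :]].sum()

def quench(s):
    """Zero-temperature descent: random beneficial single mutations until none is left."""
    s = s.copy()
    while True:
        e0 = energy(s)
        moves = [(i, a) for i in range(L) for a in range(q)
                 if a != s[i] and energy(np.where(np.arange(L) == i, a, s)) < e0 - 1e-12]
        if not moves:
            return tuple(int(x) for x in s)      # a local minimum, hashable
        i, a = moves[rng.integers(len(moves))]
        s[i] = a

# Quench many starting sequences and histogram the basins they fall into.
minima = {}
for _ in range(300):
    m = quench(rng.integers(q, size=L))
    minima[m] = minima.get(m, 0) + 1

sizes = np.array(sorted(minima.values(), reverse=True))
P_b = sizes / sizes.sum()
S_conf = -(P_b * np.log2(P_b)).sum()             # configurational entropy, in bits
```

Ranking `sizes` from largest to smallest gives the toy analogue of Fig 5A, and S_conf plays the role of the 5.1-10.4 bit values quoted above.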
While basins are very numerous, they are not independent of each other. An analysis of the pairwise distances between the largest basins (measured as Hamming distances between their local minima) reveals that they can be organised into clusters (panels B of Fig 5, S3 and S4 Figs), suggesting a hierarchical structure of basins, as is common in spin glasses [36].
The impact of repeat-repeat interactions on the multi-basin structure can be assessed by repeating the analysis on the model of non-interacting repeats, E_ir. In that model the two repeats are independent, so it suffices to study local energy minima of single repeats: local minima of pairs of repeats follow simply from the combinatorial pairing of local minima in each repeat. The analyses of basin-size distributions, energy minima, and pairwise distances in single repeats are shown in panels C and D of Fig 5, S3 and S4 Figs. We still find a substantial number of unrelated energy minima, suggesting again several distinct subfamilies even at the single-repeat level. For comparison, the configurational entropy of pairs of independent repeats is 6.9 bits for ANK, 6.7 bits for LRR, and 7.6 bits for TPR. While for ANK and LRR repeat-repeat interactions decrease the configurational entropy, as they do the conventional entropy, for TPR they in fact increase it, making the energy landscape even more frustrated and rugged.
Note that the independent-site model E_1 defines a convex energy landscape with a single local minimum, the consensus sequence, since each field h_i can be optimized independently. To address how interactions shape the sequence space, turning a convex landscape into a rugged one, we repeated the analysis with limited linear interaction ranges W = 3 and W = 10 (models of Fig 3A). We find that the more interactions we add, the more local minima we find (S5A and S5B Fig for ANK with W = 3, and S5C and S5D Fig for W = 10). The minima cluster into a clearer sub-block structure as the interaction range is increased, consistent with the entropy reduction observed in Fig 3A. In summary, the analysis of the energy landscape reveals a rich structure, with many local minima spanning many different scales, organised hierarchically.

Distance between repeat families
Lastly, we compared the statistical energy landscapes of different repeat families. Specifically, we calculated the Kullback-Leibler divergence between the probability distributions P(σ) (given by Eqs 1 and 2) of two different families, after aligning them together in a single multiple sequence alignment (see Methods).
We find essentially no similarity between ANK and TPR, despite their similar lengths: D_KL(ANK||TPR) = 227.6 bits and D_KL(TPR||ANK) = 214.1 bits. These values are larger than the Kullback-Leibler divergences between the full models of these families and a random polypeptide: D_KL(ANK||rand) = 122.8 bits and D_KL(TPR||rand) = 157.6 bits. LRR is not comparable to ANK or TPR, as its repeats are much shorter and a common alignment is impractical. These large divergences show that different families impose quantifiably different constraints, which have forced them to diverge into different troughs of non-overlapping energy landscapes. This lack of overlap makes it impossible to find intermediates between the two families that could evolve into proteins belonging to both.
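For independent-site models the Kullback-Leibler divergence factorizes over positions and can be computed in closed form; the sketch below uses hypothetical random frequency profiles standing in for the aligned family profiles (for the full pairwise models of Eqs 1 and 2, D_KL must instead be estimated by sampling, e.g. as the mean cross-energy difference plus the log-partition-function difference).

```python
import numpy as np

rng = np.random.default_rng(5)
q, L = 21, 10                         # amino-acid alphabet (incl. gap), toy length

# Hypothetical site-frequency profiles of two "families" on a common alignment.
famA = rng.dirichlet(np.full(q, 0.5), size=L)
famB = rng.dirichlet(np.full(q, 0.5), size=L)

def dkl_independent(P, Q):
    """D_KL(P||Q) in bits for independent sites: Σ_i Σ_a P_i(a) log2(P_i(a)/Q_i(a))."""
    return float((P * np.log2(P / Q)).sum())

d_ab = dkl_independent(famA, famB)    # note the asymmetry: D(P||Q) != D(Q||P) in general
d_ba = dkl_independent(famB, famA)
```

The asymmetry of the two values mirrors the distinct D_KL(ANK||TPR) and D_KL(TPR||ANK) figures quoted above.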

Discussion
Our analysis of repeat-protein families shows that the constraints between amino acids in the sequences allow for an estimation of the size of the accessible sequence space. The obtained numbers (ranging from 141 to 167 bits, corresponding to 10^36 to 10^50 sequences) are of course huge compared to the number of sequences in our initial samples (∼20,500 for ANK, ∼18,800 for LRR, and ∼10,000 for TPR), but comparable to the total number of proteins explored over the whole span of evolution, estimated to be 10^43 in Ref. [1].
In particular, we have quantified the reduction of the accessible sequence space with respect to random polypeptides. While most of this reduction is attributable to the conservation of residues at each site, interactions between amino acids, both within and between consecutive repeats, significantly constrain the diversity of all repeat families. The breakdown of the entropy reduction among the three sources of constraints (within-repeat interactions, between-repeat interactions, and evolutionary conservation between consecutive repeats) is fairly balanced, although TPR stands out as having stronger within-repeat interactions and more conservation between neighbours, suggesting that it may have had less time to equilibrate.
All studied repeat families have rugged energy landscapes with multiple local energy minima. Note that the emergence of this multi-valley landscape is a consequence of the interactions between amino acids: models of independent positions (E_1) admit only a single energy minimum, corresponding to the consensus sequence. This multiplicity of minima allows us to collapse many sequences onto a small number of coarse-grained attractor basins. These basins suggest that mutations between sequences within one coarse-grained basin are much more likely than mutations into sequences of other basins. In general, our results paint a picture of further subdivisions within a family, with sub-families defined by the fine-grained interaction structure. Going beyond single families, this analysis suggests a view in which all natural proteins live in a global evolutionary landscape, of which families are basins, or clusters of basins, with a hierarchical structure [32].
This overall picture of the sequence energy landscape is reminiscent of the hierarchical picture of the structural energy landscape of globular proteins: an overall funneled shape with tiers within tiers [38]. The form of the energy landscape necessarily shapes the accessible evolutionary paths between sequences. The rugged and subdivided structure shows that the uncovered constraints are global, not just pairwise between specific residues. Therefore even changing two residues together, as is often done in laboratory experiments, is not enough to recover the evolutionary trajectories. While other approaches have explored locally accessible directions of evolution [39], our results suggest more global, nonlocal modes of evolution between clusters.
Interestingly, the sequences that correspond to the energy minima of the landscapes are not found in the natural dataset. This observation may be due to sampling bias (the sequence with the minimal energy exists but has not yet been observed), or this sequence may simply not have been sampled by nature. Alternatively, there may be additional functional constraints, not included in our model, that select against these low-energy sequences (e.g. a protein that is too stable may be difficult to degrade).
Even more intriguingly, the sequences with minimal energy do not correspond to the consensus sequence of the alignment (whose energy is marked by a gray line in panel A of Fig 5, and in S3 and S4 Figs), suggesting that the consensus sequence can be improved upon. All three repeat protein families studied here have been shown to be amenable to simple consensus-guided design of synthetic proteins. Synthetic proteins based on the consensus sequences of multiple alignments [40] were found to be foldable and very stable against chemical and thermal denaturation. Mutations towards consensus amino acids in ANK family members have been experimentally shown to stabilize the whole repeat array, and may tune the folding paths towards nucleating folding at the consensus sites [41,42]. Our results suggest that interactions may play an additional role in stabilizing the sequences, and propose alternative solutions to the consensus sequence for the design of synthetic proteins.

Data curation
We use a previously curated alignment of pairs of repeats for each family [17]: ANK (PFAM id PF00023, with a final alignment of 20513 sequences of L = 66 residues each), LRR (PFAM id PF13516, with a final alignment of 18839 sequences of L = 48 residues each) and TPR (PFAM id PF00515, with a final alignment of 10020 sequences of L = 68 residues each). These multiple sequence alignments of repeats were obtained from PFAM 27.0 [6,7]. To improve the data obtained from the PFAM database, we used the original full protein sequences available in the UniProt database [43], identified through the headers of the original alignment, to add available information. Firstly, to decrease the number of gap positions, misdetected initial and final amino acids in repeats were completed with residues from the full sequences. Secondly, individual repeats that appeared consecutively in natural proteins were joined into pairs. Finally, positions with more than 80% of gaps along the alignment were removed, thereby eliminating insertions.
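The final curation step, removing alignment columns with more than 80% gaps, can be sketched as follows. This is a minimal illustration on a made-up toy alignment, not the actual PFAM data:

```python
import numpy as np

# Toy alignment; the real data are PFAM repeat pairs completed with UniProt residues.
rows = ["MKTA-", "MK-A-", "AKTA-", "MKTA-", "MKTL-"]
msa = np.array([list(r) for r in rows])

# Remove positions (columns) with more than 80% gaps, eliminating insertions.
gap_frac = (msa == '-').mean(axis=0)
filtered = msa[:, gap_frac <= 0.8]
```

Here the last column is entirely gaps and is dropped, while columns with occasional gaps are kept.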
From the multiple sequence alignment of each family, we calculated the observables used to constrain our statistical model. Specifically, we calculated the marginal frequency f_i(σ_i) of an amino acid σ_i at position i, and the joint frequency f_ij(σ_i, σ_j) of two amino acids σ_i and σ_j at two different positions i and j. These quantities were computed using only sequences selected by clustering at 90% identity with CD-HIT [44], and then normalizing by the number of sequences, so that the occurrences of residues at each position are not biased by overrepresentation of proteins in the database. Furthermore, to take into account the repeated nature of the protein families we consider, an additional observable was calculated: the distribution of sequence overlap between two consecutive repeats, P(ID(σ)), with ID(σ) = Σ_{i=1}^{L} δ(σ_i, σ_{i+L}), the number of positions at which the two consecutive repeats carry the same amino acid.
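These observables can be computed from an integer-encoded alignment as in the following minimal sketch. The alphabet size (q = 4 instead of 21), the random toy sequences, and all parameter values are illustrative only; the CD-HIT reweighting step is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
q, L, N = 4, 5, 1000                       # toy alphabet, repeat length, sample size
msa = rng.integers(0, q, size=(N, 2 * L))  # pairs of repeats, integer-encoded

def marginals(msa, q):
    """One- and two-point frequencies f_i(sigma) and f_ij(sigma, tau)."""
    onehot = np.eye(q)[msa]                                   # shape (N, 2L, q)
    f1 = onehot.mean(axis=0)                                  # f_i(sigma)
    f2 = np.einsum('nia,njb->ijab', onehot, onehot) / msa.shape[0]
    return f1, f2

def overlap_distribution(msa, L):
    """P(ID): distribution of the number of identical positions between the two repeats."""
    ids = (msa[:, :L] == msa[:, L:]).sum(axis=1)              # ID(sigma) per sequence
    return np.bincount(ids, minlength=L + 1) / msa.shape[0]

f1, f2 = marginals(msa, q)
p_id = overlap_distribution(msa, L)
```

The one-hot trick makes the pair-frequency computation a single tensor contraction rather than a loop over sequence pairs.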

Model fitting
In order to obtain a model that reproduces the experimentally observed site-dependent amino-acid frequencies f_i(σ_i), correlations between two positions f_ij(σ_i, σ_j), and the distribution of Hamming distances between consecutive repeats P(ID(σ)), we apply a likelihood gradient ascent procedure, starting from an initial guess of the h_i(σ_i), J_ij(σ_i, σ_j) and λ_ID(σ) parameters. At each step, we generate 80000 sequences of length 2L through a Metropolis-Hastings Monte-Carlo sampling procedure. We start from a random amino-acid sequence and propose point mutations at random positions, one at a time. If a mutation decreases the energy (Eq 2) we accept it; if not, we accept it with probability e^{−ΔE}, where ΔE is the difference in energy between the mutated and the original sequence. We add one sequence to the final ensemble every 1000 steps. Once the sequence ensemble is generated, we measure its marginals f_i^model(σ_i) and f_ij^model(σ_i, σ_j), as well as P^model(ID(σ)), and update the parameters of Eq 2 following the gradient of the likelihood. The local fields and λ_ID(σ) are updated along the gradient of the per-sequence log-likelihood, equal to the difference between data and model averages: h_i(σ_i)^{t+1} = h_i(σ_i)^t + ε_m [f_i(σ_i) − f_i^model(σ_i)] and λ_ID(σ)^{t+1} = λ_ID(σ)^t + ε_ID [P(ID(σ)) − P^model(ID(σ))]. As the number of parameters for the interaction terms J_ij is large (of order 21^2 L^2), we force to 0 those that do not contribute significantly to the model frequencies, through an L1 regularisation γ∑_{ij,σ,τ} |J_ij(σ, τ)| added to the likelihood. This leads to the following update rule: if J_ij(σ_i, σ_j)^t = 0 and |f_ij(σ_i, σ_j) − f_ij^model(σ_i, σ_j)| < γ, the coupling is kept at zero; otherwise it is updated along the regularized gradient, J_ij(σ_i, σ_j)^{t+1} = J_ij(σ_i, σ_j)^t + ε_j [f_ij(σ_i, σ_j) − f_ij^model(σ_i, σ_j) − γ sign(J_ij(σ_i, σ_j))]. The optimization parameters were set to ε_m = 0.1, ε_j = 0.05, ε_ID = 10, and γ = 0.001. To estimate the model error, we compute f_i(σ_i) − f_i^model(σ_i) and f_ij(σ_i, σ_j) − f_ij^model(σ_i, σ_j).
We also include the difference between the generated and natural repeat-similarity distributions at all possible inter-repeat Hamming distances, multiplied by a factor of 5 to better learn the parameter λ_ID: 5|P(ID(σ)) − P^model(ID(σ))|. We repeat the procedure above until the maximum of all the errors, |f_i(σ_i) − f_i^model(σ_i)|, |f_ij(σ_i, σ_j) − f_ij^model(σ_i, σ_j)| and 5|P(ID(σ)) − P^model(ID(σ))|, falls below 0.02, as in Ref. [17].
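The Metropolis-Hastings step described above (accept downhill moves, otherwise accept with probability e^{−ΔE}, and thin the chain) can be sketched as follows. The fields and couplings below are random toy values, the alphabet is reduced to q = 4, and the thinning interval is shortened; none of these numbers come from the actual fitted models:

```python
import numpy as np

rng = np.random.default_rng(1)
q, M = 4, 10                       # toy alphabet size and sequence length
h = rng.normal(0.0, 0.5, (M, q))   # toy local fields h_i(sigma)
J = rng.normal(0.0, 0.1, (M, M, q, q))
J = np.triu(np.ones((M, M)), 1)[:, :, None, None] * J   # keep only i < j couplings

def energy(seq):
    """E(sigma) = -sum_i h_i(sigma_i) - sum_{i<j} J_ij(sigma_i, sigma_j)."""
    e = -h[np.arange(M), seq].sum()
    for i in range(M):
        for j in range(i + 1, M):
            e -= J[i, j, seq[i], seq[j]]
    return e

def metropolis(n_samples, thin=200):
    """Metropolis-Hastings over point mutations; keep one sequence every `thin` steps."""
    seq = rng.integers(0, q, M)    # start from a random amino-acid sequence
    e = energy(seq)
    samples = []
    for step in range(n_samples * thin):
        new = seq.copy()
        new[rng.integers(M)] = rng.integers(q)        # random single-point mutation
        de = energy(new) - e
        if de <= 0 or rng.random() < np.exp(-de):     # accept downhill, else with e^{-dE}
            seq, e = new, e + de
        if (step + 1) % thin == 0:
            samples.append(seq.copy())
    return np.array(samples)

samples = metropolis(100, thin=200)
```

In a full fitting loop, the marginals of `samples` would be compared to the data marginals to produce the gradient updates for h, J and λ_ID described above.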

Models with different sets of constraints
Using this procedure we can fit the model defined in Eq 2 with the different interaction ranges used in the entropy estimation of Fig 3A. We start from the independent model, h_i(σ_i) = log f_i(σ_i). We first learn the model of Eq 2 with J = 0. We then re-learn models with interactions between sites i, j along the linear sequence such that |i − j| ≤ W, in a seeded way, starting from the previous model. The first and last points of Fig 3 correspond to the independent-site model with λ_ID and to the full model of Eq 2, respectively. The entropy in Fig 3B is calculated in the same way as in Fig 3A, but interactions are now turned on progressively according to physical distance in the 3D structure rather than linear sequence distance. To obtain the physical distance between residues we use as a reference structure the first two repeats of the consensus-designed ankyrin protein 1n0r [45,46], which comprise exactly 66 amino acids. We define the 3D separation between two residues as the minimum distance between their heavy atoms in the reference structure.
To learn the Potts model without λ_ID (E_2), we remove λ_ID from Eq 2 and re-learn the Potts fields using the full model parameters as initial condition.
To learn the single-repeat models with and without λ_ID (E_ir,λ and E_ir), we take as initial condition the model with interactions of range below the repeat length (W = L − 1, dashed vertical line in Fig 3), and then learn a model with all J_ij terms between different repeats removed. We also impose that the h_i fields and intra-repeat J_ij terms are the same in each repeat, and that the experimental amino-acid frequencies to be reproduced by the model are the averages over the two repeats of the 1- and 2-point intra-repeat frequencies f_i(σ_i) and f_ij(σ_i, σ_j), with i and j sites within the same repeat. In this way we obtain a model for a single repeat that can be extended to both repeats in the original set of sequences in our dataset.

Entropy estimation
In practice, to calculate the entropy S of a protein family we relate it to the mean internal energy ⟨E⟩, with E(σ) = −log p(σ) up to a constant, and the free energy F = −log Z, through S = ⟨E⟩ − F. We generate sequences according to the energy function in Eq 2 and use them to numerically compute ⟨E⟩. To calculate the free energy we use an auxiliary energy function E_α(σ), in which the interaction strength across different sites is tuned by a parameter α that is varied from 0 to 1: at α = 0 the model reduces to independent sites, and at α = 1 it coincides with Eq 2. We generate protein sequence ensembles at different values of α and use them to compute F as a function of α through thermodynamic integration, F(1) = F(0) + ∫_0^1 dα (dF/dα), with dF/dα = ⟨∂E_α/∂α⟩_α, where the average at each α is taken over sequences generated at that value of α, i.e. over the ensemble with probability p_α(σ) = (1/Z_α) e^{−E_α(σ)}. F(0) is the free energy of the independent-site model, F(0) = −Σ_i log Σ_{σ_i} e^{h_i(σ_i)}, where the first sum runs over protein sites and the second over all possible amino acids at a given site. Combining these expressions yields the thermodynamic sampling approximation for the entropy [47]: S = ⟨E⟩ − F(1). We generate 80000 sequences using Monte Carlo sampling of the energy E_α for 50 different values of α, equally spaced between 0 and 1 at a distance of 0.02, and then numerically compute the integral using the Simpson rule.
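The thermodynamic integration identity S = ⟨E⟩ − F(1), with F(1) obtained by integrating ⟨∂E_α/∂α⟩_α, can be verified on a toy model small enough to enumerate exactly, so that the ensemble averages at each α carry no Monte-Carlo noise. All parameters below are illustrative random values, not the fitted repeat-protein models:

```python
import numpy as np
from itertools import product
from scipy.integrate import simpson

rng = np.random.default_rng(2)
q, M = 3, 4                                  # toy model small enough to enumerate
h = rng.normal(0.0, 1.0, (M, q))
J = rng.normal(0.0, 0.3, (M, M, q, q))
J = np.triu(np.ones((M, M)), 1)[:, :, None, None] * J   # keep only i < j couplings

seqs = np.array(list(product(range(q), repeat=M)))       # all q**M sequences

def e_parts(seq):
    """Independent-site part and interaction part of E(sigma)."""
    e0 = -h[np.arange(M), seq].sum()
    eint = -sum(J[i, j, seq[i], seq[j]] for i in range(M) for j in range(i + 1, M))
    return e0, eint

e0, eint = np.array([e_parts(s) for s in seqs]).T

# F(0): free energy of the independent-site model, known in closed form
F0 = -np.log(np.exp(h).sum(axis=1)).sum()

# dF/dalpha = <E_int>_alpha under p_alpha ~ exp(-(e0 + alpha * eint))
alphas = np.linspace(0.0, 1.0, 51)
dF = []
for a in alphas:
    w = np.exp(-(e0 + a * eint)); w /= w.sum()
    dF.append((w * eint).sum())
F1 = F0 + simpson(dF, x=alphas)              # thermodynamic integration, Simpson rule

# Entropy from the identity S = <E>_1 - F(1), versus the direct definition
w1 = np.exp(-(e0 + eint)); w1 /= w1.sum()
S_ti = (w1 * (e0 + eint)).sum() - F1
S_exact = -(w1 * np.log(w1)).sum()
```

With exact averages the two estimates agree to the accuracy of the Simpson rule; in the actual procedure each ⟨·⟩_α is replaced by a Monte-Carlo average over 80000 sampled sequences.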

Entropy error
The entropy estimate is subject to three sources of uncertainty: the finite size of the dataset, the convergence of parameter learning, and the noise in the thermodynamic integration. We estimate the contribution of each of these errors using the independent-site model, in which each site i is simply described by a multinomial distribution with weights given by the observed amino-acid frequencies in the dataset. The variance in the estimation of the frequencies from a finite sample is Var(f_i(σ_i)) = p_i(σ_i)(1 − p_i(σ_i))/N_s, and the covariance between the frequencies of different amino acids σ_i and σ′_i at the same site i is Cov(f_i(σ_i), f_i(σ′_i)) = −p_i(σ_i) p_i(σ′_i)/N_s, where N_s is the sample size and p_i(σ_i) are the weights of the true multinomial distribution being sampled. Propagating these errors gives the variance of the entropy of the independent-site model, to first order in 1/N_s: Var(S) = (1/N_s) Σ_i [Σ_{σ_i} p_i(σ_i) (log p_i(σ_i))^2 − (Σ_{σ_i} p_i(σ_i) log p_i(σ_i))^2]. Evaluating this expression with the empirical frequencies, p = f, assuming they are sampled from an underlying multinomial distribution, gives an estimated standard deviation of 0.05. We assume that the interaction terms do not change the order of magnitude of this estimate. The standard deviation of the averages entering the thermodynamic integration likewise scales as 1/√N_s, with N_s = 80000.
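The first-order error-propagation formula for the independent-site entropy can be checked numerically by repeatedly sampling synthetic datasets from known multinomial weights and comparing the empirical variance of the plug-in entropy with the prediction. All sizes below (q = 4, six sites, N_s = 2000) are toy values chosen for speed:

```python
import numpy as np

rng = np.random.default_rng(3)
q, M, Ns = 4, 6, 2000                        # toy alphabet, number of sites, sample size
p = rng.dirichlet(np.ones(q), size=M)        # true per-site multinomial weights p_i(sigma)

# Delta-method prediction, to first order in 1/Ns:
# Var(S) = (1/Ns) * sum_i [ <(log p_i)^2> - <log p_i>^2 ]
var_pred = sum((p[i] * np.log(p[i])**2).sum() - (p[i] * np.log(p[i])).sum()**2
               for i in range(M)) / Ns

def plug_in_entropy(freqs):
    """Entropy of empirical frequencies, skipping zero counts."""
    f = freqs[freqs > 0]
    return -(f * np.log(f)).sum()

# Empirical variance of the plug-in entropy over many synthetic datasets
S_hats = [sum(plug_in_entropy(rng.multinomial(Ns, p[i]) / Ns) for i in range(M))
          for _ in range(500)]
var_emp = np.var(S_hats)
```

The empirical variance should match the first-order prediction up to sampling noise and O(1/N_s) corrections.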
The parameter inference is affected not only by noise, but also by a systematic bias that depends on the parameters of the gradient ascent described above and on the initial condition chosen to start the learning. S1 Fig shows the average entropy over 10 realizations of the learning and thermodynamic integration procedure for the ANK family, with its standard deviation as error bars. If we learn the models with progressively increasing window W, we obtain a different profile than when learning each point starting from the independent model, and above L these two profiles differ by more than the magnitude of the standard deviation, signalling a systematic bias. S1 Fig also shows that learning the models progressively results in better parameter convergence, towards values that give lower entropies.
To estimate how this bias affects the entropy estimation, we take the single-site amino-acid frequencies produced by the inferred energy function in the last Monte-Carlo phase of the learning procedure and calculate the corresponding entropy of this independent-site model. We then compute the absolute difference between this estimate and the independent-site entropy calculated directly from the dataset. Again, in doing so we assume that neglecting the interaction terms does not change the order of magnitude of this error. This procedure yields the error bars shown in Figs 2 and 3 and Table 1. We repeat both the parameter inference and the entropy estimation 10 times, and in Fig 3 we show the average entropy of these 10 numerical experiments for the ANK family, with error bars estimated as explained above to indicate the order of magnitude of the error arising from the systematic bias in parameter learning. S1 Fig shows the mean entropy of ANK as in Fig 3A, with the standard deviation across realizations as error bars, to give an idea of the combined noise of the thermodynamic integration and the gradient ascent, starting from the same initial conditions and with the same update parameters (see Model fitting). The combined noise is smaller than the entropy decrease at 33 residues, showing that the decrease is real.
To further check the robustness of the entropy estimation procedure, we generated two synthetic ANK datasets, one with an independent-site model and the other with the model of two non-interacting repeats obtained as explained above, and re-learned the model from each synthetic dataset. Repeating the learning and entropy estimation procedure on each of the synthetic protein families gives results consistent with the model used to generate the dataset: the entropy of the model learned on the independent-site dataset does not decrease with the interaction range W, and the entropy of the model learned on the non-interacting-repeats dataset does not show any drop around the repeat length.
We repeated this procedure for the LRR and TPR repeat-protein families, reaching similar conclusions (S2 Fig).

Calculating the basins of attraction of the energy landscape
In order to characterize the ruggedness of the inferred energy landscapes and the sequence identity of the local minima, we start from each sequence in the natural dataset as an initial condition and perform a T = 0 quenched Monte-Carlo procedure. Repeating this analysis on sequences synthetically generated from E_full yields very similar results (see S6 Fig for ANK). For this energy landscape exploration we learn the parameters of the Hamiltonian in Eq 2 (see Model fitting for the learning procedure), and then set λ_ID = 0 in the energy function, because we want to investigate the shape of the energy landscape due to selection rather than to phylogenetic dependence.
We scan all possible mutations that decrease the sequence energy and draw one of them uniformly at random. The possible moves are all single-point mutations; in addition, if the same amino acid is present at the same relative position in the two repeats, we allow double mutations that simultaneously change both positions to a new amino acid, identical in the two repeats. We do this so that the phylogenetic biases still partially present in the model parameters do not produce spurious local minima that would bias the quenching results. The Monte-Carlo procedure ends when every proposed move results in a sequence of higher energy; the sequence so identified is a local minimum of the energy landscape.
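The T = 0 quench can be sketched as follows, here restricted to single-point moves for brevity (the double-mutation move for repeated positions is omitted). The fields, couplings and sizes are random toy values, not the inferred ANK model:

```python
import numpy as np

rng = np.random.default_rng(4)
q, M = 4, 8                               # toy alphabet and sequence length
h = rng.normal(0.0, 1.0, (M, q))
J = rng.normal(0.0, 0.5, (M, M, q, q))
J = np.triu(np.ones((M, M)), 1)[:, :, None, None] * J   # keep only i < j couplings

def energy(seq):
    e = -h[np.arange(M), seq].sum()
    e -= sum(J[i, j, seq[i], seq[j]] for i in range(M) for j in range(i + 1, M))
    return e

def quench(seq):
    """T = 0 quench: apply uniformly chosen energy-decreasing point mutations
    until every possible move increases the energy (a local minimum)."""
    seq, e = seq.copy(), energy(seq)
    while True:
        downhill = []
        for i in range(M):                # scan all single-point mutations
            for a in range(q):
                if a != seq[i]:
                    new = seq.copy(); new[i] = a
                    de = energy(new) - e
                    if de < 0:
                        downhill.append((new, de))
        if not downhill:
            return seq, e                 # no downhill move left: local minimum
        new, de = downhill[rng.integers(len(downhill))]
        seq, e = new, e + de

# Quench from many random initial sequences and collect the distinct minima
minima = {tuple(quench(rng.integers(0, q, M))[0]) for _ in range(30)}
```

Starting instead from each natural sequence, as in the text, maps every sequence in the dataset to its attractor basin.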
To explore how turning on interactions makes the energy landscape more rugged, we performed the same procedure with the Hamiltonians corresponding to two intermediate interaction ranges in Fig 3A, i.e. Eq 2 with J_ij allowed to be non-zero only within an interaction range W, for W = 3 and W = 10.
To assess the role of the inter-repeat interactions, we repeated this T = 0 quenched Monte-Carlo procedure on single repeats, taking all the unique repeats in the natural dataset as initial conditions. The learning procedure for the single-repeat Hamiltonian is explained above (Models with different sets of constraints). In this single-repeat case the possible moves are just the single-point mutations.
Once we have the local minima of the energy landscape, we obtain the coarse-grained minima using the hierarchical clustering algorithm of the Python SciPy package. In this hierarchical clustering, the distance between two clusters is the average Hamming distance over all pairs of sequences with one member in each cluster. As a result we plot the clustered distance matrix, the clustering dendrogram, and the basin sizes corresponding to the distance matrix entries.
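The clustering step corresponds to average linkage on Hamming distances in SciPy, sketched below on a synthetic set of "minima" built as two noisy groups around distinct consensus sequences (all sizes and noise levels are illustrative):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(5)
M, q = 20, 4
# Toy "local minima": two noisy groups of 15 sequences around distinct consensus
centers = rng.integers(0, q, size=(2, M))
minima = np.vstack([np.where(rng.random((15, M)) < 0.1,
                             rng.integers(0, q, (15, M)), c)
                    for c in centers])

# Average linkage: the cluster-cluster distance is the average Hamming distance
# over all pairs with one sequence in each cluster
d = pdist(minima, metric='hamming')
Z = linkage(d, method='average')
labels = fcluster(Z, t=2, criterion='maxclust')
```

The linkage matrix `Z` is what `scipy.cluster.hierarchy.dendrogram` would plot, and the cluster labels give the coarse-grained basin assignment of each minimum.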
Finally, we repeated the quenching procedure described above for the LRR and TPR families. The results, shown in S3 and S4 Figs, lead to similar conclusions as for the ANK family.

Kullback-Leibler divergence
The Kullback-Leibler divergence between two families A and B is defined as D_KL(A‖B) = Σ_σ p_A(σ) log_2 [p_A(σ)/p_B(σ)]. Substituting the sequence ensembles for ANK and TPR into the definition of the probabilities gives D_KL(ANK‖TPR) = ⟨E_TPR⟩_ANK − ⟨E_ANK⟩_ANK + F_ANK − F_TPR (in bits, after dividing by ln 2), where the notation ⟨·⟩_ANK means that the average is calculated over sequences drawn from the ANK ensemble, p_ANK(σ) = (1/Z_ANK) e^{−E_ANK(σ)}. Thus ⟨E_TPR⟩_ANK is the average of the TPR energy function, evaluated via the structural alignment between the two families, over 80000 sequences generated through Monte Carlo sampling of the ANK model of Eq 2 (and analogously for ⟨E_ANK⟩_TPR). The terms F_ANK and F_TPR are calculated in the same way as when estimating the entropy, through the thermodynamic integration described above.
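The energy-based expression for the KL divergence follows directly from p(σ) = e^{−E(σ)}/Z and F = −log Z, and can be verified on a pair of toy models small enough to enumerate. The two independent-site models below use arbitrary random fields, standing in for the structurally aligned ANK and TPR models:

```python
import numpy as np
from itertools import product

q, M = 3, 4
seqs = np.array(list(product(range(q), repeat=M)))   # enumerable toy sequence space

def make_model(seed):
    """Toy independent-site model: energies, free energy and Boltzmann weights."""
    r = np.random.default_rng(seed)
    h = r.normal(0.0, 1.0, (M, q))
    E = -h[np.arange(M), seqs].sum(axis=1)
    F = -np.log(np.exp(-E).sum())                    # F = -log Z
    p = np.exp(-(E - F))                             # p(sigma) = e^{-E}/Z
    return E, F, p

E_A, F_A, p_A = make_model(1)
E_B, F_B, p_B = make_model(2)

# D_KL(A||B) = <E_B>_A - <E_A>_A + F_A - F_B   (in nats; divide by ln 2 for bits)
dkl_energies = (p_A * (E_B - E_A)).sum() + F_A - F_B
dkl_direct = (p_A * np.log(p_A / p_B)).sum()
```

In the actual calculation the exact averages ⟨·⟩_A are replaced by Monte-Carlo averages over sampled sequences, and the free energies come from thermodynamic integration.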