
How local spectral gaps regulate the multistability of Turing patterns on graphs

  • Selim Haj Ali,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Constructor University Bremen, Computational Systems Biology, Bremen, Germany

  • Marc-Thorsten Hütt

    Roles Conceptualization, Funding acquisition, Investigation, Methodology, Supervision, Writing – review & editing

    Affiliation Constructor University Bremen, Computational Systems Biology, Bremen, Germany

Abstract

Due to their self-organised, collective nature, Turing patterns on graphs are an important source of information about the relationship between graph architecture and dynamics. One defining feature of these dynamics is the coexistence of multiple stable patterns (‘pattern diversity’), the dependency of which on network architecture is still not well understood. Here we create standardised situations near the Turing instability threshold and study the multistability of patterns as a function of structural perturbations of the graph. In particular, we analyse the resulting changes in pattern diversity as a binary classification problem. We find an asymmetry between lower and higher eigenvalues near the Turing instability, which can be understood in terms of the interlacing theorem, known from spectral graph theory. This allows us to derive rules governing the multistability of Turing patterns using local spectral gaps of the graph’s Laplacian as input, but also to evaluate the contribution of nonlinear interactions between eigenmodes to pattern diversity, independently of the interaction model considered.

Author summary

Pattern formation has for a long time been at the core of the exploration of complex systems. Also, the nonlinear models giving rise to spatiotemporal patterns have informed method development and analysis strategies in nonlinear dynamics. Studying such processes on graphs is a relatively new trend with the goal of understanding structure-function relationships in complex networks. Here we consider a class of spatiotemporal patterns arising in reaction-diffusion systems, Turing patterns, and use a machine-learning approach – binary classification – to understand which network properties affect the diversity of such patterns.

1. Introduction

In their seminal work [1], Nakao and Mikhailov established the theoretical basis for the study of Turing pattern formation on graphs. Since then, the Turing framework [2] has been extended to several graph topologies including temporal [3, 4], directed [5–7], non-normal [8] and multiplex [9, 10] networks, but also time-delayed [11, 12], non-diffusive [13] and higher-order interactions via hyperlinks [14], simplicial [15] and cell complexes [16].

What still remains poorly understood is how network architecture shapes one of the decisive features of Turing patterns, their multistability: for a given set of dynamical parameters within the Turing instability regime, a deterministic reaction-diffusion system converges, under different initial conditions, not to one but often to a substantial number of possible stable states. These coexisting solutions of the deterministic model are a property of network-organised systems that can only be attributed to their complex structure.

The analytical solution provided by the linear stability analysis [1], as described in Sect 2.4, constitutes a first step towards a structural understanding of this phenomenon. This solution translates into a dispersion relation that combines the eigenvalues of the graph Laplacian with the system’s dynamical parameters to predict which of the graph Laplacian eigenmodes contribute to pattern formation.

A challenge in addressing the question is the lack of comparability among different models implementing the Turing framework, parameter choices and network architectures. In [17], a formal calibration strategy has been suggested based on the original formalism from [1]: By selecting, for each model and network, parameters such that the dispersion relation is placed in a standardised fashion, comparability is ensured.

Hütt et al. [17] postulate that the number of unstable pattern-forming eigenmodes is proportional to pattern diversity near the instability threshold. This holds insofar as all unstable eigenmodes contribute to pattern formation and their nonlinear interactions are negligible, which is yet to be proven. In practice, the relationship between the graph eigenmodes and emergent pattern diversity is more complex than originally posited.

To better understand the multistability of Turing patterns, we need to: (i) relate each growth rate to the effective contribution of its eigenmodes to pattern diversity, (ii) evaluate the importance of nonlinear interactions between joint pattern-forming eigenmodes and (iii) identify key structural regulators of pattern diversity.

The spectral gap and its structural sensitivity, sometimes referred to as the dynamical importance [18], are known to regulate various dynamical systems [19–23]. In this study, we highlight the importance of local spectral gaps [24] and the entire Laplacian spectrum in the case of reaction-diffusion systems, namely in regulating the multistability of Turing patterns.

We first present the theoretical foundations of our study and how we use them to build our predictive toolbox. We then perform extensive numerical experiments on several random graphs and observe how different structural perturbations affect both the Laplacian spectrum of the graph and the multistability of the emerging Turing patterns. Then, through heuristic arguments and formal classification, we derive eigenvalue-based criteria capable of predicting changes in pattern diversity caused by structural perturbations. We conclude that local spectral gaps play a central role in explaining the structural sensitivity of Turing pattern diversity and, consequently, its multistability in general.

2. Methods

2.1. Graph

Let G be a simple connected graph with N nodes and E edges, i.e. undirected and unweighted with neither self-loops nor parallel links. Let L = A − diag(k) denote the Laplacian matrix of G, where A is its adjacency matrix and k its vector of node degrees, i.e. L_ij = A_ij − k_i δ_ij.

Let Λ_1 ≤ Λ_2 ≤ … ≤ Λ_N = 0 denote the N real non-positive eigenvalues of the Laplacian matrix, sorted in ascending order. Let φ_i denote the eigenmode associated with the eigenvalue Λ_i, such that L φ_i = Λ_i φ_i, which we will generally refer to by its index i for simplicity. We refer to g_i = Λ_{i+1} − Λ_i as the local spectral gap between i and i + 1.

After removing an edge e from G, we obtain a perturbed graph with N nodes, E − 1 edges and N Laplacian eigenvalues Λ'_i, for which the interlacing theorem [25] states that Λ_i ≤ Λ'_i ≤ Λ_{i+1} for i = 1...N−1, with Λ'_N = Λ_N = 0.

Let s_i = (Λ'_i − Λ_i) / g_i denote the structural sensitivity of the eigenvalue Λ_i: by the interlacing theorem, a real number between 0 and 1 measuring how much the eigenvalue shifts after edge removal, relative to its local spectral gap.

In the case of edge addition, the interlacing inequalities reverse (Λ_{i−1} ≤ Λ'_i ≤ Λ_i) and we write the sensitivity analogously, with respect to the gap g_{i−1}.
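To make these spectral quantities concrete, the following sketch computes the Laplacian spectrum in the non-positive convention used here, the local spectral gaps and a gap-normalised structural sensitivity after edge removal, and checks the interlacing inequalities numerically (the gap normalisation is an assumption consistent with the 0-to-1 range stated above):

```python
import numpy as np

def laplacian_spectrum(A):
    """Eigenvalues of L = A - diag(k) (non-positive convention), ascending."""
    L = A - np.diag(A.sum(axis=1))
    return np.sort(np.linalg.eigvalsh(L))

def structural_sensitivity(A, edge):
    """Shift of each eigenvalue after removing `edge`, normalised by the
    local spectral gap g_i = Lam[i+1] - Lam[i] (assumed normalisation)."""
    Lam = laplacian_spectrum(A)
    B = A.copy()
    i, j = edge
    B[i, j] = B[j, i] = 0.0
    Lam_p = laplacian_spectrum(B)
    gaps = np.diff(Lam)                                  # g_1 .. g_{N-1}
    s = (Lam_p[:-1] - Lam[:-1]) / np.where(gaps > 1e-9, gaps, np.inf)
    # interlacing: Lam[i] <= Lam'[i] <= Lam[i+1]  =>  0 <= s_i <= 1
    assert np.all(s >= -1e-9) and np.all(s <= 1 + 1e-9)
    return Lam, Lam_p, s

# toy example: a 4-cycle; removing one edge yields a path graph
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
Lam, Lam_p, s = structural_sensitivity(A, (0, 1))
```

For the 4-cycle, the unperturbed spectrum is {−4, −2, −2, 0}; after edge removal it becomes that of a path, illustrating how each eigenvalue moves upward within its local gap.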

2.2. Model

The Gierer-Meinhardt model [26] is an activator-inhibitor model capable of creating Turing patterns. Two species, an activator u and an inhibitor v, diffuse and interact on a graph G. Their respective concentrations are given by the vectors u and v, and they evolve according to the following set of differential equations:

du_i/dt = a − b u_i + u_i² / v_i + D_u Σ_j L_ij u_j
dv_i/dt = u_i² − v_i + D_v Σ_j L_ij v_j        (1)

The reaction part of the dynamics is modulated by two positive kinetic parameters a and b, defining the homogeneous steady state (u*, v*) = ((1 + a)/b, (1 + a)²/b²), the homogeneous solution of Eq (1).

The diffusion part of the dynamics is driven by the graph Laplacian L and modulated by D_u and D_v, the respective diffusion coefficients of activators and inhibitors.

A Turing pattern is a heterogeneous steady-state solution of Eq (1) induced by perturbations of the homogeneous distribution, i.e. u = u* + δu and v = v* + δv. Here we conventionally refer to the final distribution of activator concentration u as the Turing pattern.
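For concreteness, a minimal simulation sketch of such a system, assuming the standard network Gierer-Meinhardt form given above with simple Euler integration (parameter values and the function name are illustrative, not the paper's exact protocol):

```python
import numpy as np

def simulate_gierer_meinhardt(A, a=0.27, b=1.57, Du=0.02, Dv=1.0,
                              T=200.0, dt=0.01, amp=0.01, rng=None):
    """Euler integration of the network Gierer-Meinhardt system
    du = a - b*u + u^2/v + Du*L@u, dv = u^2 - v + Dv*L@v,
    with L = A - diag(k), started from a perturbed homogeneous state."""
    rng = np.random.default_rng(rng)
    N = len(A)
    L = A - np.diag(A.sum(axis=1))
    u_star = (1 + a) / b                 # homogeneous steady state
    v_star = u_star ** 2
    u = u_star + amp * rng.standard_normal(N)
    v = v_star + amp * rng.standard_normal(N)
    for _ in range(int(T / dt)):
        du = a - b * u + u**2 / v + Du * (L @ u)
        dv = u**2 - v + Dv * (L @ v)
        u, v = u + dt * du, v + dt * dv
    return u   # final activator distribution, i.e. the Turing pattern
```

With amp = 0 the system stays exactly at the homogeneous steady state, since both reaction terms vanish there and the Laplacian annihilates constant vectors; the Turing pattern only emerges from a nonzero initial perturbation.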

2.3. Pattern diversity

A number of different Turing patterns can coexist on a graph G for the same set of dynamical parameters (a, b, D_u, D_v), depending on the initial conditions.

Two simulation runs are said to converge to the same Turing pattern if their steady states have a mutual Pearson correlation coefficient (see [27]) greater than 0.9. Clustering the results of n simulations yields r distinct Turing patterns, each with its own multiplicity. To quantify the observed multistability, we compute a pattern diversity score d = 1 − s, where s is the pattern pairwise similarity score, computed with a normalisation factor over the set of r distinct Turing patterns.
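The clustering step can be sketched as follows. The greedy single-pass clustering below uses the stated 0.9 correlation threshold; the exact form of the pairwise similarity score s is garbled in the source, so the multiplicity-weighted definition used here is an assumption:

```python
import numpy as np

def cluster_patterns(patterns, rho_min=0.9):
    """Greedy clustering of steady states: two runs converge to the same
    Turing pattern if their Pearson correlation exceeds rho_min."""
    reps, counts = [], []
    for p in patterns:
        for k, q in enumerate(reps):
            if np.corrcoef(p, q)[0, 1] > rho_min:
                counts[k] += 1
                break
        else:                       # no existing cluster matched
            reps.append(p)
            counts.append(1)
    return reps, counts

def diversity_score(counts):
    """d = 1 - s, where s is the probability that two independent runs fall
    into the same pattern cluster (an assumed reading of the similarity
    score; the paper's exact normalisation may differ)."""
    n = sum(counts)
    s = sum(c * c for c in counts) / (n * n)
    return 1.0 - s
```

With all runs in one cluster, s = 1 and d = 0 (no multistability); with n singleton clusters, d approaches 1.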

2.4. Linear stability analysis

We linearise Eq (1) around (u*, v*) and solve the decoupled system of equations to obtain the linear growth rate λ_i relative to each eigenmode i as a function of its eigenvalue Λ_i, for i = 1...N. As established in [1], this dispersion relation is given by

λ_i = (1/2) [ f_u + g_v + (D_u + D_v) Λ_i + sqrt( (f_u − g_v + (D_u − D_v) Λ_i)² + 4 f_v g_u ) ]        (2)

where f_u, f_v, g_u and g_v are the partial derivatives of the reaction terms evaluated at (u*, v*), and where the activator concentration at a given node k can be expanded over the set of Laplacian eigenmodes, u_k(t) = u* + Σ_i c_i exp(λ_i t) φ_i(k), where the expansion coefficients c_i are given by the initial perturbation. The linear solution presented in Eq (2) then provides initial estimates of: (i) the parameters required for pattern onset, driven by the non-vanishing contribution of the eigenmodes, (ii) how different eigenmodes compete for pattern formation [28, 29], the weight of which is approximated by their respective linear growth rates.

An eigenmode i is said to be unstable if λ_i > 0 and stable otherwise. The Turing instability regime is the parameter space allowing at least one positive growth rate and thus, according to the linear approximation, the formation of Turing patterns.
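Assuming the Gierer-Meinhardt reaction terms introduced above, the dispersion relation can be evaluated directly from the Jacobian at the homogeneous steady state. The sketch below uses the reaction parameters of Fig 1 for illustration; the diffusion coefficients are assumptions:

```python
import numpy as np

def growth_rates(Lam, a=0.27, b=1.57, Du=0.02, Dv=1.0):
    """Linear growth rate for each Laplacian eigenvalue in Lam, from the
    Jacobian of the Gierer-Meinhardt reaction terms at the homogeneous
    steady state (standard linear stability analysis)."""
    u = (1 + a) / b                      # u*, with v* = u*^2
    fu, fv = -b + 2 / u, -1 / u**2       # d/du, d/dv of a - b*u + u^2/v
    gu, gv = 2 * u, -1.0                 # d/du, d/dv of u^2 - v
    Lam = np.asarray(Lam, float)
    tr = fu + gv + (Du + Dv) * Lam
    det = (fu + Du * Lam) * (gv + Dv * Lam) - fv * gu
    disc = tr**2 - 4 * det
    # real part of the leading eigenvalue of the 2x2 linearised system
    return np.where(disc >= 0, (tr + np.sqrt(np.abs(disc))) / 2, tr / 2)
```

For these parameters the homogeneous state is linearly stable (negative growth rate at Λ = 0), while a band of sufficiently negative eigenvalues acquires positive growth rates, i.e. the Turing band.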

Without loss of generality, we restrict our analysis to configurations starting with two unstable eigenmodes and find that, in this case, only three eigenmodes are relevant to the problem of multistability under structural perturbations:

  1. The stable eigenmode S that can become unstable after structural perturbation, i.e. of eigenvalue Λ_S and growth rates λ_S (before perturbation) and λ'_S (after perturbation), with λ_S < 0.
  2. The unstable eigenmode U that can become stable after structural perturbation, i.e. of eigenvalue Λ_U and growth rates λ_U (before perturbation) and λ'_U (after perturbation), with λ_U > 0.
  3. The remaining unstable eigenmode M, i.e. of eigenvalue Λ_M and growth rates λ_M (before perturbation) and λ'_M (after perturbation), with λ_M > 0 and λ'_M > 0.

2.5. Parameter selection

To isolate the main drivers of pattern diversity changes using Laplacian eigenvalues, we adopt the following standardised protocol to ensure that we always start with two degenerate unstable eigenmodes. This constrains our system to remain near the instability threshold, where its dynamics are dominated by linear terms and growth rates can be used as a proxy for the contribution of each eigenmode to pattern formation.

First, assuming a continuous dispersion relation λ(Λ), we solve the marginal stability condition, which holds for

(3)

Then, for each eigenmode index i*, we compute the corresponding parameters in this way:

  1. We set the kinetic parameters inside the Turing regime, between the kinetic limit (see Eq (3)) and the Hopf bifurcation limit (see Eq (2)). Without loss of generality, we choose d = 0.8.
  2. Since the dispersion relation is quadratic around 0, we place the system at the edge of the Turing regime by calculating the corresponding kinetic parameters according to Eq (3).
  3. We want to obtain two degenerate unstable eigenmodes i* and i* + 1. Therefore, we choose an increment of the growth rates relative to the smallest stable growth rate of the dispersion relation.
    Let one set of indices collect the cases in which the degenerate pair lies on the ascending arc of the dispersion relation, and another those in which it lies on the descending arc.

2.6. Dispersion relation and parametrisation

According to Eq (2) and the interlacing theorem applied to a single structural perturbation (see Sect 2.1), both growth rates λ_S and λ_U are monotonic as functions of s_S and s_U, respectively: after edge removal, λ_S increases and λ_U decreases, and after edge addition, λ_S decreases and λ_U increases. However, in both cases, the growth rate λ_M is not monotonic as a function of s_M, but convex around 0.5, due to the dispersion relation being locally quadratic.

Therefore, to relate the eigenvalues of the three eigenmodes S, U and M to changes in pattern diversity, we track their structural sensitivities s_S, s_U and s_M, respectively.

In the case of an edge removal, starting with two degenerate unstable modes i* and i* + 1, we identify S = i* − 1, M = i* and U = i* + 1. Therefore, the three relevant structural sensitivities are s_{i*−1}, s_{i*} and s_{i*+1}. In S1 Fig in S1 Appendix, we present a fully annotated version of the dispersion relation shown in Fig 1(d).
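A small helper illustrating the identification above, under the gap-normalised sensitivity sketched in Sect 2.1 (the symbol names and 0-based indexing are assumptions for illustration):

```python
def smu_sensitivities(Lam, Lam_p, i_star):
    """Gap-normalised structural sensitivities of the three relevant
    eigenmodes S = i_star - 1, M = i_star and U = i_star + 1 (0-based),
    given unperturbed (Lam) and perturbed (Lam_p) spectra sorted in
    ascending order."""
    def s(i):
        gap = Lam[i + 1] - Lam[i]              # local spectral gap g_i
        return (Lam_p[i] - Lam[i]) / gap if gap > 1e-12 else 0.0
    return {"S": s(i_star - 1), "M": s(i_star), "U": s(i_star + 1)}
```

For instance, a perturbation that shifts every eigenvalue to the midpoint of its local gap yields a sensitivity of 0.5 for all three eigenmodes.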

Fig 1. Structural understanding versus numerical analysis of Turing patterns,

before (first row) and after edge removal (second or third row): (a), (b) and (c) represent an Erdős-Rényi random graph G of 30 nodes and 70 edges before and after removal of one edge (red dashed line) or another (blue dashed line). (d), (e) and (f) represent the dispersion relation for a Gierer-Meinhardt model [26] with reaction parameters a_5 = 0.27 and b_5 = 1.57 and diffusion parameters D_u and D_v, zoomed into the three eigenmodes S, M and U around the Turing instability regime. (g), (h) and (i) represent the distinct Turing patterns that emerge from 500 simulations of each separate network. (j), (k) and (l) represent pie charts of their multiplicity, which translates into pattern diversity scores of d_1 = 0.54 (before edge removal), d_2 = 0.75 (after edge removal, causing an increase in pattern diversity) and d_3 = 0.15 (after edge removal, causing a decrease in pattern diversity).

https://doi.org/10.1371/journal.pcsy.0000044.g001

In the following, we only consider the effects of edge removal. The case of edge addition is explored in Sect F.2 in S1 Appendix, where we prove that our results can be generalised to both cases of structural perturbations.

2.7. Numerical analysis strategy

For illustration, we run the Gierer-Meinhardt model on an Erdős-Rényi graph G of 30 nodes and 70 edges (see Fig 1(a)) for a given set of dynamical parameters and represent the growth rates of the eigenmodes S, M and U on the corresponding dispersion relation in Fig 1(d). The Turing patterns that emerge out of 500 runs are given in Fig 1(g) and their pattern diversity is represented in Fig 1(j).

After removing an edge from G (see the dashed red line in Fig 1(b) or the dashed blue line in Fig 1(c)), we observe the corresponding motions of S, M and U along the dispersion relation in Fig 1(e) and 1(f), respectively. The Turing patterns that emerge out of 500 runs (assuming the same dynamical parameters) are given in Fig 1(h) and 1(i), respectively. Their pattern diversity scores are represented in Fig 1(k) and 1(l), respectively.

The multistability analysis of a graph G consists in studying the response of the network (dynamics) to varying:

  1. Dynamical parameters, by making pairs of adjacent eigenmodes degenerate and unstable (e.g. M and U in Fig 1(b)). This amounts to at most N − 1 = 29 possible instances, since not all eigenmodes can converge to Turing patterns. See S1 Fig in S1 Appendix for a quantitative analysis of this frequency of convergence to Turing patterns.
  2. Structural perturbations, by removing one edge of the graph at a time. This amounts to exactly E = 70 possible instances.

Moreover, G can have up to (N − 1) × E = 29 × 70 = 2030 configurations, one for each eigenmode-edge pair. For each one of these, we run 500 simulations before and after edge removal, and measure pattern diversity changes.

3. Predicting multistability as a binary classification problem

Structural perturbations simultaneously impact both the Laplacian eigenvalues of a graph and the multistability of Turing patterns. We translate this complex relationship into a binary classification problem that systematically maps how microscopic structural changes (structural condition) probabilistically correspond to macroscopic higher or lower pattern diversity (binary dynamical outcome).

3.1. Hypotheses

Unstable eigenmode degeneracy is hypothesised to be the dominant factor influencing pattern diversity [17]. Here we want to challenge this basic view and look for other, more general and similarly strong, structural indicators of pattern diversity. Our approach uses the structural sensitivities of the eigenmodes S, U and M as features to discriminate between below-average and above-average pattern diversity changes. For simplicity, we refer to these as pattern diversity decreases and increases, respectively.

In a condensed form, our hypotheses are:

  1. Dissimilar growth rates near the Turing instability (e.g. breaking an eigenmode degeneracy) indicate a pattern diversity decrease. In terms of Laplacian eigenvalues, this means defining boundaries on , and .
  2. Similar growth rates near the Turing instability (e.g. approaching an eigenmode degeneracy) indicate a pattern diversity increase. In terms of Laplacian eigenvalues, this means defining boundaries on , and .

3.2. Evaluation

Understanding the multistability of Turing patterns can be reduced to a binary classification problem, where two non-intersecting Boolean criteria c(+) and c(−) can either succeed or fail in discriminating between decreases (identified as the negative outcome N) and increases (identified as the positive outcome P) in pattern diversity. We define:

  1. The negative criterion c(−), a Boolean criterion predicting pattern diversity decreases as a function of the boundaries , and , where ∧ is the Boolean AND operator.
  2. The positive criterion c(+), a Boolean criterion predicting pattern diversity increases as a function of the boundaries , and , where ∧ is the Boolean AND operator.

Table 1 summarises the classification outcomes of our evaluation algorithm, where d is the pattern diversity measured on a given graph for a given set of dynamical parameters (corresponding to a given pair of unstable eigenmodes, see Sect 2.4), the second quantity is the pattern diversity measured on that same configuration but after structural perturbation, and the third is the average pattern diversity change measured for a given graph over all possible pairs of unstable eigenmodes and structural perturbations.

Table 1. Confusion matrix relative to the pattern diversity prediction problem: dynamical outcome (actual) and structural condition (predicted).

https://doi.org/10.1371/journal.pcsy.0000044.t001

We evaluate the proportion of correct predictions of:

  1. The negative criterion c(−), using the negative predictive value NPV = TN / (TN + FN), for different values of , and .
  2. The positive criterion c(+), using the positive predictive value PPV = TP / (TP + FP), for different values of , and .

We also evaluate the joint performance of c(−) and c(+) by measuring their accuracy ACC = (TP + TN) / (TP + TN + FP + FN).

These performance metrics are then balanced against two structural quantities: the predicted negatives N = TN + FN and the predicted positives P = TP + FP, which measure the size of the samples predicted by c(−) (classified as negative outcomes) and c(+) (classified as positive outcomes), respectively, regardless of whether they are correct.
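The evaluation can be sketched as follows, with standard confusion-matrix bookkeeping; computing accuracy over the classified cases only is one plausible reading of the text, and the function name is illustrative:

```python
import numpy as np

def evaluate_criteria(c_neg, c_pos, outcome_pos):
    """NPV, PPV, accuracy and predicted sample sizes for two
    non-intersecting Boolean criteria over the same observations.
    `outcome_pos` is True for an above-average diversity change."""
    c_neg, c_pos, y = map(np.asarray, (c_neg, c_pos, outcome_pos))
    assert not np.any(c_neg & c_pos), "criteria must not intersect"
    TN = np.sum(c_neg & ~y); FN = np.sum(c_neg & y)   # negatives predicted
    TP = np.sum(c_pos & y);  FP = np.sum(c_pos & ~y)  # positives predicted
    npv = TN / max(TN + FN, 1)
    ppv = TP / max(TP + FP, 1)
    acc = (TP + TN) / max(TP + TN + FP + FN, 1)
    return {"NPV": npv, "PPV": ppv, "ACC": acc,
            "N": int(TN + FN), "P": int(TP + FP)}
```

Observations matched by neither criterion are left unclassified, which is what makes the trade-off between predictive value and predicted sample size explicit.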

Furthermore, we aim to identify the optimal eigenvalue-based criteria c(−) and c(+) capable of accurately predicting pattern diversity decreases (maximising NPV) and increases (maximising PPV) across a meaningful range of the graph spectrum (maximising N and P, respectively). To heuristically search for these criteria, we analyse the multistability of 50 different Erdős-Rényi random graphs, each comprising N = 30 nodes and E = 70 edges.

4. Results

For each graph, we find a characteristic distribution of pattern diversity changes that is robust within the chosen volume of numerical simulations, indicating that this phenomenon has a structural origin. We illustrate this convergence to a stable distribution in S2 Fig in S1 Appendix.

Next, we compute the negative predictive value NPV and corresponding predicted sample size N as a function of the possible boundaries , and ; and the positive predictive value PPV and corresponding predicted sample size P as a function of the possible boundaries , and .

4.1. Predicting pattern diversity decreases

In this subsection, we search for the optimal criterion c(−). Fig 2 shows the average measured NPV (first row) and corresponding N (second row) over all 50 graphs as a function of the possible boundaries and , while assuming (unbounded), for (left column) and (right column) separately.

Fig 2. Predicting pattern diversity decreases as a function of and .

For 50 Erdős-Rényi graphs with 30 nodes and 70 edges, we split the results into two columns according to the two initial degenerate unstable eigenmodes i* and i* + 1: on the left, (if ), and on the right, (if ). (a) and (b): Average negative predictive value for and , respectively for and . (c) and (d): Average proportion of cases predicted as pattern diversity decreases for and , respectively for and .

https://doi.org/10.1371/journal.pcsy.0000044.g002

We find that the primary factor predicting pattern diversity decreases is the sensitivity of U: The influence of greatly outweighs that of .

Another striking finding here is that a change of sign of the growth rates is not accompanied by a shift in pattern diversity, as predicted by linear stability analysis. This suggests that, in general, the sign of λ_i is not a reliable predictor of whether an eigenmode i contributes to pattern diversity. If the sign of the growth rates regulated pattern diversity, we would observe that:

  1. λ_S changing sign from negative to positive values (as s_S increases) is accompanied by a pattern diversity increase due to the added contribution of eigenmode S. This boundary is crossed where λ'_S = 0, which we illustrate as a dashed white line in Fig 2(a).
  2. λ_U changing sign from positive to negative values (as s_U increases) is accompanied by a pattern diversity decrease due to the withdrawn contribution of eigenmode U. This boundary is crossed where λ'_U = 0, which we illustrate as a dashed white line in Fig 2(b).

However, Fig 2(a) shows the existence of two regimes in the data, as a function of : The NPV (capturing the proportion of pattern diversity decreases) fluctuates around 0.94 and starts decreasing only after . The fact that becomes positive (at ) does not, in itself, affect pattern diversity. This trend is better illustrated in S4(a) Fig in S1 Appendix.

Similarly, Fig 2(b) shows the existence of two regimes in the data, as a function of : The NPV increases and then plateaus, fluctuating around 0.9 after . The fact that becomes negative (at ) does not affect the pattern diversity any further. This trend is better illustrated in S4(b) Fig in S1 Appendix.

The failures of propositions (1) and (2) indicate that pattern diversity is tied to the position of growth rates, not relative to 0 (as suggested by the linear stability analysis), but to a larger, positive offset l > 0 which in our numerical results corresponds approximately to if (where ) and if (where ). We illustrate this growth rate offset l > 0 on the dispersion relation in S4(c) Fig in S1 Appendix.

The importance of and outweighs that of , which cannot further improve our predictions of pattern diversity decreases. We illustrate its role in S5 Fig (when coupled to ) and S6 Fig (when coupled to ) in S1 Appendix. Nevertheless, the observation of the effect of (inversely proportional to ) leads us to conclude that pattern diversity is not simply the product of having multiple eigenmodes contributing to pattern formation, but is also an emergent property of them having comparable growth rates , indicating that the patterned activity is distributed across different eigenmodes, thus producing a richer attractor landscape. This phenomenon is confirmed in the next subsection.

4.2. Predicting pattern diversity increases

In this subsection, we search for the optimal criterion c(+). Fig 3 shows the average measured PPV (first row) and corresponding P (second row) over all 50 graphs as a function of the possible boundaries and , while assuming (unbounded), for (left column) and (right column) separately.

Fig 3. Predicting pattern diversity increases as a function of and .

Considering the dataset used in Fig 2, we split the results into two columns according to the two initial degenerate unstable eigenmodes i* and i* + 1: on the left, (if ), and on the right, (if ). (a) and (b): Average positive predictive value for and , respectively for and . (c) and (d): Average proportion of cases predicted as pattern diversity increases for and , respectively for and .

https://doi.org/10.1371/journal.pcsy.0000044.g003

The primary factor predicting pattern diversity increases is the robustness of U, and secondarily the robustness or high sensitivity of M (i.e., for ), two constraints that maximise eigenmode interaction, of which eigenmode degeneracy is a limiting case verified for and .

What is less intuitive is that an increase of (via ) does not enhance pattern diversity, assuming the aforementioned constraints on and . The importance of and outweighs that of , which cannot further improve our predictions of pattern diversity increases. We illustrate its role in S7 Fig (when coupled to ) and S8 Fig (when coupled to ) in S1 Appendix.

Also, a larger local spectral gap is statistically more likely to result in a larger , and thus theoretically more pattern diversity after edge removal. However, in terms of precision, whether in Fig 2 or Fig 3, there is only a marginal advantage to considering separate criteria for and . The diffusion parameter selection protocol has a negligible effect on our predictions. We therefore drop all distinctions between and .

4.3. Application to a population of graphs

In this subsection, we apply the previously derived criteria c(−) and c(+) and evaluate our ability to split the simulated data into two subsets, separating pattern diversity decreases from increases using only structural quantities.

Fig 4 compares the overall distribution of changes in pattern diversity with those predicted by c(−) and c(+), where:

  1. c(−) is the intersection between the two sets verifying and , identifying pattern diversity decreases (below-average changes).
  2. c(+) is the intersection between the two sets verifying and , identifying pattern diversity increases (above-average changes).
Fig 4. Histograms of changes in pattern diversity

for the dataset used in Fig 2 which amounts to a total of 65107 cases (, grey histogram) of average pattern diversity change d = −0.06 (black dashed vertical line) according to which we can define two sets: (1) 27616 observed pattern diversity decreases ( of all observations) of average d(−) = −0.167 (black-blue dashed vertical line) and 37491 observed pattern diversity increases ( of all observations) of average d( + ) = 0.0145 (black-red vertical line). The negative criterion predicts pattern diversity decreases for 15749 cases ( of all observations, blue histogram) of average (blue dashed vertical line) which translates into a negative predictive value of 0.86. The positive criterion predicts pattern diversity increases for 17024 cases ( of all observations, red histogram) of average (red dashed vertical line) which translates into a positive predictive value of 0.91. Using both and c( + ), we obtain an accuracy of 0.88.

https://doi.org/10.1371/journal.pcsy.0000044.g004

Note that the choice of c(−) and c(+) involves a trade-off between prediction performance and predicted sample size. We can obtain more accurate predictions on a smaller sample by considering more extreme value boundaries (e.g. choosing and as c(+) would increase the PPV but drastically reduce P), and vice versa. We detail our prediction performance per graph in S9 Fig in S1 Appendix. In addition, we present the detailed analysis of a single graph in S10 Fig, S11 Fig, S12 Fig and S13 Fig in S1 Appendix, where we show how our conclusions extend to the case of edge addition.

5. Discussion

The problem of predicting changes in pattern diversity under structural perturbations can be treated as a binary classification problem, where each data point is defined by a set of dynamical parameters and an edge of the graph, and for which local spectral gaps of the graph Laplacian constitute powerful discriminative features. By considering the structural sensitivity of the eigenvalues, we show how we can predict with high accuracy whether a given data point produces below-average or above-average pattern diversity changes, two possible outcomes referred to in this paper as decreases and increases in pattern diversity.

As follows from the interlacing theorem, structural perturbations affect the Laplacian eigenvalues in a single direction. This results in an asymmetry between eigenvalues on either side of the Turing instability regime, i.e. the ascending and descending arcs of the dispersion relation parabola, as shown in Fig 1. For this reason, each eigenmode must be treated separately and not only on the basis of its growth rates, as illustrated by the eigenmodes M and U in our example. Fortunately, this does not lead to an explosion of complexity (and relevant structural indicators). In fact, we find that, regardless of the interaction model, the response of the network to structural perturbations can be explained by exactly three eigenmodes of the Laplacian spectrum: S the stable eigenmode that can become unstable, U the unstable eigenmode that can become stable, and M the unstable eigenmode that remains unstable after a single structural perturbation.

As a consequence of the asymmetry between eigenvalues near the Turing threshold, we find that multistability depends significantly more on the structural sensitivity of U than of M or S. In fact, varying is more likely to affect pattern diversity through two interdependent factors: (i) reducing the number of unstable pattern-forming eigenmodes (unlike ), and (ii) reducing the interactions between pattern-forming eigenmodes (unlike ).

Moreover, our numerical analysis reveals the existence of a phenomenological growth rate offset l > 0, a region in which eigenmodes interact and induce further multistability, which we evaluate using the structural sensitivity of either S or U, depending on the narrowest local spectral gap on either side of the Turing instability. We note that this quantity l > 0 is not to be confused with the (usually larger) convergence threshold made explicit in S1 Fig in S1 Appendix, which concerns pattern formation in the first place. Although more informative than the sign of the growth rates, the number of eigenmodes with growth rates above this offset is by itself a poor predictor of pattern diversity (not shown here). Instead, we propose to track two quantities simultaneously: (1) U and S to infer whether a structural perturbation suppresses multistability (see Sect 4.1), or (2) U and M to infer whether a structural perturbation encourages multistability (see Sect 4.2).

In the general case, starting with k unstable eigenmodes (where in this study k = 2), the same principles apply if we assume that M is not 1 but a set of k−1 eigenmodes within the Turing instability regime. However, we expect that considering a larger number k of initially unstable and competing eigenmodes will lead to an overall increase in pattern diversity, which will narrow the distribution of pattern diversity changes and hinder our ability to separate decreases from increases in pattern diversity. Furthermore, an increase in k also implies going deeper into the Turing instability regime, where our results, originally interpolated from the linear growth rates, begin to break down and would require incorporating the growing contribution of nonlinear terms, e.g. using the Ginzburg-Landau formalism [30, 31]. Similarly, a larger graph spectral density (e.g. for larger and denser networks) implies less sensitive eigenvalues, fewer changes in pattern diversity and, therefore, less accurate predictions.

Node degrees are at the core of diffusion processes on graphs and as such, their distribution has been seen as the primary structural indicator of Turing pattern formation, or lack thereof. Nakao and Mikhailov [1] argued for the existence of a characteristic node degree kc, which acts as a lower bound that the graph mean degree must exceed in order for Turing patterns to occur.

Degree heterogeneity affects eigenmode localisation [32]; e.g. the eigenmodes of a scale-free graph are generally more localised than those of a regular graph. Consequently, eigenmode localisation has been presented as another structural indicator that determines whether the emergent patterns are related to graph topology [33]. However, this does not explain (i) why, on a regular graph, all eigenmodes are delocalised, yet pattern formation occurs and correlates with the subset of eigenmodes within the Turing regime; or (ii) why, on a heterogeneous graph, eigenmodes with very different degrees of localisation can be excited separately, depending on the chosen dynamical parameters of the system (via the dispersion relation, as presented in Sect 2.5). This precludes deriving generalisable, model- and parameter-independent principles relating Turing pattern formation to graph topology on the basis of eigenmode localisation.
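The contrast in eigenmode localisation between regular and scale-free graphs can be illustrated with the inverse participation ratio (IPR) of the Laplacian eigenvectors, a standard localisation measure (this sketch and its graph sizes are our own illustration, not taken from the analyses above):

```python
import numpy as np
import networkx as nx

# IPR of a normalised eigenvector v is sum_i v_i^4: it is of order 1/N
# for a fully delocalised mode and approaches 1 for a mode confined to
# a single node.

def mean_ipr(G):
    L = nx.laplacian_matrix(G).toarray().astype(float)
    _, vecs = np.linalg.eigh(L)          # columns: orthonormal eigenvectors
    return np.mean(np.sum(vecs**4, axis=0))

ring = nx.watts_strogatz_graph(200, 4, 0.0, seed=0)   # regular ring lattice
sf = nx.barabasi_albert_graph(200, 2, seed=0)         # scale-free graph

# The heterogeneous (scale-free) graph hosts more localised eigenmodes
print(mean_ipr(ring), mean_ipr(sf))
```

On the ring lattice the eigenvectors are Fourier modes with IPR of order 1/N, while the scale-free graph carries modes concentrated on hubs and low-degree leaves, yielding a markedly larger mean IPR.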

Furthermore, statistical measures such as the degree distribution fail to capture the impact of node and neighbourhood similarity in reaction-diffusion systems. More recently, using geometric soft configuration models [34, 35], Pranesh et al. [36] showed how graph clustering, also referred to as transitivity, provides a more refined picture of the relationship between Turing pattern formation and graph topology.

We find that local spectral gaps are the overarching structural indicator, subsuming the impact of all the previous ones: the mean degree (which influences graph connectedness, captured by the second smallest Laplacian eigenvalue, i.e. the first local spectral gap of a connected graph), degree heterogeneity (which shapes the distribution of eigenvalues [32, 37], i.e. the local spectral gaps), graph clustering [38], and community structure (captured by higher-order Cheeger constants [39], which bound eigenvalues and hence local spectral gaps).
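A minimal sketch of this quantity, under the convention used throughout (local spectral gaps as the distances between consecutive Laplacian eigenvalues); the graph and its size are illustrative choices:

```python
import numpy as np
import networkx as nx

# For a connected graph, the first local spectral gap equals the
# algebraic connectivity, which grows with the mean degree; narrow
# interior gaps mark eigenvalues that are easily reordered or mixed
# by structural perturbations.

G = nx.barabasi_albert_graph(40, 3, seed=7)   # connected by construction
mu = np.sort(np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float)))

gaps = np.diff(mu)            # gap between mu_a and mu_{a+1}
first_gap = gaps[0]           # algebraic connectivity, since mu[0] = 0
narrowest = gaps[1:].min()    # the narrowest interior local spectral gap
print(first_gap, narrowest)
```
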

Our methodology differs from previous ones in that we do not explore different classes of random graphs, build configuration models in between, and draw conclusions about the influence of specific graph measures (be it degree heterogeneity or graph clustering). Instead, we apply minute structural perturbations to a generic graph and observe significant changes in the pattern-forming capabilities of the network. As a consequence, our results generalise and inform us about the influence of various structural changes on Turing pattern formation, especially for less studied graph classes such as small-world and geometric networks [40], provided we know how their particular structural features affect the Laplacian eigenvalues [41]. Furthermore, this binary classification framework not only aids in understanding how network structure shapes Turing pattern multistability but also provides a predictive tool applicable to other complex systems where multistability plays a critical role, such as ecological networks [42–44], power-grid systems [45, 46] and neural dynamics [47, 48].
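The interlacing behaviour underlying such minute perturbations can be checked directly. Removing a single edge changes the Laplacian by a rank-one positive-semidefinite matrix, so the spectra of G and G − e interlace: mu_a(G−e) ≤ mu_a(G) ≤ mu_{a+1}(G−e). A hedged sketch, using a generic random graph rather than the graph ensembles of our study:

```python
import numpy as np
import networkx as nx

G = nx.erdos_renyi_graph(50, 0.15, seed=2)
e = next(iter(G.edges()))          # an arbitrary edge to remove
H = G.copy()
H.remove_edge(*e)

mu_G = np.sort(np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float)))
mu_H = np.sort(np.linalg.eigvalsh(nx.laplacian_matrix(H).toarray().astype(float)))

tol = 1e-9
interlaced = all(
    mu_H[a] <= mu_G[a] + tol and mu_G[a] <= mu_H[a + 1] + tol
    for a in range(len(mu_G) - 1)
)
# Each perturbed eigenvalue stays within one local spectral gap of its
# unperturbed neighbours, which is why local gaps bound the spectral
# (and hence dynamical) impact of a single-edge perturbation.
print(interlaced)
```
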

Supporting information

S1 Appendix. A. Annotated dispersion relation. B. Turing-convergence and largest unstable growth rate. C. The convergence of the distribution of pattern diversity changes. D. Growth rates offset threshold. E. Statistical prediction performance. F. Application to a single graph.

https://doi.org/10.1371/journal.pcsy.0000044.s001

(PDF)

Acknowledgments

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 859937.

References

  1. Nakao H, Mikhailov AS. Turing patterns in network-organized activator–inhibitor systems. Nat Phys. 2010;6(7):544–50.
  2. Turing AM. The chemical basis of morphogenesis. Philos Trans Roy Soc B. 1952;237(641):37–72.
  3. Petit J, Lauwens B, Fanelli D, Carletti T. Theory of Turing patterns on time varying networks. Phys Rev Lett. 2017;119(14):148301. pmid:29053314
  4. Van Gorder RA. A theory of pattern formation for reaction–diffusion systems on temporal networks. Proc Roy Soc A. 2021;477(2247):20200753.
  5. Asllani M, Challenger JD, Pavone FS, Sacconi L, Fanelli D. The theory of pattern formation on directed networks. Nat Commun. 2014;5:4517. pmid:25077521
  6. Di Patti F, Fanelli D, Miele F, Carletti T. Benjamin–Feir instabilities on directed networks. Chaos Solitons Fract. 2017;96:8–16.
  7. Ritchie J. Turing instability and pattern formation on directed networks. Commun Nonl Sci Numer Simulat. 2023;116:106892.
  8. Muolo R, Asllani M, Fanelli D, Maini PK, Carletti T. Patterns of non-normality in networked systems. J Theor Biol. 2019;480:81–91. pmid:31295478
  9. Asllani M, Busiello DM, Carletti T, Fanelli D, Planchon G. Turing patterns in multiplex networks. Phys Rev E Stat Nonlin Soft Matter Phys. 2014;90(4):042814. pmid:25375556
  10. Kouvaris NE, Hata S, Díaz-Guilera A. Pattern formation in multiplex networks. Sci Rep. 2015;5:10840. pmid:26042606
  11. Chang L, Liu C, Sun G, Wang Z, Jin Z. Delay-induced patterns in a predator–prey model on complex networks with diffusion. New J Phys. 2019;21(7):073035.
  12. Petit J, Asllani M, Fanelli D, Lauwens B, Carletti T. Pattern formation in a two-component reaction–diffusion system with delayed processes on a network. Phys A: Statist Mech Appl. 2016;462:230–49.
  13. Zhao N, Xie H, Zhang X. Impact of non-diffusive interactions on Turing instability. Commun Nonl Sci Numer Simulat. 2024;132:107931.
  14. Carletti T, Fanelli D, Nicoletti S. Dynamical systems on hypergraphs. J Phys Complex. 2020;1(3):035006.
  15. Gao S, Chang L, Perc M, Wang Z. Turing patterns in simplicial complexes. Phys Rev E. 2023;107(1–1):014216. pmid:36797896
  16. Giambagli L, Calmon L, Muolo R, Carletti T, Bianconi G. Diffusion-driven instability of topological signals coupled by the Dirac operator. Phys Rev E. 2022;106(6–1):064314. pmid:36671168
  17. Hütt M-T, Armbruster D, Lesne A. Predictable topological sensitivity of Turing patterns on graphs. Phys Rev E. 2022;105(1–1):014304. pmid:35193278
  18. Restrepo JG, Ott E, Hunt BR. Characterizing the dynamical importance of network nodes and links. Phys Rev Lett. 2006;97(9):094102. pmid:17026366
  19. Young E, Porter MA. Dynamical importance and network perturbations. arXiv preprint. 2024. https://arxiv.org/abs/2403.14584
  20. Clark R, Punzo G, Macdonald M. Network communities of dynamical influence. Sci Rep. 2019;9(1):17590. pmid:31772210
  21. Beber ME, Becker T. Towards an understanding of the relation between topological characteristics and dynamic behavior in manufacturing networks. Procedia CIRP. 2014;19:21–6.
  22. Milanese A, Sun J, Nishikawa T. Approximating spectral impact of structural perturbations in large networks. Phys Rev E Stat Nonlin Soft Matter Phys. 2010;81(4 Pt 2):046112. pmid:20481791
  23. Arenas A, Díaz-Guilera A, Kurths J, Moreno Y, Zhou C. Synchronization in complex networks. Phys Rep. 2008;469(3):93–153.
  24. Kaufman T, Oppenheim I. High order random walks: beyond spectral gap. Combinatorica. 2020;40:245–81.
  25. Godsil C, Royle GF. Algebraic graph theory. Vol. 207. Springer; 2001.
  26. Gierer A, Meinhardt H. A theory of biological pattern formation. Kybernetik. 1972;12(1):30–9. pmid:4663624
  27. Benesty J, Chen J, Huang Y, Cohen I. Pearson correlation coefficient. In: Noise reduction in speech processing. Springer; 2009. p. 1–4.
  28. Gambino G, Lombardo MC, Sammartino M. Turing instability and traveling fronts for a nonlinear reaction–diffusion system with cross-diffusion. Math Comput Simulat. 2012;82(6):1112–32.
  29. Arecchi FT, Boccaletti S, Ramazza P. Pattern formation and competition in nonlinear optics. Phys Rep. 1999;318(1–2):1–83.
  30. Di Patti F, Fanelli D, Miele F, Carletti T. Ginzburg-Landau approximation for self-sustained oscillators weakly coupled on complex directed graphs. Commun Nonl Sci Numer Simulat. 2018;56:447–56.
  31. Nakao H. Complex Ginzburg-Landau equation on networks and its non-uniform dynamics. Eur Phys J Spec Top. 2014;223(12):2411–21.
  32. Hata S, Nakao H. Localization of Laplacian eigenvectors on random networks. Sci Rep. 2017;7(1):1121. pmid:28442760
  33. Mimar S, Juane MM, Park J, Muñuzuri AP, Ghoshal G. Turing patterns mediated by network topology in homogeneous active systems. Phys Rev E. 2019;99(6–1):062303. pmid:31330727
  34. Krioukov D, Papadopoulos F, Kitsak M, Vahdat A, Boguñá M. Hyperbolic geometry of complex networks. Phys Rev E Stat Nonlin Soft Matter Phys. 2010;82(3 Pt 2):036106. pmid:21230138
  35. Serrano MA, Krioukov D, Boguñá M. Self-similarity of complex networks and hidden metric spaces. Phys Rev Lett. 2008;100(7):078701. pmid:18352602
  36. Pranesh S, Jaiswal D, Gupta S. Effect of clustering on Turing instability in complex networks. arXiv preprint. 2024. https://arxiv.org/abs/2406.17440
  37. Zhan C, Chen G, Yeung LF. On the distributions of Laplacian eigenvalues versus node degrees in complex networks. Phys A: Statist Mech Appl. 2010;389(8):1779–88.
  38. Pham TM, Peron T, Metz FL. Effects of clustering heterogeneity on the spectral density of sparse networks. arXiv preprint. 2024. https://arxiv.org/abs/2404.08152
  39. Lee JR, Gharan SO, Trevisan L. Multiway spectral partitioning and higher-order Cheeger inequalities. J ACM. 2014;61(6):1–30.
  40. van der Kolk J, García-Pérez G, Kouvaris N, Serrano M, Boguñá M. Emergence of geometric Turing patterns in complex networks. Phys Rev X. 2023;13(2):021038.
  41. Van Mieghem P. Graph spectra for complex networks. Cambridge University Press; 2023.
  42. Meng Y, Lai Y-C, Grebogi C. The fundamental benefits of multiplexity in ecological networks. J R Soc Interface. 2022;19(194):20220438. pmid:36167085
  43. Suzuki K, Nakaoka S, Fukuda S, Masuya H. Energy landscape analysis elucidates the multistability of ecological communities across environmental gradients. Ecol Monographs. 2021;91(3):e01469.
  44. Cenci S, Song C, Saavedra S. Rethinking the importance of the structure of ecological networks under an environment-dependent framework. Ecol Evol. 2018;8(14):6852–9. pmid:30073049
  45. Kim H, Lee SH, Davidsen J, Son S-W. Multistability and variations in basin of attraction in power-grid systems. New J Phys. 2018;20(11):113006.
  46. Halekotte L, Feudel U. Minimal fatal shocks in multistable complex networks. Sci Rep. 2020;10(1):11783. pmid:32678252
  47. Páscoa dos Santos F, Verschure PF. Metastable dynamics emerge from local excitatory-inhibitory homeostasis in the cortex at rest. bioRxiv. 2024:2024–08.
  48. Hancock F, Rosas F, Luppi A, Zhang M, Mediano P, Cabral J, et al. Metastability demystified—the foundational past, the pragmatic present and the promising future. Nat Rev Neurosci. 2024.