Abstract
Complex neural systems can display structured emergent dynamics. Capturing this structure remains a significant scientific challenge. Using information theory, we apply Dynamical Independence (DI) to uncover the emergent dynamical structure in a minimal 5-node biophysical neural model, shaped by the interplay of two key aspects of brain organisation: integration and segregation. In our study, functional integration within the biophysical neural model is modulated by a global coupling parameter, while functional segregation is influenced by adding dynamical noise, which counteracts global coupling. Leveraging transfer entropy, DI defines a dimensionally-reduced macroscopic variable (e.g., a coarse-graining) as emergent to the extent that it behaves as an independent dynamical process, distinct from the micro-level dynamics. Dynamical dependence (a departure from dynamical independence) is measured by minimising the transfer entropy from microlevel variables to macroscopic variables across spatial scales. Our results indicate that the degree of emergence of macroscopic variables is relatively minimised at balanced points of integration and segregation and maximised at the extremes. Additionally, our method identifies to which degree the macroscopic dynamics are localised across microlevel nodes, thereby elucidating the emergent dynamical structure through the relationship between microscopic and macroscopic processes. We find that deviation from a balanced point between integration and segregation results in a less localised, more distributed emergent dynamical structure as identified by DI. This finding suggests that a balance of functional integration and segregation is associated with lower levels of emergence (higher dynamical dependence), which may be crucial for sustaining coherent, localised emergent macroscopic dynamical structures. 
This work also provides a complete computational implementation for the identification of emergent neural dynamics that could be applied both in silico and in vivo.
Citation: Milinkovic B, Barnett L, Carter O, Seth AK, Andrillon T (2025) Capturing the emergent dynamical structure in biophysical neural models. PLoS Comput Biol 21(5): e1012572. https://doi.org/10.1371/journal.pcbi.1012572
Editor: Daniel Bush, University College London, UNITED KINGDOM OF GREAT BRITAIN AND NORTHERN IRELAND
Received: October 18, 2024; Accepted: March 31, 2025; Published: May 12, 2025
Copyright: © 2025 Milinkovic et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The code used in this study integrates multiple toolboxes and custom scripts. The core custom methods for neural simulations in the present study are available under the Github repository TVBEmergence, developed by Borjan Milinkovic available at: https://github.com/bmilinkovic/TVBEmergence/. This package includes all the scripts necessary to reproduce the simulation data and results, including instructions for environment setup and dependencies. Dynamical Independence (DI) analysis is implemented through the SSDI toolbox developed by Lionel Barnett, which is available at: https://github.com/lcbarnett/ssdi. This toolbox depends on the MVGC2 toolbox, also developed by Lionel Barnett, for multivariate Granger causality analysis, available at: https://github.com/lcbarnett/MVGC2. Additionally, the simulations of biophysical neural models were adapted from The Virtual Brain (TVB) software, which is available at: https://www.thevirtualbrain.org/. There is no additional data used in this paper.
Funding: This work was supported by the European Research Council (ERC-StG SleepingAwake 101116748 to T. A.), the National Foundation for Medical Research and Innovation (APP1183280 to O.C. and B.M.), and internal research funding from the University of Melbourne (to O.C.). The work was also supported in part by the European Research Council (ERC) under the Horizon 2020 programme (Grant 10109254 to A.K.S. and L.B.). B.M. was further supported by the Australian Government Research Training Program Scholarship. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
The co-existence of functional integration and segregation between distinct brain regions has been argued to support perceptual and cognitive states [1–6]. Typically, for a system to be fully integrated, strong interactions between microlevel constituents must be reinforced and drive the dynamics, tending towards complete order through global coupling [7–9]. Conversely, for a system to be fully segregated, local fluctuations must dominate global influences induced by coupling, fracturing functional connectivity and segregating the microlevel constituents [10, 11]. Dynamical noise has previously been employed as a way to functionally segregate variables with known anatomical connectivity [10, 12, 13], and is used in dynamical systems theory as a way of reducing the effect of the coupling parameter in state-variables [9, 14]. Yet, at some balanced point(s) between complete functional integration and segregation, dynamics emerge as highly organised patterns of activity [15, 16]. A pressing issue in computational neuroscience remains the quantification and identification of these emergent dynamics.
Neural complexity measures, which peak at a balance between functional integration and segregation, have been used as biomarkers of conscious states [17–23]. More generally, the co-existence of these two aspects of brain organisation has been associated with criticality in complex systems [6, 24]. Related to criticality [25], empirical measures of integrated information also peak at a balanced point between functional integration and segregation in brain data [18, 26–29]. This previous work highlights an important and unresolved challenge: to precisely characterise how the interplay between functional integration and segregation mediates the emergent macroscopic dynamical structure of the underlying brain network. Put simply, the aim is not only to quantify when emergent dynamics occur, but also to describe them. We address this challenge by using a minimal 5-node biophysical neural model and leveraging an information-theoretic measure of emergence—called Dynamical Independence (DI) [30]. Specifically, we set out to (i) characterise how the modulation of functional integration and segregation impacts the emergent dynamics and (ii) identify the underlying macroscopic dynamical structure across spatial scales. We thus provide a complete computational implementation to capture emergent dynamical structures, which could be applied to both in vivo and in silico recordings.
In this paper, we examine the emergent dynamical structure by focusing on how macroscopic dynamics are distributed across specific microlevel nodes. Specifically, we quantify the extent to which each microlevel node contributes to the overall macroscopic process, thereby highlighting the localisation of these dynamics.
In general, capturing the organisation of brain activity at a macroscopic level has been approached through various methods, including absolute signal correlations [31], clustering analysis [32], small-worldness and graph theory [33], phase synchronisation [34], metastability [35, 36], and more recently, eigenmode decomposition techniques [37–39]. Recently, there has also been a growing interest in coarse-graining to identify macroscopic patterns in neural models [52]. While these methods often capture functional structures, they do not explicitly reveal emergent macroscopic dynamics arising from microlevel interactions. Given that the fluctuation of brain activity between degrees of integration and segregation dictates the information flow between regions—ranging from predictable and structured (integrated) to unpredictable and stochastic (segregated) [40]—information theory [41] emerges as a natural candidate for uncovering macroscopic dynamical structure in neural systems [30, 42–47].
Recent information-theoretic approaches to emergence show considerable promise in revealing the capacity for higher-order dynamics [48–51]. Though these approaches align with capturing the capacity for emergence in brain networks, they do not explicitly consider the relation between functional integration and segregation and its effect on the emergent dynamical structure. Further, with the exception of [51], when applied to larger systems, measures based on decomposing information are approximated from pairwise interactions between microlevel variables alone, and do not capture macroscopic organisation across the full array of possible spatial scales.
To understand the relation between emergent dynamical structure and functional integration and segregation in neural systems, a metric is needed that simultaneously (i) captures coarse-grained macroscopic patterns that represent the dynamical structure of the system’s interactions and (ii) measures the degree to which the coarse-grainings are to be considered as emergent variables with respect to the microlevel constituents.
The problem can be framed as an optimisation task where macroscopic (coarse-grained) variables across spatial scales are identified to best describe the underlying stochastic dynamical process. This approach highlights how brain activity can manifest organisation at the macroscopic level, revealing patterns not evident from the microlevel perspective alone. This data-driven approach would be a significant advance over existing complexity-based approaches in neuroscience, offering deeper insights into how emergent structure arises in neural processes—from the ground, up.
Consequently, we apply DI to specifically identify emergent macroscopic variables in complex dynamical systems across all spatiotemporal scales. DI captures macroscopic coarse-grainings of the system’s dynamics whereby the self-prediction of the future state of the coarse-grained macroscopic variable is not significantly improved by knowledge of the historical past of the microlevel dynamics. DI therefore identifies macroscopic coarse-grainings of the dynamics that are generated by the interactions between the microlevel constituents, yet appear to be independent of them. Because DI can be applied directly to continuous-valued random variables, it is ideally suited for revealing the emergent dynamical structure in neurophysiological data [52].
To apply Dynamical Independence (DI) to biophysical neural models, we start by simulating a brain network model with five nodes. Each node’s local dynamics are defined by the Stefanescu-Jirsa 3D (SJ3D) neural mass model, which simulates the neural population activity of brain regions capable of spike-burst behaviour [53, 54]. This model was specifically chosen given recent studies proposing spike-bursts as a neural mechanism for conscious processing [55, 56].
While this model is not a large-scale whole-brain model informed by MRI/DTI data, it is nevertheless a complex biophysical model that incorporates biologically realistic local dynamics, global coupling, dynamical noise, and temporal delays. The term biophysical neural model is preferred to emphasise that, while it is not a full representation of whole-brain dynamics, it remains grounded in biological realism by displaying local activity attributed to neural populations [54], and system-wide activity governed by three parameters known to influence whole-brain activity: global coupling, dynamical noise, and temporal delay [57–59]. This study uses the model as a means to validate the approach in a controlled, albeit simplified, setting, acknowledging that it does not represent the full complexity of whole-brain dynamics but is still informed by the principles that govern such systems.
In the biophysical model, functional integration is modulated by the global coupling parameter, influencing the strength of connections between regions and affecting local dynamics [9, 60]. Conversely, functional segregation is influenced by dynamical noise, which reduces the signal-to-noise ratio, thereby limiting the impact of global coupling on local dynamics [9, 12, 57]. Biophysically, this injected noise can be interpreted as representing the influence of unobserved exogenous inputs to the network, which are not explicitly modeled but affect the system’s behaviour.
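The way these two control parameters enter a simulation can be sketched with a toy network. The following is a minimal Euler–Maruyama integration in which the SJ3D local dynamics are replaced by simple linear decay; `g` stands in for the global coupling parameter and `sigma` for the injected dynamical noise. The function names and the 5-node ring connectivity are illustrative assumptions, not taken from the TVB implementation:

```python
import numpy as np

def simulate_network(W, g, sigma, T=20000, dt=0.01, seed=0):
    """Euler-Maruyama integration of a toy linearly coupled network:
    dx_i = (-x_i + g * sum_j W[i, j] * x_j) dt + sigma dW_i.
    Here 'g' plays the role of the global coupling parameter and 'sigma'
    that of the dynamical noise amplitude.  This linear stand-in only
    illustrates where the two parameters enter; it is not the SJ3D model."""
    rng = np.random.default_rng(seed)
    N = W.shape[0]
    x = np.zeros((T, N))
    for t in range(1, T):
        drift = -x[t - 1] + g * W @ x[t - 1]
        x[t] = x[t - 1] + dt * drift + sigma * np.sqrt(dt) * rng.standard_normal(N)
    return x

def mean_corr(x):
    """Mean absolute pairwise correlation: a crude index of functional integration."""
    N = x.shape[1]
    return np.mean(np.abs(np.corrcoef(x.T)[np.triu_indices(N, 1)]))

# 5-node ring connectivity: stronger global coupling raises inter-node correlation.
W = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
x_int = simulate_network(W, g=0.4, sigma=0.05)   # strongly coupled regime
x_seg = simulate_network(W, g=0.05, sigma=0.5)   # weakly coupled, noisy regime
print(mean_corr(x_int), mean_corr(x_seg))        # integrated regime: higher mean |r|
```

In the full nonlinear SJ3D model the injected noise additionally counteracts the coupling; in this linear toy the comparison simply shows the coupling term at work.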
We then minimised the dynamical dependence (DD) of macroscopic variables across varying degrees of functional integration and segregation. Our results show that when the coexistence of integration and segregation is balanced, the DD of macroscopic variables is higher compared to other parameter regime conditions, indicating that macroscopic dynamics are less emergent and more dependent on the underlying micro-level dynamics. This seems counterintuitive because measures of neural and dynamical complexity typically peak when integration and segregation are balanced [1, 18, 26]. However, at these balanced points, the emergent dynamical structure—defined by the localisation of macroscopic variables—is maximised. In our context, localisation refers to how closely the emergent macrovariables align with specific axes of the microscopic state space—each axis corresponding to a neural source or node. A macrovariable is considered localised when it closely aligns with one or a few of these axes, meaning it primarily captures the dynamics of specific microlevel components rather than being a distributed combination of many nodes. This alignment indicates that specific microlevel nodes contribute predominantly to the macroscopic dynamics. Conversely, deviating from these balanced points leads to a loss of localisation, where the contributions of microlevel nodes to the macroscopic variables become more distributed. These findings suggest that when functional integration and segregation are finely balanced, the macroscopic structure exhibits lower degrees of emergence in terms of dynamical dependence but simultaneously achieves maximal organisation into distinct localised contributions from the micro-level nodes.
Models and methods
Theory
Coarse-graining and macroscopic variables.
Consider a discrete-time multivariate stochastic process $X$ taking values in a state space $\mathcal{X}$, which we will refer to as the microlevel or microscopic scale of the system, where $t \in \mathbb{Z}$ is the discrete time index. (We generally set specific state values in lower case, and random variables in upper case. When referring to a stochastic process as a whole, we write just $X$.) Generally, $\mathcal{X}$ will have some additional mathematical structure, e.g., topological, metric, differentiable, linear, etc. Later we shall specialise to the case where $\mathcal{X} = \mathbb{R}^N$, i.e., $N$-dimensional Euclidean space with the usual metric and linear vector space structure. In that case, we use standard vector-matrix notation and set vector quantities in bold type; so the microscopic system becomes a multivariate vector process $\boldsymbol{X}$ defined by $\boldsymbol{X}_t = (X^1_t, \ldots, X^N_t)$, with superscripts indexing components.
Now consider a surjective, structure-preserving mapping $\boldsymbol{M} : \mathcal{X} \to \mathcal{Y}$ from the microscopic state-space $\mathcal{X}$ to a macroscopic state space $\mathcal{Y}$. Surjectivity means that for every $y \in \mathcal{Y}$, there exists at least one element $x \in \mathcal{X}$ such that $\boldsymbol{M}(x) = y$. We refer to such a mapping as a coarse-graining. A coarse-graining effects a partitioning of the microscopic state space as a disjoint union: $\mathcal{X} = \bigsqcup_{y \in \mathcal{Y}} \boldsymbol{M}^{-1}(y)$. However, distinct coarse-grainings across scales do not necessarily partition the dimensions of the microscopic state space into non-overlapping sets, meaning that the dimensions of $\mathcal{X}$ could belong to multiple coarse-grainings. This also results in the possibility that coarse-grainings can be nested.
The macroscopic state space will typically be of smaller cardinality or dimension—which we refer to as the scale of the coarse-graining—so coarse-grainings may be considered as dimensional reductions. The coarse-graining $\boldsymbol{M}$ naturally maps the microscopic process $X$ to the macroscopic process (or macroscopic variable) $Y$ defined by $Y_t = \boldsymbol{M}(X_t)$; this will in general entail a loss of information. In the Euclidean case $\mathcal{X} = \mathbb{R}^N$, $\mathcal{Y} = \mathbb{R}^n$ with $n \le N$, in order to preserve the vector space structure we consider only linear coarse-grainings, so that $\boldsymbol{M}$ becomes an $n \times N$ full-rank matrix. Note that the linear mapping $\boldsymbol{x} \mapsto \boldsymbol{M}\boldsymbol{x}$ is surjective iff $\boldsymbol{M}$ has full rank $n$.
The DI framework.
DI is a data-driven information-theoretic principle aimed at the identification of emergent coarse-grained macroscopic variables from discrete-valued or continuous-valued time-series data [30]. Intuitively, a macroscopic variable $Y$ given by $Y_t = \boldsymbol{M}(X_t)$ is dynamically independent if—notwithstanding its deterministic dependence on the microscopic process—it behaves like a dynamical process in its own right, following its own dynamical laws, distinct from the laws governing the dynamics of the microlevel variable $X$. Note that in general an arbitrary macroscopic variable will not have the property of dynamical independence from the micro-level base.
DI is defined in a predictive sense: Y is dynamically independent of X if knowledge of the history of X does not enhance prediction of Y beyond the extent to which Y already self-predicts. Discovering the macroscopic variables, Y, may be framed as an optimisation problem; namely to minimise, across all coarse-grainings, the objective function of dynamical dependence (DD), an information-theoretic measure of departure from dynamical independence for macroscopic variables:
Definition 1. Dynamical dependence is defined as the transfer entropy [61, 62]—a nonparametric measure of information flow—from the historical past of the microscopic process $X$ to the present state of the macroscopic process $Y$ at time $t$:

$$\mathcal{T}_t(X \to Y) = I\bigl(Y_t \,;\, X^-_t \,\big|\, Y^-_t\bigr) \tag{1a}$$

We use the notation $X^-_t$, where $X^-_t$ is not a single value but represents a set of lagged time points, $X^-_t = \{X_{t-1}, X_{t-2}, \ldots, X_{t-p}\}$, referring to the history of the process. In the case of an infinite history, $p \to \infty$, this set would extend indefinitely.
Next, we impose a supervenience relation, which asserts that:

$$I\bigl(Y_t \,;\, Y^-_t \,\big|\, X^-_t\bigr) = 0 \tag{2}$$

This indicates that there is no additional information in the macroscopic process beyond what is already captured in the history of the microscopic process. Note that a coarse-grained macroscopic variable trivially satisfies (2), since $Y^-_t$ is itself a deterministic function of $X^-_t$.
If the process X (and hence also Y) is stationary—i.e., its statistical structure does not change over time—then the DD is not time-dependent, and we drop the subscript t. We assume stationarity for all processes from here on. In practice, to compute dynamical dependence from neurophysiological data such as that simulated by biophysical neural models, we employ an approximation method based on linear state-space modelling (see S2 Appendix of the Supporting Information).
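The linear (Granger-causal) reading of dynamical dependence can be sketched directly from regression residuals. The function below is a simplified VAR/OLS stand-in for the state-space estimator described in S2 Appendix; the name `dd_linear`, the lag order, and the toy VAR(1) network are all illustrative assumptions:

```python
import numpy as np

def dd_linear(Y, X, p=5):
    """Sketch of dynamical dependence under the linear (Gaussian)
    approximation, as a Granger causality:
        F = ln det(Sigma_reduced) - ln det(Sigma_full),
    where the 'full' regression predicts Y_t from p lags of Y and of the
    microscopic X, and the 'reduced' regression uses p lags of Y alone.
    The paper's state-space estimator is more refined; this conveys the idea."""
    T = len(Y)
    Y = np.atleast_2d(Y.T).T                         # ensure (T, n) shape
    target = Y[p:]
    lags_Y = np.hstack([Y[p - k:T - k] for k in range(1, p + 1)])
    lags_X = np.hstack([X[p - k:T - k] for k in range(1, p + 1)])

    def resid_cov(design):
        design = np.hstack([np.ones((len(target), 1)), design])
        beta = np.linalg.lstsq(design, target, rcond=None)[0]
        E = target - design @ beta
        return np.atleast_2d(np.cov(E.T))

    S_red = resid_cov(lags_Y)                        # Y history only
    S_full = resid_cov(np.hstack([lags_Y, lags_X]))  # plus X history
    return np.log(np.linalg.det(S_red)) - np.log(np.linalg.det(S_full))

# Toy check on a 3-node VAR(1): node 0 is driven by node 1; node 2 has no
# incoming influence.  A macrovariable picking out node 0 shows clearly
# nonzero DD; one picking out node 2 shows DD near zero.
rng = np.random.default_rng(0)
A = np.array([[0.5, 0.4, 0.0],
              [0.0, 0.5, 0.0],
              [0.0, 0.0, 0.5]])
X = np.zeros((5000, 3))
for t in range(1, 5000):
    X[t] = A @ X[t - 1] + rng.standard_normal(3)
print(dd_linear(X[:, [0]], X), dd_linear(X[:, [2]], X))
```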
Thus a macrovariable is perfectly dynamically independent if and only if (iff) its dynamical dependence on the microlevel vanishes identically (is zero); i.e., its capacity to predict its future based only on its own history is not enhanced by knowledge of the history of the microscopic process. We define this with a transfer-entropic identity:

Definition 2. A macrovariable $Y$ given by the coarse-graining $Y_t = \boldsymbol{M}(X_t)$ is dynamically independent of the microscopic process $X$ iff

$$\mathcal{T}_t(X \to Y) = I\bigl(Y_t \,;\, X^-_t \,\big|\, Y^-_t\bigr) = 0 \quad \text{for all } t \tag{3}$$
This formalises the intuition of an emergent macrovariable as one which behaves as a dynamical process in its own right, independently of the micro-level dynamics. In the language of Bertschinger et al. [63, 64], the macroscopic variable is informationally (or dynamically) closed with respect to the microscopic dynamics. Dynamical independence encapsulates the specific notion of emergence discussed in this article, and dynamical dependence stands as a quantitative information-theoretic measure of the degree of (non-)emergence. That is, emergence is not an all-or-none categorical distinction, but rather, in our framework emergence is a continuous property of a macroscopic variable which is quantified by the dependence of the macroscopic process on the microlevel multivariate process.
In general, given a microscopic process X, there may well be no perfectly dynamically-independent macroscopic variables (i.e., variables for which Eq 3 holds identically) at some, or indeed at any, spatial scale. Instead, via an optimisation strategy, we search for macrovariables with minimal DD—i.e., macroscopic variables with a significant degree of emergence. In practice, at any given scale, we attempt to identify macroscopic variables that minimise the DD over the space of all possible structure-preserving coarse-grainings at that scale. Importantly, in an empirical scenario, minimal DD may be quantified in a principled manner: namely, that at some predefined significance level, and accounting appropriately for multiple hypotheses, we cannot reject the null hypothesis that an optimised DD value is zero.
For non-trivial problems like the biophysical neural model considered in this study, optimisation is generally analytically intractable, so we are bound to deploy numerical methods. DD minimisation—which will generally take the form of multiple optimising runs with random initial configurations—will, furthermore, be limited by time and computational resources, and so cannot guarantee that globally minimal-DD macrovariables are identified. Indeed, it turns out that numerical optimisation runs are in practice likely to terminate at local suboptima of the DD objective function. We therefore adopt the following pragmatic criterion for the discovery of emergent macroscopic variables:
Definition 3. Emergent n-macros are macroscopic variables with minimal dynamical dependence over all optimisation runs at scale n.
We consider the emergent n-macros across all candidate spatial scales and we define the localisation perspective of emergent dynamical structure as:
Definition 4. Emergent dynamical structure (Localised Perspective) is defined by the set of all emergent n-macros which together reveal the emergent dynamical structure across spatial scales of the whole system.
We note, for example, that dynamical independence is transitive [30], and that emergent macrovariables may thus potentially (but not necessarily) be nested, leading to the possibility of heterarchical emergence structures.
These definitions of an emergent n-macro and the emergent dynamical structure align with the ansatz proposed in the original publication [30]. The dynamical structure is revealed by identifying the degree to which microlevel node contributions to emergent n-macros are distributed. This definition allows us to derive the localisation of n-macros in the original biophysical neural model.
In the context of our study, emergent macrovariables are defined as linear subspaces of the microscopic state space, where each dimension corresponds to a neural source or node in the network. The term localisation refers to the degree to which these macrovariables are aligned with specific axes of the microscopic space. Specifically, a macrovariable is considered localised when it aligns closely with one or a few of the original coordinate axes—meaning it has smaller subspace angles with these axes. This alignment indicates that the macrovariable primarily captures the dynamics of specific microlevel components, rather than being a distributed combination of many nodes. This concept of localisation is crucial for understanding how emergent dynamical structure forms within the network.
The localisation of emergent n-macros within the microscopic state space does not resemble partitioning as understood in network theory—where systems are divided into discrete, non-overlapping modules. Partitioning, in the network-theoretic sense, typically refers to the identification of modular structures based on static connectivity patterns, such as communities or clusters within a graph, often derived from measures like modularity or edge density.

In contrast, the emergent macroscopic variables identified through DI analysis are not mere static partitions or modules. Rather, they represent dynamical subsystems—low-dimensional processes arising from the complex interplay of microlevel dynamics, capturing dependencies that may span traditional module boundaries. It is important to note that, in general, n-macros will not simply correspond to subsets of network nodes. In the present study, however, they are linked by design to the linear subspaces spanned by particular subsets of nodes under particular dynamical conditions. This construction is motivated by didactic considerations: when n-macros are associated with subsets of nodes, dynamical independence is clearly defined by the absence of incoming directed connections, making such n-macros straightforward to construct. This scenario is not universally applicable, however—a point illustrated by the minimisation of dynamical dependence in noisy regimes—which is why our approach also involves examining subspace angles and other measures to capture the broader, inherently dynamic and distributed nature of functional organisation.

Thus, while traditional network partitions capture static network-theoretic dependencies, emergent n-macros as revealed by DI analysis capture functional organisation that is dynamic, distributed, and potentially non-local. Accordingly, DI analysis applied to neurophysiological time-series will not necessarily reveal clearly modularised emergent dynamical structure at any single spatial scale. Instead, the present study develops a method to assess whether, and how, functional integration and segregation modulate emergent dynamical structure—via local contributions to dynamical subsystems—across all spatial scales.
Dynamical dependence minimisation in linear systems.
We operationalise DI and the identification of emergent n-macros in simulated neurophysiological data using a linear approximation (see S1 Appendix and S2 Appendix of the Supporting Information as to the appropriateness of the linear approximation). The microscopic state space is thus the Euclidean space $\mathbb{R}^N$, and coarse-grainings at spatial scale $n$ are full-rank linear mappings ($n \times N$ matrices) $\boldsymbol{M} : \mathbb{R}^N \to \mathbb{R}^n$.
For the coarse-grained macrovariable $\boldsymbol{Y}_t = \boldsymbol{M}\boldsymbol{X}_t$, the dynamical dependence (Eq 1a) is invariant under nonsingular (invertible) linear transformations of the macroscopic state space $\mathbb{R}^n$. Thus we may consider two coarse-grainings $\boldsymbol{M}$ and $\boldsymbol{M}'$ to specify the same macrovariable if they are related by $\boldsymbol{M}' = \boldsymbol{T}\boldsymbol{M}$ for some nonsingular linear transformation $\boldsymbol{T}$. The space of all possible linear coarse-grainings at scale $n < N$—the space over which DD will be minimised—may consequently be identified with the set of $n$-dimensional subspaces of $\mathbb{R}^N$. This defines a mathematical object known as a Grassmannian manifold [65, 66], written $\mathcal{G}_n(N)$. A coarse-graining $\boldsymbol{M}$ may be visualised as an $n$-dimensional subspace in the original microscopic state space $\mathbb{R}^N$, on which the dynamics of the macroscopic variable $\boldsymbol{Y}$ play out. Intuitively, the coarse-graining map $\boldsymbol{M}$ projects the $N$-dimensional microscopic dynamics $\boldsymbol{X}_t$ down onto the macrovariable $\boldsymbol{Y}_t$, which resides on the associated $n$-dimensional subspace [30, 67].

Grassmannian manifolds are a type of homogeneous Riemannian manifold; they are non-Euclidean spaces with a distinctive metric geometry and symmetries, on which we can do calculus. Given a microscopic process $\boldsymbol{X}$, then, we can minimise the DD $\mathcal{T}(\boldsymbol{X} \to \boldsymbol{M}\boldsymbol{X})$—considered now as the objective function on $\mathcal{G}_n(N)$ parametrised by the matrix $\boldsymbol{M}$—using standard methods like gradient descent [65, 68] on the Grassmannian. The subspace associated with a coarse-graining is spanned by the $n$ basis vectors defined by the columns of the $N \times n$ transposed coarse-graining matrix $\boldsymbol{M}^\top$. Exploiting the invariance of DD under transformations of the macroscopic space $\mathbb{R}^n$, it turns out to be convenient to parametrise the Grassmannian by orthonormal matrices $\boldsymbol{M}$; i.e., matrices for which $\boldsymbol{M}\boldsymbol{M}^\top = \boldsymbol{I}_n$. These matrices constitute the Stiefel manifold of orthonormal bases, written $\mathcal{S}_n(N)$—also a homogeneous Riemannian manifold—and we consequently refer to the parametrisation of the Grassmannian $\mathcal{G}_n(N)$ by orthonormal $n \times N$ matrices as the Stiefel parametrisation. We note that for $n > 1$ this parametrisation is not one-to-one; $\mathcal{S}_n(N)$ has dimension $nN - \tfrac{1}{2}n(n+1)$, which for $n > 1$ is larger than the dimension $n(N-n)$ of $\mathcal{G}_n(N)$. In practice, optimising over the Grassmannian and Stiefel manifolds produces similar results [67], and the latter is simpler to implement.
For the class of linear state-space models for the microscopic dynamics, moreover, both the DD and its gradient may be calculated explicitly; see S3 Appendix of the Supporting Information. In this study, we used gradient descent with an adaptive step-size [69] to minimise DD. Typically, minimisation is initiated at a uniform random M for a specific scale n, and allowed to run until it converges to a specified tolerance. However, as the optimisation landscape described by the DD is quite deceptive, with multiple local suboptima (minima), we repeat randomised optimisation runs many times to obtain acceptable minima for the specified scale.
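The optimisation loop just described can be sketched as projected gradient descent with QR retraction and random restarts. This is a schematic of the procedure, not the SSDI implementation: `dd_fn` and `grad_fn` stand in for the state-space DD and its analytic gradient, the adaptive step-size is replaced by a fixed learning rate, and the toy quadratic objective merely checks the machinery:

```python
import numpy as np

def minimise_dd(dd_fn, grad_fn, N, n, n_restarts=20, lr=0.02, steps=500,
                tol=1e-8, seed=0):
    """Sketch of DD minimisation in the Stiefel parametrisation: orthonormal
    n x N matrices M with M M^T = I_n.  Each step removes the gradient
    component that merely rotates within the current subspace (DD-invariant),
    takes a descent step, and retracts onto the manifold by QR
    re-orthonormalisation.  Random restarts guard against local suboptima."""
    rng = np.random.default_rng(seed)
    best_M, best_dd = None, np.inf
    for _ in range(n_restarts):
        # Random orthonormal initial coarse-graining.
        M = np.linalg.qr(rng.standard_normal((N, n)))[0].T
        prev = np.inf
        for _ in range(steps):
            G = grad_fn(M)
            G = G - (G @ M.T) @ M                  # keep subspace-changing part
            M = np.linalg.qr((M - lr * G).T)[0].T  # descent step + retraction
            val = dd_fn(M)
            if prev - val < tol:                   # converged (to some optimum)
                break
            prev = val
        if val < best_dd:
            best_dd, best_M = val, M
    return best_M, best_dd

# Toy check: minimise f(M) = tr(M C M^T) over orthonormal 2 x 5 matrices.
# The minimum is the sum of the two smallest eigenvalues of C, i.e. 1 + 2.
C = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
M_best, dd_best = minimise_dd(lambda M: float(np.trace(M @ C @ M.T)),
                              lambda M: 2.0 * M @ C, N=5, n=2)
print(round(dd_best, 3))   # -> 3.0
```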
Statistical significance of dynamical dependence.
In the present work, we focus on the exploration of dynamical dependence values (i.e., transfer entropy or Granger causality in linear settings) quantifying the degree of emergence without imposing a specific statistical cutoff. Having said this, it is worth noting that statistically principled thresholds can be established if needed (see Reference [30], Sec. III E). Given the linear GC operationalisation of dynamical independence [30], one can employ standard statistical tests (e.g., $\chi^2$, F, Wald tests) to evaluate the null hypothesis that dynamical dependence is zero. One can interpret failure to reject this null hypothesis (at a given significance level) as a definition of ‘small enough’ dynamical dependence, corresponding to a principled threshold for asserting near dynamical independence. We note that in frequentist statistics, failure to reject the null hypothesis is not evidence that dynamical dependence is zero, hence we interpret this situation as an operational rather than inferential criterion.
In short, one can draw a direct analogy to correlation: just as a correlation statistic quantifies the extent to which two variables deviate from perfect statistical independence, the dynamical dependence measure reflects how far a macrovariable departs from perfect dynamical independence. In both cases, these measures serve as effect sizes for the departure from independence. However, the use of statistical significance tests (e.g., for correlation) is what enables rigorous inference on whether such deviations are meaningful, and similar inference-based methods can be applied to dynamical dependence in empirical contexts.
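Such a null-hypothesis test can be sketched as an asymptotic likelihood-ratio test. The scaling of the statistic and the degrees of freedom below follow standard large-sample theory for Granger causality, not the paper's exact procedure, and the numeric example is hypothetical:

```python
import math

def chi2_sf_even(x, k):
    """Survival function of the chi-square distribution for even d.o.f. k,
    via the closed-form series (keeps the sketch dependency-free)."""
    assert k % 2 == 0 and k > 0
    term, total = 1.0, 1.0
    for i in range(1, k // 2):
        term *= (x / 2) / i
        total += term
    return math.exp(-x / 2) * total

def dd_null_test(dd_value, T, k_extra, alpha=0.05):
    """Asymptotic likelihood-ratio test of H0: dynamical dependence = 0.
    Under the linear-Gaussian (Granger causality) reading of DD, T * DD is
    asymptotically chi-square with d.o.f. equal to the number of regression
    coefficients excluded under the null (taken even here for the closed-form
    tail).  Failure to reject is read operationally, not inferentially, as
    'small enough' DD, i.e. near dynamical independence."""
    p = chi2_sf_even(T * dd_value, k_extra)
    return p, p >= alpha

# Hypothetical numbers: a DD estimate of 0.003 nats from T = 5000 samples
# with 10 excluded coefficients gives the statistic T * DD = 15.0.
p, near_independent = dd_null_test(0.003, 5000, 10)
print(round(p, 3), near_independent)   # -> 0.132 True
```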
Characterising the composition of linear macroscopic variables.
A putatively emergent macroscopic variable is perhaps best thought of as a dimensionally-reduced subsystem of the global microscopic dynamics. Computational neuroscience has a tendency to present functional structure in terms of networks. We stress that in DI, macroscopic variables are, in general, not networks in any meaningful sense. As explained above, in the linear case a macrovariable may be considered a projection of the microscopic process onto a subspace in the original microscopic Euclidean state space. In the context of modelling multi-region neurophysiological time-series data, we may get a sense of the extent to which different nodes (corresponding to neural signals in specific anatomical brain regions) in the brain network participate in, or contribute to, a macroscopic subsystem that we term an n-macro. We do this by invoking geometric intuition: consider, for example, a 2-dimensional plane in a 3-dimensional Euclidean space. The plane is uniquely identified if we know the angles between it and each of the three x,y,z coordinate axes in the 3-dimensional space (Fig 1d).
(a) A time series capturing the dynamics of 3 variables, brain regions, or electrodes. (b) A state-space model is estimated from the time-series dynamics, from which we calculate the pairwise-conditional Granger-causal statistics representing directed information flow. The numeric labels (e.g., 1.75, 4.30, 0.54) indicate illustrative values of the three variables (along the dashed line in panel (a)) and are included simply to demonstrate schematically how these values appear in the state-space representation. (c) A 2D plane embedded in the 3-dimensional system. This plane represents the lower-dimensional macroscopic variable—a 2-macro—capturing the projection of the 3D dynamics onto a 2D space (dimensional reduction). During optimisation, the plane shifts incrementally toward decreasing levels of dynamical dependence, moving closer to a local or global minimum. (d) The minimally dynamically dependent plane is selected as the emergent 2-macro. To determine the single-node contributions to the emergent 2-macro, the angles between the axes of the original 3D system and the emergent 2-macro are computed. A projection of the emergent 2-macro back onto the G-causal graph results in a visualisation of the degree of contribution from each node.
In the multi-channel recording scenario, microscopic coordinate axes correspond to recording channels (Fig 1a and 1b). Consider again the 3D example: if, say, the angle between axis y (corresponding to channel y) and the 2D plane associated with a given macroscopic variable at a given scale is close to the maximum of π/2, this tells us that neural activity in region y is projected away by the corresponding coarse-graining; region y does not participate strongly in the macroscopic process. As the angle approaches zero, participation is maximised.
With some caveats, this generalises to arbitrary dimensions: to quantify participation of brain regions in a macroscopic dynamical subsystem we calculate the angles between each of the region axes and the macroscopic subspace; then small angles correspond to high contribution, and vice versa (see S5 Appendix of the Supporting Information for details).
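The axis-to-subspace angle computation above can be sketched with standard principal angles; the 2-macro basis below is a hypothetical example, not taken from the study:

```python
import numpy as np
from scipy.linalg import subspace_angles

# Hypothetical 2-macro in a 3-node system: the plane spanned by the
# coordinate axes of nodes 2 and 3 (columns 1 and 2, 0-indexed).
L = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# Participation of each node: the principal angle between its axis and
# the macro plane, rescaled so that 1 = in-plane and 0 = orthogonal.
contributions = []
for node in range(3):
    axis = np.eye(3)[:, [node]]
    theta = subspace_angles(L, axis)[0]            # radians, in [0, pi/2]
    contributions.append(1 - theta / (np.pi / 2))
```

Here node 1's axis is orthogonal to the plane (contribution 0), while nodes 2 and 3 lie in it (contribution 1), mirroring the geometric intuition of Fig 1d.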
Subspace angles have another important use: as a metric of the similarity, or co-planarity, between macrovariables. For example, when optimising dynamical dependence for (nearly-)DI n-macros, two gradient-descent runs may yield coarse-grainings A and B, respectively, which we suspect may be identical, or nearly so. Measuring the angle between A and B can help us decide how similar they are: angles close to zero indicate high similarity. In the case where we have subspaces A and B of different dimensions m < n, a subspace angle near zero indicates that A is nested in B; that is, the A dynamics may be viewed as a self-contained subsystem of the B dynamics. Within the current study we only consider subspace angles from microlevel constituents to n-macros and leave the comparison across higher-order scales for future research.
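Both uses of subspace angles, detecting identical coarse-grainings and detecting nesting across dimensions, can be sketched as follows (the bases are synthetic examples, not outputs of the study's optimiser):

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(1)
# Hypothetical bases in a 5-node microscopic space.
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))
L1 = Q[:, :2]                          # a 2-macro
L2 = L1 @ np.array([[0.6, -0.8],
                    [0.8,  0.6]])      # same plane under a rotated basis
L3 = Q                                 # a 3-macro containing L1

ang_same = subspace_angles(L1, L2)     # all ~0: identical subspaces
ang_nested = subspace_angles(L1, L3)   # all ~0: L1 nested in L3
```

Angles near zero in the first case flag two optimisation runs as having found the same n-macro; in the second case, they flag the lower-dimensional macro as a self-contained subsystem of the higher-dimensional one.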
For a summary of key terms used within this paper, please see S6 Appendix of the Supporting Information.
Worked example
Before we introduce the biophysical neural model, this section serves to illustrate the pipeline for implementing DI on stationary discrete-time, continuous-valued multivariate stochastic processes using a linear state-space (SS) parameterisation, details of which can be found in S2 Appendix. The general computational routine is schematically shown as a geometric interpretation with corresponding data visualisations in Fig 1.
First, we simulate synthetic data from a 9-variable VAR(9) model, where the temporal model order (i.e., the number of discrete-time lags considered in the prediction) is 9. The model was prespecified with structural connectivity (VAR coefficients) that reflects the G-causal structure evident in Fig 2. We then compute the full causal graph representing the directed information flow between pairs of variables (Fig 2). This causal graph serves as the functional ground-truth model of the microlevel dynamics and the information flow between the variables. The variables in our system correspond to the activity of network nodes in the causal graph; therefore, without loss of generality, we refer to these variables as nodes from this point forward.
The figure illustrates the strength and directionality of connections between variables in a 9-variable VAR model, with darker colors indicating stronger causal relationships. (a) Matrix representation of the pairwise causal estimates, where each cell’s colour intensity reflects the strength of the causal effect from the variable on the y-axis (“From”) to the variable on the x-axis (“To”). (b) Graph representation of the causal relationships, where the arrow colors correspond to the same weighting as in the matrix, visually depicting the directional influences between variables.
All pairwise directed information flow estimates were retained to avoid pruning the causal graph based on statistical significance. Pruning, which is commonly done to retain only statistically significant directed functional connections, would risk disrupting the statistical structure of the entire system. By keeping all pairwise estimates, we preserve the full range of statistical information inherent in the system. This approach assumes that information that may not be apparent at the microscopic scale could still contribute to the identification of the emergent macroscopic dynamical structure.
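A minimal sketch of this first stage, simulating a small VAR and filling a "From × To" causal matrix, is given below. For transparency it uses a 3-variable VAR(1) chain and unconditional time-domain GC from OLS residual variances, rather than the 9-variable VAR(9) and the state-space pairwise-conditional estimator used in the study:

```python
import numpy as np

rng = np.random.default_rng(42)
n, T = 3, 5000

# Hypothetical VAR(1) coefficients: a 1 -> 2 -> 3 causal chain
A = np.array([[0.5, 0.0, 0.0],
              [0.4, 0.5, 0.0],
              [0.0, 0.4, 0.5]])
X = np.zeros((T, n))
for t in range(1, T):
    X[t] = A @ X[t - 1] + 0.1 * rng.standard_normal(n)

def gc(x_to, x_from):
    """Unconditional lag-1 Granger causality from x_from to x_to:
    log ratio of restricted to full in-sample residual variances."""
    tgt = x_to[1:]
    own = np.column_stack([x_to[:-1]])
    both = np.column_stack([x_to[:-1], x_from[:-1]])
    def mse(Z):
        b, *_ = np.linalg.lstsq(Z, tgt, rcond=None)
        return np.mean((tgt - Z @ b) ** 2)
    return np.log(mse(own) / mse(both))

# F[i, j]: directed information flow estimate from node i to node j
F = np.array([[gc(X[:, j], X[:, i]) if i != j else 0.0
               for j in range(n)] for i in range(n)])
```

Note that, as in the text, every entry of `F` is retained; no significance-based pruning is applied before the DI analysis.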
We employ a pre-optimisation step to improve computational efficiency (see S3 Appendix of the Supporting Information for details). This step performs 100 random restarts, each terminating either after 10,000 iteration steps or once the gradient falls below a preset threshold. Then, n-macros are clustered according to their pre-optimisation DD values, and the optimisation of DD is performed on the reduced clusters. The optimisation history panels in Fig 3 illustrate the remaining optimisation runs after pre-optimisation clustering for n-macros of scale 2 to 8, whilst the similarity matrices in Fig 3 illustrate the similarities between the n-macros across optimisation runs. Note that the number of optimisation runs shown differs across scales (e.g., 93 for 2-macros, 8 for 3-macros, and 17 for 8-macros) due to the removal of redundant solutions identified during the clustering procedure applied in the pre-optimisation step, which reduces computational load.
Panels (a)–(g) show results for coarse-grainings (n-macros) with n ranging from 2 to 8. In each panel, the left plot illustrates the history of the gradient-descent optimisation for dynamical dependence: the x-axis is the gradient descent step, while the y-axis shows the corresponding dynamical dependence value. The right plot is a similarity matrix indicating how similar the local minima (distinct n-macros) are across random-restart optimisation runs; both the x- and y-axes represent individual runs, sorted by similarity. White squares denote identical n-macros, whereas darker squares indicate increasingly dissimilar n-macros. Notably, only the 2-macro and 6-macro converge to a value of absolute dynamical independence (i.e., zero dynamical dependence). The number of optimisation runs differs between panels due to the exclusion of redundant solutions identified via clustering in the pre-optimisation step, explaining the variation in optimisation runs displayed across macroscale sizes.
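The random-restart scheme described above can be sketched generically. Here the true DD objective over coarse-grainings (and its manifold geometry, see [30]) is replaced by a toy multimodal function with two basins, so the stopping rules and clustering-by-converged-value idea are the only parts carried over:

```python
import numpy as np

def random_restart_minimise(f, grad, dim, n_restarts=100,
                            max_iter=10_000, gtol=1e-8, lr=0.05, seed=0):
    """Random-restart gradient descent skeleton: each restart stops after
    max_iter steps or once the gradient norm falls below gtol."""
    rng = np.random.default_rng(seed)
    minima = []
    for _ in range(n_restarts):
        x = rng.uniform(-1.5, 1.5, dim)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < gtol:
                break
            x = x - lr * g
        minima.append((f(x), x))
    return minima

# Toy multimodal stand-in for the DD landscape: two basins at x0 = +/-1
f = lambda x: (x[0] ** 2 - 1) ** 2 + x[1] ** 2
grad = lambda x: np.array([4 * x[0] * (x[0] ** 2 - 1), 2 * x[1]])
minima = random_restart_minimise(f, grad, dim=2, n_restarts=20)
# Runs would then be clustered by converged objective value (and, for DD,
# by subspace similarity) to remove redundant solutions before the full
# optimisation, as in the pre-optimisation step.
```

In the actual pipeline the clustering step is what reduces the 100 restarts to the run counts shown in Fig 3 (e.g., 93, 8, 17).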
Results show that distinct optimisation runs converge to distinct values of DD. Definition 3 states that an emergent n-macro is the minimally dynamically dependent macroscopic variable at the prescribed spatial scale. Given the prespecified underlying structure of the causal graph (Fig 2), an observer might anticipate the possibility of discovering perfect dynamical independence (Eq 2) for a 2-macro, a 3-macro, and a 4-macro. However, results show that this is only identified for a 2-macro and a 6-macro (Fig 3, panels (a) and (e)).
Given that dynamical dependence converges to different values across the optimisation runs, it raises the question: Are the n-macros corresponding to these different dynamical dependence values the same? The similarity matrices in Fig 3 (Similarity matrices subplots) provide insight into this question by illustrating the similarity between n-macros identified across 100 optimisation runs on the same scale, n. The similarity between n-macros is quantified by the principal angle between them (see S5 Appendix).
The optimisation results presented in Fig 3 show that runs converging to the same dynamical dependence value correspond to the same n-macro, as evidenced by the high similarity in the matrices (white cells in the similarity matrix panels of Fig 3). In contrast, when optimisation runs converge to different dynamical dependence values, the corresponding n-macros exhibit varying degrees of dissimilarity. Although in the figure the dissimilarity between macros appears substantial, given the darkness of the cells, this will not generally be the case, and dissimilarity can be considerably graded. Furthermore, the results indicate a predominant single 2-macro structure within the optimisation landscape, whereas the 3-macro results display greater diversity, suggesting a more complex landscape with multiple distinct local minima.
These similarity matrices indicate the presence of basins of attraction in the energy landscape, suggesting that a single n-macro can dominate the dynamics at a particular scale. This dynamical dominance indicates that the randomised optimisation restarts (the gradient descent) consistently converge to the same local minimum (n-macro), reflecting a strong emergent dynamical structure at that scale. Such basins of attraction offer a powerful way to assess another aspect of emergent dynamical structure, specifically the stability and uniqueness of the macroscopic variables across higher-order scales. Although this method of evaluating emergent dynamics is not utilised in the present study, it provides a promising direction for future research, particularly in understanding how these basins of attraction contribute to the formation of stable, higher-order structures.
When dealing with high-dimensional systems, single-node contributions may not always meet the necessary empirical criteria, as subspace angles between emergent n-macros and their microlevel constituents do not generally provide a unique identification of the macroscopic subspace. To address this limitation, a grouped-node approach offers a valuable alternative, building on the subset classification illustrated in Fig 4. However, it is important to note that using grouped-node level analysis involves a trade-off between capturing the nuanced, graded nature of localised contributions and inducing a partitioning of the system. Therefore, the choice of analysis should be carefully considered and guided by the specific empirical question being addressed. For details on the full implementation, see S5 Appendix. Here we offer both approaches for completeness.
Panels (a–g) show the contributions of individual nodes and grouped nodes (see S5 Appendix) to emergent n-macros, ranging from 2-macros to 8-macros. Histograms represent grouped-node contributions, where red bars indicate the n-subsets of microlevel nodes that are maximally aligned with the n-macro. The x-axis represents all subsets of the same cardinality as the n-macro, and the y-axis represents the inverse of the subspace angle (1 − subspace angle); a value of 1 indicates complete coplanarity between the n-subset and the n-macro, whilst zero indicates orthogonality. Only subsets with contributions greater than 0.1 are labelled. Inset causal graphs depict single-node contributions, with the intensity of node colour reflecting each node's individual contribution to the n-macro. Panel (h) summarises the distinctly partitioned subsets, identified using grouped-node analysis, that optimally contribute to each n-macro across different scales.
Fig 4 explores the contributions of individual and grouped nodes to emergent n-macros across a range of spatial scales, from 2-macros to 8-macros. Each panel (a-g) includes a histogram and an inset causal graph with the emergent n-macro projected onto it, offering complementary perspectives on how microlevel nodes contribute to the emergent dynamics.
The histograms illustrate grouped-node contributions, where each bar represents the degree of co-planarity between a specific n-subset of nodes and the n-macro. A red bar highlights the subset that is most strongly aligned with the n-macro, indicating the microlevel nodes that are most integral to the emergent structure. Contributions are only displayed for subsets with values exceeding 0.1, emphasising the most significant relationships.
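The grouped-node scoring behind these histograms can be sketched directly: score every n-subset of coordinate axes by its co-planarity with the n-macro and flag the maximally aligned one. The 2-macro basis below is a hypothetical example constructed to lie (almost) in the plane of nodes 4 and 5:

```python
import numpy as np
from itertools import combinations
from scipy.linalg import subspace_angles

def best_node_subset(L, N):
    """Score every n-subset of the N microlevel axes by co-planarity with
    the n-macro basis L (orthonormal columns): 1 = coplanar, 0 = orthogonal."""
    n = L.shape[1]
    scores = {}
    for subset in combinations(range(N), n):
        axes = np.eye(N)[:, list(subset)]
        theta = subspace_angles(L, axes).max()   # largest principal angle
        scores[subset] = 1 - theta / (np.pi / 2)
    return max(scores, key=scores.get), scores

# Hypothetical 2-macro in a 5-node system, nearly in the node-4/5 plane
# (0-indexed columns 3 and 4); QR orthonormalises the basis.
L = np.linalg.qr(np.array([[0.05, 0.0],
                           [0.0, 0.05],
                           [0.0, 0.0],
                           [1.0, 0.1],
                           [0.1, 1.0]]))[0]
best, scores = best_node_subset(L, 5)
```

The returned `best` subset corresponds to the red bar in the histograms; its score is the bar height (1 − subspace angle).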
The inset causal graphs provide a visual representation of single-node contributions, mapping these contributions back onto the original causal graph. In these graphs, node colours correspond to the strength of the contribution to the n-macro, with darker colours indicating stronger contributions.
Finally, panel (h) offers a summary of the emergent dynamical structure across all scales, listing the specific node subsets that optimally contribute to each n-macro. In this sense it partitions the system without the graded analysis of contribution offered by the single-node analysis. This analysis reveals the complexity and higher-order nature of dynamical structures in the system, where both individual and collective node contributions can be assessed. This concludes our worked example.
To understand the emergent dynamical structure by assessing how n-macros are localised in space across nodes, we aim to avoid inducing a partitioning of node contributions to n-macros. Therefore, we utilise the more sensitive and nuanced single-node analysis in our biophysical neural model.
Brain network model
Armed with the mathematical machinery of DI analysis we now turn to the core of this present study. Using The Virtual Brain (TVB) [9, 70] we simulate biophysical neural models by constructing a 5-node network. Each node’s local dynamics are governed by the Stefanescu-Jirsa 3D (SJ3D) neural mass model (NMM), a reduced model capturing the mean field activity of 150 excitatory and 50 inhibitory Hindmarsh-Rose neurons [9, 53, 54]. Spike-burst neurons are thought to be implicated as critical neural mechanisms underlying conscious processing [55, 56]. The SJ3D model, governed by six coupled differential equations (considered state variables), is detailed in S4 Appendix of the Supporting Information. The model parameters used have been optimised to fit resting-state EEG data, following [70].
We choose our microscopic level as represented by the SJ3D NMMs, with macroscopic variables defined across the higher-order spatial scales n = 2 and n = 3.
Local dynamics.
The SJ3D neural mass model consists of an excitatory and an inhibitory neural mass, which are interconnected through the fast excitatory and fast inhibitory state variables (detailed in Table A in S4 Appendix). These variables, along with the connectivity between the excitatory masses, define the dynamics at each node. The SJ3D model can generate various ensemble dynamics, including excitable regimes, oscillations, and transient spike-bursts [54]. Fig 5 illustrates the construction of the NMM.
K21 is the inhibitory-to-excitatory mass connectivity, K12 is the excitatory-to-inhibitory connection, and K11 is the excitatory-to-excitatory connectivity. The left-most panel indicates that each neural mass model, consisting of both excitatory and inhibitory neural masses, represents a single node in a region-based brain network simulation. Moving from the central panel to the right-most panel shows the reduced model of the mean-field approximations, representing the ensemble activity of the excitatory (pink) and inhibitory (green) neural masses, respectively. Created in BioRender. Milinkovic, B (2025) https://BioRender.com/f40j657
A key advantage of the SJ3D model is its multi-modal construction: modes represent distinct dynamical regimes that the local dynamics can exhibit, capturing distinct population dynamics and giving rise to the rich, heterogeneous activity observed locally [9, 70, 71].
Global dynamics.
Our simulations are based on a well-established evolution equation governing the dynamics of brain network models, adapted and modified from The Virtual Brain (TVB) for this study (details in S4 Appendix of the Supporting Information). The structural connectivity, which defines the anatomical backbone of the network model, is represented by a weight matrix and a tract-length matrix. In the context of a whole-brain model (WBM), the weights matrix quantifies the strength of pairwise anatomical connections between brain regions, while the tract-length matrix measures the axonal fibre lengths between these regions. Tract-length matrices, which encode conduction delays and attenuations across white-matter pathways, critically shape the global dynamics of large-scale brain network models by modulating the timing among interconnected regions [7, 8]. Longer tracts generally introduce greater delays, potentially shifting synchrony or altering phase relationships, and thus influencing the emergent patterns of whole-brain activity [9].
The 5-node biophysical neural model used here is derived from an empirically-informed TVB structural connectome, which incorporates a biologically realistic tract-length matrix. This matrix is created through homotopical morphing, a computational technique that optimizes and aligns primate tracer imaging data with the human anatomical connectome, based on a Desikan-Killiany parcellation atlas [72]. To evaluate the plausibility of the emergent dynamical structure, constrained by network connectivity, we also perform the same analysis on an uncoupled structural connectome as a control condition. Throughout the experiments, the network connectivity was kept constant across both the coupled and uncoupled regimes, serving as a ground-truth model that constrains the dynamics revealed by varying degrees of functional integration and segregation (see Fig 6).
Weights matrix (left) for the coupled (a) and uncoupled (b) regimes, where each entry indicates the coupling strength between two nodes. Higher values denote stronger interactions. Tracts matrices (right) showing the corresponding inter-node tract lengths (in mm), indicating the physical distance over which signals travel. The numerical values in both matrices originate from diffusion-based tractography analyses, as further described in the main text.
To reiterate, two global parameters known to influence macroscopic brain dynamics [2, 58, 73] were manipulated: global coupling and dynamical noise. Functional integration is defined by the global coupling factor (G in Eq (1) of S4 Appendix), which scales the influence of incoming activity from other nodes in the network. Conversely, dynamical noise, represented by the noise term in Eq (1) of S4 Appendix, reflects functional segregation by reducing the signal-to-noise ratio, thereby dampening the impact of external nodes on local dynamics.
Simulation protocol
A parameter sweep was performed on the values of global coupling (G), from 0.01 to 0.31, and dynamical noise, from 0.001 to 0.1, each in 20 logarithmic steps (see Table B in S4 Appendix). To obtain the time-series data, we consider the fast excitatory state variable at each node, representing the excitatory activity. This activity was summed over the 3 dynamic modes [71] and then z-scored. The resulting time series represents the LFP-like excitatory activity of the respective brain-region nodes [9, 54, 74]. For numerical stability a Heun stochastic integration scheme was used; the appropriate step size can vary depending on the NMM used and the global parameter regime explored, and for the present analysis a step size was identified that remained consistently stable throughout the parameter sweep. Simulations were run for 5000 ms, with the first 500 ms excluded to discount initial transients. Simulations were sampled at 256 Hz to retain consistency with regularly deployed neurophysiological data acquisition methods such as electroencephalography (EEG) and intracranial EEG (iEEG).
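The post-simulation preprocessing steps (sum over modes, transient removal, per-node z-scoring, resampling to 256 Hz) can be sketched as follows; the internal sampling rate and the random stand-in for simulator output are assumptions for illustration only:

```python
import numpy as np
from scipy.signal import resample_poly

rng = np.random.default_rng(7)
fs_sim = 1024                    # assumed internal sampling rate (Hz)
n_nodes, n_modes = 5, 3
# Stand-in for simulator output: (time, nodes, modes) excitatory activity
raw = rng.standard_normal((fs_sim * 5, n_nodes, n_modes))   # 5000 ms

lfp = raw.sum(axis=2)            # sum over the 3 dynamic modes
lfp = lfp[fs_sim // 2:]          # drop the first 500 ms (initial transients)
lfp = (lfp - lfp.mean(axis=0)) / lfp.std(axis=0)            # z-score per node
lfp_256 = resample_poly(lfp, up=256, down=fs_sim, axis=0)   # resample to 256 Hz
```

The resulting `lfp_256` array (time × nodes) is the LFP-like multivariate time series that would enter the DI analysis.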
The resulting simulated neural time series for each of the 400 (20 × 20) simulations is subjected to DI analysis using linear approximations to estimate causal graphs and capture the emergent dynamical structure (see S1 Appendix of the Supporting Information). Using a standard MacBook Pro (Apple M2 processor, 16 GB RAM), performing 100 optimisation runs for each macrovariable scale in each simulation requires approximately four seconds of computation time. A comprehensive benchmarking of the optimisation procedure is provided in Appendix E of the original publication [30].
Results
To validate the capacity of our methodology to identify emergent macroscopic dynamics, we applied our DI analysis pipeline across 400 simulations with varying degrees of functional integration and segregation, using the network models defined in Fig 6. As mentioned, we leverage an uncoupled network model as a control condition, which is defined by the connectivity illustrated in Fig 6. For each simulation, we ran optimisations for a 2-macro and 3-macro in our 5-node biophysical network model. We selected a 5-node brain network to offer the simplest model in which we can vary the values of functional integration and segregation, while keeping the analysis and results as transparent as possible.
Fig 7 illustrates the minimal DD values of each emergent n-macro across simulations defined by varying global coupling and dynamical noise parameter values. Each matrix represents the entire bivariate parameter space, with dynamical noise varying along the x-axis and global coupling along the y-axis. The top row displays the DD values for the 2-macro and 3-macro in the coupled network, while the bottom row shows the corresponding results for the uncoupled network.
Each matrix presents a parameter sweep across global coupling (y-axis) and dynamical noise (x-axis). The top row shows DD values for 2-macros and 3-macros in the coupled brain network. The bottom row presents the corresponding DD values for the uncoupled brain network. Darker colours indicate higher DD values, while lighter colours reflect lower DD values.
Dynamical dependence peaks in a parameter regime balancing functional integration and segregation.
Fig 7 illustrates that in the coupled network, there exists a parameter regime where the DD of emergent 2-macros and 3-macros is higher, indicating lower emergence of macroscopic dynamics—a pattern absent in the uncoupled network. Notably, this regime exhibits a consistent relationship where increases in both functional integration (global coupling) and functional segregation (dynamical noise) coincide. This balance results in the emergent dynamical structure being maximised through the localisation of contributions from specific microlevel nodes to the emergent n-macros, as dynamical noise increases proportionally to global coupling (see Figs 8 and 9).
Global coupling varies along the y-axis, and dynamical noise varies along the x-axis. Here, and in the subsequent figures identifying node contribution values, both axes are plotted on a logarithmic scale. Higher contributions are indicated by darker colours. The degree of localisation of single-node contributions within the emergent 2-macro is most distinct at a balance point between functional integration and segregation.
Global coupling varies along the y-axis, and dynamical noise varies along the x-axis. Higher contributions are indicated by darker colours. The localisation of single-node contributions within the emergent 3-macro is evident at balanced points between functional integration and segregation, with a loss of localisation in extreme regimes.
This pattern suggests that the observed regularity across simulations is not merely a result of individual parameter values but rather emerges from the interaction between global coupling and dynamical noise. Their combined influence shapes the system’s dynamical structure, leading to higher DD (lower emergence) and maximised localisation of contributions at this balanced parameter regime. This finding underscores the importance of considering the interplay between functional integration and segregation when assessing both the emergence and the organisational structure of complex macroscopic dynamics.
Identifying structure: Spatial localisation of single-node contributions in emergent n-macros across functional integration and segregation.
We next performed a single-node analysis to examine the contributions of individual nodes to the emergent 2-macros across simulations. As shown in Fig 8, nodes 4 and 5 exhibit high contributions to the emergent 2-macro, while nodes 1, 2, and 3 show negligible contributions, particularly in the parameter regime where functional integration and segregation are balanced. This distinct organisation of node contributions, towards an emergent 2-macro localised across two microlevel nodes, is closely associated with the parameter regime characterised by higher DD (lower emergence) values (Fig 7). This indicates that at the balance point, the emergent dynamical structure is maximised through the localisation of contributions from specific nodes.
The degree of single-node contribution is represented as follows: a value of 0 (lighter colour) indicates that the node is not implicated in the emergent 2-macro, and a value of 1 (darker colour) indicates that the node is fully implicated. For a detailed explanation, see the worked example above.
Further analysis, as illustrated in Fig 8, reveals that in simulations where the parameter regime is dominated by either excessive functional integration or excessive segregation, all nodes show varied degrees of contribution to the emergent 2-macro. This variability indicates a breakdown in the localisation of single-node contributions, which underpins the dynamical structure of the emergent n-macros. Notably, the dynamical structure of the n-macros appears randomly distributed across the entire network.
By examining single-node contributions across all parameter regimes, we assess the degree of localisation within each n-macro, providing insight into the integrity of the emergent dynamical structure. Importantly, distinct localisation is most apparent in parameter regimes associated with higher dynamical dependence (lower emergence)—that is, at a balanced point of the co-existence of functional integration and segregation. In contrast, in regimes with extreme dynamical noise or global coupling, this localisation diminishes, leading to a more distributed contribution of nodes across the entire network and a subsequent weakening of the dynamical structure.
Similarly, Fig 9 shows that single-node contributions to an emergent 3-macro exhibit a distinct localised pattern at balanced points between functional integration and segregation. In this regime, nodes 1, 2, and 3 contribute significantly to the emergent 3-macro, while nodes 4 and 5 show minimal to no contribution, as reflected by the lighter colours in their respective matrices. However, when the parameter regime shifts towards extreme functional integration or segregation, this localisation of contributions weakens. The result is a more distributed contribution across all nodes, leading to a diminished and less coherent dynamical structure.
Finally, given that the uncoupled network serves as a control, we should expect the minimal DD-valued n-macros in such a system to reflect subspaces that span some combination of the original coordinate axes. That is, they are localised or distributed across some set of nodes. In fact, without the need for simulation, we can theoretically predict that for a macro dimension n, any subspace formed by linear combinations of n of the N original coordinate axes (microscopic state space) should exhibit close-to-zero DD values. Indeed, this expectation is supported by the results obtained in the lower graphs in Fig 7, where the 2- and 3-dimensional emergent macros identified by the optimisation process in the uncoupled system correspond to these close-to-zero DD subspaces.
Further, Fig 10 illustrates that in the uncoupled network, the contributions of microscopic nodes to the emergent 2- and 3-macros vary randomly across the entire parameter space, resulting in the absence of distinct localisation of the dynamical structure of emergent macros, even when functional integration and segregation are balanced. These findings suggest that the anatomical connectivity between nodes in the brain network is crucial in determining the localisation of dynamics and the contribution of nodes to emergent dynamical structure at the macroscopic scale. In the absence of coupling, the emergent dynamical structure is not maximised, and the DD values are close to zero, indicating higher emergence (lower DD) but without the distinct localisation that characterises the maximised emergent dynamical structure in the coupled network.
Global coupling varies along the y-axis, and dynamical noise varies along the x-axis. Higher contributions are indicated by darker colours. The contributions of nodes to the emergent 3-macros vary randomly across the parameter space, indicating the absence of distinct localisation in the uncoupled network.
Statistical significance of single-node contributions to emergent n-macros.
Thus far, we have explored how and when macroscopic dynamical structure emerges through the localisation of single-node contributions to emergent n-macros. To assess whether these single-node contributions to emergent 2-macros and 3-macros across the parameter space are statistically significant compared to the uncoupled brain network (control condition), we performed a Wilcoxon rank-sum test. Specifically, this analysis compares the distribution of node weights across all simulations for the coupled and uncoupled regimes (Figs 8, 9, and 10). The statistical comparison is conducted across the entire parameter space explored, not just within the parameter regime where we qualitatively observe distinct localisation of the emergent macroscopic dynamical structure. By performing the statistical comparison across the full parameter space, we ensure a robust and comprehensive assessment of the significance of single-node contributions to emergent macroscopic dynamical structures, thereby avoiding potential biases that could arise from selectively focusing on regions where qualitative observations suggest distinct localisation.
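A minimal sketch of this comparison, using `scipy.stats.ranksums` on synthetic placeholder weight distributions (not the study's data), is given below:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(3)
# Hypothetical node contribution weights across the 400 simulations:
# localised in the coupled network, random in the uncoupled control.
coupled = np.clip(rng.normal(0.7, 0.15, 400), 0, 1)
uncoupled = rng.uniform(0, 1, 400)

stat, p = ranksums(coupled, uncoupled)   # Wilcoxon rank-sum test
# Star convention as in Fig 11: * p<0.05, ** p<0.01, *** p<0.001
stars = '*' * sum(p < a for a in (0.05, 0.01, 0.001))
```

Running one such test per node, over the full parameter space, yields the per-node significance stars reported in Fig 11.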
Consulting Fig 11, the Wilcoxon rank-sum test reveals a significant difference in the contribution of node 4 to an emergent 2-macro in the coupled brain network compared to the uncoupled brain network used as a control condition. No significant contribution was observed from any other node. Interestingly, despite the qualitative illustration in Fig 8 showing distinctly higher contributions from node 5 to the emergent 2-macro, this node did not exhibit statistical significance when compared to the control (Fig 10). This discrepancy might be due to the statistical analysis being conducted across the entire parameter space, rather than being confined to the regime where distinct localisation is observed. However, it is notable that node 4 still shows significance under the same conditions, suggesting that the lack of significance for node 5 may not be fully explained by this alone.
(Left) Emergent 2-macro and (Right) emergent 3-macro in the coupled network versus the uncoupled network, evaluated across a parameter sweep of global coupling and dynamical noise. Statistical significance is indicated by star symbols in the plots: one star (*) denotes a p-value less than 0.05, two stars (**) a p-value less than 0.01, and three stars (***) a p-value less than 0.001. Note that the z-score is near zero for node 5 in the left plot.
Similarly, a Wilcoxon rank-sum test reveals a significant difference in the contributions of nodes 1, 2, and 3 to an emergent 3-macro in the coupled network compared to the uncoupled network. In contrast, nodes 4 and 5 do not show significant contributions.
Discussion
Overall, our results reveal that when functional integration and segregation are finely balanced, the dynamical dependence (DD) of macroscopic variables is higher compared to other parameter regimes. This indicates that macroscopic dynamics are less emergent and more dependent on the underlying micro-level dynamics at these points. However, the emergent dynamical structure–defined by the localisation of contributions to macroscopic variables–is maximised under these conditions, with specific micro-level nodes distinctly contributing to the macroscopic dynamics. Conversely, deviating from these balanced points leads to a lower DD (indicating more emergent macroscopic dynamics) but results in a loss of localisation, where the contributions of micro-level nodes become more distributed.
We developed and outlined a complete computational method for identifying emergent dynamical structure in neural models. By modulating global coupling–pushing the system toward functional integration–and dynamical noise–pushing the system toward functional segregation–we examined how their balanced coexistence influences both the dynamical dependence and the localisation of contributions to macroscopic variables in a biophysical neural model. This approach allowed us to uncover the nuanced relationship between minimised emergence (in terms of higher DD) and maximised emergent dynamical structure (in terms of localisation) at balanced integration and segregation.
First, in a worked example, we illustrated that even in simple toy systems, achieving absolute dynamical independence is nearly impossible, as predicted by theoretical claims [30], and that the weighting of each node’s contribution rarely equals zero. It is important to note, however, that these node contributions primarily reflect the localisation of the n-macros within the original system’s state space, rather than directly indicating their emergent characteristics on the higher-order scales themselves. For instance, in a more general scenario where the network connectivity is arbitrarily rotated in a high-dimensional space, the emergent n-macros would remain unchanged (due to the transformation-invariance of the DI framework), but the node contributions could appear more dispersed and potentially random. This highlights the distinction between node contributions and the emergent properties on the higher-order scales. While this limitation means that node contributions alone may not fully capture the higher-order structure, they remain valuable for revealing how macroscopic dynamics are spatially localised within the network: an aspect of the emergent dynamical structure of the neural model. As mentioned, similarity matrices offer a potential means of capturing the structure of higher-order interactions, as shown for an implementation on a simple system. By integrating single-node analysis with other methods, we can achieve a more comprehensive understanding of the emergent dynamical structure in empirical data.
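The basis-dependence of node contributions described above can be made concrete with a small numerical sketch. Here the coarse-graining matrix `L` is an illustrative 2-macro perfectly localised on two nodes (not a matrix from the paper's model): an arbitrary orthogonal rotation of the micro coordinates leaves the macro variable intact while dispersing the per-node weights.

```python
# Sketch: node "contributions" (column norms of the coarse-graining
# matrix) are basis-dependent, while the macro variable itself is not.
import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(1)

# A 2-macro localised on nodes 4 and 5 of a 5-node system:
L = np.array([[0., 0., 0., 1., 0.],
              [0., 0., 0., 0., 1.]])

contrib = np.linalg.norm(L, axis=0)      # per-node weights
print("original:", contrib)              # localised on nodes 4 and 5

# Arbitrary orthogonal change of micro coordinates x' = Q x:
Q = ortho_group.rvs(5, random_state=rng)
L_rot = L @ Q.T                          # same macro: y = L x = L_rot x'

contrib_rot = np.linalg.norm(L_rot, axis=0)
print("rotated: ", contrib_rot)          # dispersed across all nodes
```

The total weight (Frobenius norm of `L`) is preserved under the rotation, but the localisation is destroyed, mirroring the distinction between the emergent n-macro and its spatial signature.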
Next we constructed biophysical neural models that were designed with prespecified structural connections to assess how functional integration and segregation affect the emergent dynamical organisation even with ground truth modularity of structural connections. While this setup aids in illustrating the concept, it may not fully represent real-world neural data, where n-macros do not necessarily correspond to such straightforward structure via analysis of their spatial localisation.
This is important, because in our approach coarse-graining within the DI framework is not formalised as a partitioning function that simplifies a complex high-dimensional network into a more manageable structure, as seen in other formal approaches to emergence [75]. Instead, it serves as an information-theoretic dimensionality reduction technique that captures lower-dimensional descriptions of whole-system dynamics across spatial scales. Because it does not rely on strict partitioning, this approach is well-suited to address the graded nature of contributions from individual nodes within the network. By avoiding the rigid boundaries imposed by partitioning, our method allows for a nuanced analysis of how these contributions vary, enabling us to assess whether the dynamical structure is more localised—where a few nodes dominate—or more distributed—where contributions are spread across many nodes—as the parameter values change. This flexibility is crucial in capturing the complex interplay between functional integration and segregation, and in understanding how local interactions aggregate into global dynamics. Importantly, this method also differs from other dimensionality reduction techniques, such as PCA, t-SNE, and UMAP [76], as it focuses on capturing lower-dimensional descriptions of dynamics derived from the interactions of microlevel constituents, rather than merely accounting for variance or relying on algorithmic clustering. Following this, we now discuss some key results.
Localisation of emergent n-macros: Defining the emergent dynamical structure.
Our results demonstrate that when the co-existence of functional integration and segregation are finely balanced, the emergent macroscopic dynamical structure is maximised through the localisation of single-node contributions to the emergent n-macros. Specifically, at these balanced points, the whole-system dynamics exhibit distinct patterns of single-node contributions to the emergent n-macros, rather than random or evenly distributed patterns. This maximised localisation defines the emergent dynamical structure at the balanced points of integration and segregation.
This distinct structure is evident from Figs 8 and 9, where nodes 1, 2, and 3 show negligible contributions to the emergent 2-macro, while nodes 4 and 5 contribute significantly to the emergent 2-macro at the balanced points. Conversely, for the emergent 3-macro, nodes 4 and 5 show negligible contributions, while nodes 1, 2, and 3 contribute significantly. Statistically, we do not have a definitive explanation for why node 5 does not show significant contributions in certain cases while node 4 does (see Fig 11). We suspect that there might be a hidden bias in the optimisation procedure used for the DI analysis, particularly in the case of the uncoupled network. In theory, the optimisation should identify 2-macros that are combinations of pairs of coordinate axes uniformly across all pairs. However, it is possible that the procedure is biased towards certain pairs more than others. This issue requires further investigation into the optimisation procedure across the coupled and uncoupled networks to fully understand the underlying reasons. Moreover, when moving away from the balanced points—either by increasing functional integration or segregation—the lack of distinct, localised single-node contributions to emergent n-macros leads to varied, distributed contributions from all micro-level nodes. This distributed nature induces a loss of distinct dynamical structure over the micro-level nodes that is otherwise observed at the balanced points. Interestingly, this loss of distinct structure is accompanied by relatively lower values of dynamical dependence (DD) for emergent n-macros, suggesting higher emergence of macroscopic dynamics.
While it might initially seem counterintuitive that a distinct, localised emergent dynamical structure occurs alongside lower emergence of macroscopic dynamics (as indicated by higher DD values), this observation remains consistent with the DI framework. At the balanced points, the DD is higher, indicating that the macroscopic dynamics are less emergent and more dependent on the underlying micro-level dynamics. The distinct structure does not necessarily imply that the emergent n-macros should possess a greater degree of dynamical (informational) closure. Crucially, the dynamical closure of the emergent n-macros is determined in relation to the microscopic processes alone, and not in comparison to other n-macros discovered across spatial scales.
Furthermore, lower dynamical dependence (higher emergence) does not necessarily mean that the emergent n-macro predicts itself well, i.e., that it is self-determining or autonomous (see [30]). Rather, it indicates that the macroscopic dynamics are more independent from the microscopic base. Through the same optimisation procedure, one could, in principle, reveal an entirely Gaussian, white-noise process as an n-macro that is independent of the micro-level constituents. Consequently, these 'noisy' macroscopic variables might not be dynamically relevant for the whole-system dynamical structure.
However, while such a macroscopic variable might initially seem of limited interest, identifying these white-noise macros could be useful depending on the empirical question. They can potentially be factored out, allowing researchers to focus on the core, non-trivial dynamical structures that are more informative. From the results obtained, we suggest that the lower dynamical dependence (higher emergence) values observed when increasing dynamical noise (mediating functional segregation) could be driven by the emergence of such noise-dominated macros.
Consequently, we speculate that emergent macroscopic variables accompanied by relatively higher dynamical dependence (lower emergence) could be expected at the balanced points. This suggests that for the emergent n-macros to hold any dynamical relevance within the whole-system dynamics, some degree of dependence between the macroscopic process and the microscopic process may be necessary. However, it is crucial to carefully assess the functionality and significance of the n-macros, recognising that not all identified macros may contribute meaningfully to the overall system dynamics.
Lastly, Fig 10 indicates that the parameter regime associated with the emergence of macroscopic dynamical structure is absent in an uncoupled network. This finding underscores the importance of interactions between micro-level constituents in driving the emergent macroscopic patterns of activity in neural models. In this study, we focused on the contribution levels of each node to the dynamics of the emergent n-macros, comparing these contributions with those in the uncoupled network using a Wilcoxon rank-sum test.
Beyond indices: Uncovering the dynamical structure of complexity and emergence.
Our approach informs existing intuitions about organisational complexity in both computational [77, 78] and biological [79, 80] systems. Furthermore, it explores the association between organisational structure and the optimal balance between integration and segregation [60, 81, 82]. Our methodology uniquely provides a robust operational approach to identify the dominant dynamical structures underlying global brain states, which are often indexed by measures of criticality [6, 21, 83] and neural complexity [60, 84–87].
An attempt to clarify the relationship between the integration-segregation balance and criticality has been made [25]. In general, criticality is commonly associated with systems at a phase transition, often characterised by power-law dynamics [88]. Theoretically, criticality refers to the point at which a system transitions between different phases, typically marked by scale-free properties [6]. However, the term “criticality” can be somewhat ambiguous, as it is empirically measured in various ways [6, 83], often alongside measures of complexity or signal diversity [89].
Both measures of criticality and complexity serve as indices that refer to underlying organisational structure within the system. For instance, power-law structure, such as the 1/f curve, indicates that the system exhibits scale-free organisation. Similarly, measures of neural complexity aim to index the degree of statistical structure within neurophysiological data.
While remaining agnostic to specific measures of criticality or complexity, our work varies the two parameters believed to influence both, with the aim of providing a complementary method that goes beyond empirical indices to identify the underlying dynamical structure and its level of emergence. Further, we would like to emphasise that the association between emergence, criticality, and complexity is a historical rather than a principled one, arising primarily because both take emergent phenomena as their object of inquiry. Although criticality research identifies critical regimes as its emergent phenomena of inquiry, within our framework dynamical dependence has a quite different definition, based primarily on dynamically independent subsystems (macroscopic variables). An exciting avenue for future research will be to compare our method with other criticality or neural complexity measures—which actually peak at a balanced point of integration and segregation—to explore how these measures deviate from each other in a principled and practical way. Characterising the emergent dynamical structure of global brain states in relation to these indices, using both observational and perturbed datasets, could offer significant advancements in our understanding of complexity in neuroscience.
Building on existing studies that explore the macrostates of brain activity [31, 37, 39, 90, 91], as well as those quantifying synergy between brain region pairs [49, 51, 92], our approach offers a complementary perspective by uncovering the macroscopic dynamical structure derived from microscopic interactions and quantifying its dependence on these interactions. Unlike other views of emergence, such as synergy, which captures a specific type of higher-order interaction, DI provides a framework for understanding dynamical closure at the macroscopic level, thereby identifying dynamical subsystems that might serve as independent communication subspaces for regions. This approach not only reveals emergent macroscopic variables across spatial scales but also highlights the system-wide organisation that arises from the underlying dynamics. Integrating these tools to form a comprehensive understanding of emergence in the brain represents an intriguing direction for future research. Further research will need to expand the application of our framework to larger artificial networks or neurophysiological data.
Bridging emergence, coarse-graining, dynamical closure, and dimensionality reduction.
This work contributes to the broader effort to (i) quantify and detect coarse-grained macroscopic dynamics, and (ii) provide a precise methodology for further application to large-scale neurophysiological data. In particular, we consider the relation of our approach to effective information-based measures of causal emergence [46, 75], informational closure [63, 64, 93], and common dimensionality reduction techniques [67, 76].
First, we examine the distinction between coarse-graining in the DI framework and coarse-graining within the context of effective information-based causal emergence [46, 75, 94, 95]. In causal emergence, coarse-graining involves recasting a complex network into non-overlapping partitions using a hard-partitioning function and evaluating the effective information of the resulting higher-order network. Effective information is measured by balancing the average degeneracy and determinism within the network. A partitioned graph that exhibits higher effective information than the original is considered causally emergent. This approach can be applied even without interventionist methods of causality [96, 97], extending across various bidirectional networks [75, 98].
In contrast, coarse-graining within the DI framework focuses on the degree of informational or causal closure of a macroscopic variable, aligning more closely with the principles of statistical mechanics. Instead of partitioning the network into sub-networks, this approach involves partitioning the microscopic state-space (which partitions the dynamics, not the nodes) into subsets indexed by the macroscopic states Y. This method captures how individual microlevel nodes contribute to these macroscopic variables, aligning much more closely with statistical-mechanics descriptions of macroscopic (ensemble) properties emerging from microscopic interactions [8, 99]. DI serves as a dimensionality-reduction technique that reveals low-dimensional dynamics as self-contained systems, without transforming the network into hierarchical structures. Given its ability to capture the distributed nature of macroscopic variables and their degree of dependence on the microlevel, DI aligns more closely with a heterarchical perspective of dynamical structure [100].
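The state-space (rather than node) partitioning described above can be sketched for the linear case. Here a hypothetical 1-macro `y = L x` over a 5-node system groups micro states into level sets: distinct micro states mapping to the same macro value belong to the same partition cell. The matrix and states are illustrative only.

```python
# Sketch: a linear coarse-graining y = L x partitions the micro
# state-space into level sets {x : L x = y}, one per macro state y,
# rather than partitioning the nodes themselves.
import numpy as np

L = np.array([[0.0, 0.0, 0.0, 0.7, 0.7]])   # a 1-macro over 5 micro nodes

# Two distinct micro states lying in the same level set (same macro state):
x1 = np.array([1.0, -2.0, 0.5, 1.0, 0.0])
x2 = np.array([-3.0, 4.0, 9.9, 0.0, 1.0])
print(L @ x1, L @ x2)                        # identical macro value

# Per-node contribution to the macro variable (column magnitudes of L):
print(np.abs(L).sum(axis=0))                 # localised on nodes 4 and 5
```

The graded column magnitudes of `L` are what allow the analysis of localised versus distributed contributions, without imposing hard node boundaries.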
In support of a dynamical closure perspective on the emergence of macroscopic dynamics in the brain, it is important to recognise that a core principle of brain organisation is its function as a highly distributed information system [102–104], where local (microlevel) functional units integrate to generate macrolevel dynamics. These dynamics are not fixed but fluctuate over time, involving the transient recruitment of various microlevel regions [104]. This suggests that the brain’s global dynamics are driven by overlapping compositions of regions, without clear distinction between the microlevel parts and their relation to the whole. Regions within the brain are dynamically organised in response to functional and computational demands, reflecting the adaptive and non-static nature of brain dynamics across spatial scales. Although we worked on a 5-node simulated network in which these challenges are not present, our approach can adapt to the challenges of larger or real brain networks because it emphasises dynamical closure, focuses on the emergent dynamical structure, and examines to what degree microlevel nodes contribute to macroscopic dynamics—allowing for us to assess the degree of localised or distributed activity.
DI distinguishes itself from other formal approaches to informational closure, such as those by [105] and [64], by extending the concept beyond absolute informational closure (perfect dynamical independence) and accommodating non-Markovian dynamics. This allows DI to capture the nuanced interdependence between macroscopic and microscopic processes.
While [93] explore informational closure as a framework for conscious processing, our approach remains neutral on such interpretations. Additionally, unlike Chang and colleagues’ consideration of direct information flow between macroscopic and environmental variables, our framework imposes a supervenience condition (See Eq 2), ensuring that no new information emerges at the macroscopic level beyond what is determined by the microscopic processes. This means that the microscopic fully dictates the information shared with the macroscopic dynamics.
Finally, although DI results in dimensionality reduction, it offers distinct advantages over commonly used techniques such as Principal Components Analysis (PCA) and t-distributed Stochastic Neighbour Embedding (t-SNE). PCA identifies components of maximum variance, and t-SNE preserves local relationships between data points [67, 76, 106]. However, neither method explicitly accounts for the time-dependence of neural activity to capture low-dimensional dynamics. In contrast, DI operates as an information-theoretic dimensionality reduction technique, directly shaped by microlevel interactions through the temporal structure of the time series.
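The contrast with PCA can be demonstrated numerically: PCA depends only on the static covariance of the data, so shuffling the time axis leaves its components unchanged, whereas any dynamics-aware quantity (such as the lagged dependencies DI is built on) is destroyed. The synthetic AR(1) data below are purely illustrative.

```python
# Sketch: PCA is blind to temporal order; lagged statistics are not.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 5-channel AR(1) time series:
T, n = 2000, 5
X = np.zeros((T, n))
for t in range(1, T):
    X[t] = 0.9 * X[t - 1] + rng.normal(size=n)

def pca_components(data):
    data = data - data.mean(axis=0)
    _, _, Vt = np.linalg.svd(data, full_matrices=False)
    return Vt

X_shuffled = X[rng.permutation(T)]

# Same principal axes (up to sign), despite destroyed temporal order:
V1, V2 = pca_components(X), pca_components(X_shuffled)
print(np.allclose(np.abs(V1), np.abs(V2)))     # True

# But the lag-1 autocovariance, on which dynamical measures rely, differs:
acov = lambda d: np.mean(d[1:] * d[:-1], axis=0)
print(np.allclose(acov(X), acov(X_shuffled)))  # False
```

This is the sense in which DI, unlike PCA or t-SNE, is tied to the temporal structure of the time series rather than to its static geometry.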
Comparison with synergy-based causal emergence.
A separate but related flavour of causal emergence—utilising a Partial Information Decomposition (PID) framework [85]—captures emergence based on synergistic capacity [47]. Specifically, Rosas et al. propose a measure of emergence defined through the synergistic component of interactions within multivariate systems, explicitly quantifying causal emergence in terms of synergy within the PID framework. This approach has gained increasing attention among practitioners, demonstrated by recent applications to neural data [49, 51, 101].
Like Dynamical Independence (DI), causal emergence conceptualises emergence as a fundamentally multivariate phenomenon, and measures it on a continuous scale. Note that within our DI framework, zero dynamical dependence (DD = 0) corresponds to perfect dynamical independence or perfect emergence, and increasing dynamical dependence values indicate reduced emergence.
The principal difference between DI and causal emergence stems from their distinct theoretical foundations rather than from specific methodological choices. While causal emergence is fundamentally mereological, emphasising part-to-whole relationships, DI is based on informational or dynamical closure, highlighting the degree of independence of emergent dynamical subsystems from the micro-level dynamics. This foundational distinction leads to contrasting operationalisations: causal emergence evaluates how macroscopic variables improve predictions of micro-level processes (macro-to-micro), whereas DI identifies macroscopic variables that minimise predictive dependence on micro-level processes (micro-to-macro).
To summarise, causal emergence measures emergence by the gain in synergy, with higher values indicating more emergence, whereas DI is operationalised through dynamical dependence (closure), with lower values corresponding to greater emergence. This inverse relationship underlines the deliberate analogy between dynamical independence and statistical independence emphasised in Methods Section Statistical significance of dynamical dependence.
Limitations and open questions.
DI, in its current formulation, is based on Granger causality. Although Granger causality is well-defined for non-stationary processes, it is notoriously difficult to estimate in these cases, and as such non-stationarity is not considered here. We therefore assume the wide-sense stationarity of neurophysiological data, and so may not capture physiologically-relevant non-stationarities in the time series that are strongly correlated with specific brain states. Examples include brief neural oscillations (e.g., sleep spindles), waves (e.g., K-complexes or epileptic activity), and transient responses to external or internal stimuli. However, linear approximations have been argued to optimally capture macroscopic neural dynamics [107], particularly in resting-state activity.
Furthermore, fitting a VAR or SS model for G-causality estimation assumes a linear model rather than a linear process, which is a subtle but important distinction. While the model assumes linearity in its structure, this does not necessarily mean that the underlying time series must be linear. The presence of non-linearities in the data does not inherently imply that the fitted model will be unstable (see [44] for an in-depth discussion); rather, it suggests that the model may not fully capture the effects of those non-linearities. In fact, if the SS or VAR model is stable, then Granger causality inference and DI analysis remain well-defined.
A more precise understanding is that the model, if stable, induces a linear representation of the underlying data. The key issue then becomes whether this induced linear model can sufficiently account for the non-linearities present in the time series. Wold’s decomposition theorem [108] guarantees that any stationary process can be decomposed into a linear model, although this model may require an infinite order, making it potentially non-parsimonious for time series generated by a nonlinear process [109]. Thus, the impact of nonlinearity on G-causality estimation is nuanced and complex. The critical question remains whether the linear model, when applied to nonlinear data, provides a sufficiently accurate representation of the underlying dynamics to make valid inferences. While some research suggests advantages of linear models [107], this remains an important area for future research.
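The point about fitting a linear model to nonlinear data can be illustrated with a minimal sketch: fit a VAR(1) by least squares to data from a nonlinear (tanh-coupled) process and check the fitted model's stability via its spectral radius. The generator and parameters are our own illustrative choices, not the paper's neural model or G-causality pipeline.

```python
# Sketch: a stable linear VAR fit to nonlinear data still yields a
# well-defined linear representation for G-causality-style inference.
import numpy as np

rng = np.random.default_rng(3)

# Nonlinear (tanh-coupled) 3-node process:
T, n = 5000, 3
A_true = 0.5 * rng.standard_normal((n, n))
X = np.zeros((T, n))
for t in range(1, T):
    X[t] = np.tanh(X[t - 1] @ A_true.T) + 0.1 * rng.standard_normal(n)

# Least-squares VAR(1) fit: X[t] ~ A_hat @ X[t-1]
B, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = B.T

# Stability check: eigenvalues of A_hat inside the unit circle.
rho = np.max(np.abs(np.linalg.eigvals(A_hat)))
print(f"spectral radius = {rho:.3f}")  # if < 1, the fitted model is stable
```

Whether this induced linear model adequately represents the nonlinear dynamics is exactly the open question raised above; the sketch shows only that the fit and stability check themselves are straightforward.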
One limitation of our study is that the neural networks we employed are not functionally specialised–they are not designed to perform specific tasks or processes. In biological brain networks, functional specialisation is a fundamental characteristic that influences how integration and segregation manifest in neural dynamics. Therefore, it is possible that functionally specialised systems might exhibit different patterns of dynamical dependence and emergent dynamical structures. To address this, future research could explore simulations of networks with functional specialisation, perhaps by incorporating task-specific modules or connectivity patterns. Testing our methods in such simulated environments would help determine whether the observed results generalise to systems that more closely resemble the functional organisation of the brain.
Our study employs a relatively small 5-node neural model, which allows for detailed exploration of emergent n-macros. This work in part serves to motivate future applications of these methods to larger, functionally specialised systems. In ongoing work, we have begun to extend our techniques to whole-brain EEG data, performing optimisation on 46-region source-reconstructed EEG data from 14 subjects under anaesthetic conditions, requiring approximately two hours of computation time per subject. Furthermore, by employing the CPSD approach instead of the state-space modelling approach (see S1 Appendix of the Supporting Information) on 68-region source-reconstructed EEG data spanning a full sleep cycle (approximately 6–8 hours), we achieved 100 optimisations for each macroscopic scale (ranging from 2 to 10) per participant in about 90–120 minutes. These computations were conducted on the SPARTAN HPC cluster, with participants processed in serial order and macroscopic variables handled in parallel. This progression not only addresses the computational challenges associated with larger models but also brings us closer to understanding emergent dynamical structures in more realistic neural systems.
Ultimately, the dynamical relevance and implications of emergent n-macros for the systems under study remain an open empirical question. It is plausible to consider that emergent n-macros with many equally implicated contributions from microlevel nodes could function as low-dimensional communication subspaces through which higher-order interactions are mediated [110]. These n-macros, by capturing distributed contributions across the network, may represent the channels through which complex, coordinated dynamics occur at a macroscopic level. With DI we can establish the degree of localisability or distribution of these communication subspaces (n-macros). This concept of communication subspaces presents a compelling direction for future research.
Conclusion
This work provides methodological, empirical and theoretical contributions to the exploration of emergent dynamics in complex neural systems. From a methodological and empirical perspective, we demonstrate that the balance between functional integration and segregation significantly influences the emergent dynamical structure across macroscopic spatial scales in biophysical neural models. Our results revealed that a distinct dynamical structure is identified by the spatial localisation of n-macros at a balanced point between integration and segregation, where specific microlevel nodes predominantly contribute to specific emergent n-macros. In contrast, this organised structure becomes less localised and more distributed in parameter regimes marked by either excessive integration or segregation. These results illustrate the heuristic power our approach may have when applied to biological systems, in identifying the brain structures participating in emergent dynamics.
From a theoretical perspective, this work contributes to the broader agenda of moving beyond indexical measures of complexity and emergence to identifying the underlying structure of dynamics that underpin quantities like these. Progress in identifying dominant emergent macroscopic patterns has implications for defining stable global brain states and understanding how the brain organises itself to meet the computational demands of whole-brain function. The findings here represent a promising step towards leveraging DI for empirical investigations into dynamical properties that go beyond traditional complexity indices, offering a more qualitative perspective of brain organisation.
Acknowledgments
We would like to thank the Melbourne School of Psychological Sciences at the University of Melbourne, the Sussex Centre for Consciousness Science at the University of Sussex, and the Paris Brain Institute (ICM) for providing the necessary resources and institutional support. We also extend our gratitude to lab members and colleagues for their insightful discussions and assistance over the drafting of the manuscript.
We thank Adam Barrett, Marcello Massimini, Fernando Rosas, Pedro Mediano, and Aniko Kusztor for invaluable discussions, encouragement, and ongoing support throughout the research process.
Supporting information
S1 Appendix. Granger causality
Granger causality is explained in relation to transfer entropy, with details on least-squares prediction, statistical testing, and computational methods. Alternative estimation techniques and software implementation are also discussed.
https://doi.org/10.1371/journal.pcbi.1012572.s001
(PDF)
S2 Appendix. Linear state-space modelling
This appendix introduces state-space models as an alternative to VAR models for Granger causality estimation, highlighting their advantages in handling moving average components and providing computational methods for implementation.
https://doi.org/10.1371/journal.pcbi.1012572.s002
(PDF)
S3 Appendix. Minimisation of dynamical dependence by gradient descent
A gradient descent method for minimising dynamical dependence is presented, including a proxy function for efficiency and an adaptive step-size algorithm. Alternative optimisation strategies are also briefly discussed.
https://doi.org/10.1371/journal.pcbi.1012572.s003
(PDF)
S4 Appendix. Global brain network dynamics
The evolution equation governing whole-brain models is outlined, incorporating structural connectivity and time delays. The Stefanescu-Jirsa 3D neural mass model is introduced with its mathematical formulation and role in macroscopic dynamics.
https://doi.org/10.1371/journal.pcbi.1012572.s004
(PDF)
S5 Appendix. Principal angles and single-node contribution to macroscopic dynamics
Principal angles are used to quantify similarity between subspaces and identify emergent macroscopic structures. The contribution of individual and grouped microlevel nodes to macroscopic dynamics is analysed, with implications for understanding emergent structures in brain activity.
https://doi.org/10.1371/journal.pcbi.1012572.s005
(PDF)
S6 Appendix. Glossary
A clear summary of technical terms used within the paper.
https://doi.org/10.1371/journal.pcbi.1012572.s006
(PDF)
References
- 1. Tononi G, Sporns O, Edelman GM. A measure for brain complexity: relating functional segregation and integration in the nervous system. Proc Natl Acad Sci USA. 1994;91(11):5033–37. pmid:8197179
- 2. Deco G, Jirsa VK, Robinson PA, Breakspear M, Friston K. The dynamic brain: from spiking neurons to neural masses and cortical fields. PLoS Comput Biol. 2008;4(8):e1000092. pmid:18769680
- 3. Friston K, Breakspear M, Deco G. Perception and self-organized instability. Front Comput Neurosci. 2012;6:44. pmid:22783185
- 4. Deco G, Kringelbach ML, Jirsa VK, Ritter P. The dynamics of resting fluctuations in the brain: metastability and its dynamical cortical core. Sci Rep. 2017;7(1):3095. pmid:28596608
- 5. Sanz Leon P, Knock SA, Woodman MM, Domide L, Mersmann J, McIntosh AR, et al. The Virtual Brain: a simulator of primate brain network dynamics. Front Neuroinform. 2013;7:10. pmid:23781198
- 6. Cocchi L, Gollo LL, Zalesky A, Breakspear M. Criticality in the brain: A synthesis of neurobiology, models and cognition. Prog Neurobiol. 2017;158:132–52. pmid:28734836
- 7. Honey CJ, Kötter R, Breakspear M, Sporns O. Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proc Natl Acad Sci. 2007;104(24):10240–45.
- 8. Deco G, Jirsa VK, Robinson PA, Breakspear M, Friston K. The dynamic brain: from spiking neurons to neural masses and cortical fields. PLoS Comput Biol. 2008;4(8):e1000092. pmid:18769680
- 9. Sanz-Leon P, Knock SA, Spiegler A, Jirsa VK. Mathematical framework for large-scale brain network modeling in The Virtual Brain. NeuroImage. 2015;111:385–430. pmid:25592995
- 10. Ay N. Information geometry on complexity and stochastic interaction. Entropy. 2015;17(4):2432–58.
- 11. Bar-Yam Y. Dynamics of complex systems. Boca Raton: CRC Press; 2019.
- 12. Oizumi M, Amari S, Yanagawa T, Fujii N, Tsuchiya N. Measuring integrated information from the decoding perspective. PLoS Comput Biol. 2016;12(1):e1004654. pmid:26796119
- 13. Leung A, Cohen D, van Swinderen B, Tsuchiya N. Integrated information structure collapses with anesthetic loss of conscious arousal in Drosophila melanogaster. PLoS Comput Biol. 2021;17(2):e1008722. pmid:33635858
- 14. Schirner M, Kong X, Yeo BTT, Deco G, Ritter P. Dynamic primitives of brain network interaction. NeuroImage. 2022;250:118928. pmid:35101596
- 15. Shalizi C, Crutchfield J. Computational mechanics: Pattern and prediction, structure and simplicity. J Stat Phys. 2001;104:817–79.
- 16. Shalizi C, Moore C. What is a macrostate? Subjective observations and objective dynamics. arXiv preprint. arXiv:cond-mat/0303625. 2003.
- 17. Tononi G, Edelman GM. Consciousness and complexity. Science. 1998;282(5395):1846–51. pmid:9836628
- 18. Seth AK, Izhikevich E, Reeke GN, Edelman GM. Theories and measures of consciousness: an extended framework. Proc Natl Acad Sci USA. 2006;103(28):10799–804. pmid:16818879
- 19. Seth AK, Barrett AB, Barnett L. Causal density and integrated information as measures of conscious level. Philos Trans A Math Phys Eng Sci. 2011;369(1952):3748–67. pmid:21893526
- 20. Arsiwalla XD, Verschure P. Measuring the complexity of consciousness. Front Neurosci. 2018;12:424. pmid:29997472
- 21. Colombo MA, Napolitani M, Boly M, Gosseries O, Casarotto S, Rosanova M, et al. The spectral exponent of the resting EEG indexes the presence of consciousness during unresponsiveness induced by propofol, xenon, and ketamine. NeuroImage. 2019;189:631–44. pmid:30639334
- 22. Dürschmid S, Reichert C, Walter N, Hinrichs H, Heinze H-J, Ohl FW, et al. Self-regulated critical brain dynamics originate from high frequency-band activity in the MEG. PLoS One. 2020;15(6):e0233589. pmid:32525940
- 23. Sarasso S, Casali AG, Casarotto S, Rosanova M, Sinigaglia C, Massimini M. Consciousness and complexity: a consilience of evidence. Neurosci Conscious. 2021;2021(2):1–24. pmid:38496724
- 24. Kim H, Lee U. Criticality as a determinant of integrated information 𝜙 in human brain networks. Entropy. 2019;21(10):981.
- 25. Mediano P, Farah J, Shanahan M. Integrated information and metastability in systems of coupled oscillators. arXiv Preprint.
- 26. Balduzzi D, Tononi G. Integrated information in discrete dynamical systems: motivation and theoretical framework. PLoS Comput Biol. 2008;4(6):e1000091. pmid:18551165
- 27. Barrett AB, Seth AK. Practical measures of integrated information for time-series data. PLoS Comput Biol. 2011;7(1):e1001052. pmid:21283779
- 28. Mediano PA, Seth AK, Barrett AB. Measuring integrated information: Comparison of candidate measures in theory and simulation. Entropy. 2018;21(1):17. pmid:33266733
- 29. Mediano PA, Rosas FE, Farah JC, Shanahan M, Bor D, Barrett AB. Integrated information as a common signature of dynamical and information-processing complexity. Chaos. 2022;32(1):013115. pmid:35105139
- 30. Barnett L, Seth AK. Dynamical independence: discovering emergent macroscopic processes in complex dynamical systems. Phys Rev E. 2023;108(1):014304. pmid:37583178
- 31. Tagliazucchi E, von Wegner F, Morzelewski A, Brodbeck V, Jahnke K, Laufs H. Breakdown of long-range temporal dependence in default mode and attention networks during deep sleep. Proc Natl Acad Sci USA. 2013;110(38):15419–24. pmid:24003146
- 32. Demertzi A, Tagliazucchi E, Dehaene S, Deco G, Barttfeld P, Raimondo F, et al. Human consciousness is supported by dynamic complex patterns of brain signal coordination. Sci Adv. 2019;5(2):eaat7603. pmid:30775433
- 33. Luppi AI, Mediano PAM, Rosas FE, Allanson J, Pickard JD, Williams GB, et al. Paths to oblivion: Common neural mechanisms of anaesthesia and disorders of consciousness. bioRxiv. 2021. 2021.02.14.431140.
- 34. Nadin D, Duclos C, Mahdid Y, Rokos A, Badawy M, Létourneau J, et al. Brain network motif topography may predict emergence from disorders of consciousness: a case series. Neurosci Conscious. 2020;2020(1):niaa017. pmid:33376599
- 35. Tognoli E, Kelso JS. The metastable brain. Neuron. 2014;81(1):35–48. pmid:24411730
- 36. Shanahan M. Metastable chimera states in community-structured oscillator networks. Chaos. 2010;20(1):013108. pmid:20370263
- 37. Pang JC, Aquino KM, Oldehinkel M, Robinson PA, Fulcher BD, Breakspear M, et al. Geometric constraints on human brain function. Nature. 2023;618(7965):566–74. pmid:37258669
- 38. Cabral J, Fernandes FF, Shemesh N. Intrinsic macroscale oscillatory modes driving long range functional connectivity in female rat brains detected by ultrafast fMRI. Nat Commun. 2023;14(1):375. pmid:36746938
- 39. Koussis NC, Pang JC, Jeganathan J, Paton B, Fornito A, Robinson P, et al. Generation of surrogate brain maps preserving spatial autocorrelation through random rotation of geometric eigenmodes. bioRxiv. 2024.
- 40. Bar‐Yam Y. A mathematical theory of strong emergence using multiscale variety. Complexity. 2004;9(6):15–24.
- 41. Cover TM, Thomas JA. Elements of information theory. Wiley series in telecommunications and signal processing. New Jersey: Wiley; 2006.
- 42. Lizier JT, Prokopenko M, Zomaya AY. A framework for the local information dynamics of distributed computation in complex systems. In: Guided self-organization: inception. Berlin: Springer; 2014. p. 115–158.
- 43. Barnett L, Seth AK. The MVGC multivariate Granger causality toolbox: a new approach to Granger-causal inference. J Neurosci Methods. 2014;223:50–68. pmid:24200508
- 44. Barnett L, Seth AK. Granger causality for state-space models. Phys Rev E. 2015;91(4):040101. pmid:25974424
- 45. Bossomaier T, Barnett L, Harré M, Lizier JT. Transfer entropy. Berlin: Springer; 2016.
- 46. Hoel EP, Albantakis L, Tononi G. Quantifying causal emergence shows that macro can beat micro. Proc Natl Acad Sci. 2013;110(49):19790–95.
- 47. Rosas FE, Mediano PA, Jensen HJ, Seth AK, Barrett AB, Carhart-Harris RL, et al. Reconciling emergences: An information-theoretic approach to identify causal emergence in multivariate data. PLoS Comput Biol. 2020;16(12):e1008289. pmid:33347467
- 48. Luppi AI, Stamatakis EA. Combining network topology and information theory to construct representative brain networks. Netw Neurosci. 2021;5(1):96–124. pmid:33688608
- 49. Luppi AI, Mediano PA, Rosas FE, Holland N, Fryer TD, O’Brien JT, et al. A synergistic core for human brain evolution and cognition. Nat Neurosci. 2022;25(6):771–82. pmid:35618951
- 50. Rosas FE, Mediano PA, Luppi AI, Varley TF, Lizier JT, Stramaglia S, et al. Disentangling high-order mechanisms and high-order behaviours in complex systems. Nat Phys. 2022;18(5):476–77.
- 51. Varley TF, Pope M, Faskowitz J, Sporns O. Multivariate information theory uncovers synergistic subsystems of the human cerebral cortex. Commun Biol. 2023;6(1):451. pmid:37095282
- 52. Zhang Z, Ghavasieh A, Zhang J, De Domenico M. Coarse-graining network flow through statistical physics and machine learning. Nat Commun. 2025;16(1):1605. pmid:39948344
- 53. Stefanescu RA, Jirsa VK. A low dimensional description of globally coupled heterogeneous neural networks of excitatory and inhibitory neurons. PLoS Comput Biol. 2008;4(11):e1000219. pmid:19008942
- 54. Stefanescu R, Jirsa V. Reduced representations of heterogeneous mixed neural networks with synaptic coupling. Phys Rev E Stat Nonlin Soft Matter Phys. 2011;83(2 Pt 2):026204.
- 55. Aru J, Suzuki M, Larkum ME. Cellular mechanisms of conscious processing. Trends Cogn Sci. 2020;24(10):814–25. pmid:32855048
- 56. Munn BR, Müller EJ, Aru J, Whyte CJ, Gidon A, Larkum ME, et al. A thalamocortical substrate for integrated information via critical synchronous bursting. Proc Natl Acad Sci USA. 2023;120(46):e2308670120. pmid:37939085
- 57. Ghosh A, Rho Y, McIntosh AR, Kötter R, Jirsa VK. Noise during rest enables the exploration of the brain’s dynamic repertoire. PLOS Comput Biol. 2008;4(10):e1000196. pmid:18846206
- 58. Honey CJ, Kötter R, Breakspear M, Sporns O. Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proc Natl Acad Sci USA. 2007;104(24):10240–45. pmid:17548818
- 59. Deco G, Jirsa V, McIntosh A, Sporns O, Kötter R. Key role of coupling, delay, and noise in resting brain fluctuations. Proc Natl Acad Sci USA. 2009;106(25):10302–07.
- 60. Sporns O, Tononi G, Kötter R. The human connectome: A structural description of the human brain. PLoS Comput Biol. 2005;1(4):e42.
- 61. Kaiser A, Schreiber T. Information transfer in continuous processes. Phys D Nonlinear Phenomena. 2002;166(1–2):43–62.
- 62. Bossomaier T, Barnett L, Harré M, Lizier JT. An introduction to transfer entropy: Information flow in complex systems. Cham: Springer; 2016. https://doi.org/10.1007/978-3-319-43222-9
- 63. Bertschinger N, Olbrich E, Ay N, Jost J. Information and closure in systems theory. In: Explorations in the complexity of possible life. Proceedings of the 7th German workshop of artificial life. Amsterdam, The Netherlands: IOS Press; 2006. p. 9–21.
- 64. Pfante O, Olbrich E, Bertschinger N, Ay N, Jost J. Closure measures for coarse-graining of the tent map. Chaos. 2014;24(1):013136. pmid:24697398
- 65. Absil P-A, Mahony R, Sepulchre R. Riemannian geometry of Grassmann manifolds with a view on algorithmic computation. Acta Applicandae Mathematicae. 2004;80(2):199–220.
- 66. Bendokat T, Zimmermann R, Absil P-A. A Grassmann manifold handbook: Basic geometry and computational aspects. arXiv preprint. arXiv:2011.13699. 2020.
- 67. Cunningham JP, Ghahramani Z. Linear dimensionality reduction: Survey, insights, and generalizations. J Mach Learn Res. 2015;16(1):2859–900.
- 68. Edelman A, Arias TA, Smith ST. The geometry of algorithms with orthogonality constraints. SIAM J Matrix Anal Appl. 1998;20(2):303–53.
- 69. Mathews VJ, Xie Z. A stochastic gradient adaptive filter with gradient adaptive step size. IEEE Trans Signal Process. 1993;41(6):2075–87.
- 70. Ritter P, Schirner M, McIntosh AR, Jirsa VK. The virtual brain integrates computational modeling and multimodal neuroimaging. Brain Connect. 2013;3(2):121. pmid:23442172
- 71. Assisi C, Jirsa V, Kelso J. Synchrony and clustering in heterogeneous networks with global coupling and parameter dispersion. Phys Rev Lett. 2005;94(1):018106. pmid:15698140
- 72. Shen K, Bezgin G, Schirner M, Ritter P, Everling S, McIntosh AR. A macaque connectome for large-scale network simulations in TheVirtualBrain. Sci Data. 2019;6(1):1–12. pmid:31316116
- 73. Ghosh A, Rho Y, McIntosh AR, Kötter R, Jirsa VK. Noise during rest enables the exploration of the brain’s dynamic repertoire. PLOS Comput Biol. 2008;4(10):e1000196. pmid:18846206
- 74. Falcon MI, Riley JD, Jirsa V, McIntosh AR, Shereen AD, Chen E, et al. The virtual brain: Modeling biological correlates of recovery after chronic stroke. Front Neurol. 2015;6:2. pmid:26579071
- 75. Klein B, Hoel E. The emergence of informative higher scales in complex networks. Complexity. 2020;2020:1–12.
- 76. Cunningham JP, Yu BM. Dimensionality reduction for large-scale neural recordings. Nat Neurosci. 2014;17(11):1500–09. pmid:25151264
- 77. Crutchfield JP. Between order and chaos. Nat Phys. 2012;8(1):17–24.
- 78. Shalizi CR, Moore C. What is a macrostate? Subjective observations and objective dynamics. arXiv Preprint. 2003. Available at: https://arxiv.org/abs/cond-mat/0303625
- 79. Moreno A, Mossio M. Biological autonomy: A philosophical and theoretical enquiry. Dordrecht: Springer; 2015.
- 80. Montévil M, Mossio M. Biological organisation as closure of constraints. J Theor Biol. 2015;372:179–91. pmid:25752259
- 81. Crutchfield JP. The calculi of emergence: computation, dynamics and induction. Phys D Nonlinear Phenomena. 1994;75(1–3):11–54.
- 82. Tononi G, Edelman GM, Sporns O. Complexity and coherency: integrating information in the brain. Trends Cogn Sci. 1998;2(12):474–84. pmid:21227298
- 83. Maschke C, Duclos C, Owen AM, Jerbi K, Blain-Moraes S. Aperiodic brain activity and response to anesthesia vary in disorders of consciousness. NeuroImage. 2023;275:120154. pmid:37209758
- 84. Balduzzi D, Tononi G. Integrated information in discrete dynamical systems: motivation and theoretical framework. PLoS Comput Biol. 2008;4(6):e1000091. pmid:18551165
- 85. Mediano P, Rosas F, Carhart-Harris R, Seth A, Barrett A. Beyond integrated information: A taxonomy of information dynamics phenomena. arXiv preprint. arXiv:1909.02297. 2019.
- 86. Schartner M, Seth A, Noirhomme Q, Boly M, Bruno M-A, Laureys S, et al. Complexity of multi-dimensional spontaneous EEG decreases during propofol-induced general anaesthesia. PLoS One. 2015;10(8):e0133532. pmid:26252378
- 87. Schartner MM, Pigorini A, Gibbs SA, Arnulfo G, Sarasso S, Barnett L, et al. Global and local complexity of intracranial EEG decreases during NREM sleep. Neurosci Conscious. 2017;2017(1):1–12. pmid:30042832
- 88. Timme NM, Marshall NJ, Bennett N, Ripp M, Lautzenhiser E, Beggs JM. Criticality maximizes complexity in neural tissue. Front Physiol. 2016;7:425. pmid:27729870
- 89. Christensen K, Moloney NR. Complexity and criticality. Vol. 1. World Scientific Publishing Company; 2005.
- 90. Michel CM, Koenig T. EEG microstates as a tool for studying the temporal dynamics of whole-brain neuronal networks: a review. Neuroimage. 2018;180:577–93. pmid:29196270
- 91. Cabral J, Castaldo F, Vohryzek J, Litvak V, Bick C, Lambiotte R, et al. Metastable oscillatory modes emerge from synchronization in the brain spacetime connectome. Commun Phys. 2022;5(1):184. pmid:38288392
- 92. Luppi AI, Mediano PA, Rosas FE, Harrison DJ, Carhart-Harris RL, Bor D, et al. What it is like to be a bit: an integrated information decomposition account of emergent mental phenomena. Neurosci Conscious. 2021;2021(2):niab027. pmid:34804593
- 93. Chang AY, Biehl M, Yu Y, Kanai R. Information closure theory of consciousness. Front Psychol. 2020;11:505035. pmid:32760320
- 94. Hoel EP, Albantakis L, Marshall W, Tononi G. Can the macro beat the micro? Integrated information across spatiotemporal scales. Neurosci Conscious. 2016;2016(1):niw012. pmid:30788150
- 95. Griebenow R, Klein B, Hoel E. Finding the right scale of a network: efficient identification of causal emergence through spectral clustering. arXiv preprint. arXiv:1908.07565. 2019.
- 96. Pearl J. Causal inference in statistics: An overview. Stat Surv. 2009;3:96–146.
- 97. Pearl J. Trygve Haavelmo and the emergence of causal calculus. 2015.
- 98. Klein B, Hoel E, Swain A, Griebenow R, Levin M. Evolution and emergence: higher order information structure in protein interactomes across the tree of life. Integr Biol. 2021. pmid:34933345
- 99. Jaynes ET. Information theory and statistical mechanics. Phys Rev. 1957;106(4):620.
- 100. McCulloch WS. A heterarchy of values determined by the topology of nervous nets. Bull Math Biophys. 1945;7:89–93. pmid:21006853
- 101. Faes L, Mijatovic G, Sparacino L, Porta A. Predictive information decomposition as a tool to quantify emergent dynamical behaviors in physiological networks. arXiv preprint. arXiv:2502.00945. 2025.
- 102. Wibral M, Lizier JT, Priesemann V. Bits from brains for biologically inspired computing. Front Robot AI. 2015;2.
- 103. Buzsáki G. The brain from inside out. Oxford: Oxford University Press; 2019.
- 104. Pessoa L, Medina L, Desfilis E. Refocusing neuroscience: moving away from mental categories and towards complex behaviours. Philos Trans R Soc Lond B Biol Sci. 2022;377(1844):20200534. pmid:34957851
- 105. Bertschinger N, Olbrich E, Ay N, Jost J. Autonomy: an information theoretic perspective. Biosystems. 2007.
- 106. Humphries MD. Strong and weak principles of neural dimension reduction. arXiv preprint. 2020. https://arxiv.org/abs/2011.08088
- 107. Nozari E, Bertolero MA, Stiso J, Caciagli L, Cornblath EJ, He X, et al. Is the brain macroscopically linear? A system identification of resting state dynamics. bioRxiv. 2020. 2020.12.21.423856. https://doi.org/10.1101/2020.12.21.423856
- 108. Wold H. A study in the analysis of stationary time series. Uppsala: Almqvist & Wiksell; 1938.
- 109. Hannan EJ, Deistler M. The statistical theory of linear systems. SIAM. 2012.
- 110. Semedo JD, Zandvakili A, Machens CK, Byron MY, Kohn A. Cortical areas interact through a communication subspace. Neuron. 2019;102(1):249–59. pmid:30770252