
External drivers of BOLD signal’s non-stationarity

  • Arian Ashourvan ,

    Roles Conceptualization, Formal analysis, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing

    ashourvan@ku.edu

    Affiliation Department of Psychology, University of Kansas, Lawrence, KS, United States of America

  • Sérgio Pequito,

    Roles Conceptualization, Investigation, Methodology, Software, Writing – original draft, Writing – review & editing

    Affiliation Delft Center for Systems and Control, Delft University of Technology, Delft, Netherlands

  • Maxwell Bertolero,

    Roles Data curation, Writing – original draft

    Affiliation Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States of America

  • Jason Z. Kim,

    Roles Conceptualization, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States of America

  • Danielle S. Bassett,

    Roles Funding acquisition, Investigation, Methodology, Supervision, Writing – original draft, Writing – review & editing

    Affiliations Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States of America, Penn Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA, United States of America, Department of Neurology, Hospital of the University of Pennsylvania, Philadelphia, PA, United States of America, Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America, Department of Electrical & Systems Engineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States of America, Department of Physics & Astronomy, College of Arts and Sciences, University of Pennsylvania, Philadelphia, PA, United States of America

  • Brian Litt

    Roles Conceptualization, Funding acquisition, Investigation, Supervision, Writing – original draft, Writing – review & editing

    Affiliations Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States of America, Penn Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA, United States of America, Department of Neurology, Hospital of the University of Pennsylvania, Philadelphia, PA, United States of America

Abstract

A fundamental challenge in neuroscience is to uncover the principles governing how the brain interacts with the external environment. However, assumptions about external stimuli fundamentally constrain current computational models. We show in silico that unknown external stimulation can produce errors in the estimated parameters of a linear time-invariant dynamical system. To address these limitations, we propose an approach to retrieve the (unknown) external input parameters and demonstrate that system parameters estimated during periods of external input quiescence more accurately uncover the spatiotemporal profiles of external inputs during stimulation periods. Finally, we unveil the expected (and unexpected) sensory and task-related extra-cortical input profiles using functional magnetic resonance imaging data acquired from 96 subjects (Human Connectome Project) during resting-state and task scans. This dynamical systems model of the brain offers information on the structure and dimensionality of the BOLD signal’s external drivers and sheds light on the likely external sources contributing to the BOLD signal’s non-stationarity. Our findings show the role of exogenous inputs in BOLD dynamics and highlight the importance of accounting for external inputs to unravel the brain’s time-varying functional dynamics.

1 Introduction

Over the past few decades, functional MRI has widened our understanding of the functional organization of intrinsic brain networks and their role in cognition and behavior. Classical univariate (i.e., voxel-wise) analyses of the fMRI signal (i.e., the blood-oxygenation-level-dependent, or BOLD, signal) have been instrumental in probing the specialized function of brain regions. More recent approaches using functional connectivity and network neuroscience portray a complex and multi-scale set of interactions between brain structures. Following this view, a wide array of graph theoretical and complex systems tools have been used to describe BOLD dynamics [1–3].

Despite these efforts, we still lack a unified mechanistic framework that overcomes three key limitations. First, the features of the BOLD signal that are important for neural activity are unclear. Several prior studies demonstrate a relation between BOLD and slow amplitude features of cortical activity [4–6], and between BOLD and the hemodynamic response function (HRF) [7, 8]. These studies imply that the low-frequency component of the BOLD signal contains information relevant to underlying neural dynamics [9, 10], although it is also clear that the signal contains artifacts [11, 12]. Due to the mixture of signal and artifact in the BOLD time series, it is possible that the common practice of band-pass filtering the BOLD signal at low frequencies excludes functionally relevant signal [13, 14]. Second, many graph theoretic and network analyses are inherently descriptive in nature, and lack the power to give a generative understanding of the relationship between model inputs and outputs (for extensions of these approaches that move beyond description into explanation and prediction, see [15]). Finally, model-based approaches often treat the brain as an isolated system by ignoring external input, or by assuming an artificial profile of internal and external noise.

To address these three limitations, we develop a generative framework that explicitly includes exogenous input (e.g., external sensory or subcortical structures’ inputs), and provide evidence that the brain’s activity can be fruitfully understood in the context of its natural drivers. Specifically, we use a multivariate autoregressive model with unknown inputs to capture the spatiotemporal evolution of the BOLD signal driven by extra-cortical inputs. These models have been used to characterize and predict the evolution of several synthetic and biological systems [16–19]. For instance, Chang and colleagues (2012) leveraged a multivariate linear dynamical systems framework and patients’ intracranial EEG to model the cortical impulse response to direct electrical stimulation. Many prior studies use this [20] and similar methods, such as Granger causality and dynamic causal modeling (DCM), for understanding the directed functional connectivity of BOLD [1, 21–23]. While some prior studies account for the effect of exogenous input [1, 24], they typically assume a simple known and abstract form of the input function [19]. Moreover, the inability of models such as DCM to capture signal variations beyond those caused by the external inputs makes the connectivity estimation highly dependent on the assumed number and form of the inputs [25].

In this work, we treat the exogenous inputs to the cortex as unknown parameters of a linear time-invariant (LTI) system, which we estimate following recent developments in linear systems theory [26]. We use these developments to provide new insights into how the brain responds to ongoing task requirements, and to shine a light on factors that contribute to the dynamics of cortical functional connectivity. To demonstrate our approach’s utility, we begin with a proof-of-concept where we consider synthetic examples for which we retrieve the external inputs’ spatiotemporal profiles of a known LTI system. We demonstrate that unknown external inputs result in apparent changes in the estimated internal system parameters and, consequently, in errors in the estimated external inputs. We also show that using internal system parameters estimated from time windows without external stimulation significantly improves our ability to extract the external inputs’ profile from periods with external stimulation, except for simulations with relatively low external input magnitude and signal-to-noise ratio.

Next, we test the hypothesis that variations in cortical dynamics during different tasks or cognitive states can be accurately modeled as external excitations on fairly stable interactions between cortical regions. Specifically, we recover the unknown external cortical inputs during resting-state and task scans for 96 subjects with the lowest motion artifact from the Human Connectome Project (HCP). Our results demonstrate that using system parameters estimated from resting-state scans enables uncovering the expected spatiotemporal profiles of external sensory (i.e., visual cues) and task-related extra-cortical inputs, while system parameters estimated from task scans result in highly inaccurate input estimations. In addition, an in-depth examination of estimated inputs during task scans reveals the spatiotemporal patterns of other task-related inputs that were not captured by the abstract task regressors.

Lastly, we measure the non-stationarity of estimated external inputs over resting-state scans to examine the assumption of the system’s time-invariance and to identify exogenous determinants of the BOLD signal’s non-stationarity. Recently, the nature of the non-stationarity of the BOLD signal and dynamic functional connectivity has been a topic of scientific debate, as several recent publications paint seemingly contrasting portraits of the stationarity of the processes underlying the brain’s functional dynamics [27–32]. However, to the best of the authors’ knowledge, no study has examined the BOLD signal’s stationarity in the context of time-varying external inputs and their effects. Our results show that the inputs to several brain regions, most notably over the default mode network, estimated from the resting-state scans display significantly higher non-stationarity than other brain regions. Together, we demonstrate that our framework allows us to uncover the spatiotemporal patterns and dimensionality of unknown cortical drivers. These findings offer insight into how a relatively static relation between brain regions and exogenous drivers can give rise to complex cortical dynamics and contribute to their non-stationarity.

2 Materials and methods

2.1 Linear time-invariant (LTI) dynamical systems with external inputs

Each region of interest (ROI) i from which the BOLD signal is collected provides a time series xi[k] at sampling points k = 0, …, T. A total of n = 100 regions are considered, and the collection of these signals is captured by the vector x[k] = [x1[k], …, xn[k]]^T ∈ ℝ^n, with k = 0, …, T, which we refer to as the state of the system (i.e., it describes the evolution of the BOLD signal across different regions). The evolution of the system’s state is mainly driven by (i) the cross-dependencies of the signals in different regions (not necessarily adjacent), and (ii) the external inputs that are either excitation noise or inputs arriving from the environment surrounding the regions captured by the state of the system (e.g., stimuli arriving from subcortical structures not accounted for during BOLD signal collection).

Subsequently, a first step towards modeling the evolution of the system’s state is

x[k + 1] = A x[k] + B u[k] + w[k],   (1)

where A ∈ ℝ^(n×n) describes the autonomous dynamics, B ∈ ℝ^(n×p) is the input matrix that describes the impact of the p inputs u[k] (i.e., external drivers) on the evolution of the system’s state, and w[k] ∈ ℝ^n is the internal dynamics noise (i.e., internal drivers) at sampling point k. Notice that x[k] is the BOLD signal at the different ROIs and is the only known quantity. However, the state of the underlying neural activity is unknown, since we did not account for the hemodynamic response function (HRF) in our reduced model. Therefore, the input u[k] in the model captures the external drivers of regional BOLD and, only indirectly, the underlying neural activity. In order to determine the parameters of the system (1), i.e., (A, B, u), we need to solve an optimization problem that minimizes the distance between the system’s state x[k] and the estimate z[k] of that state obtained by propagating the model with the unknown quantities. Specifically, we have the following optimization problem:

minimize over (A, B, u[0], …, u[T − 1], z[0]) the cost Σ_{k = 0, …, T} ||x[k] − z[k]||^2, subject to z[k + 1] = A z[k] + B u[k].

Notice that this problem is more challenging than the usual least-squares problem considered when the parameters of the system are known [33]. Thus, similar to the method developed by [26], we perform the following steps: (i) we assume that the state z[0] = x[0] and that u is identically zero, to find an approximation to A; (ii) assuming A is given by the initial approximation, we impose a sparse low-rank structure on matrix B and find an approximation to both z[0] and u[0], …, u[T − 1], which suffices to subsequently obtain z[0], …, z[T]; and (iii) we assume z[0] and u are as approximated in step (ii) and determine an approximation to B. The process consists of executing steps (ii) and (iii) iteratively. Our experiments reveal that the estimated parameters converge after a few iterations in both synthetic and fMRI time series (S18 Fig in S1 File). Additionally, to force the inputs to be used as little as possible, since otherwise they could contain all the required information to reproduce the observed sequence (e.g., consider A to be zero and B to be the identity matrix), the optimization objective is instead given by Σ_{k = 0, …, T} ||x[k] − z[k]||^2 + λ Σ_{k = 0, …, T − 1} ||u[k]||, which penalizes the use of the input with a weight λ > 0. See section SI1 in S1 File for algorithm details.
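To make the estimation procedure concrete, the following is a minimal numerical sketch (Python/NumPy) of the alternating scheme in steps (i)–(iii). It is an illustrative simplification under stated assumptions, not the exact implementation of [26] or section SI1 in S1 File: the input penalty is implemented as a simple ridge term standing in for the sparse low-rank structure described above, and all function and variable names (estimate_A, estimate_inputs, fit_lti_with_inputs, lam) are hypothetical.

    import numpy as np

    def estimate_A(X):
        # Step (i): least-squares fit of x[k+1] ~ A x[k], assuming the inputs are zero.
        # X is an n x (T+1) array of observed states (regions x time).
        X0, X1 = X[:, :-1], X[:, 1:]
        return X1 @ X0.T @ np.linalg.pinv(X0 @ X0.T)

    def estimate_inputs(X, A, B, lam):
        # Step (ii): given A and B, estimate the inputs u[k] by penalized least squares.
        # A ridge penalty with weight lam stands in for the sparsity-promoting penalty.
        p = B.shape[1]
        U = np.zeros((p, X.shape[1] - 1))
        G = np.linalg.inv(B.T @ B + lam * np.eye(p)) @ B.T
        for k in range(X.shape[1] - 1):
            residual = X[:, k + 1] - A @ X[:, k]   # part of x[k+1] not explained by A x[k]
            U[:, k] = G @ residual
        return U

    def estimate_B(X, A, U):
        # Step (iii): given A and the inputs U, refit the input matrix B by least squares.
        R = X[:, 1:] - A @ X[:, :-1]               # residuals attributed to B u[k]
        return R @ U.T @ np.linalg.pinv(U @ U.T)

    def fit_lti_with_inputs(X, p=25, lam=0.5, n_iter=10, A_fixed=None):
        # Alternate steps (ii) and (iii); optionally keep A fixed (e.g., estimated from
        # a window without external stimulation), as in the modified algorithm below.
        A = A_fixed if A_fixed is not None else estimate_A(X)
        rng = np.random.default_rng(0)
        B = 0.01 * rng.standard_normal((X.shape[0], p))
        for _ in range(n_iter):
            U = estimate_inputs(X, A, B, lam)
            B = estimate_B(X, A, U)
        return A, B, U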

We will demonstrate in the following results section that unaccounted-for external inputs result in errors in the estimation of the system matrix A. Therefore, in a modified version of this algorithm, in step (i) we estimate A from x′[k] measured during an extended window without external stimulation (e.g., resting-state). Next, we repeat steps (ii) and (iii) iteratively, as detailed above. Since we do not know the true dimensionality of the external inputs, we approximated the dimensions of the input matrix B by performing principal component analysis on the residuals of the models. As seen in S19 Fig in S1 File, principal components 1–25 capture more than 80% of the variance in the average residuals and, on average, more than 60% of the subject-level residuals’ variance across all tasks. In addition, we compared the goodness-of-fit of the LTI model with and without external inputs using the Akaike information criterion (AIC) [34]. Our results demonstrate that incorporating external inputs does not result in overfitting and improves the model’s fit, an effect most pronounced for higher-dimensional input matrices (S20 Fig in S1 File). Finally, we demonstrate that the external inputs during the motor task are identified similarly across high-dimensional input matrices (S6 Fig in S1 File), as indicated by the high correlation (>0.8) of inputs estimated using input matrix dimensions higher than 25 (S6I Fig in S1 File). Therefore, we select p = 25 for the input matrix B to estimate the inputs from task fMRI time series.
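As a rough illustration of how the input dimension p could be chosen from the model residuals, the sketch below performs a principal component analysis (via the singular value decomposition) of the one-step residuals and returns the smallest number of components reaching a target fraction of explained variance. The 80% target and the function name choose_input_dimension are illustrative assumptions, not the exact procedure reported in S19 Fig in S1 File.

    import numpy as np

    def choose_input_dimension(X, A, var_target=0.8):
        # PCA of the one-step residuals x[k+1] - A x[k]; return the number of
        # components needed to explain var_target of the residual variance.
        R = X[:, 1:] - A @ X[:, :-1]
        R = R - R.mean(axis=1, keepdims=True)
        s = np.linalg.svd(R, compute_uv=False)
        explained = np.cumsum(s**2) / np.sum(s**2)   # cumulative variance explained
        return int(np.searchsorted(explained, var_target) + 1)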

2.2 Spectral analysis of an LTI system

Provided an LTI description of the system dynamics (1), the autonomous evolution of the dynamical system can be decomposed via a so-called eigenmode decomposition. Briefly, consider the n eigenmodes (i.e., eigenvalues and the corresponding eigenvectors) associated with A. Each eigenmode corresponds to an eigenvalue-eigenvector pair (λi, vi) satisfying Avi = λi vi, and it describes the oscillatory behavior along a specific direction vi.

Specifically, any given eigenvalue λi represented in polar coordinates (θi, |λi|) captures the frequency fi = θi / (2π δt), where δt corresponds to the sampling period, and the time scale τi = −δt / ln|λi|, which can be interpreted as the damping rate.

In particular, we can re-write A = V Λ V^−1, where V = [v1, …, vn] and Λ = diag(λ1, …, λn) are the matrices of eigenvectors and eigenvalues. Subsequently, we can apply a change of variables z[k] = V* x[k], where V* is the conjugate transpose, which implies that each zi[k] is a weighted combination of the ROIs’ signals described by the ith eigenvector and evolving according to the ith eigenvalue. Hence, this can be understood as the spatial contributions of the n ROIs at a given (spatiotemporal) frequency fi. Additionally, we can revisit the damping rate of the process in such a direction vi by reasoning as follows: first, we can recursively obtain |zi[k]| = |λi|^k |zi[0]|. Therefore, we have the following three scenarios: (i) |λi| < 1; (ii) |λi| > 1; and (iii) |λi| = 1. In cases (i) and (ii), we can readily see that |zi[k]| → 0 and |zi[k]| → ∞ as k → ∞, respectively. Lastly, in scenario (iii), or practically, when |λi| ≈ 1, the process oscillates between stability and instability, and therefore these dynamics are referred to as meta-stable.

In summary, the dynamical process z[k] describes the spatiotemporal evolution of the brain’s BOLD signal. Specifically, the timescales are encoded in the eigenvalues, and the spatial contributions of the different ROIs are described by the associated eigenvectors.
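The eigenmode quantities described above follow directly from the eigendecomposition of an estimated A. The short sketch below (Python/NumPy) converts each eigenvalue to an oscillation frequency and a stability value, assuming dt is the sampling period (here the TR of 0.72 s); the function name eigenmode_spectrum is a placeholder.

    import numpy as np

    def eigenmode_spectrum(A, dt=0.72):
        # Eigenmode decomposition of the system matrix A.
        # Each eigenvalue lambda_i = |lambda_i| * exp(1j * theta_i) yields a frequency
        # f_i = theta_i / (2 * pi * dt) and a stability |lambda_i| (values near 1
        # correspond to slowly damped, meta-stable modes).
        eigvals, eigvecs = np.linalg.eig(A)
        theta = np.abs(np.angle(eigvals))
        freq = theta / (2 * np.pi * dt)
        stability = np.abs(eigvals)
        return freq, stability, eigvecs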

2.3 Dataset and preprocessing

We used data from the Human Connectome Project (HCP). As part of the HCP protocol, subjects underwent two separate resting-state scans along with seven task fMRI scans, both of which were split across two sessions. All data analyzed here came from these scans and were part of the HCP S1200 release. The fMRI protocol (both resting-state and task) includes a multi-band factor of 8, a spatial resolution of 2 mm isotropic voxels, and a TR of 0.72 s (for more details see [35]). Subjects who completed both resting-state scans and all task scans were analyzed. Each of the scanning sessions included both resting-state and task fMRI. First, two 15-minute resting-state scans (eyes open with fixation on a cross-hair) were acquired, for a total of 1 hour of resting-state data over the two-day visit. Second, approximately 30 minutes of task fMRI were acquired in each session, including 7 tasks split between the two sessions, for a total of 1 hour of task fMRI (for details see [36]).

Head-motion artifacts result in significant errors in functional connectivity estimates [37]. Therefore, to minimize head-motion artifacts, we selected the 100 subjects with the lowest mean frame-wise displacement for our study, and we utilized a cortical parcellation with N = 100 parcels that maximizes the similarity of functional connectivity within each parcel [38]. Next, to keep the same subjects across the resting-state and task scans, we removed the four subjects missing either task or resting-state scans. We preprocessed resting-state and task data using similar pipelines. For resting-state, the ICA-FIX [39, 40] resting-state data provided by the Human Connectome Project were utilized [41], which used ICA to remove nuisance and motion signals. For task data, CompCor [42], with five components from the ventricle and white-matter masks, was used to regress out nuisance signals from the time series. In addition, for the task data, the 12 detrended motion estimates provided by the Human Connectome Project were regressed out from the time series. For both task and resting-state data, the mean global signal was also removed in an effort to remove auto-correlated non-physiological noise and reduce the model estimation error [43].

2.4 Statistics

We performed Student’s t-test and Welch’s t-test [44] to test the statistical significance of the differences between the distributions of interest. The non-parametric Wilcoxon rank-sum test [45] was utilized for comparisons of distributions with non-normal profiles. We corrected the calculated test statistics for multiple comparisons using the false discovery rate (FDR) method [46], as well as the more conservative Bonferroni method [47]. To identify task-specific fluctuations in the average estimated inputs, for each brain region we compared task-related inputs to those estimated from resting-state time series (paired t−test, p < 0.05, FDR). In addition, we generated phase-randomized null time series from each subject’s BOLD time series for the task time series. We selected the phase-randomized null model since it maintains most of the statistical properties of multivariate time series (e.g., autocorrelation, covariance) [48]. Next, for each brain region, we compared the average empirical and null estimated inputs at each time point (paired t−test, p < 0.05, FDR).
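For reference, a phase-randomized surrogate of a multivariate time series can be generated as in the sketch below: the same random phase shifts are applied to every region, which preserves each region’s power spectrum (and hence autocorrelation) and the cross-covariance structure. This is a generic sketch of the standard procedure, not necessarily the exact implementation used here; the function name phase_randomize is a placeholder.

    import numpy as np

    def phase_randomize(X, seed=0):
        # X: regions x time. Apply identical random phase shifts to all regions.
        rng = np.random.default_rng(seed)
        n, T = X.shape
        mean = X.mean(axis=1, keepdims=True)
        F = np.fft.rfft(X - mean, axis=1)
        phases = rng.uniform(0, 2 * np.pi, size=F.shape[1])
        phases[0] = 0                      # leave the DC component untouched
        if T % 2 == 0:
            phases[-1] = 0                 # keep the Nyquist bin real for even-length series
        return np.fft.irfft(F * np.exp(1j * phases), n=T, axis=1) + mean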

To identify estimated inputs that display changes corresponding to the different task conditions in the motor paradigm, we first performed a principal component analysis (PCA) on all estimated inputs (U) concatenated over all subjects. Next, we identified the single input with the highest absolute principal component (PC) loading for every component. We then multiplied the selected inputs with negative PC loadings by −1. Next, we separately fitted a multiple linear regression model for each PC’s inputs (U) using the known task regressors. We created task regressors for the different conditions by assigning every sample to baseline (0) or to one of six events (i.e., visual cue, left hand, right hand, left foot, right foot, and tongue movements) based on their temporal proximity to the events’ onsets and offsets. We repeated this analysis by shifting the task regressors by different lags (0–12 TRs) to identify the lag that produces the best fit (i.e., highest R2 values) for each region. Finally, we performed t−tests on the estimated coefficients at the group level to identify task conditions similarly echoed in the estimated inputs associated with each PC across participants. We also identified the brain regions that correspond to the identified inputs by performing group-level region-wise t−tests on the input matrix B elements that correspond to the inputs U identified by the PCs.
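A hedged sketch of the lag-swept regression described above is given below: one PC’s input time course is regressed on the boxcar task regressors shifted by 0–12 samples, and the lag with the highest R2 is retained. The use of ordinary least squares and the variable names are illustrative assumptions.

    import numpy as np

    def lagged_task_fit(u, task_regressors, max_lag=12):
        # u: input time course (length T); task_regressors: T x n_conditions boxcars.
        T = task_regressors.shape[0]
        best_lag, best_r2, best_beta = None, -np.inf, None
        for lag in range(max_lag + 1):
            Xr = np.roll(task_regressors, lag, axis=0)   # shift regressors forward in time
            Xr[:lag, :] = 0
            D = np.column_stack([np.ones(T), Xr])        # add an intercept
            beta, *_ = np.linalg.lstsq(D, u, rcond=None)
            resid = u - D @ beta
            r2 = 1 - (resid @ resid) / ((u - u.mean()) @ (u - u.mean()))
            if r2 > best_r2:
                best_lag, best_r2, best_beta = lag, r2, beta
        return best_lag, best_r2, best_beta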

We examined the estimated inputs’ non-stationarity using two methods. First, we used a sliding-window approach to examine temporal fluctuations of the estimated inputs’ means over resting-state scans for all brain regions, measured as the windowed means’ standard deviation. Second, we used the nonlinear non-stationarity index introduced by [49], with exponent parameters α = 0.9 and β = 1 following their study, where the α and β parameters control the relative weighting between the importance of long versus large excursions in the time series. Therefore, non-stationarity indexes with our selected parameters give marginally greater weighting to the excursions’ height. Finally, to test the group-level significance of both non-stationarity metrics, we first normalized the values across all brain regions. Next, we used the t−test (FDR corrected for multiple comparisons across all brain regions) to establish the statistical significance of the measured non-stationarities across subjects. Traditionally, researchers have commonly used 0.05 as the statistical significance level, though the choice is largely subjective. Therefore, to convey the probabilistic nature of the statistical analysis and the proper interpretation of statistical test results, in the manuscript we refer to results at the commonly accepted statistical threshold of 0.05 as “significant” and at the more conservative thresholds of 0.0005 or lower as “highly significant”.
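As a schematic of the first (sliding-window) non-stationarity measure, the sketch below z-scores an input time course, computes windowed means with half-window shifts, and returns their standard deviation. Window lengths of 6, 24, and 50 samples are used in the Results; the nonlinear index of [49] is not reproduced here, and the function name is a placeholder.

    import numpy as np

    def sliding_window_nonstationarity(u, win=24):
        # Standard deviation of windowed means of a z-scored input time course;
        # larger values indicate stronger fluctuations of the local mean.
        u = (u - u.mean()) / u.std()
        step = max(win // 2, 1)
        window_means = [u[s:s + win].mean() for s in range(0, len(u) - win + 1, step)]
        return float(np.std(window_means))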

2.5 Ethics statement

All subject recruitment procedures and written informed consents were approved by the Washington University Institutional Review Board (IRB). For more details see [35].

2.6 Retrieving the external inputs to a synthetic LTI system

We use the proposed method to explicitly model the contributions of internal system dynamics and external inputs to the BOLD signal during rest and task. To build intuition, we begin by estimating the internal system parameters and unknown inputs using data simulated from a synthetic LTI model (Eq 1) with four states representing four brain regions. We first simulate the dynamics of our model (Fig 1A), where each region is driven by random internal noise, and only one region is driven by an additional square pulse train (Fig 1B). For details regarding the simulation see section SI2 in S1 File. Next, we estimate the internal system parameters (a 4 × 4 matrix of interactions) and unknown inputs from the simulated time series to recover the spatial and temporal profiles of the pulse-train input (Fig 1C). Although the estimated inputs (green line) fluctuate time-locked to the ground-truth input, their temporal profiles notably differ. We hypothesize that this divergence arises from the error in system matrices estimated during periods with external stimulation. In Fig 1D, we show that the parameters of an LTI system receiving time-varying external inputs can falsely appear to change and diverge farther from the ground truth when estimated over periods with external stimulation.
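For concreteness, a minimal simulation of Eq (1) with a square pulse-train input to a single node might look like the sketch below. The system matrix, noise level, pulse amplitude, and timing are illustrative placeholders rather than the exact values used for Fig 1 (those are given in section SI2 in S1 File).

    import numpy as np

    def simulate_lti(A, T=1500, pulse_node=0, pulse_on=25, pulse_off=25,
                     pulse_amp=1.0, noise_std=0.1, seed=0):
        # Simulate x[k+1] = A x[k] + b u[k] + w[k], where u[k] is a square pulse train
        # driving one node and w[k] is Gaussian internal noise driving all nodes.
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        x = np.zeros((n, T))
        u = np.zeros(T)
        period = pulse_on + pulse_off
        for k in range(T - 1):
            u[k] = pulse_amp if (k % period) < pulse_on else 0.0
            drive = np.zeros(n)
            drive[pulse_node] = u[k]                      # external input hits one node only
            w = noise_std * rng.standard_normal(n)
            x[:, k + 1] = A @ x[:, k] + drive + w
        return x, u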

Fig 1. Synthetic LTI system with unknown inputs.

(A) A schematic of the brain as a network, where the nodes represent brain regions, and the edges represent connections between regions. The activity of four observed regions is modeled as a four-dimensional LTI system, and the influence of the unobserved regions and external stimuli on each node is modeled as an unknown driver. The synthetic system matrix is designed with eigenmodes oscillating at 0.01 and 0.06 Hz to mimic the frequencies of the BOLD signal’s neurophysiological component. (B) Simulated time-evolution of each node’s activity (sampling rate = 1.4 Hz) is color-coded and shown in the presence of drivers, namely the internal noise and the external input (brighter colors). Only the blue node receives external input, indicated by the magenta line. Three periods (I–III) are highlighted by dashed lines. In period I (3–6 min), there is no external stimulation. In period II (9–12 min), the blue node is stimulated in 25-sample (18 s) blocks, interleaved with rest periods of equal length. In period III (15–18 min), the blue node is stimulated for 7 samples (5.04 s), with inter-stimulus intervals of 3 samples (2.16 s). (C) Left panels show the inputs to the blue node (green line, arbitrary units, AU) estimated from a single simulation. The panels on the right show the average input and its standard error over 100 simulations. (D) The average 2-norm and standard error of the difference between the system’s true and estimated matrices over a 3-minute sliding window. (E) The color-coded lines show the average (and standard error) loading of each node on input matrix B.

https://doi.org/10.1371/journal.pone.0257580.g001

Consequently, we hypothesize that system matrices estimated from periods without external inputs would improve our ability to accurately capture the unknown inputs’ profile. Fig 1C shows that using a fixed system matrix estimated from periods without external inputs significantly increases the similarity (correlation) to the ground-truth inputs. We also demonstrate that although the estimated inputs contain noise, averaging inputs estimated over 100 simulations results in highly accurate estimations (correlation = 0.99). The significant (Wilcoxon rank-sum test, Bonferroni p < 0.0001) changes in the input matrix B’s loadings for estimation windows overlapping the external stimulation periods reveal the unknown external inputs’ spatial profile (i.e., the blue input node) (Fig 1E). Together, these results demonstrate that external inputs can increase estimation error in the system matrices and, consequently, in the input parameters. More importantly, these results also show that identifying system matrices from periods without external stimulation allows accurate estimation of the unknown external inputs’ spatiotemporal profiles.

Next, we generate synthetic time series by stimulating LTI systems whose parameters were estimated from subjects’ resting-state BOLD time series. We set the external inputs’ magnitude such that the global average stimulus-induced changes in the normalized simulated outputs match the largest average task-related changes in a sample (social) task. We confirm that, similar to the low-dimensional example in Fig 1, our approach is able to extract synthetic external inputs to high-dimensional LTI models of BOLD signal dynamics (Fig 2). Likewise, employing system parameters estimated from periods without external stimulation results in a significant (t−test, p < 0.05, p = 6.6 × 10−65 and p = 2.9 × 10−66 for 1000 TR- and 250 TR-long estimation windows, respectively) increase in the similarity between the ground-truth and estimated inputs (Fig 2F). The notably higher similarity of the average estimated inputs to the ground-truth inputs, compared to that of the subject-level estimated inputs, suggests that the profiles of external inputs are correctly approximated, albeit with noise. Together, these results demonstrate the utility of our framework in identifying external inputs to LTI systems, and highlight the importance of accurate estimation of model parameters.

Fig 2. Extracting spatiotemporal profiles of unknown external drivers in simulated brain dynamics.

(A & D) Estimated external inputs (i.e., B×U) to all brain regions from synthetic time series generated from a sample subject’s internal system parameters and (B & E) the average estimated external inputs across all subjects (input matrix B dimension = 7, regularization factor = 0.5). Brain regions (y-axis) are sorted based on resting-state networks identified by [50], namely the visual (Vis), sensory/motor (SM), dorsal attention (DN), ventral attention/salience (VN/Sal), limbic, executive control (ECN), and default mode network (DMN). System parameters in panels D & E are estimated from the stimulation window, whereas system parameters in panels A & B are estimated from same-length windows without external inputs. (C) Ground-truth synthetic inputs over 1000 samples (TR = 0.72 sec). (F) The similarity between ground-truth and estimated inputs. The system matrix A estimated from windows without external stimulation results in a significantly higher correlation between the vectorized estimated external and ground-truth input matrices (t−test, p < 0.05, p = 6.6 × 10−65 and p = 2.9 × 10−66 for estimation windows with 1000 and 250 samples, respectively), compared to the system matrix A estimated from the stimulation windows (indicated by ‘*’ markers). The smaller estimation windows significantly (t−test, p < 0.05, p = 1.15 × 10−45) reduce the estimated and ground-truth inputs’ similarity, only for the system matrix A estimated over stimulation windows (indicated by ‘*’ markers). The correlation values between the ground-truth and group-average estimated inputs are indicated by ‘o’ markers.

https://doi.org/10.1371/journal.pone.0257580.g002

So far, we have examined the LTI system’s response at a low recording noise level (signal-to-recording-noise ratio = 1000). Next, we examine the accuracy of the retrieved model and input parameters at different recording and internal noise levels. The contributions of the recording and internal noise to the BOLD signal are, for the most part, unknown quantities. However, they play an essential role in our ability to capture external inputs accurately. Simulating the system’s response magnitude and variance (i.e., t-values) at various recording and internal noise levels shows how different noise levels can lead to seemingly similar outputs.

Moreover, at high noise levels, the error increases notably in the system parameters estimated from periods without external inputs, and consequently, in the estimated input parameters during stimulation periods. Interestingly, at such high noise levels, the system matrices estimated during stimulation periods more accurately recover external inputs than those estimated during periods without stimulation (S1 Fig in S1 File). These observations suggest that the choice of system matrices and the goodness-of-fit of the estimated inputs can further provide insight into the empirical noise levels. In the following, we consider the proposed methodology in the context of quantifying important spatial and temporal features of the internal system dynamics and external inputs estimated from the HCP resting-state and task fMRI scans.

2.7 Capturing external drivers of BOLD signal

2.7.1 Brain’s large-scale oscillatory modes display heterogeneous spatiotemporal profiles.

We begin by showing that the estimated system parameters during resting-state reliably capture and reproduce known brain functional organization. Further, because these parameters reside within a quantitative dynamical model, we simultaneously capture both spatial (regions that are co-active) and temporal (oscillation frequency) information through the eigenmodes of our estimated system. Specifically, each eigenvector indicates an independent pattern of co-active regions, and its corresponding eigenvalue determines both the oscillation frequency and the change in amplitude of the activation patterns. Intuitively, if we initialize our estimated system state to a pattern of activity corresponding to an eigenvector, then the system states would oscillate and dampen according to the associated eigenvalue’s characteristics (see more details in Materials and Methods section).

Fig 3. Eigenmodes estimated from the full (1200 TRs ≈ 14.5 min) resting-state time series.

(A) Distribution of frequency versus stability of eigenvalues during resting-state. Clustering the eigenvalues based on their eigenvectors’ similarity highlights the spectral profile of different systems. All eigenvectors from all subjects were normalized and grouped into 4 clusters using the k-means clustering algorithm. We color-coded the clusters identified across subjects and all resting-state sessions (n = 4). (B) The inset plot shows the eigenvalues’ distribution. (C) The brain overlays represent the spatial distribution of the eigenvector associated with an eigenvalue (displayed with the same color code) that is at the centroid of each cluster. (D) The similarity between the eigenvector clusters’ centroids and the resting-state networks. We performed spatial multiple linear regression analyses using all resting-state networks identified by [50], namely the visual (Vis), sensory/motor (SM), dorsal attention (DN), ventral attention/salience (VN/Sal), limbic, executive control (ECN), and default mode network (DMN), as the explanatory variables, to show which resting-state networks overlap with the eigenvector clusters’ centroids shown in panel C. The color-coded matrix shows the estimated normalized (divided by the maximum value in each row) regression coefficients, calculated separately for every eigenvector cluster’s centroid. The plot on the right shows the p-value and R2 calculated for each cluster centroid.

https://doi.org/10.1371/journal.pone.0257580.g003

To capture the spatial and temporal patterns of activity, we use our method to estimate the internal system parameters from the resting-state time series (1200 TRs ≈ 14.5 min). The high stability (i.e., slow damping rate) of the low-frequency eigenvalues seen in Fig 3A indicates that the system’s outputs are dominated by lower-frequency oscillations. To identify the eigenmodes with similar spatial patterns across subjects, we aggregate all subjects’ eigenvectors and perform a k-means clustering analysis. We used the elbow method (optimal k ≈ 4), and the Calinski-Harabasz, Davies-Bouldin, and Silhouette criteria (optimal k = 2) to identify the optimal clustering resolution (for details, see SI5 and S3 Fig in S1 File). The non-converging results across the different criteria suggest that the community organization of eigenvector clusters does not display a distinct optimal topological scale. We provide the coarse (k = 2) and finer-scale (k = 4) clusters in S2A Fig and Fig 3, respectively. To ensure that the image acquisition type (i.e., phase-encoding direction) or the scanning session does not affect these results, we provide statistical comparisons between the coarse-scale clusters’ stability and frequency in S4 Fig in S1 File. These results show that very similar distributions and clusters are identified regardless of the phase-encoding direction or day of the scans. Specifically, statistical comparisons (bootstrapping, n = 50,000, p < 0.05) fail to find any difference between the clusters’ means. To test the spatial inhomogeneity in the frequency and damping of these clustered eigenvectors, we performed a pairwise comparison between the distributions of eigenvalues corresponding to the eigenvectors in each of the clusters (bootstrap, n = 50,000, Bonferroni corrected p < 0.05). We found significant differences in the frequencies and damping rates between all cluster pairs, except for the comparison between the frequencies in clusters 3 and 4. Together, these findings highlight the spatial heterogeneity in the frequency and damping profiles of brain oscillations.
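For reference, clustering the pooled eigenvectors across subjects and scoring candidate numbers of clusters could be sketched as below. Taking eigenvector magnitudes before unit-normalization, the range of k values, and the function name cluster_eigenvectors are assumptions for illustration; the exact normalization and the elbow analysis are described in SI5 and S3 Fig in S1 File.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score, calinski_harabasz_score, davies_bouldin_score

    def cluster_eigenvectors(V_all, k_values=range(2, 9), seed=0):
        # V_all: (total eigenvectors pooled across subjects) x (n regions).
        V = np.abs(V_all)                                 # use magnitudes of complex eigenvectors
        V = V / np.linalg.norm(V, axis=1, keepdims=True)  # unit-normalize each eigenvector
        scores = {}
        for k in k_values:
            labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(V)
            scores[k] = {
                "silhouette": silhouette_score(V, labels),
                "calinski_harabasz": calinski_harabasz_score(V, labels),
                "davies_bouldin": davies_bouldin_score(V, labels),
            }
        return scores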

2.7.2 Task-specific increases in the extra-cortical input’s power.

Up to now, we have provided evidence that the system dynamics can capture the spatial and temporal behavior of resting-state brain networks. Next, we assess whether the task-induced dynamics are driven by the external inputs retrieved by the proposed method. The sensory inputs to the brain are some of the major drivers of cortical dynamics. Therefore, we hypothesize that the external inputs to the subjects’ brains, as estimated by the proposed method, will mirror real-time changes present in the task regressors (see S5 Fig in S1 File for details regarding the task regressors).

To test this hypothesis, we apply our method to the fMRI activity to estimate the internal system parameters and external inputs for each subject during task performance (i.e., social, gambling, motor, working memory, language, and relational). Then, we compare the average estimated inputs’ frequency spectrum for each task. Statistical tests (Wilcoxon rank-sum test, FDR corrected, p < 0.0005) reveal highly significant unique peaks, matching the expected external task-specific frequencies (Fig 4). Note that the distinct task-induced peaks are identified at low (< 0.1 Hz) and high (> 0.1 Hz) frequencies, even as high as 0.3–0.4 Hz (Fig 4B–4C).

Fig 4. Matching the spectral profile of the known and estimated external inputs.

(A–F) The difference between the average Fourier transform of the estimated inputs to all brain regions during each task and that of the other task conditions (see Materials and Methods for details). Top panels display the average (two sessions) spectral profile of the known boxcar regressors for each task (see S5 Fig in S1 File). Note the significant changes in the spectrum at the expected task-specific frequency peaks across several brain regions, at low (<0.1 Hz) and high (>0.1 Hz) frequencies, represented with red and green arrows, respectively. Frequencies for which brain regions did not pass the significance level (Wilcoxon rank-sum test, FDR p < 0.0005) are represented in black.

https://doi.org/10.1371/journal.pone.0257580.g004

2.7.3 Task-specific profiles of extra-cortical inputs.

Next, we consider an LTI framework to quantify spatial and temporal features of external inputs to the brain using the HCP’s motor task dataset. The motor task comprises 3-second-long visual cues, in which participants are asked to either tap left or right fingers, squeeze left or right toes, or move their tongue over 12-second-long periods following the visual cue’s offset. We select the motor task because the high dimensionality of input and the various task conditions in this paradigm allow us to evaluate our framework’s ability to estimate the external inputs’ complex spatiotemporal structure. We aim to assess whether we can retrieve the external inputs that drive task-induced dynamics. We hypothesize that subjects’ estimated external inputs will mirror real-time changes present in the known task regressors. Moreover, due to the relatively lower levels of structured external stimulation during resting-state scans, we hypothesize that system parameters estimated from subjects’ full-length resting-state time series will increase the accuracy of the external inputs estimated from motor task datasets.

Fig 5 shows the estimated inputs (input matrix B dimension = 25, regularization factor = 0.5) to all brain regions (i.e., B × U) averaged across all subjects during the motor task. These results highlight the brain-wide, significant, task-specific changes in the estimated inputs when the system parameters are estimated from the resting-state time series (Fig 5A and 5B). We provide evidence of the robustness of these results to changes in the input matrix B’s dimension (S6 Fig in S1 File). Conversely, using the system parameters estimated from the subjects’ motor task time series notably reduces our ability to capture the task-related changes in the identified inputs (Fig 5C and 5D).

Fig 5. Average estimated external inputs in the motor task.

Internal system parameters (i.e., A matrices) estimated during full-length resting-state and motor scans were used to estimate the external inputs in panels A & C, respectively. Panels B & D show time points from panels A & C with significantly higher or lower average inputs estimated during the motor task than during resting-state scans (paired t−test, p < 0.05, false discovery rate (FDR) corrected for multiple comparisons). Top plots in panels A & B show onsets and durations of visual cues and motor task conditions: left foot, left hand, right foot, right hand, and tongue movement blocks.

https://doi.org/10.1371/journal.pone.0257580.g005

We establish these observations’ statistical significance by comparing the external inputs estimated from task datasets against those from subjects’ resting-state scans (paired t−test, p < 0.05, FDR corrected for multiple comparisons). Comparisons against the phase-randomized null time series also provide converging observations (S7 Fig in S1 File). We also use multiple linear regression analyses to assess the estimated inputs’ similarity to the known temporal profile of the task regressors. Our results demonstrate that external inputs estimated using the full-length resting-state system parameters result in a significantly (paired t−test, p < 0.05, Bonferroni corrected for multiple comparisons) improved fit (measured by R2 values), compared to system parameters estimated from the motor task (S8 Fig in S1 File). We also find similar results when resting-state system parameters are estimated from a short (250-sample) window that matches the task scans’ length (S8B Fig in S1 File). Together, these results highlight the importance of the modeled system’s accuracy in capturing a reliable picture of the brain’s external inputs.

Next, we examine the temporal (i.e., U matrix) and the spatial (i.e., input matrix B) profiles of the external inputs (estimated using resting-state system parameters), to demonstrate how the estimated inputs reveal the dimensionality and the spatiotemporal dependencies of the task-related inputs. Prior works using univariate and multivariate analyses of HCP task datasets have demonstrated that activation induced by the hand, foot, and tongue movements can be localized over the somatomotor network. Therefore, we expect the dimensionality of the external inputs to roughly match or exceed those of task conditions (i.e., six dimensions). As mentioned in the Materials and Methods section, the principal component analysis reveals that in all HCP task conditions, principal components (PCs) 1–25 explain more than 80% of the variance in the model’s average residuals. Therefore, we choose p = 25 as the input matrix B dimension in Fig 6.

Fig 6. Principal component analysis of estimated external inputs.

(A) Group-level principal components (PCs) 1–15 calculated from the concatenated estimated inputs (input matrix B dimension = 25) across all subjects. Top plots show onsets and durations of visual cues and motor task conditions. (B) Percent variance explained by the PCs. Insets depict the percent variance explained by PCs 1–25. (C) t-values calculated from the coefficients of multiple linear regression models of the estimated external inputs associated with each PC (see Materials and Methods for details). The average coefficients that fail to pass the significance level across subjects (t−test, p < 0.05, Bonferroni corrected for multiple comparisons) are depicted in gray. (D) Distributions of R2 values of the multiple linear regression models in panel C for components with significant coefficients. White circles and color-coded horizontal bars indicate the medians and means of the distributions, respectively. Pairwise comparisons (Wilcoxon rank-sum test, p < 0.05, FDR corrected for multiple comparisons) between distributions reveal that R2 values for principal components marked by red ‘*’ are significantly higher than those marked by black ‘o’ (except for the non-significant difference between PC 1 and PC 9). (E) Distributions of the number of lags (samples) that result in the best fit (i.e., maximum R2) for PCs 1–9. We used the mean (rounded to the nearest integer) of the optimal subject-level lags for the analyses in panels C and D.

https://doi.org/10.1371/journal.pone.0257580.g006

We performed principal component analysis on the estimated external inputs’ temporal profiles (i.e., U) concatenated across all subjects to identify input patterns similarly identified over the group. Fig 6A shows the temporal profile of the concatenated inputs’ PCs 1–15. As seen in Fig 6B, the first few PCs (≈ 9) explain a relatively larger portion of the variance. Fig 6A also shows the high similarity between the known task regressors and the PCs’ temporal profiles. We quantify this similarity using subject-level multiple linear regression analysis of the estimated inputs against the known task (motor) regressors. We note apparent time lags between the known and estimated inputs. Therefore, we perform the multiple linear regression analysis using various lags. Fig 6E shows the distributions of lags (samples) that yield the highest R2 values for PCs 1–9. Fig 6C shows the group-average coefficients estimated from the external inputs associated with each PC (i.e., external inputs with the highest PC weights). We used the group-average optimal lag (based on R2 values) identified in Fig 6E for the analysis in Fig 6C. The estimated coefficients have significant values only for PCs 1–9. These results demonstrate that the estimated inputs provide insight into the extra-cortical drivers’ dimensionality.

Next, we examine the spatiotemporal profiles of the subject-level estimated inputs associated with these components to understand their relationship to the external stimuli. Fig 6D demonstrates that, compared to other PCs, the inputs associated with PCs 1–4 and 6 fit the task regressors relatively better, indicated by significantly (Wilcoxon rank-sum test, p < 0.05, FDR corrected for multiple comparisons) higher R2 values. Fig 6C reveals that PCs 1–4 and 6 are associated with the visual cue, hand and feet movements (maximum coefficient in left hand), all movements (maximum coefficient in right hand), feet movements (maximum coefficient in left foot), and tongue movements, respectively. S9 Fig in S1 File shows that the brain regions with the highest average absolute input matrix B values corresponding to PCs 2, 4, and 6 reveal the same regions in the somatomotor cortices identified using general linear model analysis of BOLD time series for hand, foot, and tongue movements.

The input matrix B also captures the spatiotemporal relationship between the inputs across different conditions. For instance, S9A Fig in S1 File shows that hand or feet movements are associated with simultaneous positive and negative (e.g., inhibition or deactivation) inputs to the contra- and ipsilateral somatomotor cortices, respectively. Fig 7 also shows that PCs 5, 1, and 3 reveal the temporal order of inputs to the visual, dorsal attention, and finally, somatomotor cortices following the onset of the visual cue. Note that the spatial and temporal profile of PC 5 demonstrates the inverse relationship between inputs to the visual and somatomotor cortices. This unexpected temporal profile contributes to the low similarity of PC 5 to the task regressors in Fig 6D. We show that changing the delay between the estimated inputs and task regressors changes the coefficient patterns with significant loadings (S10 Fig in S1 File). These results demonstrate an early positive relationship of the PC 5 input with visual cue blocks, followed by a later positive (negative) relationship with left-hand movement (visual) blocks.

Fig 7. Temporal and spatial profiles of estimated external inputs associated with visual cues.

(A) Color-coded lines show the mean and standard error (shaded area) of estimated inputs with the highest subject-level loadings for PCs 1 (green), 3 (orange), and 5 (blue). Time points with significant (t−test, p < 0.05, FDR corrected for multiple comparisons across time points) divergence from zero are marked with color-coded dots. The black and dashed red lines show the visual cue and motor task blocks, respectively. Color-coded panels (B-D) show the t−test values of brain regions with significant (t−test, p < 0.05, FDR corrected for multiple comparisons across ROIs) loadings on input matrix B rows corresponding to the aforementioned PCs.

https://doi.org/10.1371/journal.pone.0257580.g007

Finally, in Fig 6C we demonstrate that PCs 7, 8, and 9 are primarily associated with the right foot movement blocks. However, the significantly smaller R2 values of these PCs compared to other PCs in Fig 6D indicate the lower similarity of the corresponding estimated inputs’ temporal profiles to those of the task regressors. Closer examination of these inputs’ spatiotemporal profiles reveals that, in addition to changes related to left-hand movements, these PCs capture the rapid sequence of inputs to frontal and somatomotor cortices following the motor task block’s offset and the baseline (i.e., no task) onset (S11 Fig in S1 File). Together, these results suggest that an LTI model of cortical dynamics can reveal the unknown spatiotemporal profiles of the BOLD signal’s external task-related drivers.

We provide additional analysis and discussion of model parameters and their effects on the reported results in the S1 File. We explored sparsity constraints on the system and input parameters in SI5. S12 Fig in S1 File demonstrates that increasing the system matrices’ sparsity reduces the model’s goodness-of-fit (measured using the AIC criterion). In the same vein, increased spatiotemporal sparsity of the inputs overall reduces the accuracy (measured using the R2 value of the linear regression) of the estimated inputs (S13 Fig in S1 File). Nevertheless, group-level PCA of the estimated inputs reveals that higher sparsity constraints can improve the accuracy of specific empirically identified input patterns (S14 Fig in S1 File). In addition, we examined the effect of the estimation window’s size on the inputs’ accuracy in SI6. These results show that a smaller estimation window (3 min) provides comparable results to the full-length window; moreover, overall it increases the accuracy of the mean inputs to many brain regions (S8 Fig in S1 File) and of several main input patterns (S15 Fig in S1 File). Finally, we explored the sensitivity of the identified input patterns to the factorization method in SI7. These results demonstrate that PCA decomposition of the model’s residuals reveals primary input patterns analogous to those uncovered by our spatiotemporal regularization scheme (S16 Fig in S1 File).

2.7.4 Non-stationarity of inputs to resting-state networks.

So far, we have shown that adopting a time-invariant model of the intrinsic relationship between large-scale brain regions allows us to extract the unknown external drivers of cortical dynamics. Our results demonstrate that the resting-state paradigm serves as a viable option for a more accurate estimation of internal system parameters. However, sensory and other extra-cortical inputs are still present during resting-state scans, resulting in errors in the estimated system parameters and inputs. Despite the estimation error in the external inputs’ profiles, we hypothesize that quantifying the non-stationarity of the estimated resting-state inputs provides information on the external factors that contribute to resting-state BOLD signal non-stationarities.

We quantify the estimated inputs’ non-stationarity for every brain region (i.e., B × U) from the temporal fluctuations (i.e., standard deviation) of the external inputs’ means, measured using a sliding window. Fig 8 shows brain regions that exhibit significantly high fluctuations of the input means across different sliding-window sizes (see Materials and Methods for details). We show the results for sliding windows of 6, 24, and 50 samples (TR = 0.72 sec) with half-window-length shifts. We also measure the non-stationarity of external inputs during resting-state scans using the nonlinear measure developed by [49] and find converging results (Fig 8B). We find that several brain regions within the DMN consistently display high non-stationarity values. Statistical comparisons of the quantified non-stationarity of estimated inputs to the brain regions identified in Fig 8 reveal significantly (Welch’s t-test, p < 0.05, Bonferroni corrected for multiple comparisons) higher non-stationarity of external inputs to the identified DMN regions relative to several other resting-state networks (S17 Fig in S1 File). Together, these results reveal that time-varying external inputs may partly contribute to the previously reported resting-state BOLD signal non-stationarity, and that the LTI model offers an avenue to determine the spatiotemporal profiles of these unknown external sources.

Fig 8. Non-stationarity of estimated external inputs over resting-state scans.

(A) Brain overlays on the top panels highlight regions with significantly (t−test, p < 0.05, FDR corrected for multiple comparisons) high normalized (z-scored over all brain regions) fluctuations (i.e., standard deviation) in the normalized (z-scored) estimated inputs’ means, measured using sliding windows (6-, 24-, and 50-sample window lengths, TR = 0.72 sec). (B) Brain overlays on the top panels highlight regions with significantly (t−test, p < 0.05, FDR corrected for multiple comparisons) high normalized (z-scored over all brain regions) values of the nonlinear non-stationarity index developed by [49], calculated from the normalized (z-scored) estimated inputs. The color-coded regions in the bottom plots in panels A and B highlight the allegiance of the brain regions in the top panels to the seven resting-state networks identified by [50].

https://doi.org/10.1371/journal.pone.0257580.g008

3 Discussion

Based on the theory of embodied cognition, the evolution and emergent function of the brain can be best understood in the context of the body and its interactions with the environment [51–54]. In this view, information does not exist in an abstract form outside the agent; instead, it is actively created through the agent’s physical interaction with the environment [54]. Therefore, understanding the native structure of the external inputs to the brain, as well as the interaction between the brain and its exogenous drivers, is germane to understanding the functional dynamics of the embodied brain [55].

What are the external drivers of the BOLD signal? Current theories suggest that cortical outputs reflect changes in the balance between strong recurrent local excitatory and inhibitory connectivity, rather than a feedforward integration of weak subcortical inputs [56]. Changes in this balance heavily affect the local metabolic energy demands and consequently the regulation of cerebral blood flow and the BOLD signal, regardless of the net excitatory or inhibitory output of the circuits [57]. Inhibition in principle can lead to both increases [57] and decreases [58–60] in metabolic demands [61]. Moreover, cortical afferents and microcircuits can function as drivers by transmitting information about the stimuli, or alternatively as modulators by modulating the sensitivity and context-specificity of the response [62–64]. Excitatory sensory information, transmitted mostly via glutamatergic or aspartergic drivers, combined with the strongly evoked recurrent GABAergic interneurons, constitutes a major part of neurotransmission dynamics, which in turn affect local cerebral blood flow (CBF) [57]. Likewise, regulation of cortical excitability mediated by neuromodulatory neurotransmitters, including acetylcholine [65], norepinephrine [66–68], serotonin [65], and dopamine [69, 70], can also significantly affect CBF and the BOLD signal.

What do input parameters of an LTI model capture in BOLD fMRI? We show that an LTI system acts predominantly as a high-pass filter and highlights the rapid transient fluctuations in the BOLD signal. We provide evidence that the influence of sensory inputs is identifiable in the estimated inputs to sensory cortices. More importantly, the task-related changes that are temporally decoupled from the sensory stimuli, such as the motor cortex’s activation following the offset of visual cues and onset of behavioral outputs, are also captured as external inputs to the LTI system.

Prior research has reported brain-wide and heterogeneous task-related changes in the BOLD signal power spectrum [10, 71, 72] and estimated system parameters [2, 73]. However, we provide evidence that the time-varying unknown exogenous (i.e., extra-cortical) inputs also likely contribute to non-stationarities in the cortical dynamics. Specifically, we demonstrate in silico that determining the LTI system’s parameters from periods with unknown stimuli can lead to high estimation errors in system and input parameters. We verify these observations empirically by showing that LTI system parameters identified from resting-state, instead of task BOLD time series, result in notably more accurate identification of unknown extra-cortical inputs’ spatiotemporal profiles in task scans. Our results have implications for the common interpretation of correlation-based functional connectivity changes as altered intrinsic relationships between regions. More importantly, our findings highlight the importance of modeling and interpreting the brain’s dynamic functional connectivity and non-stationarity as an open system.
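
The sensitivity of system identification to unmodeled drive can be reproduced with a toy simulation; everything below (the system size, drive profile, and noise level) is a synthetic placeholder rather than the study's actual in silico setup. A least-squares fit of A computed from a period containing a strong, unmodeled drive typically absorbs part of that drive and deviates from the true system.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 10, 2000

# A random, stable "true" system matrix (spectral radius scaled below 1).
A_true = rng.standard_normal((n, n))
A_true *= 0.9 / np.max(np.abs(np.linalg.eigvals(A_true)))

def simulate(A, T, drive=None, noise=0.05):
    """Simulate x[t+1] = A x[t] + u[t] + process noise."""
    X = np.zeros((A.shape[0], T))
    for t in range(T - 1):
        u = drive[:, t] if drive is not None else 0.0
        X[:, t + 1] = A @ X[:, t] + u + noise * rng.standard_normal(A.shape[0])
    return X

def ls_fit(X):
    """Ordinary least-squares fit of A from consecutive samples."""
    return X[:, 1:] @ np.linalg.pinv(X[:, :-1])

# "Quiescent" period: noise only.  "Stimulation" period: an unmodeled,
# structured drive delivered to a subset of nodes.
drive = np.zeros((n, T))
drive[:3, :] = np.sin(2 * np.pi * 0.05 * np.arange(T))
err_rest = np.linalg.norm(ls_fit(simulate(A_true, T)) - A_true)
err_task = np.linalg.norm(ls_fit(simulate(A_true, T, drive)) - A_true)
print(err_rest, err_task)   # the driven fit is typically the more biased one
```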

Can the brain during resting-state scans be fully described as a linear and time-invariant system? Prior studies demonstrate that temporal fluctuations in the BOLD signal (< 0.1 Hz) cannot be fully attributed to linear stochastic processes [74–76], and suggest that the nonlinearities in the BOLD signal could be attributed to the presence of a strange attractor [75]. Additionally, other neuroimaging studies using paradigms such as “temporal summation” have more directly probed the system and provide evidence of system nonlinearities [77–80].

Model-based approaches such as the work by [80, 81] have concluded that nonlinear transduction of rCBF to BOLD is sufficient to account for the nonlinear behaviors observed in the BOLD signal. However, care should be taken when interpreting these results, since in the temporal summation framework the profile of the input is assumed to be known and is approximated by an abstract stimulus representation. We believe our framework provides a novel avenue for testing the system's linearity through examination of the estimated unknown inputs in summation paradigms. Specifically, the delay between estimated and known external inputs can be further leveraged to tease apart the nonlinear components of the hemodynamic response function (e.g., vascular) from the neural impulse response function.

Stationary signals are characterized by time-invariant statistical properties, such as mean and variance [82]. To date, several tests have been proposed to examine the non-stationarity of BOLD time series and the presence of dynamic functional connectivity, including test statistics based on the variance of the FC time series [83, 84], the FC time series' Fourier transform [85], multivariate kurtosis of time series [27, 28], non-linear test statistics [49], and wavelet-based methods [29, 31], among others [32]. These methods commonly compare measured properties between the empirical time series and suitable surrogate or null time series that are designed, through non-parametric resampling [86, 87], phase-randomization [85, 88], or generative models [31, 49], to lack time-varying properties. The choice of measured properties and null models profoundly impacts the outcome of stationarity tests and has led to conflicting reports on the BOLD signal [27–30].
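
As one concrete example of such a null model, phase randomization produces surrogates that preserve a signal's power spectrum, and hence its stationary linear structure, while destroying any time-varying organization; a test statistic computed on the data can then be compared against its distribution over many surrogates. The sketch below is illustrative and is not a re-implementation of any specific published test.

```python
import numpy as np

def phase_randomized_surrogate(x, rng=np.random.default_rng()):
    """Surrogate with the same power spectrum as x but randomized phases.

    Such surrogates realize a stationary, linear, Gaussian null: a statistic
    computed on the data can be compared against its surrogate distribution.
    """
    Xf = np.fft.rfft(x - x.mean())
    phases = rng.uniform(0, 2 * np.pi, Xf.size)
    phases[0] = 0.0                          # keep the DC component real
    if x.size % 2 == 0:
        phases[-1] = 0.0                     # keep the Nyquist bin real
    surrogate = np.fft.irfft(np.abs(Xf) * np.exp(1j * phases), n=x.size)
    return surrogate + x.mean()
```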

Notably, the presence of non-stationarity in the outputs does not directly imply non-stationarity of the underlying system. The outputs of an LTI system receiving non-stationary external inputs, for instance, can also display time-varying properties. As mentioned earlier, using the internal system parameters of an LTI system estimated over resting-state scans enables more accurate identification of the exogenous inputs' spatiotemporal profiles during task scans. These results suggest that a large-scale stationary model of the brain with time-varying external inputs can, in theory, account for a large portion of the observed task-related changes in cortical dynamics. It is worth noting that any possible task-related changes in the underlying system parameters are also captured as external inputs in an LTI framework. Therefore, from the system-identification and model-fitting perspective, it is likely that a linear switching system with a higher number of degrees of freedom would improve the fit. Beyond the goodness-of-fit of the model, however, care should be taken in interpreting the parameters of such epiphenomenal large-scale models, and their changes, at the micro-scale biophysical level.
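
The first point is easy to verify numerically: a system with fixed parameters, driven by an input whose variance drifts over time, produces an output whose windowed variance drifts as well. The scalar toy example below is purely illustrative; all values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
T, a, win = 4000, 0.8, 200          # a is a fixed (time-invariant) parameter

# An input whose variance drifts slowly over the scan: a non-stationary drive.
envelope = 1.0 + 0.8 * np.sin(2 * np.pi * np.arange(T) / T)
u = envelope * rng.standard_normal(T)

x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = a * x[t] + u[t]      # identical dynamics at every time step

# The windowed output variance tracks the input envelope even though the
# system itself never changes.
var_x = np.array([x[s:s + win].var() for s in range(0, T - win + 1, win)])
```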

However, the impetus for this work is to highlight the estimates' notable sensitivity to unknown, and thus unaccounted-for, external inputs. More practically, when the system is driven by wideband unknown external inputs, our results suggest that an open LTI model estimated during resting-state allows us to uncover the influence of these unknown drivers of BOLD dynamics. Nevertheless, participants' cortices receive external stimulation even during resting-state scans, contributing to estimation inaccuracy and to the non-stationarity of the system's outputs. In this work, we aim to disentangle the non-stationarity of the system from that of its outputs over resting-state scans by examining the estimated inputs' non-stationarity. Our results show that the external inputs' non-stationarity over resting-state scans is spatially inhomogeneous, with the identified DMN regions consistently showing the highest levels across different analyses. These observations are in line with prior reports of higher dynamic functional connectivity of these brain structures over rest [49]. The identified non-stationary inputs during resting-state scans also imply that we should expect more error in the estimated spectral profiles of the aforementioned regions. Therefore, future work should explore leveraging other states of consciousness, such as sleep with lower global cortical activity, to address this limitation. Despite the presence of possible confounding factors such as unaccounted-for nonlinearities and non-stationarities in the recording noise [89, 90], our framework and observations provide new insight into the external drivers of cortical dynamics and the factors that contribute to their non-stationarity. Recent system-identification [91] and control-theoretic [92] work has also demonstrated the utility of a stationary system in explaining BOLD dynamics. Together, these findings pave the way for principled model-based control of pathological brain dynamics, such as those seen in depression and schizophrenia, using open-loop external or closed-loop neurofeedback stimulation.

Historically, a narrow band of slow frequencies between 0.01 and 0.1 Hz was thought to contain the information relevant to underlying neural activity, and higher-frequency (> 0.1 Hz) BOLD activity was considered mainly an artifact [9, 93]. Our results also demonstrate that the primary oscillatory modes of the LTI model of the resting-state BOLD signal display similarly slow frequencies, distributed heterogeneously over the brain. In addition, the hemodynamic response function (HRF) is expected to significantly dampen higher-frequency neural activity. More recent evidence, however, portrays a broadband picture of BOLD signal fluctuations with frequencies up to 0.25 Hz [10, 13, 94, 95] and even higher [14]. We also provide converging evidence that, despite the expected low-pass filtering by the HRF, information about stimulus-related activity can still be extracted from the BOLD signal at frequencies as high as ≈ 0.4 Hz. Future work can leverage acquisition protocols with higher sampling rates than the HCP, together with rapid stimuli capable of inducing brain-wide activations, to accurately delineate the inputs' attenuation profile by the HRF at higher frequencies. In line with previous reports of intrinsic functional connectivity networks, our clustering analysis reveals the low dimensionality of the system eigenmodes, as the eigenvectors can be roughly grouped into a small number of spatial patterns. However, we show that, depending on the task, the dimensionality of the inputs can be high; for example, in the motor task with multiple conditions, we identified task-specific inputs to different ROIs across the motor cortices. Future work should use our proposed framework to identify an upper bound on input dimensionality using higher-resolution parcellations or voxel-wise modeling of the BOLD signal.
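
Under the discrete-time LTI assumption used in the sketches above, the oscillation frequency associated with each eigenmode of an identified system matrix can be read directly from its eigenvalues; the helper below and its name are our own illustrative choices, with the 0.72 s TR taken from the HCP acquisition described in the text.

```python
import numpy as np

def eigenmode_frequencies(A, tr=0.72):
    """Oscillation frequency (Hz) and per-step decay of each mode of A.

    For a discrete-time eigenvalue lambda = r * exp(i * theta), the mode
    oscillates at theta / (2 * pi * TR) Hz and shrinks by a factor r per TR.
    """
    eigvals = np.linalg.eigvals(A)
    freqs = np.abs(np.angle(eigvals)) / (2.0 * np.pi * tr)
    decay = np.abs(eigvals)
    return freqs, decay

# Hypothetical usage with a placeholder system matrix.
A_example = 0.9 * np.eye(4) + 0.05 * np.random.default_rng(3).standard_normal((4, 4))
freqs, decay = eigenmode_frequencies(A_example)
```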

However, the HRF plays another critical role in biophysical models, where it enables the approximation of latent neural states from the BOLD signal. This is one of the main limitations of our simplified model, as it incorrectly assumes that the BOLD signal in one region (rather than the underlying neural activity) can cause changes in the BOLD signal in connected regions. For spatially inhomogeneous HRFs, this assumption can, in theory, lead to incorrect identification of the external inputs' foci and to errors in the direction and speed of the interactions within functional networks. We believe the overlap between the input patterns and the task activation maps identified using the conventional univariate general linear model analysis suggests that the above-mentioned error is likely tolerable. To improve the accuracy of the estimated unknown inputs, future work should leverage formulated quantitative spatiotemporal models [81, 96–98], or the more recent models informed by the precise mechanisms of neurovascular coupling [99]. Nevertheless, care should be taken in these and other related deconvolution-based inferences [100–102], since, as mentioned earlier, they rely on the assumption of a known profile of the HRF or of the inputs. Future work can also leverage neural adaptation paradigms to influence the neural response timing and help tease apart the neural and vascular components' contributions to the modeled inputs. Comparing our identified inputs with those extracted from other neuroimaging modalities, such as magnetoencephalography (MEG), which provide more direct measurements of the underlying neural activity, will also allow us to further decouple the aforementioned mechanisms.

Structured recording noise, such as autocorrelated noise, can negatively impact the modeled system [89, 90] and the estimated inputs. Although we included global mean signal regression (GSR) [43] as a preprocessing step to account for the shared global noise present in many functional networks [103–105], our model is unable to account for other unknown structured (e.g., autocorrelated) and time-varying recording noise [89, 90]. Moreover, GSR may also introduce artifacts: in addition to the shared noise, it removes any global activation patterns (e.g., vigilance [106] or arousal [107]) and can alter the correlation structure. These limitations are the source of ongoing controversy around this noise-reduction method [108]. Having weighed the potential drawbacks of GSR against the major concerns regarding significant global artifacts such as cardiac and respiratory noise, we adopted this preprocessing step. Nevertheless, it would be beneficial to investigate the spectral profile of the global signal and the impact of GSR on the spectral characteristics of the estimated system and inputs.
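
For completeness, a minimal region-level sketch of the GSR step is given below; whether the regression is applied at the voxel or parcel level, and which additional nuisance regressors accompany it, varies across pipelines, so this is a schematic rather than the exact preprocessing used here.

```python
import numpy as np

def global_signal_regression(X):
    """Regress the global mean signal out of every regional time series.

    X : regions x time array.  Each region's time series is replaced by the
    residual after removing the best linear fit of the global mean signal
    (plus an intercept).
    """
    g = X.mean(axis=0)                                     # global signal
    design = np.column_stack([np.ones_like(g), g])         # time x 2
    beta, *_ = np.linalg.lstsq(design, X.T, rcond=None)    # 2 x regions
    return (X.T - design @ beta).T
```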

One of the current limitations of our proposed framework is that the accuracy of the estimated inputs depends on the internal and recording noise levels. We show that group-level analysis and repeated-measurement designs are effective strategies for increasing the signal-to-recording-noise ratio and, hence, the accuracy of the estimated inputs. In addition, although we cannot accurately tease apart the contribution of internal noise from other sources of noise, our simulations and experimental results suggest lower levels of internal noise relative to external drivers in task fMRI. We draw this conclusion from the relatively large input estimation errors associated with system parameters identified during external stimulation.

We used individual subjects’ resting-state datasets to identify the system parameters for uncovering the unknown inputs from the BOLD signal. Although beyond the scope of our current work, it is critical to comprehensively examine the identifiability of the estimated system parameters across different scanning sessions, types, and individuals. Subsequently, it would be interesting to explore further the extent to which our proposed system identification framework can highlight the shared features across subjects or increase the accuracy of the subject-specific mapping of spatiotemporal dynamics.

Finally, it is worth highlighting that model-based data-driven methods, such as our proposed framework, and hypothesis-driven methods, such as DCM [1], are complementary approaches suited to interrogating different aspects of system and output dynamics. For instance, DCM can also be leveraged fruitfully for a more accurate estimation of the system and, consequently, of the external input parameters, using highly controlled experimental designs with known external input profiles. However, as mentioned before, care should be taken in interpreting the results produced by methods that incorporate priors, because the boxcar regressors commonly used to model the profile of external inputs are merely abstractions and do not account for other possible factors, such as anticipatory responses, adaptation, or other unknown drivers, that shape the profile of external inputs. Data-driven approaches, in contrast, are particularly advantageous when the brain is driven by extensive complex inputs, for instance during naturalistic stimuli (e.g., watching a movie), or more generally when we lack a priori information or hypotheses about the structure of external inputs, as during the healthy resting-state or pathological brain activity such as epileptic discharges [109].

4 Conclusion

We show that the proposed framework provides an avenue to uncover the structure of the unknown drivers of BOLD signal fluctuations and shines a light on the factors that contribute to its apparent non-stationarities. More significantly, our results highlight the importance of modeling and interpreting the brain's dynamic functional connectivity as an open system. Broadly, our approach provides a framework for understanding the brain's large-scale functional dynamics and non-stationarities mechanistically, via the modeled system and its time-varying drivers.

Supporting information

S1 Table. List of de-identified HCP subjects used in this study.

https://doi.org/10.1371/journal.pone.0257580.s002

(XLSX)

References

  1. Friston KJ, Harrison L, Penny W. Dynamic causal modelling. Neuroimage. 2003;19(4):1273–1302. pmid:12948688
  2. Deco G, Ponce-Alvarez A, Mantini D, Romani GL, Hagmann P, Corbetta M. Resting-state functional connectivity emerges from structurally and dynamically shaped slow linear fluctuations. Journal of Neuroscience. 2013;33(27):11239–11252. pmid:23825427
  3. Ashourvan A, Gu S, Mattar MG, Vettel JM, Bassett DS. The energy landscape underpinning module dynamics in the human brain connectome. Neuroimage. 2017;157:364–380. pmid:28602945
  4. Logothetis NK, Pauls J, Augath M, Trinath T, Oeltermann A. Neurophysiological investigation of the basis of the fMRI signal. Nature. 2001;412(6843):150. pmid:11449264
  5. Mateo C, Knutsen PM, Tsai PS, Shih AY, Kleinfeld D. Entrainment of arteriole vasomotor fluctuations by neural activity is a basis of blood-oxygenation-level-dependent ‘resting-state’ connectivity. Neuron. 2017;96(4):936–948. pmid:29107517
  6. Winder AT, Echagarruga C, Zhang Q, Drew PJ. Weak correlations between hemodynamic signals and ongoing neural activity during the resting state. Nature neuroscience. 2017;20(12):1761. pmid:29184204
  7. Damoiseaux J, Rombouts S, Barkhof F, Scheltens P, Stam C, Smith SM, et al. Consistent resting-state networks across healthy subjects. Proceedings of the national academy of sciences. 2006;103(37):13848–13853. pmid:16945915
  8. Fransson P. Spontaneous low-frequency BOLD signal fluctuations: An fMRI investigation of the resting-state default mode of brain function hypothesis. Human brain mapping. 2005;26(1):15–29. pmid:15852468
  9. Cordes D, Haughton VM, Arfanakis K, Carew JD, Turski PA, Moritz CH, et al. Frequencies contributing to functional connectivity in the cerebral cortex in ‘resting-state’ data. American Journal of Neuroradiology. 2001;22(7):1326–1333.
  10. Duff EP, Johnston LA, Xiong J, Fox PT, Mareels I, Egan GF. The power of spectral density analysis for mapping endogenous BOLD signal fluctuations. Human brain mapping. 2008;29(7):778–790. pmid:18454458
  11. Shmueli K, van Gelderen P, de Zwart JA, Horovitz SG, Fukunaga M, Jansma JM, et al. Low-frequency fluctuations in the cardiac rate as a source of variance in the resting-state fMRI BOLD signal. Neuroimage. 2007;38(2):306–320. pmid:17869543
  12. Birn RM, Diamond JB, Smith MA, Bandettini PA. Separating respiratory-variation-related fluctuations from neuronal-activity-related fluctuations in fMRI. Neuroimage. 2006;31(4):1536–1548. pmid:16632379
  13. Chen JE, Glover GH. BOLD fractional contribution to resting-state functional connectivity above 0.1 Hz. Neuroimage. 2015;107:207–218. pmid:25497686
  14. Gohel SR, Biswal BB. Functional integration between brain regions at rest occurs in multiple-frequency bands. Brain connectivity. 2015;5(1):23–34. pmid:24702246
  15. Bassett DS, Zurn P, Gold JI. On the nature and use of models in network neuroscience. Nature Reviews Neuroscience. 2018; p. 1. pmid:30002509
  16. He X, De Roeck G. System identification of mechanical structures by a high-order multivariate autoregressive model. Computers & structures. 1997;64(1-4):341–351.
  17. Papakos V, Fassois S. Multichannel identification of aircraft skeleton structures under unobservable excitation: a vector AR/ARMA framework. Mechanical Systems and Signal Processing. 2003;17(6):1271–1290.
  18. Liberati D, Cerutti S, Di Ponzio E, Ventimiglia V, Zaninelli L. The implementation of an autoregressive model with exogenous input in a single sweep visual evoked potential analysis. Journal of biomedical engineering. 1989;11(4):285–292. pmid:2666748
  19. Chang JY, Pigorini A, Massimini M, Tononi G, Nobili L, Van Veen BD. Multivariate autoregressive models with exogenous inputs for intracerebral responses to direct electrical stimulation of the human brain. Frontiers in human neuroscience. 2012;6:317. pmid:23226122
  20. Harrison L, Penny WD, Friston K. Multivariate autoregressive modeling of fMRI time series. Neuroimage. 2003;19(4):1477–1491. pmid:12948704
  21. Goebel R, Roebroeck A, Kim DS, Formisano E. Investigating directed cortical interactions in time-resolved fMRI data using vector autoregressive modeling and Granger causality mapping. Magnetic resonance imaging. 2003;21(10):1251–1261. pmid:14725933
  22. Stephan KE, Roebroeck A. A short history of causal modeling of fMRI data. Neuroimage. 2012;62(2):856–863. pmid:22248576
  23. Razi A, Friston KJ. The connected brain: causality, models, and intrinsic dynamics. IEEE Signal Processing Magazine. 2016;33(3):14–35.
  24. Reinsel GC. Elements of multivariate time series analysis. Springer Science & Business Media; 2003.
  25. Roebroeck A, Formisano E, Goebel R. The identification of interacting networks in the brain using fMRI: model selection, causality and deconvolution. Neuroimage. 2011;58(2):296–302. pmid:19786106
  26. Gupta G, Pequito S, Bogdan P. Dealing with Unknown Unknowns: Identification and Selection of Minimal Sensing for Fractional Dynamics with Unknown Inputs. In: Proceedings of the 2018 American Control Conference; 2018.
  27. Laumann TO, Snyder AZ, Mitra A, Gordon EM, Gratton C, Adeyemo B, et al. On the stability of BOLD fMRI correlations. Cerebral cortex. 2016;27(10):4719–4732.
  28. Liegeois R, Laumann TO, Snyder AZ, Zhou J, Yeo BT. Interpreting temporal fluctuations in resting-state functional connectivity MRI. Neuroimage. 2017. pmid:28916180
  29. Miller RL, Abrol A, Adali T, Levin-Schwarz Y, Calhoun VD. Resting-state fMRI dynamics and null models: Perspectives, sampling variability, and simulations. Frontiers in neuroscience. 2018;12. pmid:30237758
  30. Lurie DJ, Kessler D, Bassett DS, Betzel RF, Breakspear M, Kheilholz S, et al. Questions and controversies in the study of time-varying functional connectivity in resting fMRI. Network Neuroscience. 2020;4(1):30–69. pmid:32043043
  31. Chang C, Glover GH. Time–frequency dynamics of resting-state brain connectivity measured with fMRI. Neuroimage. 2010;50(1):81–98. pmid:20006716
  32. Muhei-aldin O, VanSwearingen J, Karim H, Huppert T, Sparto PJ, Erickson KI, et al. An investigation of fMRI time series stationarity during motor sequence learning foot tapping tasks. Journal of neuroscience methods. 2014;227:75–82. pmid:24530436
  33. Ljung L. System identification. Wiley Encyclopedia of Electrical and Electronics Engineering. 1999; p. 1–19.
  34. Akaike H. Information theory and an extension of the maximum likelihood principle. In: Selected Papers of Hirotugu Akaike. Springer; 1998. p. 199–213.
  35. Van Essen DC, Smith SM, Barch DM, Behrens TE, Yacoub E, Ugurbil K, et al. The WU-Minn human connectome project: an overview. Neuroimage. 2013;80:62–79. pmid:23684880
  36. Barch DM, Burgess GC, Harms MP, Petersen SE, Schlaggar BL, Corbetta M, et al. Function in the human connectome: task-fMRI and individual differences in behavior. Neuroimage. 2013;80:169–189. pmid:23684877
  37. Power JD, Schlaggar BL, Petersen SE. Recent progress and outstanding issues in motion correction in resting state fMRI. Neuroimage. 2015;105:536–551. pmid:25462692
  38. Schaefer A, Kong R, Gordon EM, Laumann TO, Zuo XN, Holmes AJ, et al. Local-global parcellation of the human cerebral cortex from intrinsic functional connectivity MRI. Cerebral Cortex. 2017;28(9):3095–3114.
  39. Salimi-Khorshidi G, Douaud G, Beckmann CF, Glasser MF, Griffanti L, Smith SM. Automatic denoising of functional MRI data: combining independent component analysis and hierarchical fusion of classifiers. Neuroimage. 2014;90:449–468. pmid:24389422
  40. Griffanti L, Salimi-Khorshidi G, Beckmann CF, Auerbach EJ, Douaud G, Sexton CE, et al. ICA-based artefact removal and accelerated fMRI acquisition for improved resting state network imaging. Neuroimage. 2014;95:232–247. pmid:24657355
  41. Glasser MF, Sotiropoulos SN, Wilson JA, Coalson TS, Fischl B, Andersson JL, et al. The minimal preprocessing pipelines for the Human Connectome Project. Neuroimage. 2013;80:105–124. pmid:23668970
  42. Behzadi Y, Restom K, Liau J, Liu TT. A component based noise correction method (CompCor) for BOLD and perfusion based fMRI. Neuroimage. 2007;37(1):90–101. pmid:17560126
  43. Desjardins AE, Kiehl KA, Liddle PF. Removal of confounding effects of global signal in functional MRI analyses. Neuroimage. 2001;13(4):751–758. pmid:11305902
  44. Welch BL. The generalization of ‘Student’s’ problem when several different population variances are involved. Biometrika. 1947;34(1/2):28–35. pmid:20287819
  45. Wilcoxon F. Individual comparisons by ranking methods. Biometrics bulletin. 1945;1(6):80–83.
  46. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal statistical society: series B (Methodological). 1995;57(1):289–300.
  47. Holm S. A simple sequentially rejective multiple test procedure. Scandinavian journal of statistics. 1979; p. 65–70.
  48. Liégeois R, Yeo BT, Van De Ville D. Interpreting null models of resting-state functional MRI dynamics: not throwing the model out with the hypothesis. NeuroImage. 2021;243:118518. pmid:34469853
  49. Zalesky A, Fornito A, Cocchi L, Gollo LL, Breakspear M. Time-resolved resting-state brain networks. Proceedings of the National Academy of Sciences. 2014;111(28):10341–10346. pmid:24982140
  50. Thomas Yeo B, Krienen FM, Sepulcre J, Sabuncu MR, Lashkari D, Hollinshead M, et al. The organization of the human cerebral cortex estimated by intrinsic functional connectivity. Journal of neurophysiology. 2011;106(3):1125–1165.
  51. Thompson E, Varela FJ. Radical embodiment: neural dynamics and consciousness. Trends in cognitive sciences. 2001;5(10):418–425. pmid:11707380
  52. Chiel HJ, Beer RD. The brain has a body: adaptive behavior emerges from interactions of nervous system, body and environment. Trends in neurosciences. 1997;20(12):553–557. pmid:9416664
  53. Pfeifer R, Bongard J. How the body shapes the way we think: a new view of intelligence. MIT press; 2006.
  54. Calvo P, Gomila T. Handbook of cognitive science: An embodied approach. Elsevier; 2008.
  55. Sporns O. Networks of the Brain. MIT press; 2010.
  56. Douglas RJ, Martin KA. Neuronal circuits of the neocortex. Annu Rev Neurosci. 2004;27:419–451. pmid:15217339
  57. Logothetis NK. What we can do and what we cannot do with fMRI. Nature. 2008;453(7197):869. pmid:18548064
  58. Stefanovic B, Warnking JM, Pike GB. Hemodynamic and metabolic responses to neuronal inhibition. Neuroimage. 2004;22(2):771–778. pmid:15193606
  59. Shmuel A, Yacoub E, Pfeuffer J, Van de Moortele PF, Adriany G, Hu X, et al. Sustained negative BOLD, blood flow and oxygen consumption response and its coupling to the positive response in the human brain. Neuron. 2002;36(6):1195–1210. pmid:12495632
  60. Shmuel A, Augath M, Oeltermann A, Logothetis NK. Negative functional MRI response correlates with decreases in neuronal activity in monkey visual area V1. Nature neuroscience. 2006;9(4):569. pmid:16547508
  61. Buzsáki G, Kaila K, Raichle M. Inhibition and brain work. Neuron. 2007;56(5):771–783. pmid:18054855
  62. Chance FS, Abbott LF, Reyes AD. Gain modulation from background synaptic input. Neuron. 2002;35(4):773–782. pmid:12194875
  63. Crick F, Koch C. Constraints on cortical and thalamic projections: the no-strong-loops hypothesis. Nature. 1998;391(6664):245. pmid:9440687
  64. Hasselmo ME. Neuromodulation and cortical function: modeling the physiological basis of behavior. Behavioural brain research. 1995;67(1):1–27. pmid:7748496
  65. Klaassens BL, Rombouts SA, Winkler AM, van Gorsel HC, van der Grond J, van Gerven JM. Time related effects on functional brain connectivity after serotonergic and cholinergic neuromodulation. Human brain mapping. 2017;38(1):308–325. pmid:27622387
  66. Shine JM, Aburn MJ, Breakspear M, Poldrack RA. The modulation of neural gain facilitates a transition between functional segregation and integration in the brain. Elife. 2018;7:e31130. pmid:29376825
  67. Hermans EJ, Van Marle HJ, Ossewaarde L, Henckens MJ, Qin S, Van Kesteren MT, et al. Stress-related noradrenergic activity prompts large-scale neural network reconfiguration. Science. 2011;334(6059):1151–1153. pmid:22116887
  68. van den Brink RL, Pfeffer T, Warren CM, Murphy PR, Tona KD, van der Wee NJ, et al. Catecholaminergic neuromodulation shapes intrinsic MRI functional connectivity in the human brain. Journal of Neuroscience. 2016;36(30):7865–7876. pmid:27466332
  69. Alavash M, Lim SJ, Thiel C, Sehm B, Deserno L, Obleser J. Dopaminergic modulation of hemodynamic signal variability and the functional connectome during cognitive performance. Neuroimage. 2018;172:341–356. pmid:29410219
  70. Shafiei G, Zeighami Y, Clark CA, Coull JT, Nagano-Saito A, Leyton M, et al. Dopamine signaling modulates the stability and integration of intrinsic brain networks. Cerebral Cortex. 2018;29(1):397–409.
  71. Fransson P. How default is the default mode of brain function?: Further evidence from intrinsic BOLD signal fluctuations. Neuropsychologia. 2006;44(14):2836–2845. pmid:16879844
  72. Yang H, Long XY, Yang Y, Yan H, Zhu CZ, Zhou XP, et al. Amplitude of low frequency fluctuation within visual areas revealed by resting-state functional MRI. Neuroimage. 2007;36(1):144–152. pmid:17434757
  73. Casorso J, Kong X, Chi W, Van De Ville D, Yeo BT, Liegeois R. Dynamic mode decomposition of resting-state and task fMRI. Neuroimage. 2019;194:42–54. pmid:30904469
  74. Xie X, Cao Z, Weng X. Spatiotemporal nonlinearity in resting-state fMRI of the human brain. Neuroimage. 2008;40(4):1672–1685. pmid:18316208
  75. Gautama T, Mandic DP, Van Hulle MM. Signal nonlinearity in fMRI: a comparison between BOLD and MION. IEEE transactions on medical imaging. 2003;22(5):636–644. pmid:12846432
  76. Minati L, Chiesa P, Tabarelli D, D’Incerti L, Jovicich J. Synchronization, non-linear dynamics and low-frequency fluctuations: analogy between spontaneous brain activity and networked single-transistor chaotic oscillators. Chaos: An Interdisciplinary Journal of Nonlinear Science. 2015;25(3):033107. pmid:25833429
  77. Vazquez AL, Noll DC. Nonlinear aspects of the BOLD response in functional MRI. Neuroimage. 1998;7(2):108–118. pmid:9558643
  78. Miller KL, Luh WM, Liu TT, Martinez A, Obata T, Wong EC, et al. Nonlinear temporal dynamics of the cerebral blood flow response. Human brain mapping. 2001;13(1):1–12. pmid:11284042
  79. Boynton GM, Engel SA, Glover GH, Heeger DJ. Linear systems analysis of functional magnetic resonance imaging in human V1. Journal of Neuroscience. 1996;16(13):4207–4221. pmid:8753882
  80. Mechelli A, Price CJ, Friston KJ. Nonlinear coupling between evoked rCBF and BOLD signals: a simulation study of hemodynamic responses. NeuroImage. 2001;14(4):862–872. pmid:11554805
  81. Friston KJ, Mechelli A, Turner R, Price CJ. Nonlinear responses in fMRI: the Balloon model, Volterra kernels, and other hemodynamics. NeuroImage. 2000;12(4):466–477. pmid:10988040
  82. Chatfield C. The analysis of time series: an introduction. Chapman and Hall/CRC; 2016.
  83. Hindriks R, Adhikari MH, Murayama Y, Ganzetti M, Mantini D, Logothetis NK, et al. Can sliding-window correlations reveal dynamic functional connectivity in resting-state fMRI? Neuroimage. 2016;127:242–256. pmid:26631813
  84. Sakoğlu Ü, Pearlson GD, Kiehl KA, Wang YM, Michael AM, Calhoun VD. A method for evaluating dynamic functional network connectivity and task-modulation: application to schizophrenia. Magnetic Resonance Materials in Physics, Biology and Medicine. 2010;23(5-6):351–366.
  85. Handwerker DA, Roopchansingh V, Gonzalez-Castillo J, Bandettini PA. Periodic changes in fMRI connectivity. Neuroimage. 2012;63(3):1712–1719. pmid:22796990
  86. Prichard D, Theiler J. Generating surrogate data for time series with several simultaneously measured variables. Physical review letters. 1994;73(7):951. pmid:10057582
  87. Breakspear M, Brammer MJ, Bullmore ET, Das P, Williams LM. Spatiotemporal wavelet resampling for functional neuroimaging data. Human brain mapping. 2004;23(1):1–25. pmid:15281138
  88. Leonardi N, Richiardi J, Gschwind M, Simioni S, Annoni JM, Schluep M, et al. Principal components of functional connectivity: a new approach to study dynamic brain connectivity during rest. NeuroImage. 2013;83:937–950. pmid:23872496
  89. Diedrichsen J, Shadmehr R. Detecting and adjusting for artifacts in fMRI time series data. Neuroimage. 2005;27(3):624–634. pmid:15975828
  90. Oikonomou VP, Tripoliti EE, Fotiadis DI. Bayesian methods for fMRI time-series analysis using a nonstationary model for the noise. IEEE Transactions on Information Technology in Biomedicine. 2010;14(3):664–674. pmid:20123577
  91. Becker CO, Bassett DS, Preciado VM. Large-scale dynamic modeling of task-fMRI signals via subspace system identification. Journal of neural engineering. 2018;15(6):066016. pmid:30088476
  92. Cornblath EJ, Ashourvan A, Kim JZ, Betzel RF, Ciric R, Adebimpe A, et al. Temporal sequences of brain activity at rest are constrained by white matter structure and modulated by cognitive demands. Communications biology. 2020;3(1):1–12. pmid:32444827
  93. Fox MD, Raichle ME. Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging. Nature reviews neuroscience. 2007;8(9):700. pmid:17704812
  94. Niazy RK, Xie J, Miller K, Beckmann CF, Smith SM. Spectral characteristics of resting state networks. In: Progress in brain research. vol. 193. Elsevier; 2011. p. 259–276.
  95. Boubela RN, Kalcher K, Huf W, Kronnerwetter C, Filzmoser P, Moser E. Beyond noise: using temporal ICA to extract meaningful information from high-frequency fMRI signal fluctuations during rest. Frontiers in human neuroscience. 2013;7:168. pmid:23641208
  96. Buxton RB, Wong EC, Frank LR. Dynamics of blood flow and oxygenation changes during brain activation: the balloon model. Magnetic resonance in medicine. 1998;39(6):855–864. pmid:9621908
  97. Robinson PA, Drysdale PM, Van der Merwe H, Kyriakou E, Rigozzi M, Germanoska B, et al. BOLD responses to stimuli: dependence on frequency, stimulus form, amplitude, and repetition rate. Neuroimage. 2006;31(2):585–599. pmid:16466935
  98. Drysdale P, Huber J, Robinson P, Aquino K. Spatiotemporal BOLD dynamics from a poroelastic hemodynamic model. Journal of theoretical biology. 2010;265(4):524–534. pmid:20665966
  99. Pang JC, Robinson PA, Aquino KM, Vasan N. Effects of astrocytic dynamics on spatiotemporal hemodynamics: Modeling and enhanced data analysis. Neuroimage. 2017;147:994–1005. pmid:27751942
  100. Glover GH. Deconvolution of impulse response in event-related BOLD fMRI1. Neuroimage. 1999;9(4):416–429. pmid:10191170
  101. Gitelman DR, Penny WD, Ashburner J, Friston KJ. Modeling regional and psychophysiologic interactions in fMRI: the importance of hemodynamic deconvolution. Neuroimage. 2003;19(1):200–207. pmid:12781739
  102. Karahanoğlu FI, Caballero-Gaudes C, Lazeyras F, Van De Ville D. Total activation: fMRI deconvolution through spatio-temporal regularization. Neuroimage. 2013;73:121–134. pmid:23384519
  103. Murphy K, Birn RM, Handwerker DA, Jones TB, Bandettini PA. The impact of global signal regression on resting state correlations: are anti-correlated networks introduced? Neuroimage. 2009;44(3):893–905. pmid:18976716
  104. Fox MD, Zhang D, Snyder AZ, Raichle ME. The global signal and observed anticorrelated resting state brain networks. Journal of neurophysiology. 2009;101(6):3270–3283. pmid:19339462
  105. Van Dijk KR, Hedden T, Venkataraman A, Evans KC, Lazar SW, Buckner RL. Intrinsic functional connectivity as a tool for human connectomics: theory, properties, and optimization. Journal of neurophysiology. 2009;103(1):297–321. pmid:19889849
  106. Wong CW, Olafsson V, Tal O, Liu TT. The amplitude of the resting-state fMRI global signal is related to EEG vigilance measures. Neuroimage. 2013;83:983–990. pmid:23899724
  107. Chang C, Leopold DA, Schölvinck ML, Mandelkow H, Picchioni D, Liu X, et al. Tracking brain arousal fluctuations with fMRI. Proceedings of the National Academy of Sciences. 2016;113(16):4518–4523. pmid:27051064
  108. Liu TT, Nalci A, Falahpour M. The global signal in fMRI: Nuisance or Information? Neuroimage. 2017;150:213–229. pmid:28213118
  109. Karahanoglu FI, Grouiller F, Gaudes CC, Seeck M, Vulliemoz S, Van De Ville D. Spatial mapping of interictal epileptic discharges in fMRI with total activation. Proceedings—International Symposium on Biomedical Imaging. 2013; p. 1500–1503.