
Lag-invariant detection of interactions in spatially-extended systems using linear inverse modeling

Abstract

Measurements on physical systems result from the systems’ activity being converted into sensor measurements by a forward model. In a number of cases, inversion of the forward model is extremely sensitive to perturbations such as sensor noise or numerical errors in the forward model. Regularization is then required, which introduces bias in the reconstruction of the systems’ activity. One domain in which this is particularly problematic is the reconstruction of interactions in spatially-extended complex systems such as the human brain. Brain interactions can be reconstructed from non-invasive measurements such as electroencephalography (EEG) or magnetoencephalography (MEG), whose forward models are linear and instantaneous, but have large null-spaces and high condition numbers. This leads to incomplete unmixing of the forward models and hence to spurious interactions. This motivated the development of interaction measures that are exclusively sensitive to lagged, i.e. delayed interactions. The drawback of such measures is that they only detect interactions that have sufficiently large lags and this introduces bias in reconstructed brain networks. We introduce three estimators for linear interactions in spatially-extended systems that are uniformly sensitive to all lags. We derive some basic properties of and relationships between the estimators and evaluate their performance using numerical simulations from a simple benchmark model.

Introduction

Global patterns in complex systems emerge through interactions between large numbers of units. One way of characterizing the behavior of such systems is by applying network theory to the matrix of pair-wise interactions [1]. If inversion of the forward model is ill-posed, this approach leads to bias in the estimated interaction matrix and hence to distortions of the reconstructed network topology [2]. Bias in estimated interaction matrices is well-known in the field of MEG and EEG and prevents a clear view of the functional organization of the human brain. Another example is local field potentials (LFPs), which are used in invasive neurophysiological studies [3]. Although less ill-posed than the EEG/MEG inverse problem, the LFP inverse problem still requires regularization, and this leads to bias in the reconstruction of current generators in neural tissue [4]. Although linear, these forward operators have large null-spaces and are very sensitive to small perturbations, as evidenced by the high condition numbers of their discretizations. In the current study, we propose a method to reduce the bias in estimated interaction matrices that can be applied to observations obtained from arbitrary linear forward models.

Estimation of system interactions from indirect sensor measurements is particularly well-developed in the field of EEG and MEG, which are non-invasive techniques for detecting magnetic fields outside the head (MEG) and electric potentials on the scalp (EEG) set up by current generators in activated brain tissue [5]. In contrast to blood-oxygen level-dependent (BOLD) functional magnetic resonance imaging (fMRI), MEG and EEG provide direct measures of neural activation with high temporal resolution and, as such, are indispensable tools in fundamental and clinical human neuroscience. Although direct analysis of MEG and EEG sensor data can be useful for disentangling cognitive processes, detecting seizures, and classifying pathologies, it does not provide insight into the spatiotemporal patterns of neural activation underlying the sensor data. This applies in particular to functional interaction analysis [6, 7], which aims to characterize the temporal relationship between active sources [8], and is therefore commonly carried out in source-space [6, 9]. Source-space interaction analysis, however, generally yields biased estimators due to residual mixing of the reconstructed source signals. This undesirable effect is referred to as field spread in the case of MEG and volume conduction in the case of EEG, and is jointly referred to as signal leakage. Signal leakage is the main obstacle in the non-invasive study of human brain interactions and has attracted considerable attention from the research community.

Methods of dealing with signal leakage in source-space interaction analysis can roughly be divided into two categories, corresponding to the type of interaction that is considered: signal or amplitude interactions. Signal interactions refer to interactions between reconstructed source-signals proper, whereas amplitude interactions refer to interactions between the signals’ amplitude fluctuations. Signal interactions characterize interactions between fast and usually oscillatory signals on short time-scales, typically a fraction of the signals’ oscillation period. Examples are the Pearson correlation coefficient, coherence, and phase-synchronization [10]. Amplitude interactions characterize interactions between slowly varying amplitude envelopes on time-scales in the BOLD-fMRI range (0.01-0.1 Hz). Methods of reducing leakage in signal interactions are based on the observation that leakage is instantaneous and thus cannot explain the observation of interactions with non-zero lag. Methods of this type therefore quantify lagged interactions (i.e. interactions with non-zero lag) and discard all zero-lag interactions [11–15]. Methods of reducing leakage in amplitude interactions remove all linear instantaneous interactions between the reconstructed source signals, before computing their amplitude envelopes. This is done by applying a whitening transformation to the signals and is referred to as signal orthogonalization. The amplitude envelopes are subsequently low-pass filtered and functional interactions are quantified by the Pearson correlation coefficient [16–19]. This approach has enabled the discovery of the electrophysiological basis of human hemodynamic resting-state networks [20, 21].

This study is concerned with signal interactions. Although the above mentioned methods have proven valuable in various applications, their sensitivity decreases with decreasing (absolute) lag, which leads to false negatives and distortions in the topology of reconstructed functional networks [2]. This is a serious limitation, especially given that intra-cortical electrophysiological signals are known to interact with small lags under many experimental conditions and that such interactions have been hypothesized to be critical for effective neuronal communication and routing of information [22, 23]. The main reason why measures of signal interactions are unable to separate true from spurious zero-lag interactions is that the formal relationship between signal leakage and the forward and inverse operators is not exploited. In [24, 25] this relationship is exploited for linear inverse operators by linking signal leakage to the algebraic structure of the resolution operator. This method, however, is designed for seed-based interactions and is asymmetric without having a causal interpretation, which complicates its use. Furthermore, straightforward symmetrization and generalization to the multivariate case leads to a variation on an earlier proposed bias-correction scheme that has been shown to yield only marginal improvements [26]. In [27], a subspace projection method is proposed for suppressing the contribution of source-power to the sensor-space spectral matrix. This yields a bias-corrected spectral matrix, which is subsequently projected to source-space using a non-linear scanning method. The method is based on the observation that contributions to the sensor-space spectral matrix that are due to source-power (spurious interactions) and source coherence (true interactions) lie in different linear subspaces of the vector space of n × n complex-valued matrices, where n denotes the number of sensors. It is in this last step that the relationship between signal leakage and the forward matrix is exploited. The source-space spectral matrix is subsequently estimated by carrying out a non-linear scanning procedure in which the corrected sensor-space spectral matrix is compared with the (normalized) cross-product of leadfields of every pair of source locations. The main limitation of this method is the use of a non-linear scanning method, as such methods are known to be effective only for a small number of active sources [28].

The bias-reduction method proposed in [27] can in principle be combined with any inverse operator. This was noted in [27] but not worked out. In the current study we combine the bias-correction proposed in [27] with an arbitrary linear inverse operator to obtain a lag-independent estimator of system interactions that can be applied to reconstruct interactions in distributed source-activity. This requires inverting a linear forward model defined on vectorized matrix spaces. We also propose two novel bias-corrected estimators. The first estimator is obtained by interchanging the order of inversion and bias-correction. Thus, instead of correcting the sensor-space spectral matrix and subsequently projecting the corrected matrix to source-space, the sensor-data is first projected to source-space to yield reconstructed system activity, from which the spectral matrix can be estimated and corrected using a subspace projection in source-space. The second estimator is obtained by first solving the same vectorized inverse problem as above and subsequently correcting the reconstructed spectral matrix using the same source-space projection as above. This last estimator hence combines the ingredients of the other two estimators.

In Background we describe the linear forward model and linear inverse operator used in this study (Linear inverse modeling), define system interactions in the time-frequency domain using spectral matrices and formulate null-hypotheses and the construction of confidence regions (Detection of system interactions), introduce the benchmark model that will be used in evaluating the proposed methodology (Benchmark model), and motivate the use of complex-valued test-statistics for lag-independent detection of system interactions (Test-statistics). In Estimation of system interactions we describe uncorrected source- and sensor-space based estimators of system interactions and derive their filtering properties. In The geometry of signal leakage we provide a geometric characterization of signal leakage in linear estimators of system interactions and give some analytical examples using simple source configurations. In Suppression of signal leakage we use this characterization to construct three corrected interaction estimators and describe how their efficacy can be quantified. In Comparative performance we compare the performance of the corrected estimators with respect to suppression efficacy (Effectiveness), bias-reduction (Bias reduction), and detection power (Detection power).

Background

Linear inverse modeling

In the time-frequency domain, the electromagnetic forward model can be represented by a linear equation

y(t, ω) = L x(t, ω) + e(t, ω),     (1)

where x(t, ω) is a p-dimensional vector of brain activity, y(t, ω) is an n-dimensional vector of sensor measurements, L is the forward matrix, e(t, ω) is an n-dimensional vector of measurement noise, and t and ω denote time and angular frequency, respectively. A model of this form describes the relation between brain activity in the form of current source density (CSD) at p locations within the brain and induced measurements at n sensors, which can be electric potentials on the scalp as in electroencephalography (EEG), on the cortical surface as in electrocorticography (ECoG), or inside the brain as in local field potential (LFP) recordings, as well as magnetic flux outside the head as in magnetoencephalography (MEG). These different types of measurement correspond to different forward matrices, which contain the relevant geometric and conduction properties of the volume conductor under study (i.e. the head) and are obtained by discretizing and solving the quasi-static approximation to Maxwell’s equations [5]. Electromagnetic inverse modeling refers to the reconstruction of the CSD or any derived quantities from observed sensor measurements. What makes inverse modeling challenging is the fact that p is generally much larger than n.

A popular way of obtaining a reconstruction x̂(t, ω) of x(t, ω) is by solving the following penalized least-squares problem:

x̂(t, ω) = argmin_x ||y(t, ω) − L x||² + λ ||x||²,

where λ ≥ 0 is the noise regularization parameter [5, 28]. This problem has the unique solution x̂(t, ω) = L̂ y(t, ω), where the inverse operator L̂ is given by

L̂ = L^T (L L^T + λ I)⁻¹.     (2)

In the neuroimaging community, x̂(t, ω) is referred to as the minimum norm reconstruction of x(t, ω) and several generalizations of it have been developed [28]. The properties of a linear inverse operator are summarized by its resolution matrix. Given a forward matrix L and an inverse operator L̂, the resolution operator associated with L and L̂ is defined as

R = L̂ L.

Its columns are known as point-spread functions and its rows as cross-talk functions [29, 30]. Note that in the case of a single active source of unit strength at the j-th location, the reconstruction is equal to the j-th point-spread function. We will need these notions later on when analyzing the functional connectivity estimators.
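As a concrete illustration, the following minimal sketch computes the inverse operator of Eq (2) and the associated resolution matrix with its point-spread and cross-talk functions. The ridge/minimum-norm form is taken from Eq (2); the forward matrix here is a random placeholder rather than an actual leadfield.

```python
import numpy as np

def minimum_norm_inverse(L, lam):
    """Ridge/minimum-norm inverse operator L_hat = L^T (L L^T + lam I)^(-1), cf. Eq (2)."""
    n = L.shape[0]
    return L.T @ np.linalg.inv(L @ L.T + lam * np.eye(n))

# toy forward matrix: n sensors, p source locations (p >> n); values are placeholders
rng = np.random.default_rng(0)
n, p = 11, 81
L = rng.standard_normal((n, p))

L_hat = minimum_norm_inverse(L, lam=1e-2)
R = L_hat @ L                 # resolution matrix (p x p)
psf_40 = R[:, 40]             # point-spread function of the 40th source location
ctf_40 = R[40, :]             # cross-talk function of the 40th source location
```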

Detection of system interactions

We focus on the detection of interactions from ongoing recordings of the system’s activity, i.e. without perturbing the system. In neuroscience, this corresponds to recording brain activity in the absence of sensory stimulation and cognitive tasks. The proposed methodology, however, can easily be adapted to detect interactions from recorded system responses to perturbations. We assume that ongoing system activity can be reasonably well modeled by a Gaussian stochastic process. Without loss of generality, we assume the process to have expectation zero. Such a process is completely determined by its spectral matrix Sx(ω), which is a function of (angular) frequency ω and is defined as

Sx(ω) = ⟨x(t, ω) x(t, ω)†⟩,

where the brackets denote taking the expectation over time and the superscript † denotes taking the conjugate-transpose. The spectral matrix of a process is positive semi-definite and conjugate symmetric, i.e. Sx(ω) = S̄x(ω)^T, where the bar denotes entry-wise complex-conjugation, and can hence be thought of as a covariance matrix in the time-frequency domain. Measurement noise is also modeled by a Gaussian stochastic process and for simplicity we will assume that Se(ω) = σ² I, where I denotes the identity matrix, i.e. the measurement noise is white, has variance σ² at each sensor, and is uncorrelated across sensors. The observed sensor data y(t, ω) therefore is also a stochastic process and has a spectral matrix Sy(ω) given by

Sy(ω) = L Sx(ω) L^T + σ² I.

In neuroscience, system interactions are referred to as functional connectivity, which is a broad term that encompasses notions based on different frameworks such as stochastic processes, weakly-coupled oscillators, chaos theory, and information theory [10]. We will adopt the framework of stochastic processes and hence characterize system interactions by the (non-diagonal entries of the) spectral matrix Sx(ω). Thus, the interaction between system locations k and l at frequency ω is characterized by the (k, l)-th entry of Sx(ω), which we will denote by γk,l(ω); it is a complex number whose magnitude and phase measure, respectively, the strength and lag (i.e. latency) of the interaction at frequency ω.

This study is concerned with the construction of test-statistics for detecting the presence of system interactions based on observed sensor data. This problem is formalized by the null-hypothesis H0: γk,l(ω) = 0, tested against the two-sided alternative H1: γk,l(ω) ≠ 0, for all pairs (k, l). The test-statistics we consider are complex-valued estimators of γk,l, together with their real and imaginary parts, and are described in detail in Estimation of system interactions and Suppression of signal leakage. The performance of the test-statistics will be evaluated by computing the distance between H0 and 100(1 − α)% confidence regions. Throughout the text we will refer to this distance as the sensitivity of a test-statistic and to the matrix carrying the sensitivities of all pairs (k, l) as the sensitivity matrix. For complex test-statistics, 100(1 − α)% ellipsoidal confidence regions in the complex plane are constructed, and for their real and imaginary parts, 100(1 − α)% confidence intervals are constructed by simulating the statistics’ sample distribution and calculating the 100(1 − α/2)% and 100(α/2)% percentiles. To compare the sensitivity of a complex test-statistic to that attained by the combined use of its real and imaginary parts, we set α = 0.05 for the complex test-statistic and α = 0.025 for its real and imaginary parts. This is done to correct for the fact that in the latter case, two tests need to be performed instead of one. The sample distributions of the different test-statistics are approximated by generating 10³ realizations of the simulated sensor data. In practice, these distributions can be approximated using appropriate bootstrap schemes.
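The following sketch shows, under simplifying assumptions, how percentile-based confidence intervals and the resulting sensitivities could be computed for the real and imaginary parts of a simulated complex test-statistic; the ellipsoidal confidence regions used for the complex-valued statistic itself are omitted, and the simulated samples are purely illustrative.

```python
import numpy as np

def percentile_sensitivity(samples, alpha):
    """Distance between H0 (the value 0) and the two-sided 100(1-alpha)% percentile
    interval of a real-valued statistic; 0 if the interval contains 0."""
    lo, hi = np.percentile(samples, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return 0.0 if lo <= 0.0 <= hi else min(abs(lo), abs(hi))

# illustrative: 10**3 simulated realizations of a complex test-statistic
rng = np.random.default_rng(1)
stat = (0.2 + 0.1j) + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
sens_real = percentile_sensitivity(stat.real, alpha=0.025)
sens_imag = percentile_sensitivity(stat.imag, alpha=0.025)
```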

Benchmark model

To illustrate the methodology, we use a simple forward model in which electric potentials are measured that are induced by a current source density (CSD). The electric potential V generated by a current source density ρ in an infinite, homogeneous, and isotropic volume conductor is described by Poisson’s equation:

ΔV = −ρ,

where Δ denotes the Laplace operator and where, without loss of generality, we have set the electrical conductivity to 1. In a neurophysiological context, V is the local field potential (LFP) inside nervous tissue that is generated by transmembrane currents, which are described macroscopically by the current source density ρ [31, 32]. We consider a one-dimensional source-space segment of length 4 mm and a one-dimensional array of 11 electrodes with an inter-electrode spacing of 0.4 mm, located parallel to the source-space segment at a height of h = 0.5 mm (see Fig 1A).

Fig 1. Set-up of the test model.

A. Source-space segment of length 4 mm (horizontal axis at height 0 mm), recording electrodes (black dots at height 0.5 mm), and the locations of two active sources of neuronal activity (red dots). Also shown are the sensitivities of the electrodes to activity of the two sources (two black curves). B. Observed spectral matrix of the recorded electric potentials that are induced by non-interacting sources of unit strength and in the presence of measurement noise (σ = 0.1).

https://doi.org/10.1371/journal.pone.0242715.g001

The electric potential at location y that is generated by a current monopole ρ(x′) = δ(x′ − x) of unit strength at location x is given by the free-space fundamental solution to Poisson’s equation:

V(x, y) = 1 / (4π |y − x|).

The CSD is modeled by two monopoles at locations -1 and 1 mm, corresponding to the red dots in Fig 1A. The source-space is discretized using a spacing of 0.1 mm, which gives rise to a 10 × 81 forward matrix L that describes the relation between the (unobserved) CSD and the (observed) electric potentials. The (k, l)-th entry of L is V(xl, yk), where xl and yk denote the locations of the l-th source-point and the k-th electrode, respectively. The forward model has the form of Eq (1).

The two active sources are modeled by stationary oscillatory stochastic processes with spectral matrix

Sx = [ 1            γ exp(iϕ) ]
     [ γ exp(−iϕ)   1         ],

where 0 ≤ γ ≤ 1 is the interaction strength (i.e. coherence) and 0 ≤ ϕ < 2π is the lag. Thus, the sources have the same amplitude and interact with strength γ and a latency that is a fraction ϕ/2π of their average oscillation period. Measurement noise with spectral matrix Se = σ² I is added, where σ² is measured in units of the maximal eigenvalue of L L^T. Table 1 lists the model parameters and their nominal values. Fig 1B shows the observed spectral matrix Sy that is generated by source activity with ϕ = 0, γ = 0, and σ = 0.1. Note that although the two active sources can be discerned, due to the mixing of source activity by the forward model (two black curves), it is not immediately clear if, and to what extent, the sources are interacting.
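For concreteness, a sketch of the benchmark forward model and the observed spectral matrix is given below. The geometry follows the text, but the exact source-space discretization, the sign convention of the interaction phase, and the scaling of the noise by the largest eigenvalue of L L^T are assumptions.

```python
import numpy as np

# benchmark geometry (a sketch; the exact discretization of the 4 mm segment is assumed)
p = 81                                    # number of source points, as stated in the text
src_x = np.linspace(-2.0, 2.0, p)         # source-space segment, mm
elec_x = np.linspace(-2.0, 2.0, 11)       # 11 electrodes with 0.4 mm spacing, mm
h = 0.5                                   # electrode height above the sources, mm

# forward matrix from the free-space monopole potential V = 1 / (4*pi*distance)
dist = np.sqrt((elec_x[:, None] - src_x[None, :]) ** 2 + h ** 2)
L = 1.0 / (4 * np.pi * dist)              # shape (n_electrodes, n_sources)

# two interacting monopoles at -1 and +1 mm
k0, l0 = int(np.argmin(np.abs(src_x + 1))), int(np.argmin(np.abs(src_x - 1)))
gamma, phi, sigma = 0.5, np.pi / 4, 0.1

Sx = np.zeros((p, p), dtype=complex)
Sx[k0, k0] = Sx[l0, l0] = 1.0
Sx[k0, l0] = gamma * np.exp(1j * phi)
Sx[l0, k0] = np.conj(Sx[k0, l0])

# observed spectral matrix; noise variance in units of the largest eigenvalue of L L^T
Sy = L @ Sx @ L.T + sigma ** 2 * np.linalg.eigvalsh(L @ L.T).max() * np.eye(len(elec_x))
```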

Table 1. Model parameters, their symbols, and nominal values/ranges.

https://doi.org/10.1371/journal.pone.0242715.t001

Test-statistics

A commonly used statistic in neuroscience for testing the null hypothesis H0: γk,l = 0 of the absence of interactions between brain locations k and l is the imaginary part Timag of some estimator of γk,l. Fig 2A shows Timag as a function of the lag ϕ. It shows that Timag is highest for imaginary interactions, i.e. when system activity at locations k and l is coherent with a lag of one-fourth of the average oscillation cycle, and decreases to zero for purely real (i.e. instantaneous) interactions. Similarly, the sensitivity of the real part Treal is highest for instantaneous connectivity and decreases to zero for purely imaginary interactions (see Fig 2A). As a consequence, interactions with small/large lags might remain undetected. In neuroscience this leads to bias in measures that are derived from estimated interaction matrices, such as network-theoretical measures that are often used to characterize the brain’s functional organization [1, 33, 34].

Fig 2. Real, imaginary, and complex test-statistics.

A. Real and imaginary parts of the complex test-statistic as a function of lag. The strength of interaction was set to γk,l = 1 so that the curves range between zero and one. B. Sensitivity of the real, imaginary, and complex test-statistics at the true interaction-pair (k0, l0), as a function of lag. C. Sensitivity of the imaginary test-statistic for all interaction-pairs and for a lag of 0 degrees. D. Sensitivity of the real test-statistic for all interaction-pairs and for a lag of 0 degrees. E. Sensitivity of the imaginary test-statistic for all interaction-pairs and for a lag of 90 degrees. F. Sensitivity of the real test-statistic for all interaction-pairs and for a lag of 90 degrees.

https://doi.org/10.1371/journal.pone.0242715.g002

To illustrate the dependence of the real and imaginary test-statistics on interaction lag, we computed their sensitivity as a function of lag, with the complex test-statistic taken to be the (k0, l0)-th entry of the sample spectral matrix of the reconstructed source activity, where k0 and l0 denote the true source locations. The number of samples was set to N = 50, the correlation level was set to γk,l = 0.3, and no measurement noise was added. Source activity was reconstructed using the inverse operator of Eq (2), where λ was set to a small value (λ = 10^−20). All other parameters were chosen as in Table 1. The sensitivities are shown in Fig 2B. Observe that the sensitivity of Timag is zero for lags smaller than about 45 degrees, which means that interactions at lags less than 45 degrees remain undetected. For lags larger than 45 degrees, its sensitivity starts to increase and is maximal at a lag of 90 degrees. Treal essentially behaves in the opposite way: it is highest at lag zero and decreases towards zero with increasing lag. In particular, interactions with lags larger than about 80 degrees remain undetected. Important to note is that the non-zero values of Treal for lags between 45 and 80 degrees are in fact spurious and, when these are corrected for, Treal essentially behaves the same as Timag but reflected in the 45 degrees axis.

The properties of Timag and Treal considered above are local in the sense that we only considered their behavior at a single pair of system locations. Although locally they behave the same, up to reflection, globally they behave very differently, and this is in fact the reason why imaginary test-statistics have become popular in neuroscience [11–15]. We first consider the global behavior of the imaginary test-statistic. We consider the extreme cases of ϕ = 0 degrees and ϕ = 90 degrees. The sensitivities for all source-pairs are shown in Fig 2C (0 degrees) and Fig 2E (90 degrees). In the case of ϕ = 0 degrees, none of the source-pairs show significant interaction. Thus, on the one hand, the true interaction at (k0, l0) remains undetected, but on the other hand, no spurious interactions are present at any other source-pair. In the case of ϕ = 90 degrees, sensitivity is highest at the location (k0, l0) of true interaction and is otherwise low. Thus, also in this case, spurious interactions are (almost) absent. The global behavior of Timag therefore allows for the detection of interactions as long as the lag is not too small. Fig 2D shows that the global behavior of Treal is very different. In particular, for both ϕ = 0 and ϕ = 90 degrees, there are many source-pairs that exhibit spurious interactions, especially around the true source locations. Moreover, although for ϕ = 0 degrees the test-statistic is weakly sensitive to the true interaction, in practice it will not be detected because of the presence of spurious interactions. Thus, Treal does not allow for the detection of interactions, irrespective of the lag. Fig 2B also shows that, locally, the complex-valued test-statistic is roughly uniformly sensitive to interactions, i.e. it is non-zero and roughly equal for all lags. Unfortunately, the global behavior of Tcompl does not allow for the detection of true interactions because of the high level of spurious connectivity in its real part.
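To make the construction of these test-statistics concrete, the following sketch simulates N = 50 samples of two coherent sources, reconstructs source activity with the minimum-norm inverse, and forms Treal, Timag, and Tcompl at the true pair. It reuses L, k0, and l0 from the benchmark sketch above; the complex Gaussian sampling scheme and the parameter values are illustrative assumptions.

```python
import numpy as np

# reuses L, k0, l0 from the benchmark sketch above
rng = np.random.default_rng(3)
N, lam, gamma, phi = 50, 1e-20, 0.3, np.pi / 4

# 2 x 2 source spectral matrix and matching complex Gaussian time-frequency samples
Sx2 = np.array([[1.0, gamma * np.exp(1j * phi)],
                [gamma * np.exp(-1j * phi), 1.0]])
C = np.linalg.cholesky(Sx2)
Z = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
X = C @ Z                                   # activity of the two true sources

Y = L[:, [k0, l0]] @ X                      # noiseless sensor data
L_hat = L.T @ np.linalg.inv(L @ L.T + lam * np.eye(L.shape[0]))
X_hat = L_hat @ Y                           # reconstructed source activity
S_hat = X_hat @ X_hat.conj().T / N          # sample source-space spectral matrix

T_compl = S_hat[k0, l0]                     # complex test-statistic at the true pair
T_real, T_imag = T_compl.real, T_compl.imag
```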

Subspace projections for suppressing spurious interactions

The goal of this study is to reduce the spurious connectivity in Tcompl so that it allows for the detection of system interactions at all lags. Spurious connectivity will be reduced by correcting the test-statistics using appropriately defined subspace projections. We consider three types of corrections and the associated corrected source-space spectral matrices. The matrices differ in the order in which the inverse operator and the projection operators are applied and in which space the subspace projection is applied (source- or sensor-space). The first type is obtained by first projecting the time-frequency coefficients of the sensor data to source-space by using an arbitrary linear inverse operator. Subsequently, the reconstructed time-frequency coefficients of the source-activity are used to estimate the activity’s spectral matrix. This is done using the sample spectral matrix, since this is an unbiased estimator. Lastly, the estimated spectral matrix is corrected by applying a suitably defined subspace projection. Note that this projection is defined in the vector space of source-space spectral matrices. Fig 3A provides an illustration. The second type is obtained by switching the order of the inverse and subspace projection operators, as illustrated in Fig 3B. The third type is also obtained by switching the order, but applying the subspace projection in sensor-space instead of source-space, as illustrated in Fig 3C. The third of the three types of corrections described above has been proposed in [27]. Note that whereas the source-based estimator makes use of the reconstructed source-activity, the two sensor-based estimators avoid this step by applying an inverse operator in the space of spectral matrices instead of in signal space. The difference between the two sensor-based estimators is in which space the subspace projection is applied (source- or sensor-space). We will provide a comparative performance analysis of these three types of corrected estimators.

Fig 3. Subspace projections for suppression of spurious interactions.

A. Source-based estimator with correction in source-space. First the time-frequency coefficients of the sensor data are projected to source-space using an arbitrary linear inverse operator. Next the spectral matrix of the reconstructed time-frequency coefficients is estimated and subsequently corrected by applying a subspace projection. B. Sensor-based estimator with correction in source-space. First the spectral matrix of the sensor-space time-frequency coefficients is estimated. Next, the estimated sensor-space spectral matrix is projected to source-space using the tensor product of an arbitrary linear inverse operator with itself and subsequently corrected by applying a subspace projection. C. Sensor-based estimator with correction in sensor-space. First the spectral matrix of the sensor-space time-frequency coefficients is estimated. Next, the estimated sensor-space spectral matrix is corrected by applying a subspace projection and subsequently projected to source-space using the tensor product of an arbitrary linear inverse operator with itself.

https://doi.org/10.1371/journal.pone.0242715.g003

Estimation of system interactions

Source-space based estimation

Measurements in the time-frequency domain at a particular frequency ω yield an n × k complex-valued data matrix Y, where n and k denote the number of sensors and samples, respectively. The observed data is related to source activity X and sensor noise E through

Y = L X + E,

where X and E are complex-valued p × k and n × k matrices, respectively. If X could be observed directly, the best estimator for the spectral matrix Sx would be the sample spectral matrix

Ŝx = X X† / k.

When only Y can be observed directly, a natural way to estimate Sx is to first reconstruct the source activity X by applying an inverse operator to the observed sensor data and subsequently to compute the sample spectral matrix of the reconstructed source activity. Let X̂ = L̂ Y be the reconstructed source activity, where L̂ is some linear inverse operator, and define the estimator

Ŝ1 = X̂ X̂† / k = L̂ Ŝy L̂^T,

where Ŝy = Y Y† / k denotes the sample spectral matrix of the observed sensor activity. Note that Ŝ1 is Hermitian, as required for a proper estimator of Sx. We will refer to this estimator as source-based to emphasize that it is defined in terms of the (reconstructed) source activity. It will be convenient to write it in vectorized form as

Vec(Ŝ1) = (L̂ ⊗ L̂) Vec(Ŝy),

where ⊗ denotes the (Kronecker) tensor product. The estimator can be expressed in terms of X and E by first writing

Ŝy = L Ŝx L^T + L Ŝxe + Ŝxe† L^T + Ŝe,     (3)

where Ŝe = E E† / k denotes the sample spectral matrix of the measurement noise and Ŝxe = X E† / k denotes the sample cross-spectral matrix between source activity and measurement noise. Consequently,

Vec(Ŝ1) = R1 Vec(Ŝx) + F1 Vec(Ŝe) + F1 Vec(L Ŝxe + Ŝxe† L^T),

where R1 = (L̂ L) ⊗ (L̂ L) and F1 = L̂ ⊗ L̂ are the associated resolution and inverse operators, respectively. Since source activity and measurement noise are uncorrelated and Ŝx and Ŝe are unbiased estimators of Sx and Se, respectively, the expectation of Ŝ1 is

E[Vec(Ŝ1)] = R1 Vec(Sx) + σ² F1 Vec(I).
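A sketch of the source-based estimator and a check of its Kronecker (vectorized) form is given below; the minimum-norm inverse is an assumed choice of L̂ and the data are random placeholders.

```python
import numpy as np

def source_based_estimator(Y, L, lam):
    """Source-based estimate S1 = L_hat Sy L_hat^T, with L_hat a minimum-norm (ridge)
    inverse operator (an assumed choice)."""
    n, k = Y.shape
    L_hat = L.T @ np.linalg.inv(L @ L.T + lam * np.eye(n))
    Sy = Y @ Y.conj().T / k
    return L_hat @ Sy @ L_hat.T

# toy data and an explicit check of the vectorized form vec(S1) = (L_hat kron L_hat) vec(Sy)
rng = np.random.default_rng(2)
n, p, k = 11, 81, 50
L = rng.standard_normal((n, p))
Y = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
S1 = source_based_estimator(Y, L, lam=1e-2)

L_hat = L.T @ np.linalg.inv(L @ L.T + 1e-2 * np.eye(n))
Sy = Y @ Y.conj().T / k
assert np.allclose(np.kron(L_hat, L_hat) @ Sy.flatten(order="F"), S1.flatten(order="F"))
```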

We now compute the filtering properties of Ŝ1, which will be used in the next section to establish a relationship between the source- and sensor-based estimators. Let r ≤ n be the rank of L and let L = U D V^T be the singular value decomposition of L, where U is an n × r orthonormal matrix, V is a p × r orthonormal matrix, and D is the r × r diagonal matrix that holds the non-zero singular values δ1 ≥ δ2 ≥ ⋯ ≥ δr of L in decreasing order. Note that U and V are bases for sensor- and source-space, respectively. Using the singular basis of the forward matrix, the resolution operator R1 can be represented as

R1 = (V ⊗ V) Φ1 (V ⊗ V)^T,

where Φ1 is the diagonal matrix whose entries, the filter coefficients, are given by

f1,ij = δi² δj² / ((δi² + λ)(δj² + λ)).

The filter coefficients range between zero and one and measure to what extent a contribution of the form Vec(vi vj^T) to the true source spectral matrix is retained in the reconstructed spectral matrix. In particular, the diagonal entries of the filter matrix (i.e. the gains) are

g1,i = f1,ii = δi⁴ / (δi² + λ)².

Sensor-space based estimation

The source-space based estimator is obtained by first projecting the observed sensor data to source-space and subsequently computing the sample spectral matrix of the reconstructed source-activity. An alternative way of estimating the source-space spectral matrix is to first estimate the sensor-space spectral matrix and subsequently to project it to source-space [27]. We will refer to this estimator as sensor-based and denote it by Ŝ2. It is obtained by writing the sample sensor-space spectral matrix Ŝy in vectorized form and subsequently projecting it to source-space by using the inverse operator F2:

Vec(Ŝ2) = F2 Vec(Ŝy),

where

F2 = (L ⊗ L)^T ((L ⊗ L)(L ⊗ L)^T + λ I)⁻¹

is the regularized inverse of the same form as in Eq (2), applied to L ⊗ L instead of L.

Alternatively, this estimator can also be derived by solving the following convex optimization problem:

minimize ||Vec(Ŝy) − (L ⊗ L) Vec(Sx)||² + λ ||Vec(Sx)||²,

where the minimization is done over Sx. The gradient of this objective function has a unique zero, which is attained at Ŝ2.

The sensor-based estimator can be written in terms of X and E as

Vec(Ŝ2) = R2 Vec(Ŝx) + F2 Vec(Ŝe) + F2 Vec(L Ŝxe + Ŝxe† L^T),

where R2 = F2 (L ⊗ L) is the associated resolution operator. Furthermore, its expectation is given by

E[Vec(Ŝ2)] = R2 Vec(Sx) + σ² F2 Vec(I).

Comparing the expressions for Ŝ1 and Ŝ2, we see that both are obtained by applying a linear inverse operator to the vectorized sensor-data Vec(Ŝy): for the source-based estimator the inverse operator is L̂ ⊗ L̂, and for the sensor-based estimator it is F2. It is easily verified that in the absence of regularization (i.e. λ = 0), both estimators reduce to the Moore-Penrose inverse of L ⊗ L. In the limit of strong regularization, the inverse operators corresponding to Ŝ1 and Ŝ2 reduce to λ⁻¹ (L ⊗ L)^T and λ⁻² (L ⊗ L)^T, respectively.
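For comparison, the sketch below implements the sensor-based estimator by applying a ridge-regularized inverse of L ⊗ L directly to vec(Ŝy); the ridge form of the regularized inverse and the parameter value are assumptions.

```python
import numpy as np

def sensor_based_estimator(Y, L, lam):
    """Sensor-based estimate: vec(S2) = F2 vec(Sy), with F2 a ridge-regularized
    inverse of (L kron L); the ridge form is an assumed choice."""
    n, k = Y.shape
    p = L.shape[1]
    Sy = Y @ Y.conj().T / k                       # sample sensor-space spectral matrix
    LL = np.kron(L, L)                            # (n*n) x (p*p) forward operator
    F2 = LL.T @ np.linalg.inv(LL @ LL.T + lam * np.eye(n * n))
    return (F2 @ Sy.flatten(order="F")).reshape(p, p, order="F")
```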

The general relation between Ŝ1 and Ŝ2 is obtained by comparing their filter coefficients. The resolution operator R2 corresponding to Ŝ2 can be represented as

R2 = (V ⊗ V) Φ2 (V ⊗ V)^T,

with filter coefficients

f2,ij = δi² δj² / (δi² δj² + λ).

In the absence of regularization, the filter coefficients of both estimators are equal to one, so that all contributions of the form Vec(vi vj^T) are completely retained. The gains are

g2,i = f2,ii = δi⁴ / (δi⁴ + λ).

Because of the extra terms in the denominator of f1,ij, the source-based estimator provides stronger damping than the sensor-based estimator. The expressions for the filter coefficients also show that there is a natural correspondence between λ1 and λ2, which allows the estimators to be directly compared by setting λ2 = λ1², where λ1 and λ2 denote the regularization parameters of the source- and sensor-based estimators, respectively.

The geometry of signal leakage

Signal leakage refers to bias in reconstructions of brain activity that is due to the incomplete unmixing of sensor signals by application of an inverse operator. When measuring electric potentials as in EEG, ECoG, and LFP, signal leakage is called volume conduction, and when measuring magnetic fluxes as in MEG, signal leakage is called field spread [6]. Here we provide a geometric characterization of signal leakage that pertains to the functional connectivity estimators defined in Estimation of system interactions and that applies to any linear inverse operator L̂. This characterization will form the basis for defining functional connectivity estimators that actively suppress signal leakage.

The expectations of the estimators defined in Estimation of system interactions are of the form

E[Vec(Ŝj)] = Rj Vec(Sx) + σ² Fj Vec(I),     (4)

for j = 1, 2. The second term on the right-hand-side of Eq (4) corresponds to signal leakage due to measurement noise that is projected to source-space by the inverse operator. The first term on the right-hand-side contains true source-space functional connectivity in vectorized form, but in a distorted way due to multiplication by Rj. For a square matrix M, let M+ and M− denote the matrices obtained from M by setting its off- and on-diagonal entries, respectively, to zero. Thus, M can be decomposed as M = M+ + M− and the same is true for its vectorization. We can hence write the expectations as

E[Vec(Ŝj)] = Rj Vec(Sx+) + Rj Vec(Sx−) + σ² Fj Vec(I).     (5)

We now interpret each of the three terms on the right-hand-side of Eq (5).

Note that the first term on the right-hand-side of Eq (5) only depends on the diagonal entries of Sx, i.e. on the power of (active) sources and not on their interactions. In this sense, all non-zero off-diagonal entries correspond to spurious connectivity. We refer to this term as the leakage term. Eq (5) shows that the leakage term lies in the subspace that is spanned by the 1 + (i − 1)(p + 1)-th columns of Rj for 1 ≤ i ≤ p. We write Bj for the matrix that contains these columns of Rj and refer to the column space of Bj as the leakage subspace. The leakage term can be written explicitly as

Rj Vec(Sx+) = Σi (Sx)ii Vec(ri ri†),

where (Sx)ii denotes the power of the i-th source and ri denotes the i-th point-spread function of Rj.

The second term on the right-hand-side of Eq (5) does not depend on source power but only on the interactions between (active) sources. Its off-diagonal entries are non-spurious in the sense that they vanish in the absence of interactions. Its entries are generally biased, however, except when ri = ei for 1 ≤ i ≤ p, which is impossible since n < p. We refer to this term as the interaction term. Eq (5) shows that it lies in the subspace spanned by the columns of Rj that are not in Bj. We refer to this space as the connectivity subspace. The interaction term can be written explicitly as:

Rj Vec(Sx−) = Σ(i<i′) [ γii′ Vec(ri ri′†) + γ̄ii′ Vec(ri′ ri†) ],

where γii′ denotes the (i, i′)-th entry of Sx, i.e. the interaction between sources i and i′, and where the sum runs over all ordered pairs (i, i′) with i < i′.

Lastly, the third term on the right-hand-side of Eq (5) is independent of source activity and only depends on the measurement noise, and as such is entirely spurious. We refer to this term as the (projected) noise term. Eq (5) shows that it is contained in the column space of Fj, which we will refer to as the (projected) noise subspace. It can be written explicitly as

Fj Vec(Se) = Σ(i,i′) μii′ Vec(fi fi′†),

where μii′ denotes the (i, i′)-th entry of Se and fi denotes the i-th column of the inverse operator (L̂ for the source-based estimator), which can be thought of as a noise point-spread function of the i-th sensor. The sum runs over all ordered pairs (i, i′).
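The three terms of Eq (5) and the leakage basis Bj can be made concrete as in the following sketch, which assumes that the resolution and inverse operators R and F have been formed explicitly (feasible for the small benchmark model).

```python
import numpy as np

def eq5_terms(R, F, Sx, Se):
    """Leakage R vec(Sx+), interaction R vec(Sx-), and projected noise F vec(Se) of Eq (5)."""
    vec = lambda M: M.flatten(order="F")       # column-major vectorization
    Sx_plus = np.diag(np.diag(Sx))             # on-diagonal part: source power
    Sx_minus = Sx - Sx_plus                    # off-diagonal part: interactions
    return R @ vec(Sx_plus), R @ vec(Sx_minus), F @ vec(Se)

def leakage_basis(R, p):
    """Matrix Bj: the 1 + (i-1)(p+1)-th columns (1-based) of R, for i = 1, ..., p."""
    return R[:, [i * (p + 1) for i in range(p)]]
```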

To illustrate the effects of signal leakage on the estimated spectral matrix Ŝj, we consider two simple source configurations in the absence of measurement noise. We first consider a pair of point sources at locations k and l, with variances σk² and σl², respectively, that interact with strength γ. Thus, the leakage and interaction terms are given by

σk² Vec(rk rk†) + σl² Vec(rl rl†)

and

γ Vec(rk rl†) + γ̄ Vec(rl rk†),

respectively. The squared norm of the estimated leakage term therefore is

σk⁴ ||rk||⁴ + σl⁴ ||rl||⁴ + 2 σk² σl² ⟨rk, rl⟩²,

where the brackets ⟨ , ⟩ denote the standard inner product, and the squared norm of the estimated connectivity term is given by

2 |γ|² ||rk||² ||rl||² + 2 Re(γ²) ⟨rk, rl⟩².

By comparing these expressions and using the fact that |γ|² ≤ σk² σl², which follows from the positive semi-definiteness of Sx, we see that the inequality (the leakage norm being at least as large as the connectivity norm) is implied by

σk⁴ ||rk||⁴ + σl⁴ ||rl||⁴ ≥ 2 σk² σl² ||rk||² ||rl||²,

which is trivially true. Therefore, for two interacting point sources, signal leakage is always stronger than the (estimated) interaction strength.

As a second example we consider a spatially extended source covering m locations. If the source is not too large, then rk ≈ rl for all locations k, l within the source. Furthermore, if the source is homogeneous, say with variance σ², then the leakage term is approximately equal to m σ² Vec(r r†), where r denotes the shared point-spread function within the source (r ≈ rk ≈ rl). Moreover, if the interactions within the source are homogeneous, say with shared strength γ, then the connectivity term is approximately equal to γ m(m − 1) Vec(r r†). Thus, the strength of signal leakage, relative to that of the estimated connectivity, is approximately equal to

m σ² / (γ m(m − 1)) = σ² / (γ (m − 1)).

This shows that for spatially extended sources (i.e. m large), the estimated intra-source connectivity is dominated by interaction effects and signal leakage can hence be neglected. In particular, suppressing signal leakage is not necessary in this situation.

Suppression of signal leakage

Suppression in source-space

In The geometry of signal leakage we showed that E[Vec(Ŝj)] can be decomposed into three terms that lie in different linear subspaces of the space of vectorized source-space spectral matrices and that correspond to leakage, interaction, and projected noise, respectively. We now define a sequence of linear operators acting on Vec(Ŝj) that increasingly suppress the leakage term, while retaining the interaction term as much as possible. To construct this sequence, we exploit the fact that the leakage subspace is spanned by the columns of the matrix Bj. Thus, if Bj = Uj Λj Vj^T is a singular value decomposition of Bj and the dimension of the leakage subspace is dj, then the first dj columns of Uj form an orthonormal basis for the leakage subspace. For 0 ≤ k ≤ dj, we define the linear operator

Pj(k) = I − Uj(k) (Uj(k))^T,

where Uj(k) is the matrix carrying the first k columns of Uj. Note that Pj(k) is the projection onto the orthogonal complement of the span of the first k left-singular basis vectors of Bj. In the case k = 0, the projection reduces to the identity operator and in the case k = dj, the projection nullifies any vector that lies in the leakage subspace.

We now define a sequence of corrected interaction estimators by

Vec(Ŝj(k)) = Pj(k) Vec(Ŝj),

for 0 ≤ k ≤ dj. Note that Ŝj(0) = Ŝj and that larger projection ranks k provide increasingly stronger suppression of the leakage term. In particular, for k = dj, the leakage term is completely suppressed. The expected value of the corrected estimator is given by

E[Vec(Ŝj(k))] = Rj(k) Vec(Sx) + σ² Fj(k) Vec(I),

where Rj(k) = Pj(k) Rj is the corrected resolution operator and Fj(k) = Pj(k) Fj is the corrected inverse operator.
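The construction of the rank-k correction can be sketched as follows; forming Rj explicitly is assumed to be feasible (as it is for the small benchmark), and the SVD-based projection follows the definition above.

```python
import numpy as np

def suppress_leakage(S_hat, R, k):
    """Rank-k leakage correction: remove from vec(S_hat) its component in the span of
    the first k left-singular vectors of the leakage basis B (columns 1+(i-1)(p+1) of R)."""
    p = S_hat.shape[0]
    B = R[:, [i * (p + 1) for i in range(p)]]      # leakage basis Bj
    U, _, _ = np.linalg.svd(B, full_matrices=False)
    Uk = U[:, :k]
    v = S_hat.flatten(order="F")
    v_corr = v - Uk @ (Uk.T @ v)                   # apply Pj(k) = I - Uk Uk^T
    return v_corr.reshape(p, p, order="F")
```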

An interesting property of the operator Pj(k) is that it does not alter the imaginary part of the interaction term Rj Vec(Sx−). This follows from the fact that the subspaces of symmetric and anti-symmetric matrices are orthogonal to each other. Indeed, the column space of Bj is contained in the space of (vectorized) symmetric matrices, so the space onto which Pj(k) projects contains the space of (vectorized) anti-symmetric matrices. Since the imaginary part of the interaction term is anti-symmetric, it is invariant under the action of Pj(k). Thus, Pj(k) does not suppress the imaginary, i.e. lagged, part of the system interactions. In the rest of the text, we will refer to this type of suppression as suppression in source-space. Also note that a global scaling of the interaction-strength scales the projected interaction term by the same factor and leaves the projected leakage term unchanged. Specifically, let μ ≥ 0 be a scaling factor and consider the scaled source-space spectral matrix Sx(μ) = Sx+ + μ Sx−; then

Pj(k) Rj Vec(Sx(μ)) = Pj(k) Rj Vec(Sx+) + μ Pj(k) Rj Vec(Sx−).

Suppression in sensor-space

As proposed in [27], a similar correction can be applied in sensor-space. Instead of first projecting the sensor-data to source-space and subsequently computing and correcting the sample source-space spectral matrix, first the sample spectral matrix of the observed sensor-data is computed and corrected, and subsequently projected to source-space. From Eq (3) it follows that the expected value of the vectorized sample sensor-space spectral matrix is given by

E[Vec(Ŝy)] = (L ⊗ L) Vec(Sx) + σ² Vec(I),

which can be decomposed as

E[Vec(Ŝy)] = (L ⊗ L) Vec(Sx+) + (L ⊗ L) Vec(Sx−) + σ² Vec(I),

similar to the decomposition of E[Vec(Ŝj)] in Eq (5). This allows us to define sensor-level equivalents of the leakage, interaction, and projected noise subspaces defined in The geometry of signal leakage, as well as a sensor-level equivalent of the leakage correction Pj(k). Concretely, let B be the matrix carrying the 1 + (i − 1)(p + 1)-th columns of L ⊗ L for 1 ≤ i ≤ p, let B = UΛV^T be a singular value decomposition of B, and let d3 denote the dimension of the (sensor-level) leakage subspace. Then the first d3 columns of U form an orthonormal basis for the (sensor-level) leakage subspace and we hence can define the projection

P3(k) = I − U(k) (U(k))^T,

where U(k) is the matrix carrying the first k columns of U. Completely analogous to the definition in Suppression in source-space, we define a sequence of corrected sensor-space spectral estimators by

Vec(Ŝy(k)) = P3(k) Vec(Ŝy),

for 1 ≤ k ≤ d3, and subsequently project these to source-space to obtain the following corrected estimator for the source-space spectral matrix:

Vec(Ŝ3(k)) = F3(k) Vec(Ŝy),

where the inverse operator is given by F3(k) = F2 P3(k).

Note that Ŝ3(0) = Ŝ2, so that Ŝ2(k) and Ŝ3(k) correspond to different ways to correct Ŝ2. The expected value of Ŝ3(k) is given by

E[Vec(Ŝ3(k))] = R3(k) Vec(Sx) + σ² F3(k) Vec(I),

where R3(k) = F3(k) (L ⊗ L) is the associated resolution operator. Thus, the expected value of the sensor-space corrected estimator has the same form as that of the source-space corrected estimators and therefore has the same leakage structure in source-space. We do observe that, in contrast to the source-space corrected estimators, the norm of the source-space leakage term R3(k) Vec(Sx+) need not be a decreasing function of k. In particular, although for k = d3 the sensor-space leakage term is completely suppressed, this need not be true for the source-space leakage term. Like the source-space corrected estimators, the sensor-space correction does not suppress the imaginary part of the system interactions. In the rest of the text, we will refer to this type of suppression as suppression in sensor-space.

Effectiveness of suppression

The effectiveness of the projections defined in the previous two sections in suppressing signal leakage in source-space interactions does not only depend on the extent to which the leakage term is suppressed, but also on the extent to which the interaction term is suppressed. Indeed, because the leakage, interaction, and noise terms are not separately observable, the projections need to be applied to their sum, i.e. the estimated spectral matrix. Consequently, they might not only suppress leakage, but also partly suppress interactions and projected sensor noise. The strengths of the corrected leakage and interaction terms, relative to those of the respective uncorrected terms, can be quantified using the generalized Rayleigh quotient. Let S be any source-space spectral matrix. We consider the generalized Rayleigh quotient with positive semi-definite matrices Gj(k) and Gj, where Gj(k) = (Rj(k))^T Rj(k) is the Gram matrix of the corrected resolution operator Rj(k) and Gj = Rj^T Rj is that of the uncorrected operator. Thus, for any source-space spectral matrix S, the generalized Rayleigh quotient is given by

ρj,k(S) = (Vec(S)† Gj(k) Vec(S)) / (Vec(S)† Gj Vec(S)) = ||Rj(k) Vec(S)||² / ||Rj Vec(S)||².

One minus the Rayleigh quotient quantifies the extent to which the expected value of the uncorrected estimator is reduced by the correction Pj(k). Below we will refer to this as the suppression level. The suppression levels of the leakage and interaction terms are defined as 1 − ρj,k(Sx+) and 1 − ρj,k(Sx−), respectively. The fact that Pj(k) does not suppress the imaginary part of the system interactions can be expressed as ρj,k(i Im(Sx)) = 1 for all k. For the source-space corrections (i.e. j = 1, 2) the range of ρj,k is confined to the unit interval because then Gj(k) reduces to Rj^T Pj(k) Rj. Furthermore, complete suppression of S by a rank-k correction is equivalent to Rj Vec(S) being contained in the column space of Uj(k), and no suppression is equivalent to Rj Vec(S) being contained in the orthogonal complement of this column space. In the ideal case, the leakage term is completely suppressed and the interaction term is entirely retained, i.e. 1 − ρj,k(Sx+) = 1 and 1 − ρj,k(Sx−) = 0 for some k. This can also be interpreted geometrically by noting that the angle αk(S) between Rj Vec(S) and the column space of Uj(k) satisfies

cos² αk(S) = ||(Uj(k))^T Rj Vec(S)||² / ||Rj Vec(S)||² = 1 − ρj,k(S),

which shows that the suppression level and αk(S) are related through a monotonically decreasing function.
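The suppression level of any source-space spectral matrix S can then be computed as one minus the generalized Rayleigh quotient, as in this short sketch (Uk holds the first k left-singular vectors of the leakage basis, and R is the explicitly formed resolution operator):

```python
import numpy as np

def suppression_level(R, Uk, S):
    """1 - ||P_k R vec(S)||^2 / ||R vec(S)||^2, with P_k = I - Uk Uk^T the rank-k
    leakage correction and S any source-space spectral matrix."""
    v = R @ S.flatten(order="F")
    v_corr = v - Uk @ (Uk.T @ v)
    return 1.0 - np.linalg.norm(v_corr) ** 2 / np.linalg.norm(v) ** 2
```

Applied to the on-diagonal part of Sx this gives the suppression level of the leakage term, and applied to the off-diagonal part it gives that of the interaction term.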

Comparative performance

Effectiveness

We calculated the suppression levels of the interaction term as a function of the regularization-level λ for five increasing interaction lags of 0, 20, 40, 50, and 70 degrees. Suppression of interactions with a lag of 90 degrees is zero because the imaginary part of interactions is invariant under the action of the correction projection. All other parameter values were set as in Table 1. The projection ranks were set to their maximal values so that the leakage terms are completely suppressed. Fig 4A shows the results for the source-based estimator. Observe the presence of three regimes: an under-regularization regime (log10 λ < −2) in which the interaction is not suppressed, a regime in which the level of regularization is appropriate (−2 < log10 λ < 1) and in which the level of suppression sensitively depends on the value of λ, and an over-regularization regime (log10 λ > 1) in which the interaction is completely suppressed. This behavior can be observed for all lags and for the sensor-based estimators as well (see Fig 4B and 4C). The absence of suppression in the under-regularization regime implies that the interaction term is contained in the orthogonal complement of the leakage subspace. As λ increases, the angle between the interaction term and the leakage subspace decreases and this is reflected in stronger suppression. For large values of λ, the angle approaches zero so that the interaction term will be contained in the leakage subspace and hence will be completely suppressed. These results make clear that the effectiveness of the correction decreases with increasing regularization-levels. In practice this means that the correction is less effective for noisy data. It is therefore not advised to use regularization-levels that have been optimized for the estimation of source power instead of source interactions. Indeed, [35] demonstrated that the optimal regularization-level for interactions is lower than that for source-power, at least for the ridge inverse operator (see also [36]).

Fig 4. Suppression levels.

A. Suppression levels of the source-based estimator as a function of regularization-level and for five interaction lags (0, 20, 40, 50, and 70 degrees). B. Same as in A but for the sensor-based estimator with suppression in source-space. C. Same as in A but for the sensor-based estimator with suppression in sensor-space. D. Suppression of projected sensor noise for all three estimators as a function of regularization-level. In panels A, B, and C, the interaction lags increase in the direction of the arrows. In all panels, the parameters were set as in Table 1.

https://doi.org/10.1371/journal.pone.0242715.g004

Fig 4 also shows that for all three estimators and for all regularization-levels, suppression levels decrease with increasing interaction lag ϕ. This is a consequence of the imaginary part of the interactions being unaffected by the corrections and can be understood in the following way. Let s = γ cosϕ + iγ sinϕ be the interaction with strength γ and lag ϕ. The effect of the correction on s is that its real part is suppressed and its imaginary part is retained. Hence, applying the correction to s gives rγ cosϕ + iγ sinϕ, for a certain 0 ≤ r ≤ 1, where r = 1 and r = 0 correspond to no and complete suppression, respectively. The suppression level is now given by

1 − (r² cos²ϕ + sin²ϕ) = (1 − r²) cos²ϕ,

which is a decreasing function of the interaction lag ϕ. Since the suppression levels are (roughly) increasing functions of the regularization-level, not exceeding a given upper-limit on suppression requires weaker regularization-levels for smaller lags. For example, for the source-based estimator (see Fig 4A), not exceeding a suppression level of 0.3 requires a regularization-level of at most log10 λ = 0 if the lag is 70 degrees. However, for instantaneous interactions, it requires a regularization-level of at most log10 λ = −1.5. In practice this means that the detection of interactions with small lags requires data that is less noisy.

Fig 4D shows the suppression levels of the projected sensor noise for all three estimators. The projected noise term is independent of system activity and its suppression level is therefore independent of the lag. Observe that for weak and strong regularization-levels the noise is completely suppressed, implying that for these levels, the projected noise term is contained in the leakage subspace. For intermediate regularization-levels, the noise term is somewhat less suppressed, particularly in the case of the sensor-based estimator with correction applied in sensor-space. For the other two estimators, the noise is always suppressed more strongly than the interaction term, hence correction in source-space increases the signal-to-noise ratio of the estimated source-space spectral matrix.

Bias reduction

Here we assess to what extent the biases of the estimators are reduced by the leakage corrections. We understand “bias” to be any function of the to-be-estimated parameter and the expected value of the estimator. Different functions capture different aspects of the estimators’ expected value, some of which might be more relevant for a particular application than others. We quantified the bias by the ratio of the estimators’ absolute expected value, averaged over all but the true interaction-pair, to itself plus the absolute expected value at the true interaction-pair (k0, l0). In calculating the numerator, pairs of the form (j, j) were excluded since these pertain to source power and not to source interactions. Furthermore, to take into account the limited resolution of the inverse operators, we relaxed the definition of “true interaction-pair” to include the eight surrounding pairs (k0±1, l0±1). The bias takes on values between zero and one, where zero corresponds to the estimators’ expected value being zero at all but the true interaction-pair and one corresponds to the estimators’ expected value being zero at the true interaction-pair. We computed the bias as a function of noise-level. Below, we refer to one minus the bias as “performance”. As in the previous section, the projection ranks of the corrected estimators were set to their maximal values so that signal leakage is completely suppressed. To isolate the effect of the correction from the issue of selecting an appropriate regularization-level, we let log10 λ range between −8 and 0 in steps of 0.5 and selected the value that maximized performance. This was done for both the uncorrected and corrected estimators. The interaction lag was set to ϕ = 0 degrees and all other parameter values were chosen as in Table 1.
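A sketch of this bias measure is given below; taking the maximum absolute expected value over the 3 × 3 neighbourhood of (k0, l0) as the "true-pair" value is an assumption about how the relaxed definition is handled.

```python
import numpy as np

def interaction_bias(E_S, k0, l0):
    """Bias = mean |E[S]| over all non-true, off-diagonal pairs, divided by itself plus
    |E[S]| at the (relaxed) true interaction-pair."""
    A = np.abs(E_S)
    p = A.shape[0]
    off_diag = ~np.eye(p, dtype=bool)                       # exclude power (j, j) entries
    true_block = np.zeros((p, p), dtype=bool)
    true_block[max(k0 - 1, 0):k0 + 2, max(l0 - 1, 0):l0 + 2] = True
    numerator = A[off_diag & ~true_block].mean()
    at_true = A[off_diag & true_block].max()
    return numerator / (numerator + at_true)
```

Performance, as used below, is one minus this quantity.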

Fig 5A (blue and red bars, respectively) shows the performance of the uncorrected source- and sensor-based estimators as a function of noise-level. Performance decreases with increasing noise-level, which is to be expected since the norm of the projected sensor noise is proportional to the noise-level. Also notice that the performance of the sensor-based estimator decreases more slowly with increasing noise-level than that of the source-based estimator. The reason for this lies in the different filtering properties of the respective inverse operators, which are amplified for increasing regularization-levels. These differences cause the sensor-based estimator to have a somewhat higher spatial resolution than the source-based estimator. This is illustrated in Fig 5B and 5C, which show the expected values of the source- and sensor-based estimators, respectively, for σ = 0.05 and λ = 10^−2.2. The true interaction-pair is denoted by the white circles. Note that the interaction is somewhat better visible in the sensor-based estimator, which has a somewhat higher resolution. In particular, the reconstructed sources and their interactions are more localized than in the case of the source-based estimator. On the other hand, the sensor-based reconstruction seems to suffer somewhat more from artifacts than the source-based estimator, which could affect statistical significance testing. Lastly, we observe that the biases of both estimators are relatively large. For example, a bias of 0.5 means that the average amplitude of the reconstruction at the true source-pair and that at other source-pairs are equal. In practice this means that the interaction will not be detected by pair-wise tests. The main reason for the high biases is the presence of signal leakage around the seed locations. The performance of the corrected estimators is shown in Fig 5D. Observe that the performance of all three corrected estimators is several times higher than that of both uncorrected estimators. The differences are largest in the absence of noise, but persist across noise-levels. To illustrate the differences, we calculated the expected values of the corrected estimators for σ = 0.05 and λ = 10^−3. They are shown in Fig 5 (panels E, F, and G). The interaction stands out clearly, due to the complete suppression of signal leakage.

Fig 5. Bias-reduction through leakage correction.

A. Performance on the test-model of the uncorrected estimators as a function of noise-level. Performance is defined as one minus the bias and ranges between zero and one. B. Expected value of the source-based estimator for σ = 0.05 and λ = 10^−2.2. C. Same as B but for the sensor-based estimator. D. Performance on the test-model of the corrected estimators as a function of noise-level. E. Expected value of the corrected source-based estimator for σ = 0.05 and λ = 10^−3. F. Same as E but for the sensor-based estimator with correction in source-space. G. Same as E but for the sensor-based estimator with correction in sensor-space. In panels B, C, E, F, and G, the true interaction-pair is designated by the white circles.

https://doi.org/10.1371/journal.pone.0242715.g005

Detection power

To assess the ability of the corrected source- and sensor-based estimators to detect system interactions, we used them as statistics for testing the null-hypothesis of no interaction (see Detection of system interactions). We thus computed the sensitivity matrices of the real, imaginary, and complex test-statistics derived from the corrected source- and sensor-based estimators Ŝj(k), where k denotes the projection rank, which was set either to k = 0, corresponding to the uncorrected estimators, or to its maximal value k = 21. Detection power was quantified as

(M + 1) / (p(p − 1)/2),

where M denotes the number of interaction-pairs whose sensitivity is lower than the sensitivity at the true interaction-pair (k0, l0) and p is the number of sensors. Detection power takes on values between 1/(p(p − 1)/2) ≈ 0.05 and 1. If the sensitivity at the true interaction-pair is the highest of all p(p − 1)/2 source-pairs, M = p(p − 1)/2 − 1 and detection power is 1, and if the sensitivity at the true source-pair is the lowest, M = 0 and detection power is 1/(p(p − 1)/2). The significance-level for computing the sensitivity matrix was set to α = 0.025 for the real and imaginary test-statistics and to α = 0.05 for the complex test-statistic. Thus, when carrying out independent and uncorrected pair-wise tests for significant interactions across all pairs, the significant pairs (at the chosen significance level) correspond to the pairs with non-zero detection-power. Furthermore, the number of samples was set to N = 100, the noise-level to σ = 0.01, and the regularization-level to λ = 10^−2. The interaction lag was varied between 0 and 90 degrees. All other parameters were chosen as in Table 1.
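A sketch of this detection-power measure, computed from a sensitivity matrix over source pairs, is given below; the handling of ties and of the diagonal entries is assumed.

```python
import numpy as np

def detection_power(sensitivity, k0, l0):
    """(M + 1) / (p(p-1)/2), where M is the number of source pairs whose sensitivity is
    strictly lower than at the true pair (k0, l0); diagonal (power) entries are ignored."""
    p = sensitivity.shape[0]
    iu, ju = np.triu_indices(p, k=1)                # one entry per source pair
    vals = sensitivity[iu, ju]
    true_val = sensitivity[min(k0, l0), max(k0, l0)]
    M = int(np.sum(vals < true_val))
    return (M + 1) / (p * (p - 1) / 2)
```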

Fig 6A (red curve) shows the detection power of the real part of the uncorrected source-based test-statistic as a function of lag. It is 0 for lags larger than about 65 degrees, which means that such interactions remain undetected. Between 0 and 60 degrees, the detection power gradually decreases. Furthermore, the detection power is at most about 0.8, which is rather low, since it means that about 20% of the false positive interaction pairs have lower p-values than the true interaction pair. Fig 6B and 6C show that the real part of the uncorrected sensor-based test-statistic behaves essentially the same, although its detection power is higher (up to about 0.9). Fig 6D, 6E and 6F (red curves) show the detection power of the imaginary part of the uncorrected test-statistics. The red curves are hidden behind the green curves because the imaginary part of the test-statistics is invariant under the correction projection. Note that the imaginary parts only detect interactions with lags larger than about 40 degrees. However, for larger lags, the detection power is 1, hence the true interaction-pair has the smallest p-value of all significant interactions. Thus, for lags larger than 40 degrees, the imaginary part of both the source- and sensor-based test-statistics will detect the true interaction with a probability of 100 × (1 − α/2)%. Fig 6G, 6H and 6I (red curves) show the detection power of the complex-valued uncorrected test-statistics. Their detection power is high for all lags (at least 0.8), particularly for the sensor-based test-statistic. That is to say, the complex-valued test-statistics are lag-invariant.

Fig 6. Detection power.

A. Detection power of the real uncorrected (red) and corrected (green) source-based test-statistic. D. Detection power of the imaginary uncorrected (red) and corrected (green) source-based test-statistic. G. Detection power of the complex uncorrected (red) and corrected (green) source-based test-statistic. Panels B, E, and H: Same format as panels A, D, and G, respectively, but for the sensor-based test-statistic with correction in source-space. Panels C, F, and I: Same format as panels A, D, and G, respectively, but for the sensor-based test-statistic with correction in sensor-space.

https://doi.org/10.1371/journal.pone.0242715.g006

The detection power of the respective corrected test-statistics is also shown in Fig 6 (green curves). The real part has detection power 1 for lags up to about 50 degrees and 0 for larger lags. The detection power of the imaginary part roughly behaves in the opposite way; it is 0 for lags up to about 30 degrees and 1 for larger lags. The real and imaginary parts are hence complementary. The complex-valued test-statistics, on the other hand, have detection power 1 for all lags. We conclude that the corrected complex-valued test-statistics detect the true system interaction with a probability of 100 × (1 − α)%, irrespective of the lag.

Reconstruction of functional networks

In the previous sections we have considered the case of two active and interacting sources. In practical applications, however, more than two sources might be active and interacting, thus forming a functional network. In this section we consider the ability of the uncorrected and corrected estimators to detect interactions in functional networks and how detection power depends on the strength of the interactions. Since the difference in performance between the three correction methods is much smaller than that between the uncorrected and corrected estimators, and because the sensor-based estimators performed slightly better than the source-based estimators, we restrict the analysis to the sensor-based estimator with correction in source-space and concentrate on the effect of the correction. We considered a network of K evenly-spaced sources that are ordered from left to right in the one-dimensional source-space (see Section Benchmark model). The network structure is characterized by the spectral matrix γvv*, where the asterisk denotes the conjugate transpose and v is the vector with entries vn = exp(iϕn), with ϕn = 2π(n − 1)/K, for n = 1, ⋯, K. The diagonal entries of the spectral matrix are set to 1 so that γ is the shared coherence between the sources. Note that the phases ϕ1, ⋯, ϕK are evenly-spaced on the unit circle and that ϕ1 = 0 and ϕK = 2π − ϕ2, and hence model a traveling wave of activity that propagates from left to right and has a wavelength equal to the distance between the first and the last source. This model thus comprises a fully-connected network with K different delays that are spatially organized as a propagating wave. The number of sources was set to K = 4. The network structure is shown in Fig 7A, 7B and 7C, which display, respectively, the real and imaginary parts and the absolute value of the spectral matrix. For this illustration γ was set to 0.5. The absolute value of the spectral matrix (right panel) shows a fully-connected network of K = 4 sources. For K = 4, the delay between neighboring sources equals π/2. Thus, the real part of the spectral matrix (left panel) only shows the interaction between the first and the third source and between the second and the fourth source, while the imaginary part of the spectral matrix (middle panel) only shows the interactions between all neighboring sources and, in addition, between the first and the fourth source. Thus, the real and imaginary parts "see" different interactions and only the absolute value shows the full network topology.
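As an illustration, the following sketch constructs this traveling-wave spectral matrix directly from the definitions above (the function name is ours):

```python
import numpy as np

def traveling_wave_spectral_matrix(K=4, gamma=0.5):
    """Fully-connected traveling-wave network: off-diagonal entries
    gamma * exp(i(phi_m - phi_n)) with phi_n = 2*pi*(n - 1)/K, and diagonal
    entries set to 1 so that gamma is the shared coherence."""
    phi = 2 * np.pi * np.arange(K) / K
    v = np.exp(1j * phi)                       # unit-modulus phase vector
    S = gamma * np.outer(v, v.conj())          # gamma * v v*
    np.fill_diagonal(S, 1.0)
    return S

S = traveling_wave_spectral_matrix()
# For K = 4: S.real is non-zero only for the pairs (1,3) and (2,4),
# while S.imag is non-zero for all neighboring pairs and for (1,4).
```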

Fig 7. Reconstruction of functional networks.

A. Real part of the true spectral matrix. B. Imaginary part of the true spectral matrix. C. Absolute value of the true spectral matrix. D. Sensitivity matrices of the real (top row), imaginary (middle row), and complex (bottom row) uncorrected sensor-based test-statistic as a function of interaction strength γ, ranging from 0 to 1 in steps of 0.1. E. Same as D but for the corrected test-statistics.

https://doi.org/10.1371/journal.pone.0242715.g007

The number of samples was set to N = 100, the noise-level to σ = 0.01, and the regularization-level to λ = 10−2. All other parameters, except the interaction parameters, were chosen as in Table 1. The interaction strength γ ranged from 0 to 1 in steps of 0.1. A complete power analysis as carried out in the previous section involves conducting a separate analysis for each pair of sources, which is cumbersome and not very instructive. It is more illustrative to display the actual sensitivity matrices. The sensitivity matrices for the uncorrected test-statistics are shown in Fig 7D. We make two observations. First, in the absence of interaction (i.e. γ = 0), the real and complex test-statistics have high sensitivity at and around the true source locations. When statistical tests are carried out, this leads to false positives around the source locations. These spurious interactions have been referred to as first-order [37] and are due to the mere presence of active sources. No spurious interactions are present in the imaginary test-statistic because first-order spurious interactions are instantaneous. Second, when the interaction strength is increased, the sensitivity of all three test-statistics increases at and around the true interactions. Only the complex test-statistic, however, is sensitive to both real and imaginary interactions. The spurious interactions surrounding the true interactions have been referred to as second-order [37] or ghost interactions [38, 39]. They are independent of source-power and are due to the existence of true interactions. Note that all three test-statistics are equally affected by second-order spurious interactions. Incidentally, the local nature of second-order spurious interactions has been exploited in [39] to mitigate their effects. Fig 7E shows the sensitivity matrices of the corrected test-statistics. As in the previous sections, the projection rank was set to its maximal value of k = 21 so that the leakage term is completely suppressed. The figure shows that, in the absence of interaction, the sensitivity of the real and complex test-statistics is now zero, hence no spurious interactions will be detected. Furthermore, when the interaction strength is increased, the sensitivity of all three test-statistics increases at and around the true interactions. Again, only the complex test-statistic is sensitive to both real and imaginary interactions. We conclude that, in this simple scenario, the corrected complex test-statistic is sensitive to all interactions, i.e. irrespective of their lag, and does not suffer from first-order spurious interactions. A price that is paid for the correction, however, is the artifacts between the true source interactions, which appear as yellow dots (see the rightmost panel in the top row of Fig 7E) and which are a consequence of the interaction term not being orthogonal to the leakage term (see The geometry of signal leakage).

Discussion and conclusions

Summary and relevance

Detection of functional interactions in linearly-mixed systems is complicated by signal leakage, which refers to the incomplete unmixing of source signals due to the ill-posedness of the inverse problem under study [6]. If the forward model is (nearly) instantaneous, such as the forward models used for EEG/MEG [5] and local field potential (LFP) recordings [3], signal leakage is usually dealt with by discarding all instantaneous interactions, whether true or spurious. In practice, this is done by discarding the real part of complex-valued test-statistics and only retaining their imaginary part [11–15]. The drawback of this approach is that the sensitivity of the resulting test-statistics decreases with decreasing interaction lag, leading to false negatives and distortions in the topology of reconstructed functional networks [2]. More recent approaches exploit the relationship between signal leakage and the algebraic structure of linear inverse operators [24, 25, 27]. In [27], signal leakage is suppressed by correcting the sensor-space spectral matrix and subsequently projecting it to source-space using a non-linear dipole scanning algorithm [28]. To be applicable to more general source configurations, however, the correction method needs to be combined with more general inverse methods. In the current study we showed how to combine the correction with an arbitrary linear inverse operator and proposed two alternative methods in which the order of correction and projection is swapped, leading to novel sensor- and source-based estimators of system interactions. We demonstrated that all three estimators enable lag-independent detection of system interactions. As such, our study contributes to the development of reconstruction methods for system interactions from linearly-mixed observations, and we expect applications in neuroscience and the physical sciences.

The geometry of signal leakage

We have provided a general characterization of signal leakage in source-space, expressed in terms of spectral matrices, that applies to all linear inverse operators. In particular, (the expected value of) the sample estimator of the source-space spectral matrix can be decomposed into three parts that are contained in different linear subspaces of the vector space of complex-valued (Hermitian) matrices. The first and second parts depend only on source-power and interactions, respectively, and we have referred to the corresponding subspaces as the leakage and interaction subspaces. The third part depends only on projected measurement noise, and we have referred to the corresponding subspace as the noise subspace. In [27] a similar characterization is described for the sample estimator of the sensor-space spectral matrix. The difference between the source- and sensor-based characterizations of signal leakage is that, in the first case, the leakage and interaction subspaces are defined in terms of the resolution operator associated with a given inverse operator, whereas in the second case, the resolution operator is replaced by the forward matrix. These source- and sensor-space characterizations of signal leakage formalize the notions used in [37–39] in the special case in which connectivity is defined in terms of spectral matrices. In particular, the leakage term formalizes the equivalent notions of "first-order interactions" [37] and "spurious interactions" [38, 39]. The interaction term can be further decomposed into a term corresponding to true connectivity and a term corresponding to a type of spurious interaction that is independent of source power and has been referred to as "second-order" [37] or "ghost" [38, 39] interactions. The current study focused on suppressing first-order interactions, whereas [38, 39] proposed a method to "bundle" second-order interactions and thereby reduce the number of false positives. However, our characterization of second-order interactions in terms of tensor-products of point-spread functions might be helpful in bundling these interactions.
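For concreteness, under the standard instantaneous linear model x = Ls + n with linear inverse operator W and resolution operator R = WL (our notation for this sketch; the symbols may differ from those used in the main text), the decomposition described above can be written as

\mathbb{E}\big[\widehat{C}_{\mathrm{src}}\big] = \underbrace{R\,\mathrm{diag}(C_s)\,R^{\mathsf{H}}}_{\text{leakage term}} + \underbrace{R\,\mathrm{offdiag}(C_s)\,R^{\mathsf{H}}}_{\text{interaction term}} + \underbrace{W\,\Sigma_n\,W^{\mathsf{H}}}_{\text{noise term}},

where C_s denotes the true source-space spectral matrix and Σn the noise spectral matrix. The (k, l)-entry of the interaction term is a sum of products of point-spread functions (columns of R), which is the tensor-product structure referred to above.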

Source- versus sensor-based estimation of system interactions

We have described how leakage can be suppressed by applying projection operators to estimated source- or sensor-space spectral matrices. The source-space spectral matrices themselves can be estimated in two ways. The most obvious way is to project the observed sensor data to source-space and subsequently estimate the source-space spectral matrix from the reconstructed source-activity. An alternative way is to first estimate the sensor-space spectral matrix and subsequently project it to source-space by an inverse operator obtained by vectorizing a bilinear forward model in matrix space. This is essentially what has been proposed in [27]. We provided some insight into this estimator by giving an alternative characterization as the minimizer of a regularized cost function in matrix space. We have derived the filtering coefficients of both estimators and established a natural correspondence, which we exploited in the evaluation of their performance. In particular, the first estimator regularized with level λ can be associated with the second estimator regularized with level λ². Corresponding filtering coefficients remain somewhat different, however, and this is reflected in the second estimator having a slightly higher spatial resolution.
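The following is a minimal sketch of the two estimation routes, assuming a Tikhonov (minimum-norm-type) inverse operator and Frobenius-norm regularization of the vectorized matrix-space problem; the actual operators used in the paper may differ, and the explicit Kronecker product restricts this naive implementation to toy problem sizes (cf. the memory considerations under Scope and limitations).

```python
import numpy as np

def tikhonov_inverse(L, lam):
    # W = L^T (L L^T + lam I)^(-1): a minimum-norm-type linear inverse operator.
    p = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(p), np.eye(p))

def source_based_csd(X, L, lam):
    # Route 1: project the sensor data to source-space, then estimate the
    # source-space spectral (cross-spectral density) matrix.
    W = tikhonov_inverse(L, lam)
    S = W @ X                                   # reconstructed source signals
    return S @ S.conj().T / X.shape[1]

def sensor_based_csd(X, L, lam):
    # Route 2: estimate the sensor-space spectral matrix, then invert the
    # bilinear (Kronecker-structured) model C_sens ≈ L C_src L^T in matrix
    # space, regularized at level lam**2.
    N = X.shape[1]
    C_sens = X @ X.conj().T / N
    n = L.shape[1]
    A = np.kron(L, L)                           # vec(C_sens) ≈ A vec(C_src)
    c = np.linalg.solve(A.conj().T @ A + lam**2 * np.eye(n * n),
                        A.conj().T @ C_sens.reshape(-1, order='F'))
    return c.reshape(n, n, order='F')
```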

Leakage suppression in source- and sensor-space

We have formulated and evaluated three methods for suppressing signal leakage in reconstructed system interactions, each of which involves the application of a suitable projection operator. The projection operator can be applied in source-space, to either the source- or sensor-based estimator of the source-space spectral matrix, or it can be applied in sensor-space; the latter corresponds to the correction method proposed in [27]. The sensor-based corrected estimators were more efficient in suppressing spurious interactions than the source-based corrected estimator, and correction in source-space was more efficient than correction in sensor-space (Fig 4). The reductions in bias due to the corrections were comparable for the three methods (Fig 5). We subsequently used the corrected estimators as statistics for testing for significant interactions. Application to the model system demonstrated that the corrected test-statistics were able to detect system interactions, irrespective of the lag, with higher power than the uncorrected test-statistics. The detection power was the same for all three corrected test-statistics (Fig 6). Lastly, we showed that the corrected, but not the uncorrected, test-statistics are able to reconstruct functional networks comprising more than two sources (Fig 7). Taken together, we conclude that, out of the three test-statistics, the sensor-based test-statistic with correction in sensor-space is to be preferred, although the differences are small. We thus advise using this test-statistic in practical applications to EEG/MEG and LFP recordings.
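A sketch of how such a projection can be applied in source-space, using the fact that the leakage term lies in the span of the vectorized outer products of the point-spread functions (the columns of the resolution matrix R = WL); this illustrates the general idea only and is not the paper's exact implementation:

```python
import numpy as np

def correct_spectral_matrix(C_src, R, rank):
    """Project a rank-`rank` approximation of the leakage subspace out of the
    (vectorized) source-space spectral matrix C_src; R is the n x n resolution
    matrix whose columns are the point-spread functions."""
    n = R.shape[0]
    # n^2 x n matrix whose columns span the leakage subspace
    B = np.column_stack([np.outer(R[:, j], R[:, j].conj()).ravel(order='F')
                         for j in range(n)])
    U, _, _ = np.linalg.svd(B, full_matrices=False)
    Uk = U[:, :rank]                                    # leading `rank` directions
    c = C_src.reshape(-1, order='F')
    c_corrected = c - Uk @ (Uk.conj().T @ c)            # remove the leakage component
    return c_corrected.reshape(n, n, order='F')
```

Setting `rank` to the full column rank of B suppresses the leakage term completely, at the price of also removing the part of the interaction term that is not orthogonal to the leakage subspace.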

Scope and limitations

This study focused on theoretical aspects of reconstructing system interactions in linearly-mixed systems and did not address practical aspects. Two of these aspects are the selection of the regularization-level of the inverse operator and the rank of the correction operators. With regard to selecting appropriate values for the regularization parameters, several techniques can be explored, including cross-validation [40], L-curve methods [41], residual analysis [42], and empirical Bayes [43]. Which method is most suited for which type of system and forward model is largely an empirical question that can be addressed through realistic numerical simulations. An interesting observation in this context is that the appropriate level of regularization depends on which aspect of the system under study is of interest. Thus, in [35] it was observed that estimation of brain interactions from MEG data requires less strong regularization than the reconstruction of the activity time-courses, and in [36] this observation is formally investigated. Selection of the optimal rank of the projection operators remains an open question, whose resolution could potentially yield interaction estimators with less bias and more powerful statistical hypothesis tests. In [27], the optimal projection rank was chosen on the basis of simulated data with a specific source configuration and parameter settings. For example, two sources were assumed to be active with equal strengths. However, no evidence was given that the obtained projection ranks are (close to) optimal for other source configurations, and a data-driven selection method would therefore be preferable. The difficulty lies in the fact that the leakage, interaction, and noise terms are not separately observable; only their superposition is. A data-driven method might be based on the sequence of estimates of the interaction matrices.

Determining to what extent the proposed estimators can successfully be applied to experimental EEG/MEG and LFP data requires characterizing and quantifying the effects of factors that are present in such recordings, such as cortical background activity, measurement noise, and inaccuracies in forward modeling (e.g. sensor positions, dipole orientations, and conduction properties of the volume conductor). Furthermore, although in this study we have considered functional networks comprising several sources, simulations of larger functional networks are required to obtain a better understanding of which types of network topologies can be successfully reconstructed and which ones are more challenging. Realistic simulations that take into account all of the above effects, however, cannot currently be carried out by standard implementations of the estimators due to their high memory demands. For example, implementation of the source-based estimator for a source-space of p = 10⁴ voxels requires computing the singular value decomposition of the matrix Bj, whose dimensions are p² × p, which is infeasible with standard algorithms. Similar considerations apply to the sensor-based estimator. In [27], these computational problems were circumvented by down-sampling the human cortex by a factor of ten. Although this approach is suitable when using adaptive spatial filters for the inverse modeling, it is not appropriate when working with non-adaptive spatial filters, as used in the current study, because such filters are unable to actively suppress source-activity from undesired locations. Thus, a natural follow-up study would be to use iterative reconstruction methods in combination with realistic EEG/MEG volume-conductor models and simulated functional networks to characterize the performance of the proposed methods.

Acknowledgments

The author would like to thank Wessel van Wieringen for valuable comments on the manuscript.

References

1. Bassett DS, Bullmore E. Small-World Brain Networks. The Neuroscientist. 2006;12(6). pmid:17079517
2. Rizkallah J, Amoud H, Fraschini M, Wendling F, Hassan M. Removing zero-lag functional connections can alter EEG-source space networks at rest. bioRxiv. 2019; https://doi.org/10.1101/617118.
3. Buzsáki G, Anastassiou CA, Koch C. The origin of extracellular fields and currents—EEG, ECoG, LFP and spikes. Nature Reviews Neuroscience. 2012;13:407–420.
4. Hindriks R, Micheli C, Mantini D, Deco G. Human resting-state electrophysiological networks in the alpha frequency band: Evidence from magnetoencephalographic source imaging. bioRxiv. 2017.
5. Hämäläinen M, Hari R, Ilmoniemi RJ, Knuutila J, Lounasmaa OV. Magnetoencephalography: theory, instrumentation, and applications to noninvasive studies of the working human brain. Reviews of Modern Physics. 1993;65(2):413–497.
6. Schoffelen JM, Gross J. Source connectivity analysis with MEG and EEG. Human Brain Mapping. 2009;30(6):1857–1865.
7. Pereira SS, Hindriks R, Muhlberg S, Maris E, van Ede F, Griffa A, et al. Effect of Field Spread on Resting-State Magnetoencephalography Functional Network Analysis. Brain Connectivity. 2017;7(9):541–557.
8. Bastos AM, Schoffelen JM. A Tutorial Review of Functional Connectivity Analysis Methods and Their Interpretational Pitfalls. Frontiers in Systems Neuroscience. 2016;9:1–23.
9. Palva S, Palva JM. Discovering oscillatory interaction networks with M/EEG: challenges and breakthroughs. Trends in Cognitive Sciences. 2012;16(4):219–230.
10. Pereda E, Quian Quiroga R, Bhattacharya J. Nonlinear multivariate analysis of neurophysiological signals. Progress in Neurobiology. 2005;77:1–37.
11. Nolte G, Bai O, Wheaton L, Mari Z, Vorbach S, Hallett M. Identifying true brain interaction from EEG data using the imaginary part of coherency. Clinical Neurophysiology. 2004;115(10):2292–2307. pmid:15351371
12. Nolte G, Meinecke FC, Ziehe A, Müller KR. Identifying interactions in mixed and noisy complex systems. Physical Review E. 2006;73(5):1–6.
13. Stam CJ, Nolte G, Daffertshofer A. Phase lag index: Assessment of functional connectivity from multi channel EEG and MEG with diminished bias from common sources. Human Brain Mapping. 2007;28(11):1178–1193.
14. Hindriks R, Bijma F, van Dijk BW, Stam CJ, van der Werf YY, van Someren EJW, et al. Data-driven modeling of phase interactions between spontaneous MEG oscillations. Human Brain Mapping. 2011;32(7):1161–1178.
15. Basti A, Pizzella V, Chella F, Romani GL, Nolte G, Marzetti L. Disclosing large-scale directed functional connections in MEG with the multivariate phase slope index. NeuroImage. 2018;175:161–175.
16. Colclough GL, Brookes MJ, Smith SM, Woolrich MW. A symmetric multivariate leakage correction for MEG connectomes. NeuroImage. 2015;117:439–448.
17. O'Neill GC, Barratt EL, Hunt BAE, Tewarie PK, Brookes MJ. Measuring electrophysiological connectivity by power envelope correlation: A technical review on MEG methods. Physics in Medicine and Biology. 2015;60(21):R271–R295.
18. Gohel B, Lee P, Kim M, Kim K, Jeong Y. MEG Based Functional Connectivity: Application of ICA to Alleviate Signal Leakage. IRBM. 2017;38(3):127–137.
19. Pascual-Marqui RD, Biscay RJ, Bosch-Bayard J, Faber P, Kochi K, Milz P, et al. Innovations orthogonalization: a solution to the major pitfalls of EEG/MEG "leakage correction". bioRxiv. 2017; https://doi.org/10.1101/178657.
20. Brookes MJ, Woolrich M, Luckhoo H, Price D, Hale JR, Stephenson MC, et al. Investigating the electrophysiological basis of resting state networks using magnetoencephalography. Proceedings of the National Academy of Sciences. 2011;108(40):16783–16788.
21. Hipp JF, Hawellek DJ, Corbetta M, Siegel M, Engel AK. Large-scale cortical correlation structure of spontaneous oscillatory activity. Nature Neuroscience. 2012;15(6).
22. Fries P. A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends in Cognitive Sciences. 2005;9(10).
23. Fries P. Rhythms for Cognition: Communication through Coherence. Neuron. 2015;88(1):220–235.
24. Wens V. Investigating complex networks with inverse models: Analytical aspects of spatial leakage and connectivity estimation. Physical Review E. 2015;91:012823.
25. Wens V, Marty B, Mary A, Bourguignon M, Op de Beeck M, Goldman S, et al. A geometric correction scheme for spatial leakage effects in MEG/EEG seed-based functional connectivity mapping. Human Brain Mapping. 2015;36(11):4604–4621.
26. Shen Y, Xu P, Li B. Bias-corrected regularized solution to inverse ill-posed models. Journal of Geodesy. 2012;86:597–608.
27. Ossadtchi A, Altukhov D, Jerbi K. Phase shift invariant imaging of coherent sources (PSIICOS) from MEG data. NeuroImage. 2018;183:950–971.
28. Grech R, Cassar T, Muscat J, Camilleri KP, Fabri SG, Zervakis M, et al. Review on solving the inverse problem in EEG source analysis. Journal of NeuroEngineering and Rehabilitation. 2008;5:1–33.
29. Hauk O, Wakeman DG, Henson R. Comparison of noise-normalized minimum norm estimates for MEG analysis using multiple resolution metrics. NeuroImage. 2011;54(3):1966–1974.
30. Hauk O, Stenroos M, Treder M. Towards an objective evaluation of EEG/MEG source estimation methods: The Linear Tool Kit. bioRxiv. 2019; https://doi.org/10.1101/672956.
31. Nicholson C, Freeman JA. Theory of current source-density analysis and determination of conductivity tensor for anuran cerebellum. Journal of Neurophysiology. 1975;38(2):356–368. pmid:805215
32. Gratiy SL, Halnes G, Denman D, Hawrylycz MJ, Koch C, Einevoll GT, et al. From Maxwell's equations to the theory of current-source density analysis. European Journal of Neuroscience. 2017;45:1013–1023.
33. de Haan W, Pijnenburg YAL, Strijers RLM, van der Made Y, van der Flier WM, et al. Functional neural network analysis in frontotemporal dementia and Alzheimer's disease using EEG and graph theory. BMC Neuroscience. 2009;12:1–12.
34. Hassan M, Dufor O, Merlet I, Berrou C, Wendling F. EEG source connectivity analysis: From dense array recordings to brain networks. PLoS ONE. 2014;9(8).
35. Hincapié AS, Kujala J, Mattout J, Daligault S, Delpuech C, Mery D, et al. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters. Computational Intelligence and Neuroscience. 2016;2016:12–18.
36. Vallarino E, Sommariva S, Piana M, Sorrentino A. On the two-step estimation of the cross-power spectrum for dynamical inverse problems. arXiv. 2019; p. 1–16.
37. Drakesmith M, El-Deredy W, Welbourne S. Reconstructing Coherent Networks from Electroencephalography and Magnetoencephalography with Reduced Contamination from Volume Conduction or Magnetic Field Spread. PLoS ONE. 2013;8(12).
38. Palva JM, Wang SH, Palva S, Zhigalov A, Monto S, Brookes MJ, et al. Ghost interactions in MEG/EEG source space: A note of caution on inter-areal coupling measures. NeuroImage. 2018.
39. Wang SH, Lobier M, Siebenhühner F, Puoliväli T, Palva S, Palva JM. Hyperedge bundling: A practical solution to spurious interactions in MEG/EEG source connectivity analyses. NeuroImage. 2018;173:610–622.
40. Arlot S, Celisse A. A survey of cross-validation procedures for model selection. Statistics Surveys. 2010;4:40–79.
41. Hansen PC, O'Leary DP. The use of the L-curve in the regularization of discrete ill-posed problems. SIAM Journal on Scientific Computing. 1993;14(6):1487–1503.
42. Hansen PC. Discrete Inverse Problems: Insight and Algorithms. SIAM; 2010.
43. Mitsuhata Y. Adjustment of regularization in ill-posed linear inverse problems by the empirical Bayes approach. Geophysical Prospecting. 2004;52:213–239.