
Partial information decomposition reveals that synergistic neural integration is greater downstream of recurrent information flow in organotypic cortical cultures

  • Samantha P. Sherrill,

    Roles Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    samfaber@indiana.edu (SPS); ehnewman@iu.edu (ELN)

    Affiliation Department of Psychological and Brain Sciences & Program in Neuroscience, Indiana University Bloomington, Bloomington, Indiana, United States of America

  • Nicholas M. Timme,

    Roles Data curation, Resources, Software, Validation, Writing – review & editing

    Affiliation Department of Psychology, Indiana University-Purdue University Indianapolis, Indianapolis, Indiana, United States of America

  • John M. Beggs,

    Roles Conceptualization, Data curation, Funding acquisition, Methodology, Project administration, Resources, Software, Supervision, Validation, Writing – review & editing

    Affiliation Department of Physics & Program in Neuroscience, Indiana University Bloomington, Bloomington, Indiana, United States of America

  • Ehren L. Newman

    Roles Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing


    Affiliation Department of Psychological and Brain Sciences & Program in Neuroscience, Indiana University Bloomington, Bloomington, Indiana, United States of America

Abstract

The directionality of network information flow dictates how networks process information. A central component of information processing in both biological and artificial neural networks is their ability to perform synergistic integration, a type of computation. We established previously that synergistic integration varies directly with the strength of feedforward information flow. However, the relationships between recurrent and feedback information flow and synergistic integration remain unknown. To address this, we analyzed the spiking activity of hundreds of neurons in organotypic cultures of mouse cortex. We asked how empirically observed synergistic integration, determined from partial information decomposition, varied with local functional network structure that was categorized into motifs with varying recurrent and feedback information flow. We found that synergistic integration was elevated in motifs with greater recurrent information flow, beyond that expected from the local feedforward information flow. Feedback information flow was interrelated with feedforward information flow and was associated with decreased synergistic integration. Our results indicate that synergistic integration is distinctly influenced by the directionality of local information flow.

Author summary

Networks compute information; that is, they modify inputs to generate distinct outputs. These computations are an important component of network information processing, so knowing how the routing of information in a network influences computation is crucial. Here we asked how a key form of computation—synergistic integration—is related to the direction of local information flow in networks of spiking cortical neurons. Specifically, we asked how information flow between input neurons (i.e., recurrent information flow) and information flow from output neurons to input neurons (i.e., feedback information flow) were related to the amount of synergistic integration performed by output neurons. We found that greater synergistic integration occurred where there was more recurrent information flow, and less synergistic integration occurred where there was more feedback information flow relative to feedforward information flow. These results show that computation, in the form of synergistic integration, is distinctly influenced by the directionality of local information flow. Such work is valuable for predicting where and how network computation occurs and for designing networks with desired computational abilities.

Introduction

Feedforward, recurrent, and feedback connections are important for information processing in both artificial and biological neural networks [1–5]. Whether these connections represent the strength of a synapse or the amount of information transmission between two nodes (i.e., information flow), their directionality—feedforward, recurrent (lateral), or feedback—influences how the network processes information. A component of information processing that is central to both biological and artificial neural networks is their ability to perform synergistic integration, a form of computation. Feedforward information flow has previously been shown to be a strong predictor of synergistic integration in biological cortical circuits [6]. However, the influence of recurrent and feedback information flow on synergistic integration is unclear. Understanding how each of these directed functional connections influences the computational properties of neural networks is a critical step in understanding how neural networks compute. Here, we examine this question in the context of cortical networks, using a motif-style, information-theoretic analysis of high-density in vitro recordings of spiking neurons.

Synergistic integration refers to the process by which non-overlapping information from multiple inputs, considered simultaneously, provides additional information about a target neuron (beyond the information provided uniquely by each input, and beyond the overlapping information provided by all inputs, i.e., the redundancy). In this form, it can be considered a proxy for a type of non-trivial computation [6]. Partial information decomposition (PID) offers an approach to quantifying each type of information (unique, redundant, and synergistic) [7], enabling estimation of the synergistic integration that emerges when a given neuron integrates input from two other neurons, after factoring out the other types of information [8,9]. This approach has been used effectively before [6,8,10,11]. Here, we leveraged it to determine how recurrent and feedback information flow relate to synergistic integration.
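To make the decomposition concrete, here is a minimal sketch of PID using the Williams–Beer I_min redundancy measure (one of the PID variants used in this study). The function names and the joint-probability-table representation are our own illustration, not the authors' code:

```python
import numpy as np

def mutual_info(joint):
    """I(X;Y) in bits from a 2-D joint probability table p(x, y)."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def pid_imin(p):
    """Williams-Beer PID of p[t, s1, s2] into (redundancy, unique1, unique2, synergy)."""
    p = p / p.sum()
    pt = p.sum(axis=(1, 2))                       # p(t)
    spec = []                                     # specific information I(T=t; S_i)
    for axis_keep in (1, 2):
        pts = p.sum(axis=3 - axis_keep)           # joint p(t, s_i)
        ps = pts.sum(axis=0)                      # p(s_i)
        si = np.zeros(len(pt))
        for t in range(len(pt)):
            for s in range(pts.shape[1]):
                if pts[t, s] > 0:
                    si[t] += (pts[t, s] / pt[t]) * np.log2(pts[t, s] / (ps[s] * pt[t]))
        spec.append(si)
    # redundancy = expected minimum specific information over sources
    redundancy = float((pt * np.minimum(spec[0], spec[1])).sum())
    i1 = mutual_info(p.sum(axis=2))               # I(T; S1)
    i2 = mutual_info(p.sum(axis=1))               # I(T; S2)
    i12 = mutual_info(p.reshape(p.shape[0], -1))  # I(T; S1, S2)
    unique1, unique2 = i1 - redundancy, i2 - redundancy
    synergy = i12 - unique1 - unique2 - redundancy
    return redundancy, unique1, unique2, synergy

# Demo: T = XOR(S1, S2) with uniform inputs is purely synergistic (1 bit).
p_xor = np.zeros((2, 2, 2))
for s1 in range(2):
    for s2 in range(2):
        p_xor[s1 ^ s2, s1, s2] = 0.25
red, u1, u2, syn = pid_imin(p_xor)
```

For T = XOR(S1, S2), neither source alone carries information about the target, so the full bit of I(T; S1, S2) is assigned to synergy.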

Recurrence—whether physical in the form of neuronal structures or functional in the form of information flow—is believed to be important for higher-order functions including memory processes (e.g., recollection and recognition), largely because it generates attractor-like, pattern-completion activity [12–20]. Studies show that such activity combines diverse features to form representations and also contributes to the interpretation and categorization of representations [21–27]. These studies further show that greater interpretability of images and object categories emerges at latencies beyond those of known feedforward processes, implying the presence of recurrent activity. Relatedly, in artificial neural networks, recurrent connections expand computational power by extending operations in time, allowing a smaller network to carry out the same operations as a larger, purely feedforward network [3,28]. Thus, controlling for the size of a network, the use of recurrence can improve network operation.

Feedback—whether physical or functional, local or long-range—is believed to implement top-down, goal-driven attention and perception, which involve the preferential activation of lower-level neurons by higher-level neurons (e.g., [29–31]; for reviews, see [32,33]). Due to its top-down nature, feedback also plays a role in the gating and rerouting of information flow, as well as in error prediction (for related reviews see [34–38]). Relatedly, feedback is associated with increased surround suppression, reducing the range of stimuli to which lower-level neurons respond [39,40]. From this perspective, feedback reduces the extent to which lower-level neurons can account for variance in higher-level neurons. Individual neurons can receive feedback from downstream neurons and send feedback to upstream neurons. Here we consider the impact of feedback from a neuron to its upstream sources on its integration of input from those sources.

Here, we tested how recurrence and feedback relate to synergistic integration in functional networks observed in cortex (Fig 1). Across motifs, we found that recurrence was positively related to synergistic integration. Feedback, conversely, was negatively related to synergistic integration, but this negative relationship was accounted for by concurrent shifts in feedforward information flow.

Fig 1. Methodological approach taken to ask how synergy is related to recurrent and feedback information flow in organotypic cultures of mouse cortex.

(A) Hour-long recordings of spiking activity were collected in vitro from organotypic cultures of mouse somatosensory cortex using a high-density 512-channel multielectrode array. (B) Spike sorting yielded spike trains of hundreds of well-isolated individual neurons per recording. (C) Transfer entropy was used to identify significant information flow between neuron pairs. This comprised the effective connections (edges) in a functional network. The resulting effective networks were analyzed to identify all triads consisting of two edges connecting to a common receiver. (D) For each triad, we quantified the amount of synergy via partial information decomposition (PID). We also identified all possible relevant motifs and arranged them according to the number of recurrent and feedback edges they contained. (E-F) We sought to answer two questions. (E) Is synergy positively related, negatively related, or unrelated to the number of recurrent edges? (F) Is synergy positively related, negatively related, or unrelated to the number of feedback edges? Triads consist of two transmitter neurons (blue), each with feedforward edges (gray arrows) connecting to a receiver neuron (red). Black arrows depict recurrent and feedback edges on the left and right, respectively.

https://doi.org/10.1371/journal.pcbi.1009196.g001

Results

We asked how the number of recurrent and feedback edges in triadic motifs was related to the amount of synergistic integration by analyzing hour-long recordings of spiking activity from organotypic cultures of mouse somatosensory cortex (n = 25), as summarized in Fig 1. Recordings yielded between 98 and 594 well-isolated neurons (mean = 309). The average firing rate among neurons was 2.1 Hz [2.0 Hz, 2.2 Hz], and neurons exhibited rhythmic bursts of activity (Fig 1B), as characterized previously [41,42]. We identified effective connections between neurons in each recording as those with significant transfer entropy, such that the observed value was greater than 99.9% of values obtained from a jittering procedure (i.e., p<0.001). We then identified all synergistic 3-node motifs. Synergistic motifs were those that included two transmitter nodes sending inputs to the same receiver node. Motifs without this structure were excluded because we were only concerned with the motifs' ability to perform synergistic integration. The set of motifs included in our analyses is shown in Fig 2. We quantified the amount of synergistic integration performed by the receiver based on its inputs using 'synergy,' a term derived from partial information decomposition. Consistent with previous work [6,8,10], synergy was normalized to reflect the proportion of the receiving neuron's entropy for which it accounted and to control for variable entropy across triads and networks. Across triads, we asked whether normalized synergy was positively or negatively related to the number of recurrent and feedback edges in the corresponding motif. This analysis was repeated at three timescales relevant to the delay of synaptic transmission, which has been reported to span 1–20 ms [43,44]. The three timescales (0.05–3 ms, 1.6–6.4 ms, and 3.5–14 ms), covering a range of 0.05–14 ms, were determined by the granularity of the data binning and the delay between bins. More detail regarding the structure of the timescales is given in Fig A in S1 File. All summary statistics are reported as medians or means, as indicated, followed by 95% bootstrap confidence intervals in brackets.
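The edge-identification step rests on transfer entropy. A minimal, order-1 estimator for a pair of binned binary spike trains might look like the following sketch; the authors' implementation spans multiple delays and assesses significance against jittered spike trains, which this toy version omits:

```python
import numpy as np

def transfer_entropy(x, y, delay=1):
    """TE(X -> Y) in bits for binary spike trains, single-bin delay, order-1 history."""
    x, y = np.asarray(x), np.asarray(y)
    yt, yp, xp = y[delay:], y[:-delay], x[:-delay]
    te = 0.0
    for a in (0, 1):                  # receiver's current bin
        for b in (0, 1):              # receiver's past bin
            for c in (0, 1):          # sender's past bin
                p_abc = np.mean((yt == a) & (yp == b) & (xp == c))
                if p_abc == 0:
                    continue
                p_a_given_bc = p_abc / np.mean((yp == b) & (xp == c))
                p_a_given_b = np.mean((yt == a) & (yp == b)) / np.mean(yp == b)
                te += p_abc * np.log2(p_a_given_bc / p_a_given_b)
    return te

# Demo: y is a one-bin-delayed copy of a random train, so TE(x -> y) is ~1 bit.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 20000)
y = np.concatenate(([0], x[:-1]))
te_xy = transfer_entropy(x, y)
```

In practice, significance would be assessed by recomputing TE on jittered versions of the sender train and keeping edges whose observed TE exceeds the null distribution.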

Fig 2. Set of synergistic 3-node motifs.

Synergistic motifs were those in which both transmitters (blue dots) sent input (gray arrows) to the same receiver (red dot). Motifs are arranged in order (1–10) of the number of feedback and recurrent edges they contained (black arrows): 0, 1, or 2.

https://doi.org/10.1371/journal.pcbi.1009196.g002

Recurrence predicts increased normalized synergy, feedback predicts decreased normalized synergy

To examine the relationships between normalized synergy and the number of both recurrent and feedback edges, we used a motif-style analysis [45]. We first quantified the mean normalized synergy for each of the 10 synergistic motifs in every network (Fig 3). We then compared the mean normalized synergy in recurrent motifs (those with more recurrent than feedback edges) to the mean normalized synergy in feedback motifs (those with more feedback than recurrent edges). We observed significantly greater normalized synergy in recurrent motifs (Fig 3, orange) compared to feedback motifs (Fig 3, green) (mean = 0.011 vs. 0.007, zs.r. = 6.31, n = 75, p<1x10-9). To determine how recurrent and feedback edges affect normalized synergy relative to baseline levels, we compared the observed synergy in each motif to the normalized synergy in the default motif (with 0 recurrent and 0 feedback edges; Fig 3). We found that triads with recurrent motifs had significantly greater normalized synergy than those with the default motif (zs.r. = 5.80, n = 75, p<1x10-8). Conversely, triads with feedback motifs had significantly less normalized synergy than those with the default motif (zs.r. = -2.61, n = 75, p = 0.009). In addition to the comparison to baseline normalized synergy, we compared the observed synergy values to those observed when motif labels were randomly permuted across triads. We observed the same qualitative pattern of results here as in the comparison to baseline synergy levels. We found that recurrent motifs had significantly greater normalized synergy than expected by chance (zs.r. = 5.93, n = 75, p<1x10-8). Conversely, feedback motifs had significantly less normalized synergy than expected by chance (zs.r. = -3.96, n = 75, p<1x10-4).
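Classifying a triad into one of the motifs amounts to counting recurrent and feedback edges around the two defining feedforward edges. A minimal sketch (the node ordering and function name are our own):

```python
import numpy as np

def classify_triad(adj):
    """Count (recurrent, feedback) edges in a synergistic triad.

    adj is a 3x3 binary matrix over nodes (t1, t2, r); adj[i, j] = 1 means a
    significant TE edge from node i to node j. The two feedforward edges
    t1 -> r and t2 -> r are assumed present (the defining triad structure).
    """
    t1, t2, r = 0, 1, 2
    assert adj[t1, r] and adj[t2, r], "not a synergistic triad"
    recurrent = int(adj[t1, t2]) + int(adj[t2, t1])   # edges between transmitters
    feedback = int(adj[r, t1]) + int(adj[r, t2])      # receiver -> transmitter edges
    return recurrent, feedback
```

The (recurrent, feedback) pair, each 0, 1, or 2, indexes the motif categories used in the analyses below.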

Fig 3. Normalized synergy was greater in recurrent motifs than in feedback motifs.

Point clouds show the mean synergy value for each of the 75 networks analyzed for each type of motif. For distributions in which not all networks exhibited the motif, n<75. Central tendency and error bars depict the median and the 95% bootstrap confidence interval around the median. The motifs are graphically depicted below the x-axis and are organized by the number of feedback and recurrent edges. Motifs with more recurrent than feedback edges are indicated in orange. Motifs with more feedback than recurrent edges are indicated in green. Inset: Synergy values from motifs with greater recurrence or greater feedback were aggregated to directly compare the mean synergy. In both panels, the median (dotted line) and 95% bootstrap confidence interval (blue region) for baseline synergy values in default motifs (with 0 recurrent and 0 feedback edges) are shown. Significance indicators: '+' and '-' indicate p<0.01 by a two-tailed test, wherein '+' indicates significantly more than baseline and '-' indicates significantly less than baseline; *** p<1x10-9.

https://doi.org/10.1371/journal.pcbi.1009196.g003
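The 95% bootstrap confidence intervals reported throughout can be obtained with a simple percentile bootstrap, sketched here (the parameter defaults are our own, not taken from the paper's methods):

```python
import numpy as np

def bootstrap_ci(values, stat=np.median, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a summary statistic."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values)
    # resample with replacement and recompute the statistic on each resample
    boots = np.array([stat(rng.choice(values, size=len(values), replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

# Demo on a toy sample centered at 5
sample = np.random.default_rng(7).normal(5.0, 1.0, 200)
lo, hi = bootstrap_ci(sample)
```

The same routine serves for means by passing stat=np.mean, matching the error bars in Figs 4–7.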

To assess how normalized synergy varies across the multiple levels of recurrence and feedback, we grouped the 10 synergistic motifs into 9 categories based on the number of feedback and recurrent edges they contained (Fig 4A). A two-factor (recurrence and feedback), repeated-measures ANOVA, with three levels of each factor (0, 1, or 2 edges), was conducted to examine the main effects of recurrence and feedback on synergy and to test for an interaction effect. The main effect of recurrence was significant (F(2,148) = 7, p = 0.001). The mean normalized synergy increased as the number of recurrent edges increased (0.0080 [0.0069 0.0093] vs. 0.012 [0.010 0.014] vs. 0.014 [0.012 0.017] for 0, 1, and 2 edges, respectively; Fig 4B), reflected by a significant positive correlation between normalized synergy and number of recurrent edges (Spearman r = 0.25, n = 675, p<1x10-8). The main effect of feedback was also significant (F(2,148) = 50, p<1x10-17). The mean normalized synergy decreased as the number of feedback edges increased (0.012 [0.011 0.014] vs. 0.011 [0.009 0.013] vs. 0.008 [0.007 0.010] for 0, 1, and 2 edges, respectively; Fig 4C), reflected by a significant negative correlation between normalized synergy and number of feedback edges (Spearman r = -0.22, n = 675, p<1x10-6). There was also a significant interaction between recurrence and feedback on normalized synergy (F(4,296) = 6.7, p<1x10-4). Specifically, normalized synergy was highest in motifs with the most recurrent edges and the fewest feedback edges, and lowest in motifs with the fewest recurrent edges and the most feedback edges. To test reliability at the single-network level, we examined those networks with a sufficient number of triads across all levels of connectivity to permit meaningful analysis (15 networks; see Materials and Methods for details). We found that 12 of 15 exhibited a significant main effect of the number of recurrent edges and 12 of 15 exhibited a significant main effect of the number of feedback edges. Post-hoc correlations between normalized synergy and recurrent edge count were significantly positive in 15 of 15 networks; post-hoc correlations between normalized synergy and feedback edge count were significantly negative in 10 of 15 networks and significantly positive in 3 of 15 networks. Thus, the per-network analyses agree with those observed across all networks, indicating that the trends are reliable both within and between networks. Taken together, these results show that, across these networks, motifs with greater upstream recurrence have greater normalized synergy, and motifs with greater feedback have less normalized synergy.
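The central trend (mean normalized synergy rising with recurrent edge count) can be tabulated from a per-triad table as below. The data here are fabricated for illustration only, with magnitudes loosely matching the means reported above:

```python
import numpy as np

# Fabricated per-triad table for one hypothetical network (not the study's data):
rng = np.random.default_rng(1)
recurrent_edges = rng.integers(0, 3, 500)        # 0, 1, or 2 recurrent edges per triad
norm_synergy = 0.008 + 0.003 * recurrent_edges + rng.normal(0, 0.004, 500)

# Mean normalized synergy at each recurrence level, as summarized in Fig 4B
level_means = [norm_synergy[recurrent_edges == k].mean() for k in range(3)]
```

Grouping by feedback edge count instead of recurrent edge count gives the analogous per-level summary for Fig 4C.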

Fig 4. Normalized synergy increased with greater recurrence and decreased with greater feedback.

(A) Motifs are ordered based on the number of recurrent edges (columns) and feedback edges (rows). The background heatmap, wherein brighter colors reflect larger normalized synergy values, replots the central tendency of the values shown in Fig 3. (B) Curves representing rows shown in A, plotted with errorbars computed across networks, show that synergy increased as the number of recurrent edges increased. (C) Curves representing columns shown in A, plotted with errorbars computed across networks, show that synergy decreased as the number of feedback edges increased. Errorbars are 95% bootstrap confidence intervals around the mean.

https://doi.org/10.1371/journal.pcbi.1009196.g004

To understand if and how the normalization of synergy by receiver entropy affected these results, we separately examined how raw synergy (i.e., non-normalized synergy) varied across motifs (Fig 5). As with normalized synergy, raw synergy was positively correlated with the number of recurrent edges (Spearman r = 0.15, n = 675, p<0.001). A two-way ANOVA examining how raw synergy values varied across levels of recurrence and feedback found the main effect of recurrence trending toward significance (F(2,148) = 2.6, p = 0.08; Fig 5B). The results for feedback, however, were qualitatively different for raw versus normalized synergy. Raw synergy increased with the number of feedback edges (Spearman r = 0.10, n = 675, p = 0.01), rather than decreasing as normalized synergy did. The main effect of feedback in the two-way ANOVA was significant (F(2,148) = 4.7, p = 0.01; Fig 5C). We again found a significant interaction between recurrence and feedback levels (F(4,296) = 7.8, p<1x10-5), such that the contributions of recurrence and feedback to raw synergy failed to sum linearly. Thus, consistent with our previous findings [6], these results indicate that raw synergy generally increases with greater edge density.

Fig 5. Raw synergy increased with greater recurrence and greater feedback.

(A) Mean synergy increased with the number of recurrent and feedback edges in motifs. (B) Curves representing rows shown in A, plotted with errorbars computed across networks, show that synergy increased as the number of recurrent edges increased, although the difference in means across levels was not significant. (C) Curves representing columns shown in A, plotted with errorbars computed across networks, show that synergy increased as the number of feedback edges increased. Errorbars are 95% bootstrap confidence intervals around the mean.

https://doi.org/10.1371/journal.pcbi.1009196.g005

The fact that raw synergy and normalized synergy varied in opposite directions with respect to the number of feedback edges reveals that the normalizing term, receiver entropy, mediated the relationship between feedback and synergy. Receiver entropy differed significantly across motifs (Fig 6). A two-way ANOVA found significant main effects of recurrence (F(2,148) = 32.7, p<1x10-11) and feedback (F(2,148) = 15.1, p<1x10-5) and a significant interaction effect (F(4,296) = 13.19, p<1x10-9). Receiver entropy was negatively correlated with recurrence level (Spearman r = -0.16, n = 675, p<0.001; Fig 6B) and strongly positively correlated with feedback level (Spearman r = 0.41, n = 675, p<1x10-22; Fig 6C). The finding that there was greater receiver entropy in motifs with more feedback edges accounts for the different pattern of results observed for raw and normalized synergy across feedback levels. Motifs with greater feedback, these results suggest, emerge around receiver nodes with greater entropy. While these motifs generate more raw synergy, that synergy accounts for a smaller proportion of the entropy of the respective receiver (i.e., less normalized synergy). Recurrence, however, did not display such a dependence: whether or not it was normalized by receiver entropy, synergy was positively related to the number of recurrent edges in the motif.

Fig 6. Receiver entropy decreased with greater recurrence and increased with greater feedback.

(A) Mean receiver entropy decreased with the number of recurrent edges and increased with the number of feedback edges in motifs. (B) Curves representing rows shown in A, plotted with errorbars computed across networks, show that receiver entropy decreased as the number of recurrent edges increased. (C) Curves representing columns shown in A, plotted with errorbars computed across networks, show that receiver entropy increased as the number of feedback edges increased. Errorbars are 95% bootstrap confidence intervals around the mean.

https://doi.org/10.1371/journal.pcbi.1009196.g006

To further examine the influence of receiver entropy on the relationships between both recurrence and feedback and synergy, we regressed out the variance in raw synergy associated with receiver entropy and asked if the residual raw synergy was still related to the motif structure using an ANOVA. The positive relationship between raw synergy and motif recurrent edge number was not affected by regressing out variance associated with receiver entropy (F(2,148) = 14.2, p<1x10-5; Spearman r = 0.34, n = 675, p<1x10-15). However, the relationship between feedback edge number and raw synergy, after regressing out variance associated with receiver entropy, was no longer significant (F(2,148) = 2.03, p = 0.14; Spearman r = -0.034, n = 675, p = 0.43), suggesting that the variance in synergy associated with receiver entropy and feedback edge count was entangled. Here again, synergy was positively related to recurrent edge count.
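The "regressing out" used here and elsewhere in the section is ordinary least-squares residualization. A minimal sketch with toy data (the variable names and values are ours, chosen so that synergy is driven entirely by receiver entropy):

```python
import numpy as np

def regress_out(y, x):
    """Residuals of y after removing an OLS fit on x (with an intercept)."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Toy example: raw synergy depends linearly on receiver entropy plus small noise
rng = np.random.default_rng(0)
receiver_entropy = rng.uniform(0.5, 1.5, 300)
raw_synergy = 0.01 * receiver_entropy + rng.normal(0, 1e-4, 300)
residual_synergy = regress_out(raw_synergy, receiver_entropy)
```

By construction, OLS residuals are orthogonal to the regressor, so any remaining relationship between the residuals and motif structure cannot be carried by the covariate.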

To investigate the potential role of sender entropy in explaining the relationships between synergy and recurrence and between synergy and feedback, we examined the distribution of sender entropies (averaged within a triad) across motifs. Sender entropy differed significantly across motifs (Fig 7). A two-way ANOVA found significant main effects of recurrence (F(2,148) = 3.62, p = 0.03) and feedback (F(2,148) = 19.7, p<1x10-7) and a significant interaction effect (F(4,296) = 10.4, p<1x10-7). Sender entropy was moderately positively correlated with recurrence level (Spearman r = 0.23, n = 675, p<1x10-7; Fig 7B) and mildly positively correlated with feedback level (Spearman r = 0.10, n = 675, p = 0.03; Fig 7C). The finding that there was greater sender entropy in motifs with more recurrent and feedback edges suggested that sender entropy might play a role in the relationships between these edge counts and synergy. To determine the extent of this role, we regressed out the variance in synergy associated with sender entropy and asked if the residual synergy (both normalized and raw) was still related to the motif structure (Table 1). The residual normalized synergy remained significantly positively related to the number of recurrent edges (F(2,148) = 15.3, p<1x10-6; Spearman r = 0.29, n = 675, p<1x10-11) and significantly negatively related to the number of feedback edges (F(2,148) = 25.8, p<1x10-9; Spearman r = -0.37, n = 675, p<1x10-18). Thus, these relationships were not affected by regressing out variance associated with sender entropy. As with raw synergy, residual raw synergy was positively related to the number of recurrent edges (Spearman r = 0.11, n = 675, p<0.01) and the number of feedback edges (Spearman r = 0.16, n = 675, p<0.001). The two-way ANOVA showed the main effect of recurrence trending toward significance (F(2,148) = 2.7, p = 0.07), while the main effect of feedback remained significant (F(2,148) = 7.0, p<0.01). Thus, sender entropy accounted for neither the synergy-recurrence nor the synergy-feedback relationship.

Fig 7. Sender entropy increased with greater recurrence and greater feedback.

(A) Mean sender entropy increased with the number of recurrent edges and with the number of feedback edges in motifs. (B) Curves representing rows shown in A, plotted with errorbars computed across networks, show that sender entropy increased as the number of recurrent edges increased. (C) Curves representing columns shown in A, plotted with errorbars computed across networks, show that sender entropy increased as the number of feedback edges increased. Errorbars are 95% bootstrap confidence intervals around the mean.

https://doi.org/10.1371/journal.pcbi.1009196.g007

Table 1. Relationships between residual synergy and recurrence and feedback, after regressing out sender entropy.

Results are shown for both normalized synergy (Syn/Ent_receiver − Ent_sender) and raw synergy (Syn − Ent_sender). Columns 1–3 (df, F, p_ANOVA) show the results of a repeated-measures ANOVA for the residual synergy predicted by the number of recurrent and feedback edges. Columns 4–5 (rho, p_rho) show the results of Spearman rank correlations between residual synergy and the number of recurrent and feedback edges. P-values significant at the α = 0.05 level are in bold.

https://doi.org/10.1371/journal.pcbi.1009196.t001

Next, we sought to determine whether the observed relationship between normalized synergy and motif structure could be the product of systematic variance across motifs in either the strength of feedforward edge weight or the mutual information between the senders. Both are predictive of normalized synergy [6,10]. Here, mutual information was normalized by the minimum of the two sender entropies, i.e., the maximum possible mutual information. To address each potential confound, we regressed out the variance associated with each factor and asked if the residual normalized synergy remained correlated with the motif structure (Table 2). The residual normalized synergy remained significantly positively related to the number of recurrent edges whether we regressed out variance associated with feedforward edge weight (F(2,148) = 9.4, p<1x10-3; Spearman r = 0.37, n = 675, p<1x10-18) or the mutual information between the senders (F(2,148) = 29.4, p<1x10-10; Spearman r = 0.49, n = 675, p<1x10-33). Thus, the positive relationship between normalized synergy and motif recurrent edge number is not accounted for by covariance in either feedforward edge weight or mutual information between senders.

Table 2. Relationships between residual normalized synergy and recurrence and feedback, after regressing out potential confounding covariates: strength of feedforward edge weights (Syn/Ent_receiver − FF) and mutual information between senders (Syn/Ent_receiver − MI).

Columns 1–3 (df, F, p_ANOVA) show the results of a repeated-measures ANOVA for the residual normalized synergy predicted by the number of recurrent and feedback edges. Columns 4–5 (rho, p_rho) show the results of Spearman rank correlations between residual normalized synergy and the number of recurrent and feedback edges. P-values significant at the α = 0.05 level are in bold.

https://doi.org/10.1371/journal.pcbi.1009196.t002

The negative relationship between normalized synergy and motif feedback edge number, likewise, was not affected by regressing out variance associated with mutual information between senders (F(2,148) = 14.5, p<1x10-5; Spearman r = -0.28, n = 675, p<1x10-10). The residual normalized synergy, after regressing out variance associated with feedforward edge weight, maintained a significant negative correlation with motif feedback edge count (Spearman r = -0.11, n = 675, p = 0.01) but was no longer a significant main effect in the two-way ANOVA (F(2,148) = 1.0, p = 0.36). This loss of significance also occurred without the normalization by receiver entropy (F(2,148) = 0.7, p = 0.50). The sensitivity of the relationship between synergy and motif feedback edge count to regressing out variance associated with feedforward edge weight indicates variance in synergy associated with the number of feedback edges could be accounted for by variance in feedforward edge strength. Indeed, in a post hoc test, we found that feedforward edge weight was significantly negatively correlated with the number of feedback edges (Spearman r = -0.22, n = 675, p<1x10-6). As described further in the discussion, the negative correlation between feedforward and feedback information flow leaves ambiguity to be resolved by future work regarding the separate effects of feedback and feedforward information flows on synergistic integration. What can be said is that synergy was greater when the ratio of feedforward to feedback information flow was larger (Spearman r = 0.50, n = 675, p<1x10-35).

Extending our analysis of how normalized synergy relates to the number of recurrent and feedback edges, we asked whether it is similarly related to the strength of those edges. For each network, we performed a multiple linear regression using feedforward, recurrent, and feedback edge weights as predictors of normalized synergy, and then analyzed the distribution of beta coefficients for each term (Fig 8). As expected from prior work [6], feedforward edge weight was a significant predictor of normalized synergy (zs.r. = 7.52, n = 75, p < 1×10⁻¹³). Recurrent edge weight was also a significant predictor of normalized synergy (zs.r. = 6.12, n = 75, p < 1×10⁻⁹). In contrast, feedback edge weight was not a significant predictor of normalized synergy (zs.r. = -0.93, n = 75, p = 0.35). To determine whether motif structure could account for variance beyond what the edge weights could, we performed an ANOVA on the normalized synergy residuals across networks. We observed no significant effects. Residual normalized synergy was no longer significantly correlated with the number of recurrent edges (Spearman r = 0.07, n = 675, p = 0.09) and there was no main effect of recurrence (F(2,148) = 0.88, p = 0.42). Residual normalized synergy was also no longer significantly correlated with the number of feedback edges (Spearman r = -0.04, n = 675, p = 0.32) and there was no main effect of feedback (F(2,148) = 0.88, p = 0.42). There was also no significant interaction effect (F(4,296) = 1.42, p = 0.23). These results showed that motif structure does not account for variance in synergy beyond the motif edge weights. A potential explanation for why the number of edges and the strength of edges may explain similar variance in normalized synergy is that stronger edges are more likely to be significant. Quantifying the probability of observing a significant edge for each decile of transfer entropy (edge weight) in each network revealed a significant, positive trend across deciles (zs.r. = 7.52, n = 75, p < 1×10⁻¹³). It is likely, however, that binarizing edges by significance resulted in the loss of predictive signal. Overall, these results again provided evidence that recurrence, but not feedback, is positively related to synergistic integration when quantified as the weight of the motif edges as well as the number of edges.
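The per-network regression of normalized synergy on edge weights, and the group-level signed-rank test on the resulting beta weights, can be sketched as follows. This uses simulated data in which synergy loads on feedforward and recurrent weights but not feedback weight; all names, sample sizes, and effect magnitudes are hypothetical.

```python
import numpy as np
from scipy import stats

def network_betas(ff, rec, fb, synergy):
    """Fit synergy ~ feedforward + recurrent + feedback edge weights for
    one network and return the three slope coefficients."""
    A = np.column_stack([np.ones_like(ff), ff, rec, fb])
    beta, *_ = np.linalg.lstsq(A, synergy, rcond=None)
    return beta[1:]  # drop the intercept

# Toy data for 75 "networks": synergy depends on feedforward and
# recurrent edge weight but not on feedback edge weight.
rng = np.random.default_rng(1)
betas = []
for _ in range(75):
    ff, rec, fb = rng.random((3, 200))
    syn = 0.5 * ff + 0.3 * rec + rng.normal(scale=0.05, size=200)
    betas.append(network_betas(ff, rec, fb, syn))
betas = np.array(betas)  # shape (75, 3)

# Group level: are the betas reliably non-zero across networks?
stat_ff, p_ff = stats.wilcoxon(betas[:, 0])
stat_fb, p_fb = stats.wilcoxon(betas[:, 2])
```

The Wilcoxon signed-rank test on the distribution of per-network betas parallels the zs.r. statistics reported above: a reliable predictor yields betas consistently shifted away from zero across networks.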

Fig 8. The strength of feedforward and recurrent edges was predictive of normalized synergy.

Histograms of beta weights for multiple linear regressions performed at the network level revealed that feedforward and recurrent edge weights were reliable predictors of normalized synergy across networks. Feedback edge weight was not a reliable predictor of normalized synergy.

https://doi.org/10.1371/journal.pcbi.1009196.g008

Despite the positive relationship between edge number and edge weight, edge number did not capture all the variance contained in edge weight. Regressing out the variance associated with the number of recurrent edges and performing the multiple linear regression between edge weights and residual normalized synergy revealed that recurrent edge weight was still a significant, albeit negative, predictor of residual normalized synergy (zs.r. = -5.03, n = 75, p < 1×10⁻⁶). That there was a significant residual predictive relationship indicates that recurrent edge number does not capture all the same variance as recurrent edge weight (see discussion). Feedforward edge weight was still a significant predictor of residual normalized synergy (zs.r. = 7.52, n = 75, p < 1×10⁻¹³), and feedback edge weight was still not a significant predictor of residual normalized synergy (zs.r. = -1.42, n = 75, p = 0.16).

The above results show that the number of recurrent edges among upstream neurons contributed a novel source of synergy beyond the variance accounted for by the receiver entropy, the sender entropy, and the weight of the feedforward edges. This result was not sensitive to the type of normalization performed (Figs F and G in S1 File). The observed effects on synergy were consistent with known relationships between synergy, redundancy, and multivariate TE at these timescales (Figs H-L in S1 File). However, we did find that the result was dependent upon the use of PID. Interaction Information (see S1 File), a non-PID based analysis that does not factor redundancy into the calculation of synergy, was not sensitive to the effects of connectivity (Figs M and N in S1 File). Among PID approaches, however, the result was not dependent upon the PID approach used (Imin vs. Ibroja; Figs O-R in S1 File); both showed the same pattern of effects reported here. The observed effects also held when the three timescales were analyzed separately (Fig S in S1 File). Finally, the result held when triads were re-classified into motifs after computing conditional TE among triad members (Figs T and U in S1 File).

Recurrent and feedback motifs are rare but overrepresented

To gain perspective as to how our findings regarding the influence of recurrence and feedback on synergy relate to network-wide processing, we asked how prevalent each type of effective connectivity was in our networks. To do this, we calculated the percentage of network-wide triads accounted for by each motif (Fig 9).

Fig 9. Recurrent and feedback motifs were rare but occurred more than expected given the network-wide edge density.

(A) Percent of network triads accounted for by each motif type. Motifs with greater edge density were rarer. Values indicate means across all networks. (B) Log10 scaled observed percentages of triads compared to expected percentages of triads per motif. Expected percentages were obtained by raising the probability of observing a connection to the power of the number of edges in the motif, for each network. (C) Log10 scaled observed and expected percentages of triads in (B), grouped by the number of edges they contain. (D) Linearly scaled observed and expected percentages of triads in (A), grouped by the number of recurrent and feedback edges they contain. Significance indicators: ‘+’ indicates significantly more than expected and ‘-’ indicates significantly less than expected. For all significant values, p < 1×10⁻⁶. Motif indicator: † Recurrent, ‡ Feedback.

https://doi.org/10.1371/journal.pcbi.1009196.g009

Consistent with the sparsity of these networks (average edge density: 1.14% [0.83% 1.54%]), the incidence of each motif decreased rapidly as a function of the number of edges contained in the motif. The first (i.e., default) motif, containing only 2 feedforward edges, was most prevalent and accounted for 70.12% [67.15% 73.07%] of the observed synergistic 3-node triads (Fig 9A, 9B and 9C). Motifs with 3 edges, whether recurrent or feedback, accounted for 23.94% [21.76% 26.14%] of the synergistic 3-node triads (Fig 9B and 9C). This is significantly greater than the 2.12% [1.60% 2.83%] that would be expected by chance given random networks with the same sparsity (t = 21.24, n = 75, p < 1×10⁻³²). Motifs with 4, 5, and 6 edges were similarly over-represented relative to what would have been expected in random networks, but progressively decreased in prevalence (4-edge motifs: 4.93% [4.03% 5.92%] vs. 0.13% [0.06% 0.26%], t = 10.27, n = 75, p < 1×10⁻¹⁵; 5-edge motifs: 0.66% [0.47% 0.88%] vs. 0.0039% [0.0008% 0.01%], t = 6.39, n = 75, p < 1×10⁻⁷; 6-edge motifs: 0.36% [0.12% 0.87%] vs. 0.0002% [0.0000% 0.0004%], t = 1.99, n = 75, p = 0.051; Fig 9C). These results agree with findings in similar networks generated from the same data [46]. Finally, recurrent and feedback motifs occurred significantly more than would have been expected in random networks (recurrent: 13.79% [11.52% 16.28%] vs. 1.10% [0.82% 1.48%], t = 10.68, n = 75, p < 1×10⁻¹⁵; feedback: 13.47% [11.41% 15.70%] vs. 1.10% [0.82% 1.48%], t = 10.97, n = 75, p < 1×10⁻¹⁶; Fig 9D). Importantly, all motifs with recurrent and feedback edges, with the exception of the 6-edge motif, occurred more frequently than expected given network connection densities. Thus, the sparsity of our networks did not preclude our ability to detect recurrent and feedback motifs.
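The chance expectation described above (the probability of observing a connection, raised to the power of the motif's edge count) can be sketched as follows. The normalization across motif classes and the edge-count list are illustrative assumptions, not the exact enumeration used in the paper.

```python
import numpy as np

def expected_motif_percentages(edge_density, motif_edge_counts):
    """Edge-density null: each motif's weight is the edge density raised
    to its edge count; weights are normalized to sum to 100%."""
    w = np.array([edge_density ** k for k in motif_edge_counts], dtype=float)
    return 100.0 * w / w.sum()

# Illustrative only: density of 1.14% and one motif class per edge count
pct = expected_motif_percentages(0.0114, [2, 3, 4, 5, 6])
```

Because each extra edge multiplies the null weight by the (small) edge density, expected motif frequency falls off geometrically with edge count, which is why the observed over-representation of dense motifs is striking.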

To test for evidence of selection bias toward or away from motifs with recurrent or feedback edges, we tested whether one type was more or less prevalent among the motifs containing a given number of edges. The null expectation was an equal number of each. Among 3-edge synergistic motifs, the extra edge was recurrent in 50.7% [44.2% 57.1%] of the triads. This was not significantly different from 50% (t = 0.22, n = 75, p = 0.83). Likewise, across triads with 4 and 5 edges, we found no evidence of bias toward one type of effective connectivity versus the other. In 4-edge motifs with both additional edges being the same, both additional edges were recurrent in 43.1% [35.1% 51.1%] of triads (t = -1.67, n = 75, p = 0.10) and both additional edges were feedback in 56.9% [48.7% 64.7%] of triads (t = 1.67, n = 75, p = 0.10). In 4-edge motifs with the two additional edges being different, 1 additional edge was recurrent and 1 was feedback in 49.1% [43.3% 54.5%] of triads (t = -0.29, n = 75, p = 0.76). For 5-edge motifs, the additional edge was recurrent in 50% [50% 50%] of triads (t = 0, n = 75, p = 1). Six-edge motifs were not included in this analysis as they contain the same number of feedback and recurrent edges by definition. Consistent with this, there was also no difference in the overall likelihood of observing a recurrent versus a feedback edge (7.5% [6.4% 8.7%] vs. 7.6% [6.6%, 8.8%]; zs.r. = -0.07, n = 75, p = 0.94).

Given the similar incidence of motifs containing recurrent and feedback edges, but significant differences in the synergy observed for each motif type, synergistic triads containing recurrent edges could be expected to account for a larger percentage of the network-wide synergy (Fig 10). Network-wide synergy was the sum of synergy taken across all triads in a network. Indeed, recurrent motifs comprised 13.79% [11.52% 16.28%] of triads and accounted for 20.43% [17.26% 23.85%] of network-wide synergy. Feedback motifs comprised 13.47% [11.41% 15.70%] of triads and only 10.12% [8.37% 12.11%] of network-wide synergy (Fig 10A, inset). Thus, although recurrent and feedback motifs accounted for similar percentages of network triads (zs.r. = 0.09, n = 75, p = 0.92), recurrent motifs accounted for a significantly higher percentage of network synergy than feedback motifs (zs.r. = 4.18, n = 75, p < 1×10⁻⁴).

Fig 10. Recurrent and feedback motifs accounted for more and less network-wide synergy than expected, respectively.

(A) Both recurrent motifs and feedback motifs were relatively rare, and they accounted for a relatively small proportion of overall synergy. Red bars from Fig 9B are replotted on a linear scale here for comparison. Inset: Recurrent motifs were as common as feedback motifs, but they accounted for significantly more synergy than feedback motifs. Red bars from Fig 9D are replotted here for comparison. (B) The ratio of percent synergy to percent triads is shown per motif. Values above one indicate that the motif accounts for greater network-wide synergy than it does triads. Values less than one indicate that the motif accounts for less network-wide synergy than it does triads. (C) Recurrent motifs accounted for more synergy than expected given their frequency. Conversely, feedback motifs accounted for less synergy than expected given their frequency. Significance was determined by asking whether the distribution of ratios for each motif came from a distribution whose mean is equal to 1 (t-test). Significance indicators: ‘+’ indicates significantly more than expected and ‘-’ indicates significantly less than expected. For all significant values, p<0.001. Central tendency shown in each figure is mean and error bars are 95% bootstrap confidence intervals around the mean. Mean was selected over median to ensure that percentages sum to 100. Significance indicator: *** p<0.001. Motif indicator: † Recurrent, ‡ Feedback.

https://doi.org/10.1371/journal.pcbi.1009196.g010

To determine whether motifs accounted for more synergy than expected given their frequency, we calculated the ratio of percent synergy to percent triads for each motif (Fig 10B and 10C). Values greater than one indicated that the motif accounted for more synergy than expected given its frequency. Values less than one indicated that the motif accounted for less synergy than expected given its frequency. We observed that recurrent motifs accounted for significantly greater network-wide synergy than expected given their frequency (zs.r. = 6.28, n = 75, p < 1×10⁻⁹), and feedback motifs accounted for significantly less network-wide synergy than expected given their frequency (zs.r. = -4.35, n = 75, p < 1×10⁻⁴; Fig 10C).
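The ratio of percent synergy to percent triads can be computed per motif class as sketched below, assuming a list of per-triad synergy values and motif labels; the function name and the toy values are hypothetical.

```python
import numpy as np

def synergy_to_triad_ratio(synergy_per_triad, motif_labels, motif):
    """Share of network-wide synergy carried by a motif class divided by
    its share of triads; values > 1 mean more synergy than expected."""
    syn = np.asarray(synergy_per_triad, dtype=float)
    mask = np.asarray(motif_labels) == motif
    pct_synergy = syn[mask].sum() / syn.sum()   # share of total synergy
    pct_triads = mask.mean()                    # share of all triads
    return pct_synergy / pct_triads

# Hypothetical toy network: "rec" triads carry more synergy per triad
syn_values = [2.0, 2.0, 1.0, 1.0, 1.0, 1.0]
labels = ["rec", "rec", "fb", "fb", "ff", "ff"]
ratio_rec = synergy_to_triad_ratio(syn_values, labels, "rec")  # 1.5
```

In the toy data the "rec" class holds a third of the triads but half of the synergy, so its ratio exceeds one, mirroring the pattern reported for recurrent motifs.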

Discussion

Understanding the relationships between directed information flow (feedforward, feedback, and upstream recurrence) and synergistic processing in cortical networks is essential for understanding how neural networks compute. We previously showed that synergistic processing varies directly with feedforward information flow [6]. Here, we examined the influence of recurrent and feedback information flow on synergistic information processing in organotypic cortical cultures. Using information theoretic and network analyses of the spiking activity of hundreds of simultaneously recorded neurons from organotypic cultures of mouse somatosensory cortex, we showed for the first time that recurrent and feedback information flow in functional local microcircuits vary systematically with the amount of PID-derived synergy observed in those microcircuits. Specifically, we found that greater recurrence in motifs predicted greater synergy while greater feedback predicted lesser synergy (Fig 11). Interestingly, the strength of feedforward effective connections, a covariate of synergy [6], could account for much of the variance associated with the feedback-synergy relationship. It could not, however, account for the recurrence-synergy relationship (Table 2). Thus, recurrence predicted synergistic processing above and beyond that predicted by the strength of inputs. Additionally, we found that, although recurrent motifs were somewhat rare in our networks—comprising 14% of all synergistic motifs—they accounted for 20% of the total network-wide synergy. Feedback motifs—comprising 13% of all synergistic motifs—were roughly matched for prevalence with recurrent motifs, but only accounted for 10% of the total network-wide synergy. Thus, with similar prevalence, recurrent motifs accounted for twice as much synergy as feedback motifs.

Fig 11. Summary of findings regarding how synergy is related to recurrent and feedback information flow in organotypic cultures of mouse cortex.

(A-B) Synergy had a positive relationship with the number of recurrent edges and a negative relationship with the number of feedback edges. That is, synergy was elevated where there was greater upstream recurrent information flow. Synergy was diminished where there was greater feedback information flow.

https://doi.org/10.1371/journal.pcbi.1009196.g011

Relationship to previous work

Our finding that synergy increased with greater recurrence is consistent with previous work showing that recurrent connections—both functional and structural—are necessary for pattern completion tasks, both in biological [12,13,15–17,19,20] and artificial networks [14,19]. Such tasks involve the integration of multiple, distinct features to generate a coherent representation, a process that involves some form of synergistic processing. Our finding that synergy decreased with greater feedback agrees with theoretical frameworks [33–36] and experimental studies [29–31,37,47] suggesting that feedback processes reduce the degree to which lower-level neurons can account for variance in higher-level neurons, thereby reducing the strength of feedforward information flow and resulting in reduced synergy.

Our finding that increased recurrent connectivity corresponded to greater synergistic processing is also consistent with previous analyses of the topological determinants of synergistic processing in functional networks of cortical cultures. For example, one such analysis found that synergistic processing was directly related to the ‘out-degree’ of effective connectivity of the upstream neurons [8]. That is, the more neurons that a given upstream neuron made effective connections with, the greater the resulting synergy was in the recipient neurons. Similarly, we have previously shown that neurons in functional rich clubs of cortical micro-circuits (i.e., highly-intercommunicating neurons) do about twice as much synergistic processing as neurons outside of the rich clubs [6]. We have also shown previously that greater similarity (i.e. synchrony) of transmitters, such as might be generated by strong intercommunication, predicts greater synergy at synaptic timescales [10].

The strength of feedforward information flow was an important consideration here when analyzing the relationship between the number of recurrent/feedback edges in a motif and synergy. We have shown previously that the strength of feedforward information flow is a strong, positive predictor of the amount of synergy [6]. Here, we performed a control analysis where this relationship was first regressed out of the synergy values before asking whether recurrence and/or feedback were predictive of synergy. The positive relationship between recurrence and synergy persisted after regressing out the influence of feedforward information flow, indicating that recurrent information flow reflects a novel source of explanatory power over synergy. We hypothesize that this additional synergy emerges because recurrent information flow increases the capacity of the transmitter neurons to jointly predict the behavior of the receiver, resulting in more synergy than if it just increased the amount of bivariate transfer entropy. The relationship between the number of feedback edges and synergy, however, was lost after regressing out variance associated with feedforward information flow. This suggests that feedforward and feedback information flow account for common variance in the observed synergy as discussed in greater detail in the next section.

Variability of the synergy-feedback relationship

While the relationship between synergy and motif recurrent edge count was robust across all analyses we performed, the relationship between synergy and motif feedback edge count varied across analyses, exhibiting both negative and positive relationships with synergy. First, we found that it mattered whether raw synergy (i.e., total synergy without scaling of any sort) or normalized synergy (i.e., raw synergy normalized by the entropy of the receiving neuron so as to indicate the percentage of the entropy accounted for) was examined. Raw synergy was positively correlated with motif feedback edge count while normalized synergy was negatively correlated with motif feedback edge count. We found that this reversal could be accounted for by the fact that the receiver entropy, the normalizing term, itself varied significantly as a function of motif feedback edge count. The receivers in motifs with more feedback edges had more entropy. Thus, while greater raw synergy emerges in these motifs, it accounts for a lower percentage of the receiver entropy.

Second, we found that regressing out variance in synergy, whether normalized or not, associated with feedforward information flow resulted in a loss of significance of the relationship to motif feedback edge count. As noted above, this result indicates that variance in synergy associated with the number of feedback edges was also associated with variance in feedforward information flow. Two scenarios could yield this result. In both, feedforward and feedback information flows have opposing effects on synergy. These scenarios differ with regard to whether feedforward and feedback flows are independent. If independent, then feedforward and feedback processing do not directly affect each other, but simply have contrasting effects on synergy. Our finding of a significant negative correlation between these terms, however, argues against such a scenario. Rather, this negative correlation suggests that feedforward information flow affects the amount of feedback information flow and/or vice versa.

Influence of sender properties on the synergy-recurrence relationship

The relationship between the number of recurrent edges and synergy was clearly influenced by some combination of sender entropy, mutual information between senders, and transfer entropy between senders (recurrent edge weight). The fact that the sender entropy alone could not account for the relationship suggests that the relative dynamics of the senders are important for synergy. Yet, mutual information also could not fully account for this relationship. We hypothesize that both mutual information and transfer entropy between senders are important features of recurrent edges that produce synergy, yet the degree to which one or the other is responsible for yielding greater synergy will likely vary according to network properties like excitability and correlation regime [10]. Future work should aim to determine this. Given that transfer entropy between senders could account for the relationship (i.e., leave no significant covariance) between normalized synergy and number of recurrent edges, it likely accounted for similar variance in synergy. This shared variance can be partly explained by the fact that stronger edges, with larger transfer entropy, were more likely to be significant in these networks. Yet, the number of recurrent edges did not account for all of the same variance as recurrent edge weight—regressing out edge number left a significant, negative residual relationship. This is not surprising, given that recurrent edge number takes on one of three values (0, 1, or 2) whereas edge weight and synergy vary across multiple orders of magnitude. That the sign of the relationship between recurrent edge weight and synergy switched from positive (for normalized synergy) to negative (for residual normalized synergy) is difficult to interpret. This relationship depends upon which variance in normalized synergy was aligned to edge number by the regression. This is sensitive to what are arguably uninteresting features of the data (e.g., the number of observations for each number of edges) that affect the summed squared error, thereby influencing which variance was fit by the regression. Thus, from this, we conclude only that recurrent edge number does not capture the same variance as recurrent edge weight.

Influence of network edge density

Network edge density was another important consideration in studying the influence of recurrent and feedback edges on synergy. Our networks were sparse, consistent with densities observed in previous studies of biological neural networks [43,48–51]. With this sparsity, it is possible that our results might have been skewed by the lack of connectivity, which would translate to a lack of observations for motifs with greater connectivity (i.e., recurrent and feedback motifs). We investigated the influence of sparsity on our results by asking how the expected frequency of motifs, given the probability of a single connection, compared to the frequency of motifs that we observed in our networks. We found that our networks had significantly more instances of both recurrent and feedback motifs than expected by chance, given the baseline probability of observing a significant edge. Thus, we concluded that the sparsity of our networks did not preclude our ability to observe these motifs. Moreover, the fact that recurrent and feedback motifs occurred more than expected by chance may indicate that such motifs, which evolve from network dynamics, are important for network processing.

Use of organotypic cultures

This work required the ability to record the spiking activity of hundreds of neurons simultaneously. This was made possible by our use of organotypic cultures. While organotypic cultures naturally differ from intact in vivo tissue, organotypic cultures nonetheless exhibit synaptic structure and electrophysiological activity very similar to that found in vivo [52–58]. For example, the distribution of firing rates observed in cultures is lognormal, as seen in vivo [59], and the strengths of functional connections are lognormally distributed, similar to the distribution of synaptic strengths observed in patch clamp recordings (reviewed in [60,61]). However, neural cultures exhibit bursty dynamics that evolve as a function of the age of the culture [62–65]. Here, recordings were collected between 2 and 4 weeks after culture preparation, corresponding to an age at which ‘broad network bursts’ (2s) are commonly observed [63]. Evidence regarding the existence of multi-second ‘network bursts’ in the intact brain, outside of inactive conditions [66], is limited. Recordings from intact brains in situ do show, however, that brief local bursts occur regularly and are considered behaviorally relevant [67–70]. Observed differences between cultures and the intact brain regarding the spatiotemporal properties of bursts likely reflect differences in physiological condition [71–73]. Yet, even with these differences, the most parsimonious hypothesis is that the relationships between functional dynamics and informational metrics (e.g., synergy) observed in neural cultures, as done here, are informative for understanding brain function. Nonetheless, additional work will need to be done to understand how the relationships between synergy and recurrence and between synergy and feedback observed in vitro differ from what may exist in vivo, particularly in the context of behavior.

Co-occurring spiking dynamics

The results obtained here were the product of the recorded spiking dynamics. The full nature and scope of this dependence is a topic of active investigation [6,8,10,59,67]. To facilitate recognition of factors of key relevance as new work with convergent or divergent results emerge, we review briefly here the spiking dynamic properties, as characterized previously, for this dataset. Neurons in our recordings had a mean firing rate of 2.1 Hz [2.0 Hz, 2.2 Hz] and generated rhythmic bursts of activity [41], with cross-correlation peaks in the frequency bands corresponding to theta (4–12 Hz), beta (12–30 Hz), gamma (30–80 Hz), and high frequency (100–1000 Hz) oscillations [42]. Bursts typically lasted between 1 and 10 seconds, and were typically separated by 10 seconds, though significantly shorter durations and intervals occurred [41]. Despite exhibiting bursts, neuron spiking was relatively sparse, with 80% of neurons firing at <3 Hz. These spiking dynamics are well aligned with those observed in vivo [67,74].

The sparsity of neuron firing was likely an important factor in combination with the methods used here. The low neuron firing rates, together with the construction of our timescales, which binned neuron spiking into 1 ms, 1.6 ms, and 3.5 ms bins (1000 bins/s, 625 bins/s, and 286 bins/s, respectively), resulted in spike trains in which a large majority of bins contained 0 spikes. An impact of this sparsity is that the TE calculations had greater sensitivity for detecting excitatory interactions. Therefore, we cannot speak to the influence of inhibitory activity in producing the results shown here. Future work should aim to determine the relative influence of excitation and inhibition on synergistic integration.
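The binning described above can be sketched as follows, assuming spike times in seconds; the function name and the simulated ~2.1 Hz neuron are illustrative, not the authors' pipeline.

```python
import numpy as np

def bin_spikes(spike_times_s, duration_s, bin_ms):
    """Binarize spike times (seconds) into fixed-width bins: a bin is 1
    if it contains at least one spike, else 0."""
    n_bins = int(round(duration_s * 1000.0 / bin_ms))
    idx = (np.asarray(spike_times_s) * 1000.0 / bin_ms).astype(int)
    idx = np.minimum(idx, n_bins - 1)           # guard the final edge
    counts = np.bincount(idx, minlength=n_bins)
    return (counts > 0).astype(int)

# Illustrative neuron: ~2.1 Hz over 60 s, binned at 1 ms
rng = np.random.default_rng(3)
spike_times = np.sort(rng.uniform(0.0, 60.0, size=126))
train = bin_spikes(spike_times, 60.0, 1.0)
empty_fraction = 1.0 - train.mean()   # large majority of bins are empty
```

At these rates and bin widths, well over 99% of bins are empty, which is the sparsity regime in which TE is more sensitive to excitatory than inhibitory effects.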

Relevance of spontaneous activity

While stimulus-driven activity has been favored in research for its ability to provide insight into neural coding mechanisms, such studies assume that the brain is primarily reflexive and that internal dynamics are not informative with regard to information processing. However, internally-driven spontaneous activity of neurons, or activity that does not track external variables in observable ways, has been repeatedly shown to be no less cognitively interesting than stimulus-linked activity [75,76]; for reviews see [77,78]. Not only is spontaneous activity predominant throughout the brain, but it also drives critical processes such as neuronal development [11,79,80].

Limitations of our PID approach

Information decomposition is an active area of research. There are many different methods of partial information decomposition (PID), e.g., [9,81–83], but there is not yet a single, widely agreed upon “correct” method which applies generally. Here, we used the original Imin PID as described by [7]. We chose this measure because it is capable of detecting linear and nonlinear interactions and it has been shown to be effective for our datatype [6,8,10]. In addition, unlike other methods [84,85], PID of multivariate Transfer Entropy can decompose the interaction into non-negative and non-overlapping terms. However, there is reasonable concern that Imin PID overestimates the redundancy term and, consequently, synergy [81–83]. To address this issue, we also implemented an alternate form of PID known as the Ibroja method [81], which estimates synergy (and redundancy) based on an operational definition of unique information, rather than redundant information. This approach yielded the same qualitative pattern of results (Figs O-R in S1 File). Because the synergy-recurrence relationship holds when assessed using two distinct methods for decomposing multivariate information, we expect that the relationship will hold for additional implementations of PID.
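For two sources with discrete alphabets, the Imin decomposition of [7] can be sketched from scratch as below: redundancy is the expected minimum specific information across sources, and synergy follows from the PID lattice as I(S1,S2;T) - I(S1;T) - I(S2;T) + R. The function names are hypothetical, and this illustrates the measure on a plain joint distribution rather than the transfer-entropy decomposition used in the paper.

```python
import numpy as np

def _mi(pxy):
    """Mutual information (bits) from a 2-D joint pmf."""
    px = pxy.sum(1, keepdims=True)
    py = pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def imin_synergy(p):
    """Williams-Beer Imin PID for a joint pmf p[s1, s2, t].
    Returns (redundancy, synergy) in bits."""
    pt = p.sum((0, 1))
    p1t = p.sum(1)   # joint pmf of (S1, T)
    p2t = p.sum(0)   # joint pmf of (S2, T)
    red = 0.0
    for t in np.flatnonzero(pt > 0):
        specs = []
        for pst in (p1t, p2t):
            ps = pst.sum(1)                 # p(s)
            ps_t = pst[:, t] / pt[t]        # p(s | t)
            nz = ps_t > 0
            # specific information of this source about T = t
            specs.append(float((ps_t[nz] * np.log2(
                (pst[nz, t] / ps[nz]) / pt[t])).sum()))
        red += float(pt[t]) * min(specs)    # Imin: worst source per outcome
    joint_mi = _mi(p.reshape(-1, p.shape[2]))  # I(S1,S2 ; T)
    syn = joint_mi - _mi(p1t) - _mi(p2t) + red
    return red, syn

# XOR target: all information about T is synergistic
p = np.zeros((2, 2, 2))
for s1 in (0, 1):
    for s2 in (0, 1):
        p[s1, s2, s1 ^ s2] = 0.25
red, syn = imin_synergy(p)   # red = 0 bits, syn = 1 bit
```

XOR is the canonical synergy example: neither source alone carries any information about the target, yet together they determine it completely.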

The present work did not examine interactions larger than triads due to the multi-fold increase in the computational burden that arises in considering higher order synergy terms. In addition to the combinatorial explosion of increased numbers of inputs, the number of PID terms increases rapidly as the number of variables increases. However, based on bounds calculated for the highest order synergy term by [8], it was determined that the information gained by including an additional input beyond two either remained constant or decreased. From this, they inferred that lower order (two-input) operations dominated. Nonetheless, further investigation of this point will be worthwhile as improvements in computational wherewithal enable it.

Jointly increasing synergy and redundancy as a hallmark of enhanced higher order information processing

We have previously shown that synergy and redundancy grow simultaneously in triads at timescales relevant to synaptic transmission [10]. This growth was accompanied by a concordant increase in the multivariate Transfer Entropy (mvTE), representing the total information processed by the receiver based on both inputs. Conversely, at extra-synaptic timescales, where synergy shrank while redundancy grew, mvTE growth was stunted.

In the present work, we focused on the relationship between the number of recurrent edges and synergy. However, examination of the PID-redundancy in these same triads revealed that it too increased with the number of recurrent edges, whether determined via Imin PID (Figs H, J, and L in S1 File) or Ibroja PID (Figs Q and R in S1 File). MvTE also increased with the number of recurrent edges (Figs I and K in S1 File). This follows the pattern of results from our previous work, suggesting that simultaneous growth of synergy and redundancy leads to enhanced higher order information processing.

In contrast to this, the interaction information, a non-PID approach that does not separate out redundancy and synergy, failed to reveal an effect of the number of recurrent edges (Figs M and N in S1 File). This can be understood based on the similar changes in redundancy and synergy that the PID analyses identified. That is, if redundancy grows as fast as (or faster than) synergy, then the net outcome measured by interaction information will not change. This is what was observed here. It remains poorly understood what the implications are when PID synergy changes without a change in interaction information. Future work should aim to determine the relative importance and influence of synergy and redundancy on information processing.

Summary and conclusion

The present work examined how recurrent and feedback connectivity within triads of effectively connected neurons, recorded from organotypic cultures, influenced PID-determined synergy and redundancy. Across numerous variants, we reliably observed that PID-determined synergy and redundancy were greater in triads that had greater recurrent connectivity (both in terms of the number of edges and the strength of connectivity between upstream senders). The results were mixed with respect to the effect of the amount of feedback connectivity from receiver back to senders, painting a more complex portrait of the impact of these connections that was not resolved here. The correlated changes in synergy and redundancy were such that the interaction information (a term that is sensitive to the net balance of synergy and redundancy) did not change systematically as a function of triad connectivity, emphasizing that the results are specific to synergy as quantified when redundancy can be accounted for separately. Although the frequencies of motifs with relatively more recurrent or feedback information flow were matched, they accounted for more and less synergy than expected, respectively. These results add to a growing body of work regarding the interdependence of synergistic integration and functional network topology. Taken together, these findings provide increasing evidence of the influence of recurrent and feedback information flow on overall neural network information processing.

Materials & methods

Ethics statement

All procedures were performed in strict accordance with guidelines from the National Institutes of Health, and approved by the Animal Care and Use Committees of Indiana University and the University of California, Santa Cruz.

To answer the question of how computation is related to feedback and recurrence in cortical circuits, we combined network analysis with information theoretic tools to analyze the spiking activity of hundreds of neurons recorded from organotypic cultures of mouse somatosensory cortex. Here we provide an overview of our methods and focus on those steps that are most relevant for interpreting our results. A comprehensive description of all our methods can be found in S1 File.

Electrophysiological recordings

All results reported here were derived from the analysis of electrophysiological recordings of spontaneous activity from 25 organotypic cultures prepared from slices of mouse somatosensory cortex, recorded between 2 and 4 weeks after culture preparation. One-hour-long recordings were performed at 20 kHz sampling using a 512-channel array of 5 μm diameter electrodes arranged in a triangular lattice with an inter-electrode distance of 60 μm (spanning approximately 0.9 mm by 1.9 mm). Once the data were collected, spikes were sorted using a PCA approach [41,42,86] to form spike trains of between 98 and 594 (median = 310) well-isolated individual neurons, depending on the recording.

Network construction

Networks of effective connectivity, representing global activity in recordings, were constructed following the methods described by [8,41]. Briefly, weighted effective connections between pairs of neurons were established using transfer entropy (TE) [87]. To capture synaptic interactions, we computed TE at three timescales spanning 0.05–14 ms, using overlapping delay windows of 0.05–3 ms, 1.6–6.4 ms, and 3.5–14 ms; with 25 cultures and three timescales, this resulted in 75 different networks. Only significant TE values, determined through comparison to the TE values obtained from jittered spike trains (α = 0.001; 5000 jitters), were used in the construction of the networks. TE values were normalized by the total entropy of the receiving neuron so as to reflect the proportion of the receiver neuron’s capacity that can be accounted for by the transmitting neuron. Note that, due to the sparse firing in our recordings, transfer entropy is biased towards detecting excitatory, rather than inhibitory, interactions. This is because transfer entropy grows with the probability of observing spike events, and in sparse spike time series it is statistically easier to detect an increase in the number of spikes (an excitatory effect) than a decrease (an inhibitory effect). Thus, here we assume connections are predominantly excitatory.
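As a rough sketch of this pipeline, the code below estimates TE between binary spike trains at a single delay, normalizes it by the receiver's entropy, and screens it against a shuffled null. The function names are ours, circular shifts stand in for the paper's spike-time jittering, and the multi-bin timescale windows are simplified to one delay:

```python
import numpy as np

def transfer_entropy(src, tgt, delay=1):
    """TE(src -> tgt) in bits for binary spike trains at a single delay:
    I(tgt_now ; src_past | tgt_past), summed over all binary states."""
    t_now, t_past, s_past = tgt[delay:], tgt[:-delay], src[:-delay]
    te = 0.0
    for tn in (0, 1):
        for tp in (0, 1):
            for sp in (0, 1):
                p = np.mean((t_now == tn) & (t_past == tp) & (s_past == sp))
                if p == 0:
                    continue
                # p(t_now | t_past, s_past) versus p(t_now | t_past)
                p_full = p / np.mean((t_past == tp) & (s_past == sp))
                p_red = (np.mean((t_now == tn) & (t_past == tp))
                         / np.mean(t_past == tp))
                te += p * np.log2(p_full / p_red)
    return te

def receiver_entropy(tgt):
    """Entropy (bits) of the receiver's binary state, used for normalization."""
    p = np.mean(tgt)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def significant_te(src, tgt, delay=1, n_null=5000, alpha=0.001, rng=None):
    """Normalized TE if it beats a circular-shift null; otherwise 0."""
    rng = rng or np.random.default_rng(0)
    observed = transfer_entropy(src, tgt, delay)
    null = np.array([transfer_entropy(np.roll(src, rng.integers(10, len(src) - 10)),
                                      tgt, delay)
                     for _ in range(n_null)])
    if np.mean(observed <= null) < alpha:
        return observed / receiver_entropy(tgt)
    return 0.0
```

A driven pair (the target copying the source at lag 1) yields a TE near the source's entropy, while independent trains yield values near zero.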

Identifying motifs

Synergistic motifs were identified using custom code inspired by the Brain Connectivity Toolbox for MATLAB [88]. The code categorized all synergistic triads, those in which two transmitters send edges to the same receiver node, according to the set of ten possible synergistic motifs, which contain up to four additional edges beyond the two feedforward edges. Because we were only interested in synergistic motifs, we did not consider the entire set of 3-node motifs. In addition, although motifs 5 and 6 (as numbered in this paper) would normally be considered conformationally equivalent, here they are distinct because transmitter and receiver node roles are taken into account.
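A minimal sketch of this categorization (our own node labels and alignment convention; the paper's motif numbering is not reproduced) shows that the two fixed feedforward edges plus any subset of the four optional recurrent/feedback edges yield exactly ten classes when the two senders are interchangeable but the receiver's role is fixed:

```python
from itertools import combinations

FEEDFORWARD = {("J", "I"), ("K", "I")}  # always present in a synergistic triad
RECURRENT = {("J", "K"), ("K", "J")}    # edges between the two senders
FEEDBACK = {("I", "J"), ("I", "K")}     # edges from receiver back to senders

def classify_triad(edges):
    """Return (n_recurrent, n_feedback, aligned) for a synergistic triad.

    `aligned` distinguishes the two single-recurrent/single-feedback motifs
    (cf. motifs 5 and 6): True when the feedback edge targets the sender
    that *sends* the recurrent edge, False when it targets the other
    sender, None otherwise.
    """
    assert FEEDFORWARD <= set(edges), "not a synergistic triad"
    rec = set(edges) & RECURRENT
    fb = set(edges) & FEEDBACK
    aligned = None
    if len(rec) == 1 and len(fb) == 1:
        (rec_src, _), = rec
        (_, fb_dst), = fb
        aligned = fb_dst == rec_src
    return len(rec), len(fb), aligned

# Enumerating all 2^4 placements of the optional edges yields ten classes.
optional = sorted(RECURRENT | FEEDBACK)
motifs = {classify_triad(FEEDFORWARD | set(extra))
          for n in range(5) for extra in combinations(optional, n)}
print(len(motifs))  # 10
```

Without the alignment distinction, the count would collapse to the nine (recurrent, feedback) edge-count combinations.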

Quantifying synergistic integration

Synergistic integration was measured as synergy. Synergy measures the additional information about the future state of the receiver that is gained by considering the prior states of the senders jointly, beyond what they offer individually, after accounting for the redundancy between the sending neurons and for the past state of the receiver itself. Synergy was calculated according to the partial information decomposition (PID) approach described by [7], including the use of the Imin term to calculate redundancy. PID compares the measured bivariate TE between neurons, TE(J→I) and TE(K→I), with the measured multivariate TE (the triad-level information transmission) among neurons, TE({J,K}→I), to estimate terms that reflect the unique information carried by each neuron, the redundancy between neurons, and the synergy between neurons. Redundancy was computed as per equations 8–10 in S1 File, in which it represents the minimum information (Imin) that J or K provides about each state of I, averaged over all states, and conditioned on the past state of I. Thus, it is the overlapping information, the minimum of that provided by J or K, and can therefore be viewed as redundancy. Synergy was then computed via:

Synergy({J,K}→I) = TE({J,K}→I) − TE(J→I) − TE(K→I) + Redundancy({J,K}→I)

All information terms (synergy, redundancy, and multivariate transfer entropy) were normalized by the entropy of the receiving neuron in order to reflect the proportion of receiver variance for which they accounted and to control for variable entropy across triads and networks. We prefer this normalization to both raw values and other normalizations due to its improved interpretability—rather than raw bits, we analyzed the proportion of receiver variance accounted for. In addition, this particular normalization anchored the analysis with respect to the computing neuron (receiver), which was particularly crucial in the context of this motif-style analysis.
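Once the TE terms and Imin redundancy have been measured, the decomposition itself reduces to simple algebra. The sketch below applies the Williams & Beer PID identities to hypothetical toy values in bits (it does not implement the Imin computation itself):

```python
def pid_decompose(te_j, te_k, te_joint, redundancy):
    """Recover PID terms from measured transfer entropies plus redundancy.

    Williams & Beer identities for a two-source decomposition:
        TE(J->I)     = Unique(J) + Redundancy
        TE(K->I)     = Unique(K) + Redundancy
        TE({J,K}->I) = Unique(J) + Unique(K) + Redundancy + Synergy
    """
    unique_j = te_j - redundancy
    unique_k = te_k - redundancy
    synergy = te_joint - unique_j - unique_k - redundancy
    return {"unique_j": unique_j, "unique_k": unique_k,
            "redundancy": redundancy, "synergy": synergy}

# Hypothetical toy values (bits), for illustration only:
parts = pid_decompose(te_j=0.3, te_k=0.2, te_joint=0.6, redundancy=0.1)
```

By construction the four parts sum to the multivariate TE, so dividing each by the receiver's entropy expresses it as a proportion of the receiver's capacity, as in the normalization described above.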

Statistics

All results are reported as medians or means followed by 95% bootstrap confidence limits (computed using 10,000 iterations) in square brackets. Accordingly, figures depict medians or means with error bars reflecting the 95% bootstrap confidence limits. Comparisons between conditions or against null models were performed using the nonparametric Wilcoxon signed-rank test, unless specified otherwise. The threshold for significance was set at 0.05, unless indicated otherwise in the text.
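The percentile bootstrap used for the confidence limits can be sketched as follows (a generic implementation, not the authors' code):

```python
import numpy as np

def bootstrap_ci(data, stat=np.median, n_boot=10_000, alpha=0.05, seed=0):
    """Point estimate and percentile-bootstrap confidence limits for `stat`.

    Resamples `data` with replacement n_boot times, recomputes the
    statistic each time, and takes the alpha/2 and 1 - alpha/2 quantiles
    of the resulting distribution.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    reps = np.array([stat(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return stat(data), lo, hi
```

A result would then be reported as, e.g., `median [lo, hi]`, matching the bracket convention above.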

ANOVAs were run as two-factor, repeated-measures tests on data that were averaged per motif type (in order to provide balanced group sizes) and aggregated across networks. To examine the reliability of these results in individual networks, given the significant variability across networks, unbalanced network-level ANOVAs were also performed. Here, only networks with at least 30 observations in each group (i.e., 0, 1, or 2 recurrent edges and 0, 1, or 2 feedback edges) were included. Due to network sparsity, this resulted in 15 of 75 networks being included. The distributions of F-statistics and p-values for these ANOVAs and post-hoc correlation analyses were assessed to determine whether the original results held at the single-network level.

Supporting information

S1 File. Supplemental methods and results.

Fig A. Overview of time series binning structure used in transfer entropy calculations. Transfer entropy was used to quantify a directed, functional connection from neuron J to neuron I, which represents how well the current state (t) of neuron I can be predicted by the past state (t−d) of neuron J, beyond what is known from the past state of neuron I itself. Three synaptic timescales were considered, each with corresponding delays (d). These timescales considered transfer entropy from 0.05–3 ms, 1.6–6.4 ms, and 3.5–14 ms. Fig B. Distributions of node distances for motifs 1–9 are not significantly different from the overall distribution of distances. Distribution of distances for motif 10 is significantly different from the overall distribution of distances, due to a greater prevalence of smaller distances. Fig C. Spatial distribution of motifs 1–9 is not significantly different from the spatial distribution of the rest of the network. The spatial distribution—on the recording array—of motifs (colored nodes and edges) relative to the rest of the network (gray nodes and edges) is shown for a representative network. The spatial distribution covers a smaller range for motif 10, of which there are relatively few cases. Fig D. TE peaks between 1–14 ms. Mean distribution of TE over time for all effective connections from two representative networks. Left: The black line shows the mean TE over all effective connections from two representative networks. The shaded region shows the 95% confidence interval. The vertical dashed red line indicates the upper bound of the synaptic timescales. Across connections, the peak TE occurs below this bound at short latencies. Right: Histogram of the delay to the maximum TE over connections. The height of each bar shows the proportion of connections for which the peak TE was found to occur at the delay indicated along the x-axis.
Most connections had max TE at short delays as shown in the inset panel which zooms in to the first 50 ms of the x-axis. These plots show that most connections had a peak TE at less than 14 ms. Fig E. The Partial Information Decomposition. In this study, we analyzed two-input computations which were determined using the Partial Information Decomposition to dissect multivariate transfer entropy (occurring among three neurons, with two transmitter neurons each sending significant information to a receiver neuron) into synergistic, redundant, and unique information terms. The synergistic information component was used to represent the amount of computation carried out by the receiver. Fig F. Synergy normalized by multivariate transfer entropy increased with the number of recurrent edges. (Left) Motifs are ordered based on the number of recurrent edges (columns) and feedback edges (rows). The heatmap depicts brighter colors where there are larger normalized synergy values. (Middle) Curves representing rows in (Left), plotted with errorbars computed across networks, show that synergy increased as the number of recurrent edges increased. (Right) Curves representing columns shown in (Left), plotted with errorbars computed across networks, show that synergy decreased as the number of feedback edges increased. Errorbars are 95% bootstrap confidence intervals around the mean. Table. Relationships between normalized synergy and recurrence and feedback. Columns 1–3 (df, F, pANOVA) show the results of a repeated measures ANOVA for the normalized synergy predicted by the number of recurrent and feedback edges. Columns 4–5 (rho, prho) show the results of Spearman rank correlations between normalized synergy and the number of recurrent and feedback edges. P-values significant at the α = 0.05 level are in bolded font. Fig G. Synergy normalized by the feedforward edge weight increased with the number of recurrent edges. Results plotted as in Fig F. Fig H. 
Redundancy normalized by receiver entropy increased with the number of recurrent edges. Results plotted as in Fig F. Fig I. Multivariate transfer entropy normalized by receiver entropy increased with the number of recurrent edges. Results plotted as in Fig F. Fig J. Raw redundancy increased with the number of recurrent edges. Results plotted as in Fig F. Fig K. Raw multivariate transfer entropy increased with the number of recurrent edges. Results plotted as in Fig F. Fig L. Redundancy normalized by the strength of feedforward edges increased with the number of recurrent edges. Results plotted as in Fig F. Fig M. Interaction information normalized by receiver entropy increased with the number of feedback edges. Results plotted as in Fig F. Fig N. Raw interaction information was not significantly related to the number of recurrent and feedback edges. Results plotted as in Fig F. Fig O. Ibroja PID synergy normalized by receiver entropy increased with the number of recurrent edges and decreased with the number of feedback edges. Results plotted as in Fig F. Fig P. Raw Ibroja PID synergy increased with the number of recurrent edges. Results plotted as in Fig F. Fig Q. Ibroja PID redundancy normalized by receiver entropy increased with the number of recurrent edges and decreased with the number of feedback edges. Results plotted as in Fig F. Fig R. Raw Ibroja PID redundancy was not significantly related to the number of recurrent and feedback edges. Results plotted as in Fig F. Fig S. Synergy normalized by receiver entropy increases with the number of recurrent edges at each timescale separately. Results plotted as in Fig F. Fig T. Proportion of motifs in triads determined with pairwise TE and conditional TE. Motif distributions obtained from the two methods are significantly different. Fig U. Synergy normalized by receiver entropy increases with the number of recurrent edges in triads classified using conditional transfer entropy. Results plotted as in Fig F. Fig V. 
The probability of observing a significant transfer entropy (TE) edge increases with the strength of TE for all networks. TE values were sorted into deciles for each network, ranging from the bottom 10% of values to the top 10% of values, left to right.

https://doi.org/10.1371/journal.pcbi.1009196.s001

(DOCX)

Acknowledgments

We thank Blanca Gutierrez Guzman for helpful comments and discussion.

References

  1. 1. Ahissar E, Kleinfeld D. Closed-loop neuronal computations: focus on vibrissa somatosensation in rat. Cerebral Cortex. 2003 Jan 1;13(1):53–62. pmid:12466215
  2. 2. Basheer IA, Hajmeer M. Artificial neural networks: fundamentals, computing, design, and application. Journal of microbiological methods. 2000 Dec 1;43(1):3–31. pmid:11084225
  3. 3. Kriegeskorte N. Deep neural networks: a new framework for modeling biological vision and brain information processing. Annual review of vision science. 2015 Nov 24;1:417–46. pmid:28532370
  4. 4. Lamme VA, Roelfsema PR. The distinct modes of vision offered by feedforward and recurrent processing. Trends in neurosciences. 2000 Nov 1;23(11):571–9. pmid:11074267
  5. 5. Shushruth S, Mangapathy P, Ichida JM, Bressloff PC, Schwabe L, Angelucci A. Strong recurrent networks compute the orientation tuning of surround modulation in the primate primary visual cortex. Journal of Neuroscience. 2012 Jan 4;32(1):308–21. pmid:22219292
  6. 6. Faber SP, Timme NM, Beggs JM, Newman EL. Computation is concentrated in rich clubs of local cortical networks. Network Neuroscience. 2019 Feb;3(2):384–404. pmid:30793088
  7. 7. Williams PL, Beer RD. Generalized measures of information transfer. arXiv preprint arXiv:1102.1507. 2011 Feb 8. Available from: https://arxiv.org/abs/1102.1507
  8. 8. Timme NM, Ito S, Myroshnychenko M, Nigam S, Shimono M, Yeh FC, et al. High-degree neurons feed cortical computations. PLoS computational biology. 2016 May 9;12(5):e1004858. pmid:27159884
  9. 9. Wibral M, Priesemann V, Kay JW, Lizier JT, Phillips WA. Partial information decomposition as a unified approach to the specification of neural goal functions. Brain and cognition. 2017 Mar 1;112:25–38. pmid:26475739
  10. 10. Sherrill SP, Timme NM, Beggs JM, Newman EL. Correlated activity favors synergistic processing in local cortical networks in vitro at synaptically-relevant timescales. Network Neuroscience. 2020:1–20. pmid:32885121
  11. 11. Wibral M, Finn C, Wollstadt P, Lizier JT, Priesemann V. Quantifying information modification in developing neural networks via partial information decomposition. Entropy. 2017 Sep;19(9):494.
  12. 12. Douglas RJ, Koch C, Mahowald M, Martin KA, Suarez HH. Recurrent excitation in neocortical circuits. Science. 1995 Aug 18;269(5226):981–5. pmid:7638624
  13. 13. Douglas RJ, Martin KA. Recurrent neuronal circuits in the neocortex. Current biology. 2007 Jul 3;17(13):R496–500. pmid:17610826
  14. 14. Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the national academy of sciences. 1982 Apr 1;79(8):2554–8. pmid:6953413
  15. 15. Leutgeb S, Leutgeb JK, Moser MB, Moser EI. Place cells, spatial maps and the population code for memory. Current opinion in neurobiology. 2005 Dec 1;15(6):738–46. pmid:16263261
  16. 16. Neunuebel JP, Knierim JJ. CA3 retrieves coherent representations from degraded input: direct evidence for CA3 pattern completion and dentate gyrus pattern separation. Neuron. 2014 Jan 22;81(2):416–27. pmid:24462102
  17. 17. Rolls ET. An attractor network in the hippocampus: theory and neurophysiology. Learning & memory. 2007 Nov 1;14(11):714–31. pmid:18007016
  18. 18. Rolls E. The mechanisms for pattern completion and pattern separation in the hippocampus. Frontiers in systems neuroscience. 2013 Oct 30;7:74. pmid:24198767
  19. 19. Tang H, Schrimpf M, Lotter W, Moerman C, Paredes A, Caro JO, et al. Recurrent computations for visual pattern completion. Proceedings of the National Academy of Sciences. 2018 Aug 28;115(35):8835–40. pmid:30104363
  20. 20. Treves A, Rolls ET, Simmen M. Time for retrieval in recurrent associative memories. Physica D: Nonlinear Phenomena. 1997 Sep 1;107(2–4):392–400.
  21. 21. Brincat SL, Connor CE. Dynamic shape synthesis in posterior inferotemporal cortex. Neuron. 2006 Jan 5;49(1):17–24. pmid:16387636
  22. 22. Carlson T, Tovar DA, Alink A, Kriegeskorte N. Representational dynamics of object vision: the first 1000 ms. Journal of vision. 2013 Aug 1;13(10):1–1. pmid:23908380
  23. 23. Cichy RM, Pantazis D, Oliva A. Resolving human object recognition in space and time. Nature neuroscience. 2014 Mar;17(3):455. pmid:24464044
  24. 24. Clarke A, Devereux BJ, Randall B, Tyler LK. Predicting the time course of individual objects with MEG. Cerebral Cortex. 2015 Oct 1;25(10):3602–12. pmid:25209607
  25. 25. Freiwald WA, Tsao DY. Functional compartmentalization and viewpoint generalization within the macaque face-processing system. Science. 2010 Nov 5;330(6005):845–51. pmid:21051642
  26. 26. Sugase Y, Yamane S, Ueno S, Kawano K. Global and fine information coded by single neurons in the temporal visual cortex. Nature. 1999 Aug;400(6747):869–73. pmid:10476965
  27. 27. Tang H, Buia C, Madhavan R, Crone NE, Madsen JR, Anderson WS, et al. Spatiotemporal dynamics underlying object completion in human ventral visual cortex. Neuron. 2014 Aug 6;83(3):736–48. pmid:25043420
  28. 28. Spoerer CJ, Kietzmann TC, Mehrer J, Charest I, Kriegeskorte N. Recurrent networks can recycle neural resources to flexibly trade speed for accuracy in visual recognition. BioRxiv. 2020 Jan 1:677237.
  29. 29. Boly M, Garrido MI, Gosseries O, Bruno MA, Boveroux P, Schnakers C, et al. Preserved feedforward but impaired top-down processes in the vegetative state. Science. 2011 May 13;332(6031):858–62. pmid:21566197
  30. 30. Kwon SE, Yang H, Minamisawa G, O’Connor DH. Sensory and decision-related activity propagate in a cortical feedback loop during touch perception. Nature neuroscience. 2016 Sep;19(9):1243–9. pmid:27437910
  31. 31. Manita S, Suzuki T, Homma C, Matsumoto T, Odagawa M, Yamada K, et al. A top-down cortical circuit for accurate sensory perception. Neuron. 2015 Jun 3;86(5):1304–16. pmid:26004915
  32. 32. Gilbert CD, Li W. Top-down influences on visual processing. Nature Reviews Neuroscience. 2013 May;14(5):350–63. pmid:23595013
  33. 33. Sikkens T, Bosman CA, Olcese U. The role of top-down modulation in shaping sensory processing across brain states: implications for consciousness. Frontiers in systems neuroscience. 2019;13:31. pmid:31680883
  34. 34. Clark A. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and brain sciences. 2013 Jun;36(3):181–204.
  35. 35. Bastos AM, Usrey WM, Adams RA, Mangun GR, Fries P, Friston KJ. Canonical microcircuits for predictive coding. Neuron. 2012 Nov 21;76(4):695–711. pmid:23177956
  36. 36. Gilbert CD, Sigman M. Brain states: top-down influences in sensory processing. Neuron. 2007 Jun 7;54(5):677–96. pmid:17553419
  37. 37. Grace AA. Gating of information flow within the limbic system and the pathophysiology of schizophrenia. Brain Research Reviews. 2000 Mar 1;31(2–3):330–41. pmid:10719160
  38. 38. Lillicrap TP, Cownden D, Tweed DB, Akerman CJ. Random synaptic feedback weights support error backpropagation for deep learning. Nature communications. 2016 Nov 8;7(1):1–10. pmid:27824044
  39. 39. Nassi JJ, Lomber SG, Born RT. Corticocortical feedback contributes to surround suppression in V1 of the alert primate. Journal of Neuroscience. 2013 May 8;33(19):8504–17. pmid:23658187
  40. 40. Nurminen L, Merlin S, Bijanzadeh M, Federer F, Angelucci A. Top-down feedback controls spatial summation and response amplitude in primate visual cortex. Nature communications. 2018 Jun 11;9(1):1–3. pmid:29317637
  41. 41. Timme N, Ito S, Myroshnychenko M, Yeh FC, Hiolski E, Hottowy P, et al. Multiplex networks of cortical and hippocampal neurons revealed at different timescales. PloS one. 2014 Dec 23;9(12):e115764. pmid:25536059
  42. 42. Ito S, Yeh FC, Hiolski E, Rydygier P, Gunning DE, Hottowy P, et al. Large-scale, high-resolution multielectrode-array recording depicts functional network differences of cortical and hippocampal cultures. PloS one. 2014 Aug 15;9(8):e105324. pmid:25126851
  43. 43. Mason A, Nicoll A, Stratford K. Synaptic transmission between individual pyramidal neurons of the rat visual cortex in vitro. Journal of Neuroscience. 1991 Jan 1;11(1):72–84. pmid:1846012
  44. 44. Swadlow HA. Efferent neurons and suspected interneurons in motor cortex of the awake rabbit: axonal properties, sensory receptive fields, and subthreshold synaptic inputs. Journal of neurophysiology. 1994 Feb 1;71(2):437–53. pmid:8176419
  45. 45. Dechery JB, MacLean JN. Functional triplet motifs underlie accurate predictions of single-trial responses in populations of tuned and untuned V1 neurons. PLoS computational biology. 2018 May 4;14(5):e1006153. pmid:29727448
  46. 46. Shimono M, Beggs JM. Functional clusters, hubs, and communities in the cortical microconnectome. Cerebral Cortex. 2015 Oct 1;25(10):3743–57. pmid:25336598
  47. 47. Bastos AM, Vezoli J, Bosman CA, Schoffelen JM, Oostenveld R, Dowdall JR, et al. Visual areas exert feedforward and feedback influences through distinct frequency channels. Neuron. 2015 Jan 21;85(2):390–401. pmid:25556836
  48. 48. Hubel DH, Wiesel TN. Receptive fields of single neurones in the cat’s striate cortex. The Journal of physiology. 1959 Oct;148(3):574. pmid:14403679
  49. 49. Olshausen BA, Field DJ. Sparse coding of sensory inputs. Current opinion in neurobiology. 2004 Aug 1;14(4):481–7. pmid:15321069
  50. 50. Markram H, Lübke J, Frotscher M, Roth A, Sakmann B. Physiology and anatomy of synaptic connections between thick tufted pyramidal neurones in the developing rat neocortex. The Journal of physiology. 1997 Apr 15;500(2):409–40. pmid:9147328
  51. 51. Thom M, Palm G. Sparse activity and sparse connectivity in supervised learning. Journal of Machine Learning Research. 2013;14(Apr):1091–143.
  52. 52. Beggs JM, Plenz D. Neuronal avalanches are diverse and precise activity patterns that are stable for many hours in cortical slice cultures. Journal of neuroscience. 2004 Jun 2;24(22):5216–29. pmid:15175392
  53. 53. Bolz J, Novak N, Götz M, Bonhoeffer T. Formation of target-specific neuronal projections in organotypic slice cultures from rat visual cortex. Nature. 1990 Jul 26;346(6282):359–62. pmid:1695716
  54. 54. Caeser M, Bonhoeffer T, Bolz J. Cellular organization and development of slice cultures from rat visual cortex. Experimental brain research. 1989 Sep 1;77(2):234–44. pmid:2477270
  55. 55. Götz M, Bolz J. Formation and preservation of cortical layers in slice cultures. Journal of neurobiology. 1992 Sep;23(7):783–802. pmid:1431845
  56. 56. Ikegaya Y, Aaron G, Cossart R, Aronov D, Lampl I, Ferster D, et al. Synfire chains and cortical songs: temporal modules of cortical activity. Science. 2004 Apr 23;304(5670):559–64. pmid:15105494
  57. 57. Klostermann O, Wahle P. Patterns of spontaneous activity and morphology of interneuron types in organotypic cortex and thalamus–cortex cultures. Neuroscience. 1999 Jun 1;92(4):1243–59. pmid:10426481
  58. 58. Plenz D, Aertsen A. Neural dynamics in cortex-striatum co-cultures—II. Spatiotemporal characteristics of neuronal activity. Neuroscience. 1996 Feb 1;70(4):893–924. pmid:8848173
  59. 59. Nigam S, Shimono M, Ito S, Yeh FC, Timme N, Myroshnychenko M, et al. Rich-club organization in effective connectivity among cortical neurons. Journal of Neuroscience. 2016 Jan 20;36(3):670–84. pmid:26791200
  60. 60. Buzsáki G, Mizuseki K. The log-dynamic brain: how skewed distributions affect network operations. Nature Reviews Neuroscience. 2014 Apr;15(4):264–78. pmid:24569488
  61. 61. Song S, Sjöström PJ, Reigl M, Nelson S, Chklovskii DB. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol. 2005 Mar 1;3(3):e68. pmid:15737062
  62. 62. van Pelt J, Wolters PS, Corner MA, Rutten WL, Ramakers GJ. Long-term characterization of firing dynamics of spontaneous bursts in cultured neural networks. IEEE Transactions on Biomedical Engineering. 2004 Oct 18;51(11):2051–62. pmid:15536907
  63. 63. van Pelt J, Vajda I, Wolters PS, Corner MA, Ramakers GJ. Dynamics and plasticity in developing neuronal networks in vitro. Progress in brain research. 2005 Jan 1;147:171–88. pmid:15581705
  64. 64. Chiappalone M, Vato A, Berdondini L, Koudelka-Hep M, Martinoia S. Network dynamics and synchronous activity in cultured cortical neurons. International journal of neural systems. 2007 Apr;17(02):87–103. pmid:17565505
  65. 65. Pasquale V, Massobrio P, Bologna LL, Chiappalone M, Martinoia S. Self-organization and neuronal avalanches in networks of dissociated cortical neurons. Neuroscience. 2008 Jun 2;153(4):1354–69. pmid:18448256
  66. 66. Niedermeyer E, Sherman DL, Geocadin RJ, Hansen HC, Hanley DF. The burst-suppression electroencephalogram. Clinical Electroencephalography. 1999 Jul;30(3):99–105. pmid:10578472
  67. 67. Eytan D, Marom S. Dynamics and effective topology underlying synchronization in networks of cortical neurons. Journal of Neuroscience. 2006 Aug 16;26(33):8465–76. pmid:16914671
  68. 68. Jadhav SP, Kemere C, German PW, Frank LM. Awake hippocampal sharp-wave ripples support spatial memory. Science. 2012 Jun 15;336(6087):1454–8. pmid:22555434
  69. 69. Staresina BP, Bergmann TO, Bonnefond M, Van Der Meij R, Jensen O, Deuker L, et al. Hierarchical nesting of slow oscillations, spindles and ripples in the human hippocampus during sleep. Nature neuroscience. 2015 Nov;18(11):1679–86. pmid:26389842
  70. 70. Powanwe AS, Longtin A. Determinants of Brain Rhythm Burst Statistics. Scientific Reports. 2019 Dec 4;9(1):1–23. pmid:31797877
  71. 71. Chen X, Dzakpasu R. Observed network dynamics from altering the balance between excitatory and inhibitory neurons in cultured networks. Physical Review E. 2010 Sep 16;82(3):031907. pmid:21230108
  72. 72. Masquelier T, Deco G. Network bursting dynamics in excitatory cortical neuron cultures results from the combination of different adaptive mechanism. PloS one. 2013 Oct 11;8(10):e75824. pmid:24146781
  73. 73. Zierenberg J, Wilting J, Priesemann V. Homeostatic plasticity and external input shape neural network dynamics. Physical Review X. 2018 Jul 20;8(3):031018.
  74. 74. Saiki A, Sakai Y, Fukabori R, Soma S, Yoshida J, Kawabata M, et al. In vivo spiking dynamics of intra-and extratelencephalic projection neurons in rat motor cortex. Cerebral Cortex. 2018 Mar 1;28(3):1024–38. pmid:28137723
  75. 75. Johnson A, Fenton AA, Kentros C, Redish AD. Looking for cognition in the structure within the noise. Trends in cognitive sciences. 2009 Feb 1;13(2):55–64. pmid:19135406
  76. 76. Raichle ME. Two views of brain function. Trends in cognitive sciences. 2010 Apr 1;14(4):180–90. pmid:20206576
  77. 77. Tozzi A, Zare M, Benasich AA. New perspectives on spontaneous brain activity: dynamic networks and energy matter. Frontiers in human neuroscience. 2016 May 26;10:247. pmid:27303283
  78. 78. Tsodyks M, Kenet T, Grinvald A, Arieli A. Linking spontaneous activity of single cortical neurons and the underlying functional architecture. Science. 1999 Dec 3;286(5446):1943–6. pmid:10583955
  79. 79. Cang J, Rentería RC, Kaneko M, Liu X, Copenhagen DR, Stryker MP. Development of precise maps in visual cortex requires patterned spontaneous activity in the retina. Neuron. 2005 Dec 8;48(5):797–809. pmid:16337917
  80. 80. Chiappalone M, Bove M, Vato A, Tedesco M, Martinoia S. Dissociated cortical networks show spontaneously correlated activity patterns during in vitro development. Brain research. 2006 Jun 6;1093(1):41–53. pmid:16712817
  81. 81. Bertschinger N, Rauh J, Olbrich E, Jost J, Ay N. Quantifying unique information. Entropy. 2014 Apr;16(4):2161–83.
  82. 82. Lizier JT, Bertschinger N, Jost J, Wibral M. Information decomposition of target effects from multi-source interactions: perspectives on previous, current and future work. Entropy. 2018;20(4). pmid:33265398
  83. 83. Pica G, Piasini E, Chicharro D, Panzeri S. Invariant components of synergy, redundancy, and unique information among three variables. Entropy. 2017 Sep;19(9):451.
  84. 84. Lizier JT, Heinzle J, Horstmann A, Haynes JD, Prokopenko M. Multivariate information-theoretic measures reveal directed information structure and task relevant changes in fMRI connectivity. Journal of computational neuroscience. 2011 Feb 1;30(1):85–107. pmid:20799057
  85. 85. Stramaglia S, Wu GR, Pellicoro M, Marinazzo D. Expanding the transfer entropy to identify information circuits in complex systems. Physical Review E. 2012 Dec 20;86(6):066211.
  86. 86. Litke AM, Bezayiff N, Chichilnisky EJ, Cunningham W, Dabrowski W, Grillo AA, et al. What does the eye tell the brain?: Development of a system for the large-scale recording of retinal output activity. IEEE Transactions on Nuclear Science. 2004 Aug 16;51(4):1434–40.
  87. 87. Schreiber T. Measuring information transfer. Physical review letters. 2000 Jul 10;85(2):461. pmid:10991308
  88. 88. Rubinov M, Sporns O. Complex network measures of brain connectivity: uses and interpretations. Neuroimage. 2010 Sep 1;52(3):1059–69. pmid:19819337