
Robust information propagation through noisy neural circuits

  • Joel Zylberberg,

    joel.zylberberg@ucdenver.edu

    Affiliations Department of Physiology and Biophysics, Center for Neuroscience, and Computational Bioscience Program, University of Colorado School of Medicine, Aurora, Colorado, United States of America, Department of Applied Mathematics, University of Colorado, Boulder, Colorado, United States of America, Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America, Learning in Machines and Brains Program, Canadian Institute For Advanced Research, Toronto, Ontario, Canada

  • Alexandre Pouget,

    Affiliations Department of Basic Neuroscience, University of Geneva, Geneva, Switzerland, Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom

  • Peter E. Latham,

    Contributed equally to this work with: Peter E. Latham, Eric Shea-Brown

    Affiliation Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom

  • Eric Shea-Brown

    Contributed equally to this work with: Peter E. Latham, Eric Shea-Brown

    Affiliations Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America, Department of Physiology and Biophysics, Program in Neuroscience, University of Washington Institute for Neuroengineering, and Center for Sensorimotor Neural Engineering, University of Washington, Seattle, Washington, United States of America, Allen Institute for Brain Science, Seattle, Washington, United States of America

Abstract

Sensory neurons give highly variable responses to stimulation, which can limit the amount of stimulus information available to downstream circuits. Much work has investigated the factors that affect the amount of information encoded in these population responses, leading to insights about the role of covariability among neurons, tuning curve shape, etc. However, the informativeness of neural responses is not the only relevant feature of population codes; of potentially equal importance is how robustly that information propagates to downstream structures. For instance, to quantify the retina’s performance, one must consider not only the informativeness of the optic nerve responses, but also the amount of information that survives the spike-generating nonlinearity and noise corruption in the next stage of processing, the lateral geniculate nucleus. Our study identifies the set of covariance structures for the upstream cells that optimize the ability of information to propagate through noisy, nonlinear circuits. Within this optimal family are covariances with “differential correlations”, which are known to reduce the information encoded in neural population activities. Thus, covariance structures that maximize information in neural population codes, and those that maximize the ability of this information to propagate, can be very different. Moreover, redundancy is neither necessary nor sufficient to make population codes robust against corruption by noise: redundant codes can be very fragile, and synergistic codes can—in some cases—optimize robustness against noise.

Author summary

Information about the outside world, which originates in sensory neurons, propagates through multiple stages of processing before reaching the neural structures that control behavior. While much work in neuroscience has investigated the factors that affect the amount of information contained in peripheral sensory areas, very little work has asked how much of that information makes it through subsequent processing stages. That’s the focus of this paper, and it’s an important issue because information that fails to propagate cannot be used to affect decision-making. We find a tradeoff between information content and information transmission: neural codes which contain a large amount of information can transmit that information poorly to subsequent processing stages. Thus, the problem of robust information propagation—which has largely been overlooked in previous research—may be critical for determining how our sensory organs communicate with our brains. We identify the conditions under which information propagates well—or poorly—through multiple stages of neural processing.

Introduction

Neurons in sensory systems gather information about the environment, and transmit that information to other parts of the nervous system. This information is encoded in the activity of neural populations, and that activity is variable: repeated presentations of the same stimulus lead to different neuronal responses [1–7]. This variability can degrade the ability of neural populations to encode information about stimuli, leading to the question: which features of population codes help to combat—or exacerbate—information loss?

This question is typically addressed by assessing the amount of information that is encoded in the periphery as a function of the covariance structure [6, 8–24], the shapes of the tuning curves [25, 26], or both [27, 28]. However, the informativeness of the population responses at the periphery is not the only relevant quantity for understanding sensory coding; of potentially equal importance is the amount of information that propagates through the neural circuit to downstream structures [29, 30].

To illustrate the ideas, consider the case of retinal ganglion cells transmitting information about visual stimuli to the cortex via the thalamus, as shown in Fig 1. To quantify the performance of the retina, one must consider not only the informativeness of the optic nerve responses (Ix(s) in Fig 1A), but also how much of that information is transmitted by the lateral geniculate nucleus (LGN) to the cortex (Iy(s) in Fig 1A) [31]. The two may be very different, as only information that survives the LGN’s spike-generating nonlinearity and noise corruption will propagate to downstream cortical structures.

Fig 1. The information propagation problem.

This problem is illustrated with the visual periphery, but the information propagation problem is general: it arises whenever information is transmitted from one area to another, and also when information is combined to carry out computations. (A) The retina transmits information about visual stimuli, s, to the visual cortex. The information does not propagate directly from retina to cortex; it is transmitted via an intermediary structure, the lateral geniculate nucleus (LGN). Consequently, the information about the stimuli that is available to the cortex, denoted Iy(s), is not the same as the information that retina transmits, denoted Ix(s). Here, we ask what properties of neural activities in the periphery maximize the information that propagates to the deeper neural structures. (B) Illustration of our model. Neural activity in the periphery, x, is generated by passing the stimulus, s, through a set of neural tuning curves, f(s), and then adding zero-mean noise, ξ, which may be correlated between cells. This activity then propagates via feed-forward connectivity, described by the matrix W, to the next layer. The activity at the next layer, y, is generated by passing the inputs, W · x, through a nonlinearity g(⋅), and then adding zero-mean noise, η.

https://doi.org/10.1371/journal.pcbi.1005497.g001

Despite its importance, the ability of information to propagate through neural circuits remains relatively unexplored [31]. One notable exception is the literature on how synchrony among the spikes of different cells affects responses in downstream populations [32–36]. This is, however, distinct from the information propagation question we consider here, as there is no guarantee that those downstream spikes will be informative. Other work [25, 29, 30, 37, 38] investigated the question of optimal network properties (tuning curves and connection matrices) for information propagation in the presence of noise.

No prior work, however, has isolated the impact of correlations on the ability of population-coded information to propagate. Given the frequent observations of correlations in the sensory periphery [6, 8, 17, 39–45], and the importance of the information propagation problem, this is a significant gap in our knowledge. To fill that gap, we consider a model (Fig 1B; described in more detail below), in which there are two layers (retina and LGN, for example). The first layer contains a fixed amount of information, Ix(s), which is encoded in the noisy, stimulus-dependent responses of the cells in that layer. The information is passed to the second layer via feedforward connections followed by a nonlinearity, with noise added along the way. We ask how the covariance structure of the trial-to-trial variability in the first layer affects the amount of information in the second.

Although we focus on information propagation, the problem we consider applies to more general scenarios. In essence, we are asking: how does the noise in the input to a network interact with noise added to the output? Because we consider linear feedforward weights followed by a nonlinearity, the set of possible transformations from input to output, and thus of computations the network could perform, is quite broad [46]. Thus, the conclusions we draw apply not just to information propagation, but also to many computations. Moreover, it may be possible to extend our analysis to recurrent, time-dependent neural networks. That is, however, beyond the scope of this work.

Our results indicate that the amount of information that successfully propagates to the second layer depends strongly on the structure of correlated responses in the first. For linear neural gain functions, and some classes of nonlinear ones, we identify analytically the covariance structures that optimize information propagation through noisy downstream circuits. Within the optimal family of covariance structures, we find variability with so-called differential correlations [22]—correlations that are proven to minimize the information in neural population activity. Thus, covariance structures that maximize the information content of neural population codes, and those that maximize the ability of this information to propagate, can be very different. Importantly, we also find that redundancy is neither necessary nor sufficient for the population code to be robust against corruption by noise. Consequently, to understand how correlated neural activity affects the function of neural systems, we must not only consider the impact of those correlations on information, but also the ability of the encoded information to propagate robustly through multi-layer circuits.

Results

Problem formulation: Information propagation in the presence of corrupting noise

We consider a model in which a vector of “peripheral” neural population responses, x, is determined by two components. The first is the set of tuning curves, f(s), which define the cells’ mean responses to any particular stimulus (typical tuning curves are shown in Fig 2A). Here we consider a one dimensional stimulus, denoted s, which may represent, for example, the direction of motion of a visual object. In that case, a natural interpretation of our model is that it describes the transmission of motion information by direction selective retinal ganglion cells to the visual cortex (Fig 1) [5, 6, 47]. Extension to multi-dimensional stimuli is straightforward. The second component of the neural population responses, ξ, represents the trial-to-trial variability. This results in the usual “tuning curve plus noise” model,

x = f(s) + ξ,     (1)

where ξ is a zero mean random variable with covariance Σξ.

Fig 2. Not all population codes are equally robust against corruption by noise.

We constructed two model populations, each with the same 100 tuning curves for the first layer of cells but with different covariance structures, Σξ (see text, especially Eq (4)). The covariance structures were chosen so that the two populations convey identical amounts of information Ix(s) about the stimulus. (A) 20 randomly-chosen tuning curves from the 100 cell population. (B) We corrupted the responses of each neural population by additional Gaussian noise (independently and identically distributed for all cells) of variance σ2, to mimic corruption that might arise as the signals propagate through a multi-layered neural circuit, and computed the “output” information Iy(s) that these further-corrupted responses convey about the stimulus (blue and green curves). The population shown in green forms a relatively fragile code wherein modest amounts of noise strongly reduce the information, whereas the population shown in blue is more robust. (C) Input information Ix(s) in the two model populations (left; “correlated”) and information that would be conveyed by the model populations if they had their same tuning curves and levels of trial-to-trial variability, but no correlations between cells (right; “trial-shuffled”). For panels B and C, we computed the information for each of 100 equally spaced stimulus values, and averaged the information over those stimuli. See Methods for additional details (section titled “Details for Numerical Examples”).

https://doi.org/10.1371/journal.pcbi.1005497.g002

The neural activity, x, propagates to the second layer via feed-forward weights, W, as in the model of [38]. The activity in the second layer is given by passing the input, W · x, through a nonlinearity, g(⋅), and then corrupting it with noise, η (Fig 1B),

y = g(W · x) + η,     (2)

where the nonlinearity is taken component by component, and η is zero mean noise with covariance matrix Ση. The function g(⋅) need not be invertible, so this model can include spike generation.
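To make the model concrete, here is a minimal numerical sketch of Eqs (1) and (2) in Python. The tuning curves, weights, nonlinearity, and noise levels are illustrative choices of ours, not values taken from this study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 100, 120

# Illustrative tuning curves: Gaussian bumps with random centers and widths.
centers = rng.uniform(0.0, 2.0 * np.pi, n_in)
widths = rng.uniform(0.3, 1.0, n_in)
def f(s):
    return 20.0 * np.exp(-0.5 * ((s - centers) / widths) ** 2) + 1.0

W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_out, n_in))  # feed-forward weights
g = np.tanh                                               # example pointwise nonlinearity

def simulate(s, Sigma_xi, Sigma_eta):
    """One trial of the two-layer model."""
    x = f(s) + rng.multivariate_normal(np.zeros(n_in), Sigma_xi)        # Eq (1)
    y = g(W @ x) + rng.multivariate_normal(np.zeros(n_out), Sigma_eta)  # Eq (2)
    return x, y

s = 1.0
Sigma_xi = np.diag(f(s))           # Poisson-like: variance equal to the mean response
Sigma_eta = 0.1 * np.eye(n_out)
x, y = simulate(s, Sigma_xi, Sigma_eta)
```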

While we have, in Fig 1, given one explicit interpretation of our model, the model itself is quite general. This means that our results apply more broadly than just to circuits in the peripheral visual system. Moreover, while our analysis (below) focuses on information loss between layers, this should not be taken to mean that there is no meaningful computation happening within the circuit: because we have considered arbitrary nonlinear transformations between layers, the same model can describe a wide range of possible computations [46]. Our results apply to information loss during those computations.

In the standard fashion [6, 12, 20–22], we quantify the information in the neural responses using the linear Fisher information. This measure quantifies the precision (inverse of the mean squared error) with which a locally optimal linear estimator can recover the stimulus from the neural responses [48, 49]. The linear Fisher information in the first and second layers, denoted Ix(s) and Iy(s), respectively, is given by

Ix(s) = f′(s)ᵀ · Σξ⁻¹ · f′(s),     (3a)

Iy(s) = f′(s)ᵀ · [Σξ + (Weffᵀ · Σeff,η⁻¹ · Weff)⁻¹]⁻¹ · f′(s),     (3b)

where a prime denotes a derivative. Here Weff are the effective weights—basically, the weights, W, multiplied by the average slope of the gain function, g(⋅)—and Σeff,η includes contributions from the noise in the second layer, η, and, if g(⋅) is nonlinear, from the noise in the first layer. (If g is linear, Σeff,η = Ση, so in this case Σeff,η depends only on the noise in the second layer). This expression is valid if Weffᵀ · Σeff,η⁻¹ · Weff is invertible; so long as there are more cells in the second layer than the first, this is typically the case. See Methods for details (section titled “Information in the output layer”).
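In code, Eqs (3a) and (3b) are a few lines of linear algebra. The sketch below follows our transcription of those formulas above (first-layer covariance plus the inverse of Weffᵀ · Σeff,η⁻¹ · Weff inside the brackets); the example matrices at the end are arbitrary and only check that the functions run.

```python
import numpy as np

def input_info(f_prime, Sigma_xi):
    """Eq (3a): I_x(s) = f'(s)^T . Sigma_xi^{-1} . f'(s)."""
    return float(f_prime @ np.linalg.solve(Sigma_xi, f_prime))

def output_info(f_prime, Sigma_xi, W_eff, Sigma_eff_eta):
    """Eq (3b): I_y(s) = f'(s)^T . [Sigma_xi + (W_eff^T . Sigma_eff_eta^{-1} . W_eff)^{-1}]^{-1} . f'(s).
    Requires W_eff^T . Sigma_eff_eta^{-1} . W_eff to be invertible (more output than input cells helps)."""
    M = W_eff.T @ np.linalg.solve(Sigma_eff_eta, W_eff)
    return float(f_prime @ np.linalg.solve(Sigma_xi + np.linalg.inv(M), f_prime))

# Tiny usage example with arbitrary, well-conditioned matrices.
rng = np.random.default_rng(1)
n_in, n_out = 50, 60
f_prime = rng.standard_normal(n_in)
A = rng.standard_normal((n_in, n_in))
Sigma_xi = A @ A.T / n_in + np.eye(n_in)
W_eff = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)
Sigma_eff_eta = 0.5 * np.eye(n_out)
print(input_info(f_prime, Sigma_xi), output_info(f_prime, Sigma_xi, W_eff, Sigma_eff_eta))
```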

Eq (3b) is somewhat intuitive, at least at a gross level: both large effective noise (Σeff,η) and small effective weights (Weff) reduce the amount of information at the second layer. At a finer level, the relationship between the two covariance structures—corresponding to the first and second terms in brackets in Eq (3b)—can have a large effect on Iy(s), as we will see shortly.

Information content and information propagation put different constraints on neural population codes

We begin with an example to highlight the difference between the information contained in neural population codes and the information that propagates through subsequent layers. Here, we consider two different neuronal populations with identical tuning curves (Fig 2A), nearly-identical levels of trial-to-trial neural variability, and identical amounts of stimulus information encoded in their firing-rate responses; the populations’ correlational structures, however, differ. We then corrupt these two populations’ response patterns with noise, to mimic corruption that might arise in subsequent processing stages, and ask how much of the stimulus information remains. Surprisingly, the two population codes can show very different amounts of information after corruption by even modest amounts of noise (Fig 2B).

In more detail, there are 100 neurons in the first layer; those neurons encode an angle, denoted s, via their randomly-shaped and located tuning curves (Fig 2A). We consider two separate model populations. Both have the same tuning curves, but different covariance matrices. For reasons we discuss below, those covariance matrices (blue and green correspond to the colors in Fig 2B and 2C) are given by

Σξ = Σ0 + ϵ f′(s) f′(s)   (blue),     (4a)

Σξ = Σ0 + ϵu u(s) u(s)   (green),     (4b)

where Σ0 is a diagonal matrix with elements equal to the mean response,

(Σ0)ij = δij fi(s).     (5)

Here δij is the Kronecker delta (δij = 1 if i = j and 0 otherwise), and we use the convention that two adjacent vectors denote an outer product; for instance, the ijth element of uu is ui uj. The vector u has the same magnitude as f′, but points in a slightly different direction (it makes an angle θu with f′), and ϵ and ϵu are chosen so that the information in the two populations, Ix(s), is the same (ϵu also depends on s; we suppress that dependence for clarity).

In our simulations, both ϵ and ϵu are small (on the order of 10⁻³; see Methods), so the variance of the ith neuron is approximately equal to its mean. This makes the variability Poisson-like, as is typically observed when counting neural spikes in finite time windows [1–6]. (More precisely, the average Fano factors—averaged over neurons and stimuli—were 1.01 for the “blue” population and 1.04 for the “green” one.) Both model populations also have the same average correlation coefficients, which are near-zero (see Methods, section titled “Details for Numerical Examples”).

To determine how much of the information in the two populations propagates to the second layer, we computed Iy(s) for both populations using Eq (3b). For simplicity, we used the identity matrix for the feed-forward weights, W, a linear gain function, g(⋅), and independently and identically distributed (iid) noise with variance σ2. Later we consider the more general case: arbitrary feedforward weights, nonlinear gain functions, and arbitrary covariance for the second layer noise. Those complications don’t, however, change the basic story.
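A sketch of this construction, with our own illustrative tuning curves: ϵu is found by bisection so that the two populations carry the same input information, and the output information is then Iy(s) = f′(s)ᵀ · [Σξ + σ²I]⁻¹ · f′(s), which is Eq (3b), as transcribed above, specialized to identity weights, a unit-slope linear gain, and iid output noise. The printed values should mirror the qualitative behaviour in Fig 2B, though the exact numbers depend on our parameter choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, s0, ds = 100, 1.0, 1e-4

# Illustrative tuning curves (not the paper's exact ones).
centers = rng.uniform(0.0, 2.0 * np.pi, n)
widths = rng.uniform(0.5, 1.5, n)
f = lambda s: 20.0 * np.exp(-0.5 * ((s - centers) / widths) ** 2) + 1.0
fp = (f(s0 + ds) - f(s0 - ds)) / (2.0 * ds)            # f'(s), by finite differences

Sigma0 = np.diag(f(s0))                                 # Eq (5)
info = lambda Sigma: fp @ np.linalg.solve(Sigma, fp)    # Eq (3a)

# u: same magnitude as f'(s), rotated away from it by a small angle theta_u.
theta_u = 0.1
e = rng.standard_normal(n)
e -= (e @ fp) / (fp @ fp) * fp                          # component orthogonal to f'
e *= np.linalg.norm(fp) / np.linalg.norm(e)
u = np.cos(theta_u) * fp + np.sin(theta_u) * e

eps = 1e-3
Sigma_blue = Sigma0 + eps * np.outer(fp, fp)            # Eq (4a): extra noise along f'(s)
target = info(Sigma_blue)

# Bisection on eps_u so that the "green" population carries the same input information.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if info(Sigma0 + mid * np.outer(u, u)) > target:
        lo = mid                                        # still too informative: add more noise
    else:
        hi = mid
Sigma_green = Sigma0 + 0.5 * (lo + hi) * np.outer(u, u) # Eq (4b)

for sigma2 in [0.0, 0.5, 2.0, 8.0]:                     # output noise variance, as in Fig 2B
    Iy_blue = fp @ np.linalg.solve(Sigma_blue + sigma2 * np.eye(n), fp)
    Iy_green = fp @ np.linalg.solve(Sigma_green + sigma2 * np.eye(n), fp)
    print(f"sigma^2 = {sigma2:4.1f}:  I_y(blue) = {Iy_blue:8.2f}   I_y(green) = {Iy_green:8.2f}")
```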

Fig 2B shows the information in the output layer versus the level of output noise, σ2, for the two populations. Blue and green curves correspond to the different covariance structures. Although the two populations have identical tuning curves, nearly-identical levels of trial-to-trial neural variability, and contain identical amounts of information about the stimulus, they differ markedly in the robustness of that information to corruption by noise in the second layer. Thus, quantifying the information content of neural population codes is not sufficient to characterize them: recordings from the first-layer cells of the two example populations in Fig 2 would yield identical information about the stimulus, but the blue population has a greater ability to propagate that information downstream.

One possible explanation for the difference in robustness is that the information in the green population relies heavily on correlations, which are destroyed by a small amount of noise. To check this, we compared the information of the correlated neural populations to the information that would be obtained with the same tuning curves and levels of single neuron trial-to-trial variability, but no inter-neuronal correlations [11, 50, 51] (Fig 2C). We find that removing the correlations actually increases the information in both populations (Fig 2C; “Trial-Shuffled”), and by about the same amount, so this possible explanation cannot account for the difference in robustness. We also considered the case where the correlated responses carry more information than would be obtained from independent cells. We again found (similar to Fig 2C) that there could be substantial differences in the amount of information propagated by equally informative population codes (see Methods, section titled “Details for numerical examples”, and the figure therein).
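The trial-shuffled comparison amounts to zeroing the off-diagonal entries of the covariance before applying Eq (3a); a small sketch, using a toy covariance of our own:

```python
import numpy as np

def full_info(f_prime, Sigma):
    return float(f_prime @ np.linalg.solve(Sigma, f_prime))

def shuffled_info(f_prime, Sigma):
    """Information of an independent population with the same single-cell variances:
    off-diagonal entries of Sigma are discarded before applying Eq (3a)."""
    return float(f_prime @ (f_prime / np.diag(Sigma)))

# Toy example: noise concentrated along f'(s) makes the correlated code redundant,
# so the shuffled information exceeds the correlated information.
rng = np.random.default_rng(3)
n = 50
f_prime = rng.standard_normal(n)
Sigma = np.eye(n) + 0.05 * np.outer(f_prime, f_prime)
print(full_info(f_prime, Sigma), shuffled_info(f_prime, Sigma))
```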

These examples illustrate that merely knowing the amount of information in a population, or how that information depends on correlations in neural responses, doesn’t tell us how much of that information will propagate to the next layer. In the remainder of this paper, we provide a theoretical explanation of this observation, and identify the covariance structures at the first layer that maximize robustness to information loss during propagation through downstream circuits.

Geometry of robust versus fragile population codes

To understand, from a geometrical point of view, why some population codes are more sensitive to noise than others, we need to consider the relationship between the noise covariance ellipse and the “signal direction,” f′(s)—the direction the mean neural response changes when the stimulus s changes by a small amount. Fig 3A and 3B show this relationship for two different populations. The noise distribution in the first layer is indicated by the magenta ellipses, and the signal direction by the green arrows. The uncertainty in the stimulus after observing the neural response is indicated by the overlap of the green line with the magenta ellipse. Because the overlap is the same for the two populations, they have the same amount of stimulus uncertainty, and thus the same amount of information—at least in the first layer.

Fig 3. Geometry of robust versus fragile population codes.

Cartoons showing the interaction of signal and noise for two populations with the same information in the input layer. The dimension of the space is equal to the number of cells in the population; we show a two dimensional projection. Within this space, when the stimulus changes by an amount Δs (with Δs small), the average neural response changes by f′(ss. Thus, f′(s) is the “signal direction” (green arrows). Trial-by-trial fluctuations in the neural responses in the first layer are described by the ellipses; these correspond to 1 standard-deviation probability contours of the conditional response distributions. The impact of the neural variability on the encoding of stimulus s is determined by the projection of the response distributions onto the signal direction (magenta double-headed arrows). By construction, these are identical in the first layer. Accordingly, an observer of the neural activity in the first layer of either population would have the same level of uncertainty about the stimulus, and so both populations encode the same amount of stimulus information. When additional iid noise is added to the neural responses, the response distributions grow; the dashed ellipses show the resultant response distributions at the second layer. Even though the same amount of iid noise is added to both populations, the one in panel A shows greater stimulus uncertainty after the addition of noise than does the one in panel B. Consequently, the information encoded by the population in panel B is more robust against corruption by noise.

https://doi.org/10.1371/journal.pcbi.1005497.g003

Although the two populations have the same amount of information, the covariance ellipses are very different: one long and skinny but slightly tilted relative to the signal direction (Fig 3A), the other shorter and fatter and parallel to the signal direction (Fig 3B). Consequently, when iid noise is added, as indicated by the dashed lines, stimulus uncertainty increases by very different amounts: there’s a much larger increase for the long skinny ellipse than for the short fat one. This makes the population code in Fig 3A much more sensitive to added noise than the one in Fig 3B.
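The cartoon in Fig 3 can be made quantitative with a two-neuron example of our own: one long, thin covariance ellipse tilted slightly away from f′(s), and one shorter, fatter ellipse aligned with it, rescaled so that both carry the same input information. Adding the same iid noise to both shows the tilted ellipse losing far more information.

```python
import numpy as np

f_prime = np.array([1.0, 0.0])                   # signal direction

def rot(phi):
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

def info(Sigma):                                  # Eq (3a) in two dimensions
    return float(f_prime @ np.linalg.solve(Sigma, f_prime))

# Population A: long, thin ellipse tilted by 0.1 rad away from the signal direction.
Sigma_A = rot(0.1) @ np.diag([25.0, 0.01]) @ rot(0.1).T
# Population B: shorter, fatter ellipse with its long axis along the signal direction.
Sigma_B = np.diag([1.0, 0.5])

# Rescale A so that both populations start with the same input information.
Sigma_A *= info(Sigma_A) / info(Sigma_B)

for sigma2 in [0.0, 0.1, 0.5]:                    # iid noise added downstream
    I_A = info(Sigma_A + sigma2 * np.eye(2))
    I_B = info(Sigma_B + sigma2 * np.eye(2))
    print(f"sigma^2 = {sigma2}:  I_A = {I_A:6.3f}   I_B = {I_B:6.3f}")
```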

To more rigorously support this intuition, in Methods, section titled “Analysis behind the geometry of information loss”, we derive explicit expressions for the stimulus uncertainty in the first and second layers as a function of the angle between the long axis of the covariance ellipse and the signal direction. Those expressions corroborate the phenomenon shown in Fig 3.

A family of optimal noise structures

The geometrical picture in the previous section tells us that a code is robust against added noise if the covariance ellipse lines up with the signal direction. Taken to its extreme, this suggests that when all the noise is concentrated along the f′(s) direction, so that the covariance matrix satisfies

Σξ(s) ∝ f′(s) f′(s),     (6)

the resulting code should be optimally robust. While this may be intuitively appealing, the arguments that led to it were based on several assumptions: iid noise added in the second layer, feedforward weights, W, set to the identity matrix, and a linear neural response function g(⋅). In real neural circuits, none of these assumptions hold. It turns out, though, that the only one that matters is the linearity of g(⋅). In this section we demonstrate that the covariance matrix given by Eq (6) optimizes information transmission for neurons with linear gain functions (although we find, perhaps surprisingly, that this optimum is not unique). In the next section we consider nonlinear gain functions; for that case the covariance matrix given by Eq (6) can be, but is not always guaranteed to be, optimal.

To determine what covariance structures maximize information propagation, we simply maximize information in the second layer, Iy(s), with respect to the noise covariance matrix in the first layer, Σξ, with the information in the first layer held fixed. When the gain function, g(⋅), is linear (the focus of this section), this is relatively straightforward. Details of the calculation are given in Methods, section titled “Identifying the family of optimal covariance matrices”; here we summarize the results.

The main finding is that there exists a family of first-layer covariance matrices Σξ, not just one, that maximizes the information in the second layer. That family, parameterized by α, is given by (7) where Σy is the effective covariance matrix in the second layer,

Σy = (Weffᵀ · Σeff,η⁻¹ · Weff)⁻¹,     (8)

and Iη(s) is the information the second layer would have if there were no noise in the first layer,

Iη(s) = f′(s)ᵀ · Σy⁻¹ · f′(s),     (9)

(see in particular Methods, Eq (46)). For this whole family of distributions—that is, for any value of α for which Σξ is positive semi-definite—the output information, Iy(s), has exactly the same value,

Iy(s) = [1/Ix(s) + 1/Iη(s)]⁻¹     (10)

(see Methods, Eq (76)). This is the maximum possible output information given the input information, Ix(s).
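A numerical sanity check of this claim (our own construction, using the expressions above as we have transcribed them): a first-layer covariance consisting of pure noise along f′(s), scaled to carry input information Ix, attains Iy = [1/Ix + 1/Iη]⁻¹, whereas a generic covariance rescaled to the same input information falls short.

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_out = 40, 80
f_prime = rng.standard_normal(n_in)

# Downstream circuit: effective weights and second-layer noise (arbitrary choices).
W_eff = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)
A = rng.standard_normal((n_out, n_out))
Sigma_eta = A @ A.T / n_out + 0.5 * np.eye(n_out)
Sigma_y = np.linalg.inv(W_eff.T @ np.linalg.solve(Sigma_eta, W_eff))  # Eq (8)

def info(Sigma):
    return float(f_prime @ np.linalg.solve(Sigma, f_prime))

I_eta = info(Sigma_y)                             # Eq (9)
I_x = 50.0                                        # the (fixed) input information

# Noise purely along f'(s), scaled so the input information is exactly I_x.
Sigma_diff = np.outer(f_prime, f_prime) / I_x
# A generic covariance rescaled to carry the same input information.
B = rng.standard_normal((n_in, n_in))
Sigma_gen = B @ B.T / n_in + np.eye(n_in)
Sigma_gen *= info(Sigma_gen) / I_x

def output_info(Sigma_xi):                        # Eq (3b), with Sigma_y as defined above
    return float(f_prime @ np.linalg.solve(Sigma_xi + Sigma_y, f_prime))

bound = 1.0 / (1.0 / I_x + 1.0 / I_eta)           # Eq (10)
print("bound       :", bound)
print("differential:", output_info(Sigma_diff))   # matches the bound
print("generic     :", output_info(Sigma_gen))    # falls short of the bound
```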

Two members of this family are of particular interest. One is α = 0, for which the covariance matrix corresponds to differential correlations (Eq (6)); that covariance matrix is illustrated in Fig 4A. This covariance matrix aligns the noise direction with the signal direction. Accordingly, as for the geometrical picture in Fig 3, it makes the encoded information maximally robust.

Fig 4. Family of optimal covariance matrices.

For all panels, green arrows indicate the signal direction, f′(s). Magenta ellipses indicate the noise in the first layer (with corresponding covariance matrix Σξ), and grey ellipses indicate the effective noise in the second layer (with corresponding covariance matrix Σy). (A) The covariance ellipse in the first layer has its long axis aligned with the signal direction; this configuration (which corresponds to differential correlations) optimizes information robustness for any distribution of second layer noise. (B) The covariance ellipse in the first layer does not have its long axis aligned with the signal direction. However, the covariance ellipse of the effective noise in the second layer, Σy, has the same shape as the covariance ellipse in the first. In this case, the blue “good” projection—which is aligned both with a low-variance direction of the first-layer distribution (magenta), and with the signal curve (green), and thus is relatively informative about the stimulus (see text)—is corrupted by relatively little noise at the second layer. This “matched” noise configuration is among those that optimize robustness to noise. The optimal family of covariance matrices interpolates between the configurations shown in panels A and B. (C) Again the covariance ellipse in the first layer does not have its long axis aligned with the signal direction. But now the “good” projection is heavily corrupted by noise at the second layer. In this configuration, all projections are substantially corrupted by noise at some point in the circuit, and thus relatively little information can propagate.

https://doi.org/10.1371/journal.pcbi.1005497.g004

The other family member we highlight is α = 1, for which Σξ ∝ Σy. For this case, the covariance matrix in the first layer matches the effective covariance matrix in the second layer; we thus refer to this as “matched covariance”. To understand why this covariance optimizes information in the second layer, we start with the observation that the population activities can be decomposed into their principal components: each principal component corresponds to a different axis along which the population activities can be projected. The information contained in each such projection (principal component) adds up to give the total Fisher information (see Methods, Eq 71). The most informative of these projections are those that have low noise variance, and which align somewhat with the signal curve—like the blue line in Fig 4B. When Σξ ∝ Σy, the projections that are most informative in the first layer are corrupted by relatively little noise in the second layer. Consequently, this configuration enables robust information propagation. In contrast, when the covariance structures in the first and second layers are less well matched, all projections are heavily corrupted by noise at some point (i.e., either in the first or the second layer), and hence very little information propagates (Fig 4C).
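The projection argument can be checked directly: in the eigenbasis of the covariance, the linear Fisher information decomposes into a sum over principal components, with eigenvector vk and eigenvalue λk contributing (vkᵀ · f′)²/λk. The following sketch is our reading of the decomposition referred to above (Methods, Eq 71):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
f_prime = rng.standard_normal(n)
A = rng.standard_normal((n, n))
Sigma = A @ A.T / n + np.eye(n)

# Full linear Fisher information, Eq (3a).
I_full = f_prime @ np.linalg.solve(Sigma, f_prime)

# Sum over principal components: eigenvector v_k contributes (v_k . f')^2 / lambda_k.
lam, V = np.linalg.eigh(Sigma)
I_pc = np.sum((V.T @ f_prime) ** 2 / lam)

print(I_full, I_pc)   # the two agree to numerical precision
```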

The family of optima interpolates between the two configurations shown in Fig 4A and 4B (see also Eq (7)). Almost all members of this optimal covariance family depend on the details of the downstream circuit: for α ≠ 0 in Eq (7), the optimal noise covariance at the first layer depends on the feed-forward weights, W, and the structure of the downstream noise. The one exception to this is the covariance matrix given by Eq (6): that one is optimal regardless of the downstream circuit. These are so-called “differential correlations”—the only correlations that lead to information saturation in large populations [22], and the correlations that minimize information in general (see Methods, section titled “Minimum information”, for proof). The fact that correlations can minimize information content and at the same time maximize robustness highlights the fact that optimizing the amount of information in a population code versus optimizing the ability of that information to be transmitted put very different constraints on neural population codes.

The existence of an optimum where the covariance matrices are matched across layers emphasizes that not all optimally robust population codes are necessarily redundant. (By redundant we mean the population encodes less information than would be encoded by a population of independent cells with the same tuning curves and levels of single neuron trial-to-trial variability [12, 21]; see Fig 2). Notably, if the effective second layer covariance matrix, Σy, admits a synergistic population code—wherein more information is encoded in the correlated population versus an uncorrelated one with the same tuning curves and levels of trial-to-trial response variability—then the matched case, Σξ ∝ Σy, will also admit a synergistic population code, and be optimally robust.

Optimally robust, however, does not necessarily mean the majority of the information is transmitted; for that we need another condition. We show in the Methods section titled “Variances of neural responses, and robustness to added noise, for different coding strategies” that for non-redundant codes, a large fraction of the information is transmitted only if there are many more neurons in the second layer than in the first. This is typically the case in the periphery. For differential correlations, that condition is not necessary—so long as there are a large number of neurons in both the input and output layers, most of the information is transmitted.

Nonlinear gain functions

So far we have focused on linear gain functions g(⋅); here we consider nonlinear ones. This case is much harder to analyze, as the effective covariance structure in the second layer, Σeff,η, depends on the noise in the first layer (see Methods, Eq (22)). We therefore leave the analysis to Methods (section titled “Nonlinear gain functions”); here we briefly summarize the main results. After that we consider two examples of nonlinear gain functions—both involving a thresholding nonlinearity to mimic spike generation.

For linear gain functions we were able to find a whole family of optimal covariance structures; for nonlinear ones we did not even attempt that. Instead, we asked: under what circumstances are differential correlations optimal? Even for this simplified question a definitive answer does not appear to exist. Nevertheless, we can make progress in special cases. When there is no added noise in the second layer (e.g., η = 0 for the model in Fig 1B), differential correlations maximize the amount of information that propagates through the nonlinearity, so long as the tuning curves are sufficiently dense relative to their steepness (meaning that whenever the stimulus changes, the average stimulus-evoked response of at least one neuron also changes; see Methods). If there is added noise at the second layer, differential correlations tend to be optimal in cases where the addition of noise at the first layer, ξ, causes reductions in information, Ix(s). (This means that, so long as there are no stochastic resonance effects causing added noise to increase information, differential correlations are optimal.)

We first check, with simulations, the prediction that differential correlations are optimal if there is no added noise. For that we use a thresholding nonlinearity, chosen for two reasons: it is an extreme nonlinearity, and so should be a strong test of our theory, and it is somewhat realistic in that it mimics spike generation. For this model, the responses at the second layer, yi, are given by

yi = Θ(Σj Wij xj − θi),     (11)

where Θ is the Heaviside step function (Θ(x) = 1 if x ≥ 0 and 0 otherwise), and θi is the spiking threshold of the ith neuron. This is the popular dichotomized Gaussian model [52–56], which has been shown to provide a good description of population responses in visual cortex, at least in short time windows [54], and to provide high-fidelity descriptions of the responses of integrate-and-fire neurons, again in short time windows [57].
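Below is a sketch of how one might simulate Eq (11) and estimate the transmitted information empirically. The tuning curves, thresholds, covariance, and the crude plug-in estimator are our own illustrative choices; the paper's actual procedure is described in its Methods.

```python
import numpy as np

rng = np.random.default_rng(6)
n, n_trials = 100, 20000
s0, ds = 1.0, 0.05

# Illustrative tuning curves and thresholds (our choices, not the paper's).
centers = rng.uniform(0.0, 2.0 * np.pi, n)
widths = rng.uniform(0.5, 1.5, n)
f = lambda s: 20.0 * np.exp(-0.5 * ((s - centers) / widths) ** 2) + 1.0
theta = f(s0)                      # thresholds at the operating point, so p(spike) is near 0.5
fp = (f(s0 + 1e-4) - f(s0 - 1e-4)) / 2e-4

# First-layer covariance: Poisson-like diagonal plus noise along f'(s) (cf. Eq (12) with u = f').
eps_u = 1e-3
Sigma_xi = np.diag(f(s0)) + eps_u * np.outer(fp, fp)
L = np.linalg.cholesky(Sigma_xi)
W = np.eye(n)                      # identity feed-forward weights, for simplicity

def spikes(s, n_trials):
    """Eq (11): y_i = Theta(sum_j W_ij x_j - theta_i), with x = f(s) + xi."""
    x = f(s) + (L @ rng.standard_normal((n, n_trials))).T
    return (x @ W.T - theta > 0.0).astype(float)

# Crude plug-in estimate of the linear Fisher information in the binary responses:
# finite-difference derivative of the mean response and a ridge-regularized covariance.
y_plus, y_minus = spikes(s0 + ds / 2, n_trials), spikes(s0 - ds / 2, n_trials)
mu_prime = (y_plus.mean(0) - y_minus.mean(0)) / ds
Sigma_out = 0.5 * (np.cov(y_plus.T) + np.cov(y_minus.T)) + 1e-6 * np.eye(n)
I_y_hat = mu_prime @ np.linalg.solve(Sigma_out, mu_prime)   # biased for finite n_trials
print(I_y_hat)
```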

In our simulations with the step function nonlinearity, as for all of the other cases we considered above, the first layer responses are given by the tuning curve plus noise model (Eq (1)). The tuning curves, f(s), of the 100-neuron population are again heterogeneous (similar to those in Fig 2A but with a different random draw from the tuning curve distribution), and the trial-to-trial variability is given by

Σξ = γu [Σ0 + ϵu u(s) u(s)],     (12)

with Σ0 given by Eq (5). This is the same covariance matrix as in Eq (4b), except that we have included an overall scale factor, γu, chosen to ensure that the information in the input layer is independent of both ϵu and u(s) (see Methods, Eq (99)).

Because these (step function) nonlinearities are infinitely steep, the tuning curves are not sufficiently dense for our mathematical analysis to guarantee that differential correlations are optimal for information propagation. However, we argue in Methods (section titled “Nonlinear gain functions”), that this should be approximately true for large populations. And indeed, that’s what we find with our numerical simulation, as shown in Fig 5B. When θu = 0 (recall that θu is the angle between u(s) and f′(s)), so that u(s) = f′(s), the second term in Eq (12) corresponds to differential correlations; in this case, information increases monotonically with ϵu. In other words, information propagated through the step function nonlinearity increases as “upstream” correlations become more like pure differential correlations. In contrast, when θu is nonzero (as in Fig 3A), information does not propagate well: information decreases as ϵu increases. This is consistent with our findings for the linear gain function considered in Fig 2. Thus, differential correlations can optimize information transmission even for a nonlinearity as extreme as a step function.

Fig 5. Differential correlations enhance information propagation through “spike-generating” nonlinearities.

Responses in the second layer were generated using the dichotomized Gaussian model of spike generation, in which the input from the first layer was simply binarized via a step function (see Eq (11)). We varied the correlations in these inputs (see Eq (12)) while keeping the input information and input tuning curves fixed. (A) Heterogeneous tuning curves in the second layer, evaluated at ϵu = 0; we show a random subset of 20 cells out of the 100-neuron population studied in panel B. (B) Information transmitted by the 100-cell spiking population as a function of ϵu, which is the strength of the noise in the u(s) direction, for different angles, θu, between u and f′(s) (see Eq (12)). The input information was held fixed as ϵu was varied. The information is averaged over 20 evenly spaced stimuli (see Methods, section titled “Details for Numerical Examples”).

https://doi.org/10.1371/journal.pcbi.1005497.g005

The lack of explicit added noise at the second layer makes this case somewhat unrealistic. In neural circuits, we expect noise to be added at each stage of processing—if nothing else, due to synaptic failures. We thus considered a model in which noise is added before the spike-generation process,

yi = Θ(Σj Wij xj + ζi − θi),     (13)

where ζi is zero-mean noise with covariance matrix Σζ.

We computed information for this model using the same input tuning curves, spike thresholds, and covariance matrix, Σξ, as without the additional noise (i.e., as in Fig 5). To mimic the kind of independent noise expected from synaptic failures, we chose the ζi to be iid, and for simplicity we took them to be Gaussian distributed with variance σζ². We computed the amount of stimulus information, Iy(s), for several different levels of the added input noise, σζ². We found that for all levels of noise, differential correlations increase information transmission (Iy(s) increases monotonically with ϵu in Fig 6A, for which θu = 0). And we again found that when the long axis of the covariance ellipse makes a small angle with the signal direction, information propagates poorly (Fig 6B, for which θu = 0.1 rad.).
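Relative to the previous sketch, the only change needed to implement Eq (13) is noise added before the threshold. Assuming iid Gaussian ζ with variance σζ², as in the simulations described above:

```python
import numpy as np

rng = np.random.default_rng(7)

def spikes_with_input_noise(x, W, theta, sigma_zeta):
    """Eq (13): y_i = Theta(sum_j W_ij x_j + zeta_i - theta_i), zeta_i ~ N(0, sigma_zeta^2) iid.
    x has shape (n_trials, n_in); W has shape (n_out, n_in); theta has shape (n_out,)."""
    drive = x @ W.T + sigma_zeta * rng.standard_normal((x.shape[0], W.shape[0]))
    return (drive - theta > 0.0).astype(float)
```

With this in place, the same plug-in information estimate as in the previous sketch can be applied to the noisier responses for each value of σζ.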

Fig 6. Information propagation through spike-generating nonlinearities with additive input noise.

As with Fig 5, responses in the second layer were generated using the dichotomized Gaussian model of spike generation, in which the input from the first layer was simply binarized. Here, though, Gaussian noise was added before thresholding; see Eq (13). We varied the correlations in the input layer (see Eq (12)) while keeping the input information and input tuning curves fixed for the 100-cell population (same tuning curves and covariance matrices as in Fig 5). The additive noise at the second layer (the ζi) was iid Gaussian, with variance σζ²; different colored lines correspond to different values of σζ². (A) Output information versus ϵu for populations with differential correlations (u = f′(s)). (B) Same as panel A, but for populations that concentrate noise along an axis, u, that makes an angle of 0.1 rad with the f′(s) direction. For both panels, the input information was held fixed as ϵu was varied, and the information was averaged over 20 evenly spaced stimuli.

https://doi.org/10.1371/journal.pcbi.1005497.g006

These numerical findings for a spike-generating nonlinearity with added noise are similar to the previous cases of a linear transfer function, g(⋅), with added input noise (Figs 2 and 3), for which we have analytical results, or a spike generating nonlinearity with no added input noise (Fig 5), for which we do not. We further argue in Methods (section titled “Nonlinear gain functions”), that for nonlinear gain functions differential correlations are likely to be optimal if the tuning curves are optimal (in the case of Eq (13), if the thresholds θi are chosen optimally). Taken together, our findings demonstrate that differential correlations in upstream populations generally increase the information that can be propagated downstream through noisy, nonlinear neural circuits.

Discussion

Much work in systems neuroscience has investigated the factors that influence the amount of information about a stimulus that is encoded in neural population activity patterns. Here we addressed a related question that is often overlooked: how do correlations between neurons affect the ability of information to propagate robustly through subsequent stages of neural circuitry? The question of robustness is potentially quite important, as the ability of information to propagate determines how much information from the periphery will reach the deeper neural structures that affect decision making and behavior. To investigate this issue, we considered a model with two cell layers. We varied the covariance matrix of the noise in the first layer (while keeping the tuning curves and information in the first layer fixed), and asked how much information could propagate to the second layer. Our main findings were threefold.

First, population codes with different covariance structures but identical tuning curves and equal amounts of encoded information can differ substantially in their robustness to corruption by additional noise (Figs 2, 5, 6 and 7). Consequently, measurements of information at the sensory periphery are insufficient to understand the ability of those peripheral structures to propagate information to the brain, as that propagation process inevitably adds noise. For instance, populations of independent neurons can be much worse at transmitting information than can populations displaying correlated variability (Fig 5B). Thus, to understand how the brain efficiently encodes information, we must concern ourselves not just with the amount of information in a population code, but also with the robustness of that encoded information against corruption by noise.

Fig 7. Not all synergistic population codes are equally robust against corruption by noise.

This figure is similar to Fig 2, but with synergistic instead of redundant population codes. We constructed two model populations—each with the same 100 tuning curves (20 randomly-chosen example tuning curves are shown in panel A)—for the first layer of cells. The two populations have different covariance structures Σξ for their trial-to-trial variability (see main text, Eq (4)), but convey identical amounts of information, Ix(s), about the stimulus. (B) We corrupted the responses of each neural population by Gaussian noise (independently and identically distributed for all cells) of variance σ2, to mimic corruption that might arise as the signals propagate through a multi-layered neural circuit, and computed the output information, Iy(s), that these further-corrupted responses convey about the stimulus (blue and green curves). (C) Input information Ix(s) in the two model populations (left; “correlated”) and information that would be conveyed by the model populations if they had their same tuning curves and levels of trial-to-trial variability, but no correlations between cells (right; “trial-shuffled”). For panels B and C, we computed the information for 100 different stimulus values, equally spaced between 0 and 2π, and averaged the information over these stimuli.

https://doi.org/10.1371/journal.pcbi.1005497.g007

Second, for linear gain functions, or noise-free nonlinear ones with sufficiently dense tuning curves, populations with so-called differential correlations [22] are maximally robust against noise induced by information propagation. This fact may seem surprising given that differential correlations are the only ones that lead to information saturation in large populations [22], and the correlations that minimize information in general. However, in hindsight it makes sense: differential correlations correspond to a covariance ellipse aligned with the signal direction (see Fig 3B), and added noise simply doesn’t make it much longer. For nonlinear gain functions combined with arbitrary noise, differential correlations are not guaranteed to yield a globally optimal population code for information propagation. However, for the spike-generating nonlinearity we considered here, differential correlations were at least a local optimum (see Figs 5 and 6).

Third, while differential correlations optimize robustness, for linear gain functions that optimum is not unique. Instead, there is a continuous family of covariances that exhibit identical robustness to noise (see Fig 4 and Eq (7)). However, within this family, only differential correlations yield population codes that are optimally robust independent of the downstream circuitry. Thus, they are the most flexible of the optima: for all other members of the family, the optimal covariance structure in the first layer depends on the noise in subsequent layers, as well as the weights connecting those layers.

The existence of this family of optimal solutions raises an important point with regards to redundancy and robust population coding. Populations with differential correlations—which are among the optimal solutions in terms of robustness—are highly redundant: a population with differential correlations encodes much less information than would be expected from independent populations with the same tuning curves and levels of trial-to-trial variability (Fig 2C). It is common knowledge that redundancy can enhance robustness of population codes against noise [58], and thus it is worth asking if our robust population coding results are simply an application of this fact. Importantly, the answer is no: as discussed in Methods, section titled “A family of optimal noise structures”, within the family of optimal correlational structures are codes with minimal redundancy. Moreover, as is shown in Fig 2B, a code can be redundant without being robust to added noise. In other words, redundancy in a population code is neither necessary, nor sufficient, to ensure that the encoded information is robust against added noise. However, there is an important caveat: unless the number of neurons in the second layer is large relative to the number in the first, and/or the added noise in the second layer is small relative to the noise at the first layer, non-redundant codes tend to lose a large amount of information when corrupted by noise. This contrasts sharply with differential correlations, which can tolerate large added noise with very little information loss (see Methods section titled “Variances of neural responses, and robustness to added noise, for different coding strategies”).

In the case of real neural systems, there will always be a finite amount of information that the population can convey (bounded by the amount of input information that the population receives from upstream sources [59]), and so the question of how best to propagate a (fixed) amount of information is of potentially great relevance for neural communication. Our results suggest that the presence of differential correlations serves to allow population-coded information to propagate robustly. Thus, an observation of these correlations in neural recordings might indicate that the population code is optimized for robustness of the encoded information. At the same time, we note that weak differential correlations might be hard to observe experimentally [22]. Moreover, our calculations indicate that there exists a whole family of possible propagation-enhancing correlation structures, and so differential correlations are not necessary for robust information propagation. This means that observations of either differential correlations, correlation structures matched between subsequent layers of a neural circuit (Fig 4), or a combination of the above would indicate that the system enables robust information propagation.

How might the nervous system shape its responses so as to generate correlations that enhance information propagation? Recent work identified network mechanisms that can lead to differential correlations [60]. While it is beyond the scope of this work, it would be interesting to explicitly study the network structures that allow encoded information to propagate most robustly through downstream circuits. Relatedly, [38] and [29, 30] asked how the connectivity between layers affects the ability of information to propagate. While we identified the optimal patterns of input to the multi-stage circuit, they identified the optimal anatomy of that circuit itself.

Note that we have used linear Fisher information to quantify the population coding efficacy. Other information measures exist, and it is worth commenting on how much our findings generalize to different measures. In the case of jointly Gaussian stimulus and response distributions, correlations that maximize linear Fisher information also maximize Shannon’s mutual information [20]. In that regime our findings should generalize well. Moreover, whenever the neural population response distributions belong to the exponential family with linear sufficient statistics, the linear Fisher information is equivalent to the (nonlinear) “full” Fisher information [29]. In practice, this is a good approximation to primary visual cortical responses to oriented visual stimuli [61, 62], and to other stimulus-evoked responses in other brain areas (see [22] for discussion). Consequently, our use of linear Fisher information in place of other information measures is not a serious limitation.

For encoded sensory information to be useful, it must propagate from the periphery to the deep brain structures that guide behavior. Consequently, information should be encoded in a manner that is robust against corruption that arises during propagation. We showed that the features of population codes that maximize robustness can be substantially different from those that maximize the information content in peripheral layers. Moreover, by elucidating the set of covariance structures that optimize information transmission, we found that redundancy in a population code is neither necessary, nor sufficient, to guarantee robust propagation. In future work, it will be important to determine whether the nervous system uses the class of population codes that maximize information transmission.

Finally, while our main focus was on information propagation, the model we used—linear feedforward weights followed by a nonlinearity—is known to have powerful computational properties [46]. It is, in fact, the basic unit in many deep neural networks. Thus, our main conclusion, which is that differential correlations are typically optimal, applies to any computation that can be performed by this architecture.

Methods

Here we provide detailed analysis of the relationship between correlations, feedforward weights, and information propagation. Our methods are organized into sections as follows:

  • “Information in the output layer”: we derive an expression for the information in the output layer (Eq (3b)).
  • “Identifying the family of optimal covariance matrices”: we identify the optimal family of first layer covariance structures when the gain function is linear.
  • “Nonlinear gain functions”
  • “Analysis behind the geometry of information loss”
  • “Minimum information”: we prove that differential correlations minimize information.
  • “Variances of neural responses, and robustness to added noise, for different coding strategies”
  • “Information in a population with a rank 1 perturbation to the covariance matrix”: we compute information for a noise structure consisting of an arbitrary covariance matrix plus a rank 1 covariance matrix.
  • “Details for numerical examples”

Information in the output layer

Our analysis focuses on information loss through one layer of circuitry; to compute the loss, we need expressions for the linear Fisher information in the first and second layers. Expressions for those two quantities are given in Eqs (3a) and (3b). The first is standard; here we derive the second.

To make the result as general as possible, we include noise inside the nonlinearity as well as outside it; if nothing else, that’s probably a reasonable model for the spiking nonlinearity given in Eq (13). We thus generalize Eq (2) slightly, and write

y = g(W · x + ζ) + η,     (14)

where ζ is zero mean noise with covariance matrix Σζ, and here and in what follows we use the convention that g is a pointwise nonlinearity, so for any vector v, the ith element of g(v) is g(vi). When Σζ = 0, we recover exactly the model in Eq (2).

Using Eq (1) for x, Eq (14) becomes

y = g(h(s) + W · ξ + ζ) + η,     (15)

where, recall, ξ and η are zero mean noise with covariance matrices Σξ and Ση, respectively, and hi(s) is the mean drive to neuron i,

hi(s) = Σj Wij fj(s).     (16)

To compute the linear Fisher information in the second layer, we start with the usual expression,

Iy(s) = ∂sE(y|s)ᵀ · Cov(y|s)⁻¹ · ∂sE(y|s),     (17)

where E and Cov denote mean and covariance, respectively. The mean value of y given s is, via Eq (15), (18)

Like g(⋅), the averaged nonlinearity appearing in Eq (18) is a pointwise nonlinearity. To compute the covariance, we assume, as in the main text, that ξ and η are independent; in addition, we assume that both are independent of ζ. Thus, the covariance of y is the sum of the covariances of the first and second terms in Eq (15). The covariance of the second term is just Ση. The covariance of the first term is harder. To make progress, we start by implicitly defining the quantity δΣg(s) via (19) where Weff(s) is the actual feedforward weight matrix multiplied by the average slope of g, (20) the latter being a diagonal matrix whose entries are the average slopes of g, (21)

As in the main text, δij is the Kronecker delta and a prime denotes a derivative. The above implicit definition of δΣg is motivated by the observation that when g is linear, δΣg vanishes. Below, in the section titled “δΣg is positive semi-definite for Gaussian noise”, we show that if ξ is Gaussian, δΣg is positive semi-definite. Here we assume that the noise is sufficiently close to Gaussian that δΣg remains positive semi-definite, and thus can be treated as the covariance matrix of an effective noise source. This last assumption is needed below, in the section titled “Nonlinear gain functions”, where we argue that information loss is small when δΣg is small (see text following Eq (64)).

Making the additional definition (22) and using Eqs (15) and (19) and the fact that η is independent of both ξ and ζ, we see that (23)

Combining this with the expression for the mean value of y, Eq (18), the linear Fisher information, Eq (17), becomes (24) where we used Eqs (16) and (20) to replace ∂sE(y|s) with Weff · f′ and, to reduce clutter, we have suppressed any dependence on s. To pull the effective weights inside the inverse, we use the Woodbury matrix identity to write (25)

Then, using the fact that [A + B]⁻¹ = A⁻¹ · [A⁻¹ + B⁻¹]⁻¹ · B⁻¹, and applying a very small amount of algebra, this becomes (26) where I is the identity matrix. It is then straightforward to show that (27)

Inserting this into Eq (24), we see that the right hand side of that equation is equal to the expression given in Eq (3b) of the main text.
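The matrix identity used in the step above, [A + B]⁻¹ = A⁻¹ · [A⁻¹ + B⁻¹]⁻¹ · B⁻¹, can be checked numerically; a quick sketch with random positive definite matrices:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 5
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)

lhs = np.linalg.inv(A + B)
rhs = np.linalg.inv(A) @ np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B)) @ np.linalg.inv(B)
print(np.max(np.abs(lhs - rhs)))   # ~1e-16: the two sides agree
```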

δΣg is positive semi-definite for Gaussian noise.

To show that δΣg (defined implicitly in Eq (19)) is positive semi-definite for Gaussian noise, we’ll show that it can be written as a covariance. To simplify the analysis, we make the definition

χ ≡ W · ξ + ζ.     (28)

With this definition, (29) where here and in what follows we are suppressing the dependence on s, Σχ is the covariance matrix of χ, and the diagonal matrix of average slopes is the one defined in Eq (21). Because we are assuming that both ξ and ζ are Gaussian, χ is also Gaussian.

We’ll show now that δΣg can itself be written as the covariance of a function of χ. We start by noting that (30)

We’ll focus on the second term, which is given explicitly by (31)

When P(χ) is Gaussian, (32)

Inserting this into Eq (31) and integrating by parts, we arrive at (33)

Using the fact that ∂g(h + χ)/∂χ = ∂g(h + χ)/∂h, the above expression becomes (34) where the second equality follows from the definition of the average-slope matrix (Eq (21)). Inserting this into Eq (30), we see that the right hand side of Eq (30) is exactly equal to the right hand side of Eq (29). Thus, δΣg can be written as a covariance, and so it must be positive semi-definite.
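The same conclusion can be checked numerically. The short Python sketch below (ours, not the paper's code) assumes the explicit reading of Eqs (19) and (29) in which δΣg = Cov[g(h + χ)] − Ḡ Σχ Ḡ, with Ḡ the diagonal matrix of average slopes from Eq (21); the gain function, mean drive, and noise covariance used here are arbitrary choices.

import numpy as np

rng = np.random.default_rng(1)
n, trials = 4, 200_000
h = rng.uniform(-1.0, 1.0, n)                     # mean drive to each cell
L = rng.standard_normal((n, n))
Sigma_chi = L @ L.T + 0.5 * np.eye(n)             # covariance of the Gaussian noise chi

chi = rng.multivariate_normal(np.zeros(n), Sigma_chi, size=trials)
g  = np.tanh                                      # an example pointwise nonlinearity
gp = lambda v: 1.0 - np.tanh(v) ** 2              # its derivative

Gbar = np.diag(gp(h + chi).mean(axis=0))          # diagonal matrix of average slopes
dSigma_g = np.cov(g(h + chi), rowvar=False) - Gbar @ Sigma_chi @ Gbar

print(np.linalg.eigvalsh(0.5 * (dSigma_g + dSigma_g.T)))   # >= 0, up to sampling error

Up to Monte Carlo sampling error, the estimated eigenvalues are non-negative, consistent with the argument above.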

Identifying the family of optimal covariance matrices

Here we address the question: what noise covariance matrix optimizes information transmission? In other words, what covariance matrix Σξ maximizes the information given in Eq (3b)? That is hard to answer when g is nonlinear, because in that case Σeff,η depends on Σξ via δΣg (see Eqs (19) and (22)). In this section, then, we consider linear gain functions; in the next we consider nonlinear ones. To make our expressions more readable, we generally suppress the dependence on s.

Our goal is to maximize Iy with Ix fixed. Using the definition of Σy given in Eq (8), for linear gain functions the information in the second layer (Eq (3b)) is written (35)

We use Lagrange multipliers, (36) where λ is a Lagrange multiplier that enforces the constraint of fixed Ix. Taking the derivative and setting it to zero yields (37)

In deriving this expression we used the fact that the gain functions are linear, which implies that Σy does not depend on Σξ. Multiplying by Σξ + Σy on both the left and right, we arrive at (38)

This is satisfied when (39)

There are two ways this can happen, (40a) (40b) where a is an arbitrary vector. Combining these linearly, taking into account that Σξ is a covariance matrix and thus symmetric, and enforcing equality in Eq (38), we arrive at (41) where Iη is the information the output layer would have if there were no noise in the input layer, (42) (this is the same expression as in Eq (9); it's repeated here for convenience), Ω is an arbitrary symmetric matrix, P is a projection operator, chosen so that P · f′ = 0, (43) and α is arbitrary (but subject to the constraint that Σξ has no negative eigenvalues). Note that P is a linear combination of the right hand sides of Eqs (40a) and (40b), with a = f′ in the latter equation. It is straightforward to verify that when Σξ is given by Eq (41), Eq (38) is satisfied.

To find an explicit expression for Σξ, not just its inverse, we apply the Woodbury matrix identity to Eq (41); that gives us (44) where (45)

Inserting this into Eq (44), we arrive at (46)

This is the same as Eq (7) in the main text, except in that equation we let Ω go to ∞, so we ignore the projection-related term. Ignoring that term is reasonable, as it just puts noise in a direction perpendicular to f′, and so has no effect on the information.

By choosing different scalars α and matrices Ω, a family of optimal Σξ is obtained. These all have the same input information, Ix, and the same output information, Iy, after corruption by noise. An especially interesting covariance matrix is found in the limit α → 0, in which case (47)

These are so-called differential correlations [22]. Importantly, the choice α = 0 is the only one for which the optimal correlational structure is independent of the correlations in the output layer, Σy. Note that pure differential correlations don't satisfy Eq (39). As such, they represent a singular limit, in the sense that Σξ in Eq (46) satisfies Eq (39) with α arbitrarily small, but not precisely zero.

The other covariance that we highlight in the text is found for α = 1 and Ω → ∞, in which case the noise covariance in the input layer is proportional to the output-layer covariance, Σy. This is the matched covariance case.
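To illustrate the difference between members and non-members of this family, the following Python sketch (our toy example, not the paper's simulations) compares, for a linear gain and W = I, a population with differential correlations to an equally informative population whose noise is concentrated along a direction u rotated by 1/8 of a radian away from f′ (the same angle used in the Fig 2 example). The information expressions are the standard linear Fisher forms Ix = f′ᵀΣξ^-1 f′ and Iy = f′ᵀ(Σξ + Ση)^-1 f′; all parameter values are arbitrary.

import numpy as np

rng = np.random.default_rng(2)
N = 200
fp = rng.standard_normal(N)                     # toy tuning-curve derivative f'(s)

def lfi(cov, fp):
    # linear Fisher information, f'^T cov^{-1} f'
    return fp @ np.linalg.solve(cov, fp)

delta = 0.1                                     # small isotropic floor, so covariances are invertible
eps = 1e-2
Sigma_blue = eps * np.outer(fp, fp) + delta * np.eye(N)      # ~differential correlations

# a direction u with |u| = |f'| at an angle of 1/8 radian to f'
v = rng.standard_normal(N); v -= (v @ fp) / (fp @ fp) * fp
u = np.cos(0.125) * fp / np.linalg.norm(fp) + np.sin(0.125) * v / np.linalg.norm(v)
u *= np.linalg.norm(fp)

# choose eps_u by bisection so both populations carry the same input information
target = lfi(Sigma_blue, fp)
lo, hi = 0.0, 1.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if lfi(mid * np.outer(u, u) + delta * np.eye(N), fp) > target:
        lo = mid
    else:
        hi = mid
Sigma_green = lo * np.outer(u, u) + delta * np.eye(N)

Sigma_eta = 0.5 * np.eye(N)                     # iid noise added in the output layer
for name, S in [("differential", Sigma_blue), ("rotated", Sigma_green)]:
    Ix, Iy = lfi(S, fp), lfi(S + Sigma_eta, fp)
    print(f"{name}: Ix = {Ix:.1f}, Iy = {Iy:.1f}, fraction kept = {Iy / Ix:.2f}")

The two populations start with the same Ix, but the rotated population retains a noticeably smaller fraction of its information once iid noise is added downstream.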

Nonlinear gain functions

We now focus on differential correlations, and determine conditions under which they are optimal for information propagation when the gain function, g(⋅), is nonlinear. In this regime, the effective noise in the second layer (the second term in brackets in Eq (3b)) depends on Σξ. This greatly complicates the analysis, and to make headway we need to reformulate our mathematical description of differential correlations. This reformulation is based on the observation that differential correlations correspond to trial-to-trial variability in the value of the stimulus, s [22]. Consequently, the encoding model in the input layer can be written as a multi-step process, (48a) (48b) (48c)

Here s0 is the value of the stimulus that is actually presented. However, the neurons in the input layer, x, encode s—a corrupted version of s0. This is indicated by Eq (48a), which tells us that s deviates on a trial-to-trial basis from s0, with deviations that are described by a zero-mean random variable, δs.

To see that this model does indeed exhibit differential correlations, we Taylor expand Eq (48b) around s0, yielding a model of the form (49) for which the covariance matrix is (50)

The first term corresponds to differential correlations.

Eqs (48b) and (48c) correspond exactly to our previous model (Eq (4a)). Consequently, the information about s in the first and second layers is still given by Eqs (3a) and (3b) of the main text. However, we can’t use those equations for the information about s0. For that, we focus on the variance of the optimal estimator of s0 given x. Because of the Markov structure of our model (s0 → s → x), we can construct that estimator by first considering the optimal estimator of s0 given s, and then the optimal estimator of s given x. Its variance given x is then simply the sum of the variances of these two (independent) noise sources.

The optimal estimator of s0 given s is simply s, whose conditional variance is Var[δs]. The optimal estimator of s given x is the usual optimal estimate, whose conditional variance we approximate below. Consequently, (51)

As usual, we approximate the variance of the estimate of s given s by the inverse of the linear Fisher information, yielding an approximation for the total Fisher information about s0 given x, (52)

Similarly, the Fisher information about s0 given y is approximated by (53)
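For concreteness, here is one reconstruction of these relations, in LaTeX notation, that is consistent with the prose above (a sketch based on the stated Markov structure, not a verbatim copy of Eqs (51)–(53)):

\[
\operatorname{Var}[\hat{s}_0 \mid x] \;=\; \operatorname{Var}[\delta s] \;+\; \operatorname{Var}[\hat{s} \mid s],
\qquad
I_x(s_0) \;\approx\; \Big[\operatorname{Var}[\delta s] + I_x(s)^{-1}\Big]^{-1},
\qquad
I_y(s_0) \;\approx\; \Big[\operatorname{Var}[\delta s] + I_y(s)^{-1}\Big]^{-1},
\]

where \hat{s} denotes the optimal estimate of s and \hat{s}_0 the corresponding estimate of s_0.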

Note that we are slightly abusing notation here: above, Ix(s) and Iy(s) referred to the total information about the stimulus; now they refer to the information about the stimulus that is encoded in the first layer, which is different from the actual stimulus, s0. However, it is a convenient abuse, as it allows us to take over our previous results without introducing much new notation.

Our first step is to parametrize the covariance matrix, Σξ, and Var[δs], in a way that ensures that the information in the first layer remains fixed while we vary Σξ and Var[δs]. A convenient choice is (54a) (54b) where (55)

Inserting Eq (54) into Eq (52), we see that the information in the first layer about s0 remains fixed, independent of Σ0(s).

The information in the second layer about s, Iy(s), is given by Eq (3b), with Σeff,η given in Eq (22). It is convenient to make the definition (56)

This is the analog of Eq (8), but for nonlinear gain functions. It is clear from Eqs (22) and (19) that Σeff,η depends on Σξ; consequently, it depends on ϵ.

To maximize information with respect to ϵ, we take a two step approach. We write (57)

Here Σξ(s,ϵ) and Σy(s,ϵ0) are the same as in Eqs (54b) and (56); we have just made the dependence on ϵ explicit. The two steps are to maximize first with respect to ϵ, then with respect to ϵ0. If the two maxima occur in the same place, then we have identified the covariance structure that optimizes information transmission.

In the first step we differentiate with respect to ϵ. To simplify the expressions, we make the definition (58)

Combining Eqs (53), (54) and (57), we have (59) where we used the fact that for any square matrix A(x), (d/dx)A−1 = −A−1 · d A/dx · A−1, and we suppressed much of the s, ϵ and ϵ0 dependence for clarity. Using Eq (54b) for Σξ(s,ϵ), the derivative with respect to ϵ in the second term is straightforward, (60)

Then, applying the definitions in Eqs (57) and (58), and making the new definition (61), we arrive at the expression (62)

The right hand side of Eq (62) is negative or zero if the term in brackets is negative semi-definite; that is, if all its eigenvalues are non-positive. Since the term in square brackets is a rank one matrix minus the identity, all but one of its eigenvalues are equal to -1. The remaining eigenvalue is 0, with corresponding eigenvector given in Eq (55). Thus, the right hand side of Eq (62) does not change sign, and Iy(s0; ϵ, ϵ0) must have a global maximum at ϵ = ∞. If g is linear, Σeff,y doesn’t depend on ϵ0, and ϵ = ∞ corresponds to pure differential correlations. We have, therefore, recovered the α = 0 limit of Eq (46).

When ϵ = ∞, Σξ vanishes, and so the expression for the information in the second layer simplifies considerably. Combining Eqs (53) and (54a), we have, in the ϵ → ∞ limit, (63) where (64)

The latter equation follows by combining the fact that Σξ(s, ∞) = 0 (Eq (54b)) with the definitions of Σeff,y and Σeff,η (Eqs (56) and (22), respectively).

The total information in the output layer is maximized when Iy(s0; ∞,ϵ0) is maximized. That quantity depends on ϵ0 via Σξ(s,ϵ0), the noise covariance in the input layer. As can be seen from Eq (54b), larger ϵ0 implies smaller Σξ(s,ϵ0). That has two effects. First, when Σξ(s,ϵ0) is small enough, the covariance matrix δΣg becomes small (see Eq (19), and note that δΣg is positive semi-definite, as shown in the section titled “δΣg is positive semi-definite for Gaussian noise”). This tends to increase Iy(s). However, the effective tuning curves, Weff(s;ϵ) · f(s), also depend on Σξ(s,ϵ0) (see Eq (20)). It is possible that increasing Σξ(s,ϵ0) modifies the tuning curves such that Iy(s) increases. Consequently, it is impossible to make completely general statements.

Nevertheless, we can identify two regimes. First, if there is no added noise in the output layer (η = ζ = 0), then Iy(s; ∞, ϵ0) goes to ∞ as ϵ0 goes to ∞, thus maximizing the total information. This holds, however, only if the tuning curves are sufficiently dense relative to their steepness; otherwise, the Fisher information is no longer a good approximation to the true information. For smooth tuning curves this is generally satisfied, but it is not satisfied for the noise-free spike generating mechanism we consider in the main text (Eq (11)), since for that nonlinearity f′(s) = 0 with probability 1. We expect, though, that in the absence of noise, this particular nonlinearity introduces an error that is O(1/n), implying that Iy(s; ∞, ϵ0) ∝ n^2. Numerical simulations corroborated this scaling. Thus, for sufficiently large populations, differential correlations are optimal for the noise-free spike-generating nonlinearity. Note, though, that the thresholds must be chosen so that there are always both active and silent neurons; otherwise, in the limit that Σξ vanishes, the activity will contain no information at all about the stimulus.

The second regime is one in which the tuning curves have been optimized. In this case, modifying the tuning curves by adding noise decreases information, and again differential correlations optimize information transmission.

To summarize, we have analyzed the scenario considered in the main text (section titled “Nonlinear gain functions”)—namely, the neural activities at the second layer, y, are given by a nonlinear function of the neural activities at the first layer, x, with noise added both before and after the nonlinearity. In this case, whether or not differential correlations in the first layer optimize information transmission depends on the details. They do if g is linear, the tuning curves are optimal, or there is no added noise in the second layer and the tuning curves are sufficiently dense relative to the steepness of the tuning curves. If none of these are satisfied, however, differential correlations may be sub-optimal.

Analysis behind the geometry of information loss

Our goal in this section is to make more rigorous the geometrical arguments in Fig 3. We start with the observation that, for Gaussian distributed neural responses, the 1 standard-deviation probability contours for the responses in the first layer (magenta ellipses in Fig 3) are defined by (65) where Δr ≡ f(s) − r represents fluctuations around the mean response to stimulus s. In two dimensions, which we’ll focus on here, Eq (65) becomes (66) where σ1 and σ2 are the lengths of the principal axes of the covariance ellipse (so σ1^2 and σ2^2 are the eigenvalues of Σξ) and Δr1 and Δr2 are distances spanned by the magenta ellipses along those axes.

As shown in Fig 3, the intersection between the magenta ellipse (the one defined in Eq (66)) and the signal curve tells us the uncertainty in the value of the stimulus. To quantify this uncertainty, we simply set Δr to f′(s)Δsx (the subscript x indicates that this is the uncertainty in the input layer), insert that into Eq (66), and solve for Δsx. Defining θ to be the angle between f′(s) and the long principal axis (see Fig 3, and note that θ = 0 in panel B), and letting σ1 correspond to the length of the ellipse’s major axis (so σ1 > σ2), we have (67)

The left hand side is the linear Fisher information in the first layer [10], a fact that is useful primarily because it validates our (relatively informal) derivation. More importantly, we can now see how iid noise affects information. The addition of iid noise with variance σ^2 simply increases both eigenvalues by σ^2, so the ratio of the information in the output layer to that in the input layer is (68)

We can identify two limits. First, if θ = 0 (as it is in Fig 3B), this ratio reduces to (69)

Second, if tan θ ≫ σ2/σ1 (which essentially means the green line in Fig 3 intersects the covariance ellipse on the side, as in panel A, rather than somewhere near the end, as in panel B), the ratio of the informations becomes (70)

Because σ1 > σ2, the information loss is larger in the second case than in the first. And the longer and skinnier the covariance ellipse, the larger the difference in information loss. Thus, this analysis quantifies the geometrical picture given in Fig 3, in which there is larger information loss in panel A (where θ > 0) than in panel B (where θ = 0).
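The following short Python sketch (ours) puts numbers on this picture, using an Eq (67)-style two-dimensional expression with an elongated covariance ellipse; the axis lengths, added-noise level, and angles are arbitrary choices.

import numpy as np

fp_norm = 1.0                  # |f'(s)|
sig1, sig2 = 2.0, 0.2          # major / minor axis lengths of the covariance ellipse
sig_add = 0.5                  # standard deviation of the iid noise added downstream

def info(theta, extra_var=0.0):
    # |f'|^2 [cos^2(theta)/sig1^2 + sin^2(theta)/sig2^2], with any added variance
    # inflating both eigenvalues of the covariance ellipse
    return fp_norm ** 2 * (np.cos(theta) ** 2 / (sig1 ** 2 + extra_var)
                           + np.sin(theta) ** 2 / (sig2 ** 2 + extra_var))

for theta in [0.0, 0.3]:
    ratio = info(theta, sig_add ** 2) / info(theta)
    print(f"theta = {theta:.1f} rad: fraction of information kept = {ratio:.2f}")

With f′ aligned with the long axis almost all of the information survives, whereas rotating f′ by a modest angle toward the short axis produces a much larger loss, exactly as in Fig 3.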

Minimum information

Here we ask: what correlational structure minimizes linear Fisher information? To answer that, we use the multi-dimensional analog of Eq (67), (71) where σk^2 is the kth eigenvalue of the noise covariance matrix and θk is the angle between f′(s) and the kth eigenvector [10]. We would like to minimize Ix(s) with respect to the angles, θk, and the eigenvalues, σk^2. Without constraints, this problem is trivial: information is minimized by having infinite variances for the neural activities. To make the problem well-posed, we add a constraint that prevents the optimization procedure from simply identifying that trivial solution.

We’ll come to the constraint shortly, but first we’ll minimize information with respect to the angles, θk. That minimum occurs when the eigenvector corresponding to the largest eigenvalue is parallel to f′(s); ordering the eigenvalues so that σ0^2 is the largest, we have cos θ0 = 1 and cos θk = 0 for k > 0. Consequently, the information at the minimum is (72)

The next step is to minimize Ix(s) with respect to the eigenvalues, subject to a constraint on the covariance matrix. We consider constraints of the form (73) where, to avoid the trivial solution (of infinite neural variances), C is an increasing function of each of its arguments: for all k, (74)

Examples of C are the trace of the covariance matrix (the sum of the eigenvalues) and the Frobenius norm (the square root of the sum of the squares of the eigenvalues).

Because of Eq (74), the information, Eq (72), is minimized and the constraint, Eq (73), is satisfied when all the eigenvalues except σ0^2 are zero. At this global minimum, the covariance matrix, Σξ, displays purely differential correlations, (75) where v0 is the eigenvector associated with the largest eigenvalue. The last term in this expression follows because the above minimization with respect to the angles forced v0 to be parallel to f′(s). Thus, for a broad, and reasonable, class of constraints on the covariance matrix, differential correlations minimize information.
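A toy numerical check of this conclusion (ours, not the paper's): with the trace of the noise covariance held fixed, compare the information when all of the variance lies along f′(s) with the information when the same total variance is spread isotropically. It uses the eigen-decomposition form of Eq (71); the numbers are arbitrary.

N = 50
fp_norm2 = float(N)            # |f'(s)|^2 for a toy population
total_var = 100.0              # fixed trace of the noise covariance

# differential correlations: one eigenvector parallel to f'(s) carries all the variance,
# so only the single term cos^2(theta_0)/sigma_0^2 = 1/total_var contributes
I_diff = fp_norm2 / total_var

# isotropic noise: every eigenvalue equals total_var/N, and the cos^2 terms sum to 1
I_iso = fp_norm2 / (total_var / N)

print(I_diff, I_iso)           # the differential-correlation value is N times smaller

Concentrating the variance along f′(s) reduces the information by a factor of N relative to the isotropic allocation, so under a trace constraint differential correlations are indeed the worst case.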

Variances of neural responses, and robustness to added noise, for different coding strategies

Throughout most of our analysis we focused on optimality of information transmission. However, also important is how much information is transmitted at the optimum. That’s the subject of this section. For simplicity we consider a linear gain function, which we set, without loss of generality, to the identity. That allows us to use the analysis above, in the section titled “Identifying the family of optimal covariance matrices”, and in particular Eq (46), which links the noise in the input and output layers.

Our starting point is the derivation of an expression for the ratio of the information in the output layer to that in the input layer. To do that, we dot both sides of Eq (37) with f′ on the left and right and solve for λ; we then do the same with a second vector on the left and its transpose on the right. This yields, after a small amount of algebra, (76) where Ix, Iy and Iη are given by Eqs (3a), (3b) and (9), respectively. For information to be transmitted efficiently, Ix, the information in the input layer, must be small compared to Iη, the information associated with the added noise in the output layer. Below, we investigate the conditions under which Ix ≪ Iη, and thus when information loss is small.

Our strategy is to express Ix/Iη in terms of the single neuron variability, quantified as the average variance—something that has an easy interpretation. We consider two cases: the weights are set to the identity (W = I), and the weights are more realistic (each neuron in the input layer connects to a large number of neurons in the output layer). The first case, identity weights, is not very realistic; we include it because it is much simpler than the second.

While the analysis is straightforward, it is somewhat heavy on the algebra, so we summarize the results here. We consider two extremes in the family of optimal covariance structures: the “matched” case (α = 1 in Eq (46), and, for simplicity, Ω = ∞) and differential correlations (α = 0). For matched covariances, near complete information transfer (Ix ≪ Iη) requires the effective variance of the noise in the second layer to be small. For identity feedforward weights, the effective variance in the input and output layers is about the same, so information loss is large. However, identity feedforward weights are never observed in the brain; instead, each neuron in the input layer connects to a large number of neurons in the output layer. Using Nx and Ny to denote the number of neurons in the input and output layers, respectively, and K the average number of connections per neuron, the effective noise is reduced by the factor given in Eq (95) below. Thus, if the number of neurons in the output layer is larger than the number in the input layer by a factor much larger than K^(1/2), near complete information transmission is possible. For pure differential correlations, the story is much simpler: so long as the number of neurons in both layers is large, and the added noise doesn’t have a strong component in the f′(s) direction, near complete information transmission always occurs.

Identity feedforward weights.

We’ll first consider identity feedforward weights, W = I. We’ll start with the matched covariance case. Using Eq (46), we have (77)

Taking the trace of both sides of this expression gives (78) where the two quantities appearing in Eq (78) are the average variance of the input-layer noise and the average variance of the added noise. If the added noise is on the same order as the noise in the input layer, information loss is high. Because of synaptic failures and chaotic dynamics, we expect the added noise to be substantial, implying that matching covariances is not an especially good strategy for transmitting information in the case where W = I.

Next we consider differential correlations (α = 0 in Eq (46)), (79) where we used Eq (9) for Iη, with Σy replaced by Ση. Taking the trace of both sides gives us (80)

If the added noise doesn’t have much of a component in the f′ direction, then Iη grows in proportion to the population size, Nx, while Ix does not. Consequently, in the large Nx regime, Ix ≪ Iη, and (according to Eq (76)) information loss is small. In other words, for large neural populations, differential correlations allow small information loss even when the amount of added noise is large.

An especially instructive case is iid noise added at the second layer. Writing its variance as σ^2, Eq (80) simplifies to (81)

Consequently, for differential correlations and reasonably large neural populations, information loss is relatively small unless the variance in the second layer is much larger than the average variance in the first layer (by about a factor of Nx)—something that is not observed in the brain.

Although pure differential correlations can minimize information loss, they are not biologically realistic, as they do not display Poisson-like variability. That’s because for differential correlations, the variance of neuron i scales as fi′(s)^2 rather than fi(s). Fortunately, this can be fixed with very little information loss by adding Poisson-like variability in the input layer. Doing so reduces the information only slightly: for the covariance structure given in Eq (4a), the information is (82) where (83) is the information associated with the covariance matrix Σ0. That information is large whenever Σ0 doesn’t contain much of a component in the f′ direction and Nx is large. If these hold, the information in the input layer is approximately equal to 1/ϵ—exactly what it is for pure differential correlations. Moreover, so long as Ση also doesn’t contain much of a component in the f′ direction, information in the output layer is also close to 1/ϵ, and very little information is lost. Thus, nearly pure differential correlations are biologically realistic and can lead to very small information loss.
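The following Python sketch (ours) illustrates this point numerically. It assumes the covariance of Eq (4a) has the form diag(f(s)) + ϵ f′(s)f′(s)ᵀ, i.e., Poisson-like single-cell variances plus differential correlations; the tuning-curve parameters, ϵ, and the added-noise variance are arbitrary.

import numpy as np

rng = np.random.default_rng(3)
N, s, ds = 500, 1.0, 1e-4
eps, sig_eta2 = 1e-2, 1.0

# toy Von Mises tuning curves with random amplitudes, widths, phases, and offsets
amp, width = rng.uniform(1, 51, N), rng.uniform(1, 6, N)
phase, base = rng.uniform(0, 2 * np.pi, N), rng.uniform(0, 1, N)
f = lambda t: base + amp * np.exp(width * (np.cos(t - phase) - 1))
fp = (f(s + ds) - f(s - ds)) / (2 * ds)              # finite-difference tuning-curve derivative

Sigma_xi = np.diag(f(s)) + eps * np.outer(fp, fp)    # Poisson-like variances + differential correlations
Sigma_eta = sig_eta2 * np.eye(N)                     # iid noise added in the output layer

lfi = lambda cov: fp @ np.linalg.solve(cov, fp)
print("1/eps =", 1 / eps)
print("Ix    =", lfi(Sigma_xi))
print("Iy    =", lfi(Sigma_xi + Sigma_eta))          # with g the identity and W = I

For a few hundred neurons, Ix and Iy both remain close to 1/ϵ, even though the added iid noise is comparable in size to the variability of the weakly driven cells.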

Realistic feedforward weights.

For realistic feedforward weights, W, we need to use Σy rather than Ση in Eq (77), with Σy given by Eq (8). (Note that because the gain function is the identity, Weff = W.) We’ll start, as above, with the matched covariance case. Taking the trace of both sides of Eq (77), but with Ση replaced by Σy, we have (84) where tr denotes trace and, as above, Nx is the number of neurons in the input layer. Using the fact that for any positive semi-definite square n × n matrix A (i.e., for any covariance matrix A), (85) we have (86) with the second equality following from Eq (8).

To get a handle on the size of the trace term in the numerator, we note that it can be written (87) where vk is the kth eigenvector of Ση, normalized so that vk · vk = 1, and the corresponding eigenvalues enter the weighted average, (88)

To see that this really is a weighted average, note that because the vk form a complete, orthonormal basis, (89)

Inserting Eq (87) into Eq (86) gives us (90)

This is similar to Eq (78), except for two prefactors. The denominator of the first prefactor is bounded above and below; we’ll assume it is O(1) (for iid noise it is exactly 1), although we note that it’s possible to make it either relatively large or relatively small. The second prefactor is more interesting, as it is the sum of a large number of terms, (91) where Ny is the number of neurons in the output layer. To determine the size of the weights, we use the fact that (92) and note that 〈yi〉 and fi should be about the same size, on average. Assuming that each neuron in the input layer connects, on average, to K neurons in the output layer, it follows that Wij is nonzero with probability K/Ny. Consequently, (93) where Wtypical and ftypical are the typical sizes of the nonzero weights and the fj, respectively. To ensure that 〈yi〉 and fi are about the same size, we must have (94)

Inserting this into Eq (91), and using the fact that Wij is nonzero with probability K/Ny, we have (95)
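To make the scaling explicit, here is one back-of-the-envelope reading of Eqs (92)–(95), written in LaTeX notation (our sketch; the original equations may differ in their precise form). Suppose Eq (92) states that 〈yi〉 = Σj Wij fj, and recall that each row of W then has about Nx K/Ny nonzero entries of typical size Wtypical. Then

\[
\langle y_i \rangle \;\sim\; \frac{N_x K}{N_y}\, W_{\mathrm{typical}}\, f_{\mathrm{typical}}
\quad\Longrightarrow\quad
W_{\mathrm{typical}} \;\sim\; \frac{N_y}{N_x K},
\]

so a sum over the roughly N_x K nonzero squared weights scales as

\[
N_x K \, W_{\mathrm{typical}}^{2} \;\sim\; \frac{N_y^{2}}{N_x^{2} K},
\]

which exceeds one precisely when Ny ≫ Nx K^(1/2), consistent with the statement that follows.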

This can be large if Ny ≫ Nx K^(1/2). Using this relationship in Eq (90), we see that information loss can be small in the case of matched covariances, if there is sufficiently large divergence from the input to output layers.

What about differential correlations, α = 0? To understand information loss in this case, Ση is replaced by Σy in Eq (80), giving us (96) where we used Eq (8) for Σy. Here the logic is the same as it was in the previous section: so long as Σy doesn’t have a strong component in the f′ direction, Iη is large compared to Ix, and so, according to Eq (76), information loss is small. Thus, with realistic feedforward weights, as with the identity case, differential correlations lead to very small information loss in large populations.
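The Python sketch below (ours) illustrates the realistic-weights case numerically: a sparse random W whose connection probability and weight size follow the scaling argued above, differential correlations (plus a small isotropic floor, so the matrix is invertible) in the input layer, and iid noise added in the output layer. All numerical values are arbitrary.

import numpy as np

rng = np.random.default_rng(4)
Nx, Ny, K = 200, 2000, 20
eps, delta, sig_eta2 = 1e-2, 1e-3, 1.0

fp = rng.standard_normal(Nx)                             # toy f'(s)
Sigma_xi = eps * np.outer(fp, fp) + delta * np.eye(Nx)   # ~differential correlations

mask = rng.random((Ny, Nx)) < K / Ny                     # each input contacts ~K outputs
W = mask * (Ny / (Nx * K))                               # weight size ~ Ny/(Nx K)

u = W @ fp                                               # signal direction in the output layer
Sigma_out = W @ Sigma_xi @ W.T + sig_eta2 * np.eye(Ny)   # output-layer noise covariance

Ix = fp @ np.linalg.solve(Sigma_xi, fp)
Iy = u @ np.linalg.solve(Sigma_out, u)
print(f"Ix = {Ix:.1f}, Iy = {Iy:.1f}, 1/eps = {1 / eps:.1f}")

With Ny an order of magnitude larger than Nx, the output layer retains most of the input information; increasing Ny pushes Iy further toward 1/ϵ.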

Information in a population with a rank 1 perturbation to the covariance matrix

In the analysis of nonlinear gain functions in the main text (section titled “Nonlinear gain functions”), it was necessary to construct a covariance matrix such that the information in the first layer was independent of ϵu and u. For that we included a prefactor γu in the definition of the covariance matrix, Σξ (see Eq (12)). Here we determine how γu should depend on ϵu and u. Our starting point is an expression for the inverse of Σξ. As is straightforward to show, via direct substitution, that’s given by (97)

Thus, the information in the input layer, Ix(s), is given by (98)

To ensure that this information is independent of ϵu and u, we choose γu according to (99)

Note that γu depends on s as well as ϵu and u.

Details for numerical examples

In this section we provide details for the numerical simulations for each relevant figure.

Fig 2 and its synergistic counterpart, Fig 7.

For the numerical examples in Fig 2, we generated tuning curves for the first layer of cells using Von Mises distributions [16], (100)

For each cell, the amplitudes, υ, widths, β, peak locations, φ, and baseline offsets, ρ, were drawn independently from uniform distributions with the following ranges,

  • υ: 1–51
  • β: 1–6
  • φ: 0–2π
  • ρ: 0–1

The covariance of the noise in the first layer was given by Eq (4), with the following parameters,

  • blue population: ϵ = 10^−3.
  • green population: ϵu varies with stimulus so that, for each stimulus, the blue and green populations have identical information (on average, ϵu = 8 × 10^−3); |u(s)| = |f′(s)|; angle between u(s) and f′(s) = 1/8 of a radian.

With these parameters, the two populations (blue and green) conveyed the same amount of information about the stimulus.

To rule out the possibility that differences in information robustness were due to differences in average correlations within the populations, we forced the average correlations to be the same for the blue and green populations. To do that, we repeatedly took random draws of the parameters describing the tuning curves (ρ, υ, β and φ) until the population averaged correlations matched between the two populations. This resulted in average correlations of −7 × 10^−5, and we used this set of tuning curves for our subsequent information calculations.

We computed the information, Iy(s), in the second-layer responses using Eq (3b), with g(x) = x, W = I, and Ση = σ^2 I. For the trial-shuffled information (Fig 2C), we used Eq (3a), with all off-diagonal elements of the covariance matrices Σξ set to zero. For all of these information calculations, we computed the information, Ix(s) or Iy(s), for 100 different stimulus values s, uniformly spaced between 0 and 2π, and then averaged over these 100 different values.
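A compact Python sketch of this pipeline (ours, not the authors' code) is given below. It assumes the first-layer covariance of Eq (4) takes the form diag(f(s)) + ϵ f′(s)f′(s)ᵀ for the blue population; the seed, population size and noise values are arbitrary, and the green population and the correlation-matching step are omitted for brevity.

import numpy as np

rng = np.random.default_rng(5)
N, eps, sigma2, ds = 100, 1e-3, 1.0, 1e-4
amp, width = rng.uniform(1, 51, N), rng.uniform(1, 6, N)
phase, base = rng.uniform(0, 2 * np.pi, N), rng.uniform(0, 1, N)
f = lambda t: base + amp * np.exp(width * (np.cos(t - phase) - 1))

Ix, Iy, Ishuf = [], [], []
for s in np.linspace(0, 2 * np.pi, 100, endpoint=False):
    fp = (f(s + ds) - f(s - ds)) / (2 * ds)
    Sigma = np.diag(f(s)) + eps * np.outer(fp, fp)       # assumed Eq (4)-style covariance
    Ix.append(fp @ np.linalg.solve(Sigma, fp))
    Iy.append(fp @ np.linalg.solve(Sigma + sigma2 * np.eye(N), fp))
    Ishuf.append(fp @ np.linalg.solve(np.diag(np.diag(Sigma)), fp))   # off-diagonals zeroed

print("mean Ix       =", np.mean(Ix))
print("mean Iy       =", np.mean(Iy))
print("mean shuffled =", np.mean(Ishuf))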

To assess whether synergistic population codes can similarly vary in their robustness to corruption by noise, we repeated our calculations from Fig 2, but modified the covariance matrices to make the population synergistic (Fig 7C: the correlated responses convey more stimulus information than would independent cells with the same variances). To do that we again used the covariance matrices given in Eq (4), but we made ϵ and ϵu negative: ϵ = −5 × 10^−4 and 〈ϵu〉 = −3 × 10^−4 (as in Fig 2, ϵu depends on the stimulus, s: it was chosen so that for each value of s the blue and green populations have identical stimulus information). We chose u(s) so that it had the same magnitude as f′(s) and made an angle of 1/4 of a radian with f′(s). We used the same functions and distributions for the tuning curves as in Fig 2, but used a different seed for the random number generator. As in Fig 2, the seed was chosen (via multiple draws of the tuning curve parameters) so that the two populations had the same average correlations (in this case 2 × 10^−5). Also as in Fig 2, the populations were roughly Poisson-like, in the sense that the mean and variance of the activity of each neuron was approximately equal. (Both the “green” and the “blue” populations have average Fano factors—averaged over neurons and stimuli—of 0.99.) We again found that equally-informative population codes could vary significantly in terms of their robustness to noise (Fig 7B).

Fig 5.

To generate Fig 5B, we analytically computed the means of the second layer responses, resulting in the expression (101) where θi is the ith cell’s firing threshold, σi is the standard deviation of the input noise to the cell, and Φ(⋅) is the Gaussian cumulative distribution function. For each cell, the input function fi(s) was given by a Von Mises distribution, Eq (100) (with the same distribution of parameters—υ, β, φ and ρ—as in the preceding examples), and the spiking threshold, θi, was set to 3/4 of the peak height of the input tuning curve: θi = 3(ρi + υi)/4.

It is not straightforward to compute the covariance matrix of correlated responses generated by the dichotomized Gaussian model, so we used Monte Carlo methods to estimate the covariance: we took 10^6 draws from the distribution of x, and for each draw we computed the corresponding responses, y, using the thresholding operation (Eq (11)). We then computed the covariance of these simulated responses, and used it to estimate the linear Fisher information in the second layer activities via the standard expression, (102)
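A rough Python sketch of this Monte Carlo procedure (ours) is shown below. It assumes Eq (101) is the standard dichotomized-Gaussian mean, Φ((fi(s) − θi)/σi); it uses an independent, Poisson-like input covariance as a stand-in for the correlated covariance actually used in Fig 5; and, for numerical stability, it drops cells that are essentially always silent or always active.

import numpy as np
from math import erf

rng = np.random.default_rng(6)
N, trials, s, ds = 50, 200_000, 1.0, 0.05
amp, width = rng.uniform(1, 51, N), rng.uniform(1, 6, N)
phase, base = rng.uniform(0, 2 * np.pi, N), rng.uniform(0, 1, N)
f = lambda t: base + amp * np.exp(width * (np.cos(t - phase) - 1))
theta = 3 * (base + amp) / 4                             # thresholds: 3/4 of peak height
Phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / np.sqrt(2.0))))
mean_y = lambda t: Phi((f(t) - theta) / np.sqrt(f(t)))   # assumed reading of Eq (101)

# Monte Carlo estimate of the covariance of the thresholded responses (Eq (11))
x = rng.multivariate_normal(f(s), np.diag(f(s)), size=trials)   # stand-in input covariance
y = (x > theta).astype(float)

rate = y.mean(axis=0)
keep = (rate > 1e-3) & (rate < 1 - 1e-3)      # drop cells that are ~always silent or active
cov_y = np.cov(y[:, keep], rowvar=False)
dmean = (mean_y(s + ds) - mean_y(s - ds))[keep] / (2 * ds)

Iy = dmean @ np.linalg.solve(cov_y, dmean)    # the standard expression, Eq (102)
print(f"kept {keep.sum()} of {N} cells, Iy = {Iy:.1f}")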

Fig 6.

Fig 6 was made in the same fashion as Fig 5, with the exception that noise was added before the spike generation nonlinearity. The noise, ζ, was Gaussian and drawn iid, with a fixed variance.

Acknowledgments

We thank Robert Townley, Kresimir Josic, Fred Rieke, Braden Brinkman, Maxwell Turner, and Alison Weber for helpful comments on the project.

Author Contributions

  1. Conceptualization: JZ AP PEL ESB.
  2. Formal analysis: JZ PEL ESB.
  3. Methodology: JZ AP PEL ESB.
  4. Validation: JZ PEL ESB.
  5. Writing – original draft: JZ AP PEL ESB.
  6. Writing – review & editing: JZ AP PEL ESB.

References

  1. Britten KH, Shadlen MN, Newsome WT, Movshon JA. Responses of neurons in macaque MT to stochastic motion signals. Visual Neurosci. 1993;10:1157–1169.
  2. Softky W, Koch C. The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSP’s. J Neurosci. 1993;13:334–350. pmid:8423479
  3. Faisal AA, Selen LPJ, Wolpert DM. Noise in the nervous system. Nat Rev Neurosci. 2008;9:292–303. pmid:18319728
  4. Churchland M, Yu BM, Cunningham JP, Sugrue LP, Cohen MR, Corrado GS, et al. Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nat Neurosci. 2010;13:369–378. pmid:20173745
  5. Franke F, Fiscella M, Sevelev M, Roska B, Hierlemann A, da Silveira R. Structure of neural correlation and how they favor coding. Neuron. 2016;89:409–422. pmid:26796692
  6. Zylberberg J, Cafaro J, Turner MH, Shea-Brown E, Rieke F. Direction-selective circuits shape noise to ensure a precise population code. Neuron. 2016;89:369–383. pmid:26796691
  7. Zylberberg J, Hyde RA, Strowbridge BW. Dynamics of robust pattern separability in the hippocampal dentate gyrus. Hippocampus. 2016;29:623–632. pmid:26482936
  8. Zohary E, Shadlen MN, Newsome WT. Correlated neuronal discharge rate and its implications for psychophysical performance. Nature. 1994;370(6485):140–143. pmid:8022482
  9. Abbott LF, Dayan P. The effect of correlated variability on the accuracy of a population code. Neural Comput. 1999;11(1):91–101. pmid:9950724
  10. Sompolinsky H, Yoon H, Kang K, Shamir M. Population coding in neuronal systems with correlated noise. Phys Rev E. 2001;64(5):051904.
  11. Romo R, Hernandez A, Zainos A, Salinas E. Correlated neuronal discharges that increase coding efficiency during perceptual discrimination. Neuron. 2003;38(4):649–657. pmid:12765615
  12. Averbeck BB, Latham PE, Pouget A. Neural correlations, population coding and computation. Nat Rev Neurosci. 2006;7(5):358–366. pmid:16760916
  13. Shamir M, Sompolinsky H. Implications of neuronal diversity on population coding. Neural Comput. 2006;18(8):1951–1986. pmid:16771659
  14. Averbeck BB, Lee D. Effects of noise correlations on information encoding and decoding. J Neurophys. 2006;95(6):3633–3644.
  15. Josić K, Shea-Brown E, Doiron B, de la Rocha J. Stimulus-dependent correlations and population codes. Neural Comput. 2009;21(10):2774–2804. pmid:19635014
  16. Ecker AS, Berens P, Tolias AS, Bethge M. The Effect of Noise Correlations in Populations of Diversely Tuned Neurons. J Neurosci. 2011;31(40):14272–14283. pmid:21976512
  17. Cohen MR, Kohn A. Measuring and interpreting neuronal correlations. Nat Neurosci. 2011;14(7):811–819. pmid:21709677
  18. Latham P, Roudi Y. Role of correlations in population coding. arXiv:1109.6524 [q-bio.NC]. 2011.
  19. da Silveira RA, Berry MJ. High-Fidelity Coding with Correlated Neurons. PLoS Comput Biol. 2014;10:e1003970. pmid:25412463
  20. Hu Y, Zylberberg J, Shea-Brown E. The Sign Rule and Beyond: Boundary Effects, Flexibility, and Noise Correlations in Neural Population Codes. PLoS Comput Biol. 2014;10:e1003469. pmid:24586128
  21. Shamir M. Emerging principles of population coding: in search for the neural code. Curr Opin Neurobiol. 2014;25:140–148. pmid:24487341
  22. Moreno-Bote R, Beck J, Kanitscheider I, Pitkow X, Latham P, Pouget A. Information-limiting correlations. Nat Neurosci. 2014;17:1410–1417. pmid:25195105
  23. Zylberberg J, Shea-Brown E. Input nonlinearities can shape beyond-pairwise correlations and improve information transmission by neural populations. Phys Rev E. 2015;92:062707.
  24. Cayco-Gajic A, Zylberberg J, Shea-Brown E. Triplet correlations among similarly-tuned cells impact population coding. Front Comput Neurosci. 2015;9:57. pmid:26042024
  25. Pouget A, Deneve S, Ducom JC, Latham PE. Narrow versus wide tuning curves: what’s best for a population code? Neural Comput. 1999;11:85–90. pmid:9950723
  26. Zhang K, Sejnowski TJ. Neuronal tuning: To sharpen or broaden? Neural Comput. 1999;11(1):75–84. pmid:9950722
  27. Wilke SD, Eurich CW. Representational accuracy of stochastic neural populations. Neural Comput. 2002;14(1):155–189. pmid:11747537
  28. Tkačik G, Prentice JS, Balasubramanian V, Schneidman E. Optimal population coding by noisy spiking neurons. Proc Natl Acad Sci USA. 2010;107(32):14419–14424. pmid:20660781
  29. Beck J, Bejjanki VR, Pouget A. Insights from a simple expression for linear fisher information in a recurrently connected population of spiking neurons. Neural Comput. 2011;23(6):1484–1502. pmid:21395435
  30. Toyoizumi T, Aihara K, Amari SI. Fisher information for spike-based population decoding. Phys Rev Lett. 2006;97:098102. pmid:17026405
  31. Bejjanki VR, Beck JM, Lu ZL, Pouget A. Perceptual learning as improved probabilistic inference. Nat Neurosci. 2011;14:642–648. pmid:21460833
  32. Salinas E, Sejnowski TJ. Impact of correlated synaptic input on output firing rate and variability in simple neuronal models. J Neurosci. 2000;20(16):6193–6209. pmid:10934269
  33. Salinas E, Sejnowski TJ. Correlated neuronal activity and the flow of neural information. Nat Rev Neurosci. 2001;2:539–550. pmid:11483997
  34. Reid RC. Divergence and reconvergence: multielectrode analysis of feedforward connections in the visual system. Prog Brain Res. 2001;130:141–154. pmid:11480272
  35. Bruno RM. Synchrony in sensation. Curr Opin Neurobiol. 2011;21:701–708. pmid:21723114
  36. Abeles M. Role of the cortical neuron: integrator or coincidence detector? Isr J Med Sci. 1982;18:83–92. pmid:6279540
  37. Seriès P, Latham PE, Pouget A. Tuning curve sharpening for orientation selectivity: coding efficiency and the impact of correlations. Nat Neurosci. 2004;7(10):1129–1135. pmid:15452579
  38. Renart A, van Rossum MCW. Transmission of population-coded information. Neural Comput. 2011;24:391–407. pmid:22023200
  39. Lampl I, Reichova I, Ferster D. Synchronous membrane potential fluctuations in neurons of the cat visual cortex. Neuron. 1999;22:361–374. pmid:10069341
  40. Alonso JM, Usrey WM, Reid RC. Precisely correlated firing of cells in the lateral geniculate nucleus. Nature. 1996;383:815–819. pmid:8893005
  41. Goris RLT, Movshon JA, Simoncelli EP. Partitioning neuronal variability. Nat Neurosci. 2014;17:858–865. pmid:24777419
  42. Smith MA, Kohn A. Spatial and temporal scales of neuronal correlation in primary visual cortex. J Neurosci. 2008;28:12591–12603. pmid:19036953
  43. Ecker AS, et al. State dependence of noise correlations in macaque primary visual cortex. Neuron. 2014;82:235–248. pmid:24698278
  44. Scholvinck ML, Saleem AB, Benucci A, Harris KD, Carandini M. Cortical State Determines Global Variability and Correlations in Visual Cortex. J Neurosci. 2015;35:170–178. pmid:25568112
  45. Lin IC, Okun M, Carandini M, Harris KD. The Nature of Shared Cortical Variability. Neuron. 2015;87:644–656. pmid:26212710
  46. Hornik K, Stinchcombe M, White H. Multilayer feedforward networks are universal function approximators. Neural Netw. 1989;2:359–366.
  47. Barlow HB, Levick WR. The mechanism of directionally selective units in rabbit’s retina. J Physiol. 1965;178:477–504. pmid:5827909
  48. Cramer H. Mathematical methods of statistics. Princeton University Press; 1946.
  49. Rao CR. Information and the accuracy attainable in the estimation of statistical parameters. Bull Calcutta Math Soc. 1945;37:81–89.
  50. Schneidman E, Still S, Berry MJ, Bialek W. Network Information and Connected Correlations. Phys Rev Lett. 2003;91(23):238701. pmid:14683220
  51. Adibi M, McDonald JS, Clifford CWG, Arabzadeh E. Adaptation improves neural coding efficiency despite increasing correlations in variability. J Neurosci. 2013;33:2108–2120. pmid:23365247
  52. Macke JH, Berens P, Ecker AS, Tolias AS, Bethge M. Generating spike trains with specified correlation coefficients. Neural Comput. 2009;21(2):397–423. pmid:19196233
  53. Macke J, Opper M, Bethge M. Common Input Explains Higher-Order Correlations and Entropy in a Simple Model of Neural Population Activity. Phys Rev Lett. 2011;106(20):208102. pmid:21668265
  54. Yu S, Yang H, Nakahara H, Santos GS, Nikolic D, Plenz D. Higher-order correlations characterized in cortical activity. J Neurosci. 2011;31:17514–17526. pmid:22131413
  55. Amari SI, Nakahara H, Wu S, Sakai Y. Synchronous firing and higher-order interactions in neuron pool. Neural Comput. 2003;15(1):127–142. pmid:12590822
  56. Bethge M, Berens P. Near-maximum entropy models for binary neural representations of natural images. In: Platt JC, Koller D, Singer Y, Roweis S, editors. Advances in neural information processing systems. 2008;20.
  57. Leen D, Shea-Brown E. A simple mechanism for beyond-pairwise correlations in integrate-and-fire neurons. J Math Neurosci. 2015;5:17.
  58. Barlow H. Redundancy reduction revisited. Network-Comput Neural Syst. 2001;12(3):241–253.
  59. Beck JM, Ma W, Pitkow X, Latham PE, Pouget A. Not noisy, just wrong: the role of suboptimal inference in behavioral variability. Neuron. 2012;74(1):30–39. pmid:22500627
  60. Kanitscheider I, Coen-Cagli R, Pouget A. Origin of information-limiting noise correlations. Proc Natl Acad Sci USA. 2015;112:E6973–E6982. pmid:26621747
  61. Graf AB, Kohn A, Jazayeri M, Movshon JA. Decoding the activity of neuronal populations in macaque primary visual cortex. Nat Neurosci. 2011;14:239–245. pmid:21217762
  62. Berens P, Ecker AS, Cotton RJ, Ma WJ, Bethge M, Tolias AS. A Fast and Simple Population Code for Orientation in Primate V1. J Neurosci. 2012;32:10618–10626. pmid:22855811