
Constrained brain volume in an efficient coding model explains the fraction of excitatory and inhibitory neurons in sensory cortices

  • Arish Alreja,

    Roles Conceptualization, Investigation, Methodology, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Neuroscience Institute, Center for the Neural Basis of Cognition and Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America

  • Ilya Nemenman,

    Roles Conceptualization, Funding acquisition, Methodology, Supervision, Writing – original draft, Writing – review & editing

    Affiliation Department of Physics, Department of Biology and Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, Georgia, United States of America

  • Christopher J. Rozell

    Roles Conceptualization, Funding acquisition, Methodology, Supervision, Writing – original draft, Writing – review & editing

    crozell@gatech.edu

    Affiliation School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States of America

Abstract

The number of neurons in mammalian cortex varies by multiple orders of magnitude across different species. In contrast, the ratio of excitatory to inhibitory neurons (E:I ratio) varies in a much smaller range, from 3:1 to 9:1 and remains roughly constant for different sensory areas within a species. Despite this structure being important for understanding the function of neural circuits, the reason for this consistency is not yet understood. While recent models of vision based on the efficient coding hypothesis show that increasing the number of both excitatory and inhibitory cells improves stimulus representation, the two cannot increase simultaneously due to constraints on brain volume. In this work, we implement an efficient coding model of vision under a constraint on the volume (using number of neurons as a surrogate) while varying the E:I ratio. We show that the performance of the model is optimal at biologically observed E:I ratios under several metrics. We argue that this happens due to trade-offs between the computational accuracy and the representation capacity for natural stimuli. Further, we make experimentally testable predictions that 1) the optimal E:I ratio should be higher for species with a higher sparsity in the neural activity and 2) the character of inhibitory synaptic distributions and firing rates should change depending on E:I ratio. Our findings, which are supported by our new preliminary analyses of publicly available data, provide the first quantitative and testable hypothesis based on optimal coding models for the distribution of excitatory and inhibitory neural types in the mammalian sensory cortices.

Author summary

Neurons in the brain come in two main types: excitatory and inhibitory. The interplay between them shapes neural computation. Despite brain sizes varying by several orders of magnitude across species, the ratio of excitatory and inhibitory sub-populations (E:I ratio) remains relatively constant, and we don’t know why. Simulations of theoretical models of the brain can help answer such questions, especially when experiments are prohibitive or impossible. Here we placed one such theoretical model of sensory coding (‘sparse coding’, which minimizes the number of simultaneously active neurons) under a biophysical ‘volume’ constraint that fixes the total number of neurons available. We vary the E:I ratio in the model (which cannot be done in experiments), and reveal an optimal E:I ratio where the representation of the sensory stimulus and energy consumption within the circuit are concurrently optimal. We also show that varying the population sparsity changes the optimal E:I ratio, spanning the relatively narrow ranges observed in biology. Crucially, this minimally parameterized theoretical model makes predictions about structure (recurrent connectivity) and activity (population sparsity) in neural circuits with different E:I ratios (i.e., different species), of which we verify the latter in a first-of-its-kind inter-species comparison using newly publicly available data.

Introduction

Neural circuits shape cortical activity to perform computation, including the encoding and processing of sensory information. Understanding the design principles as well as the functional computations in such circuits has been a foundational challenge of neuroscience, with potential applications to a wide variety of fields ranging from human health to artificial intelligence. However, the structural complexity and dynamic response properties of these circuits present significant challenges to uncovering their fundamental governing principles. Some of the brain’s structural properties are extremely variable across species and individuals [1], while properties such as the structure of cortical microcircuits seem to be reasonably conserved [2–5]. These conserved properties offer hope of revealing general principles of how canonical neural computations are organized.

While multiple experimental [6–13] and computational [14–16] studies have offered insights about inhibitory interneurons at different scales, their precise computational role in sensory information processing remains elusive. While the total number of cortical neurons varies by several orders of magnitude across species (e.g., mice—10^6 neurons [17], cats—10^8 neurons [18] and monkeys—10^9 neurons [19]), the relative abundance of excitatory and inhibitory neurons appears to be one of the better conserved structural properties of cortical microcircuits, making it a potentially important clue for determining neural circuit function. Morphological studies indicate that the ratios of excitatory to inhibitory neurons (E:I ratio) stay within a relatively narrow nominal range of 3:1–9:1 (i.e., inhibitory interneurons are 10%–25% of the neural population) across species and are consistent in different sensory areas within a species, despite significant variation in the number of neurons across both species and sensory areas (Table 1) [20–33].

Table 1. E:I ratios and # of Neurons in primary auditory (A1), visual (V1) and somatosensory (S1) cortices for different species from morphological studies.

https://doi.org/10.1371/journal.pcbi.1009642.t001

This relative constancy of the E:I ratio must be understood within the context of sensory computations. Inhibitory interneurons in sensory cortical microcircuits have connectivity patterns contained within local circuits [2, 4], leading to inhibitory cells being generally viewed as performing a modulatory role in computation while excitatory cells code the sensory information directly [5]. For a given sensory cortical area, there are potential computational benefits to increasing the size of both the excitatory and the inhibitory subpopulations. For example, more excitatory cells may provide higher fidelity stimulus encoding, while more inhibitory cells may enable more complexity or accuracy in the computations being performed. However, volume is a critical constrained resource for cortical structures [34], and increasing one of these subpopulations in a fixed volume necessitates decreasing the other. We propose that the narrow variability of the E:I ratio can be explained as an optimal trade-off in the fidelity of the sensory representation contained in the excitatory subpopulation vs. the fidelity of the information processing mediated by the inhibitory subpopulation. Understanding this trade-off may play a critical role in determining the principles underlying the structure and function of cortical circuits.

Specifically, we propose to understand this trade-off in the context of efficient coding models [35–38] under a volume constraint (Fig 1). In this initial study, the volume constraint is defined as the total number of neurons and does not explicitly model either volume differences by cell type or non-somatic elements such as axons and dendrites (though those extensions could be added in the future). We implement an efficient coding model known as sparse coding [37, 39], which uses recurrent circuit computations to encode a stimulus in the excitatory cell activities (denoted a_j) using as few excitatory neurons as possible (i.e., having high population sparsity). In detail, the sparse coding model proposes encoding a stimulus (e.g., an image) I in terms of the sum of the activity a_j of excitatory neurons with receptive fields ϕ_j, by minimizing a cost function that balances representation error (i.e., fidelity) with the sparsity of the neural population activity:

min_{a_j} (1/2) ‖I − Σ_j a_j ϕ_j‖² + λ Σ_j |a_j|    (1)

Note that the population sparsity constraint only includes excitatory cells and does not include the activity of the inhibitory cells necessary to enact the required computation (i.e., solve the optimization program). Sparse coding models have been shown to account for many observed response properties of the visual cortex [37, 39, 40] and can be implemented in biophysically plausible recurrent circuits [41, 42] with a desired sparsity level and a given E:I ratio [14, 15] (optimally approximating the ideal circuit implementation). See Methods for details. Recent work has also shown that increasing the population of excitatory [15, 43] and inhibitory [15] cell types in sparse coding models can improve stimulus representation in models where the size of neural populations is unrestricted.
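The objective in Eq (1) can be minimized numerically with standard proximal-gradient methods. Below is a minimal iterative shrinkage-thresholding (ISTA) sketch of that optimization; the function names, step-size choice, and iteration count are illustrative assumptions, not the paper's network simulation (which uses a recurrent circuit implementation; see Methods).

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding: the proximal operator of the L1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code_ista(I, Phi, lam, n_iter=500):
    """Minimize 0.5 * ||I - Phi @ a||^2 + lam * ||a||_1 over coefficients a.

    I   : (n_pixels,) flattened image patch
    Phi : (n_pixels, n_neurons) dictionary; columns are receptive fields phi_j
    lam : sparsity penalty (the lambda of Eq 1)
    """
    # Step size 1/L, with L the Lipschitz constant of the quadratic term
    L = np.linalg.norm(Phi, ord=2) ** 2
    a = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ a - I)          # gradient of the fidelity term
        a = soft_threshold(a - grad / L, lam / L)  # gradient step + shrinkage
    return a
```

With this step size the objective is non-increasing over iterations, and the soft-thresholding step sets most coefficients exactly to zero, yielding the sparse population activity the model describes.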

Fig 1. Optimal E:I ratio for coding fidelity.

(top row) A sparse coding model is placed under a volume constraint by restricting the total number of neurons to N. Excitatory neurons receive recurrent as well as feedforward (stimulus) input and are responsible for coding the stimulus. Inhibitory interneurons are driven by recurrent excitatory inputs, and enable accurate computation of the neural encoding, inducing sparsity in the excitatory neurons. (middle row) We vary the relative size of the excitatory (NE) and inhibitory (NI) subpopulations and evaluate the model at different E:I ratios under the volume constraint, N = NE + NI. (bottom row) We show that coding fidelity is optimal (boxed image at 6:1) at a unique, biologically plausible E:I ratio for the fixed volume. We evaluate models coding 16 × 16 = 256 pixel natural image patches [37] with N = 1200 (≈5× overcomplete representation).

https://doi.org/10.1371/journal.pcbi.1009642.g001

Here we show that, for a fixed volume constraint (using neural population size as a surrogate), there exists an optimal E:I ratio where the stimulus representation, the sparseness of the sensory representation, and the metabolic efficiency of the entire network are all optimized in the model. This model-optimal E:I ratio is consistent with observed biophysical ranges, and it varies based on the sparsity level of the encoding, potentially accounting for species-specific variations within the observed biophysical ranges. Furthermore, higher optimal E:I ratios (at higher sparsity levels) produce inhibitory synaptic distributions that are more specific while approximately preserving the total inhibitory influence in the circuit (to retain balanced levels of excitation and inhibition). These results constitute specific and testable theoretical predictions requiring comparative neurophysiology and neuroanatomy experiments for full validation. We also perform novel analyses of experimental recordings of neural populations in area V1 for multiple species (mice, cats and monkeys), constituting the first steps in comparative analyses of population sparsity in large-scale electrophysiology recordings. The results of this analysis are consistent with the model prediction of a correlation between E:I ratio and population sparsity level. Taken together, these results suggest that a combination of optimal coding models with physical constraints (e.g., volume) may provide a potential normative explanation for conserved structures observed in sensory cortical microcircuits across species.

Results

We analyze sparse coding models optimized for a variety of E:I ratios (i.e., the ratio of the number of excitatory cells to inhibitory cells while fixing the total number of neurons) and sparsity levels (denoted by model parameter λ) by unsupervised training using a natural image database [37]. See Methods for details. The performance of these models is quantified using stimulus reconstruction error, population sparsity [44], and metabolic energy consumption [45].

For a sparse coding model trained with the sparsity constraint λ = 0.15, we observe that the reconstruction error is minimized at a ratio of ∼6.5:1 (Fig 2A). The reconstruction error is a surrogate measure of the fidelity of the stimulus information preserved in the encoding. As the E:I ratio increases from 1:1, the increase in E cells leads to greater receptive field diversity in the E cell subpopulation [15, 43], allowing for better encoding of the stimulus. This increased representational capacity produces a gradual decline in the reconstruction error. As the E:I ratio increases beyond the optimum, the declining number of inhibitory interneurons results in insufficiently diverse inhibition to accurately solve the desired encoding, leading to a rapid increase in the reconstruction error. Results are independent of the size of the database (10 images, 512 × 512 pixels each) used for training (see Fig A and Supplementary Methods in S1 Text). The tolerance in calculating the optimal reconstruction error was negligible compared to the changes in the error due to varying the E:I ratio.

Fig 2. Optimal E:I ratios are in biophysically observed ranges, increase with sparsity (λ) and coincide for multiple performance measures.

The performance of sparse coding models subject to a constraint of N = 1200 total neurons, under different sparsity constraints (λ ∈ [0.0004, 0.30]), and using stimuli (100 image patches, 16 × 16 pixels) drawn from a database of 10 natural 512 × 512 pixel images [37]. Performance measures are normalized per Eq 8, and standard error (depicted with a shaded band, shown only for λ = 0.15 for clarity) over the natural image database is estimated using a bootstrap procedure (see Supplementary Methods in S1 Text). Markers denote the optimal E:I ratio for models at each sparsity constraint for each performance measure. Optimal E:I ratios for different performance measures are essentially identical, as illustrated by vertical lines connecting markers across the 3 plots, and increases in model sparsity (λ) correspond to increases in the optimal E:I ratio for each performance measure (also see Fig 4A and Fig D in S1 Text). (A) The coding fidelity for sparse coding models with different sparsity constraints, quantified by the normalized reconstruction error. The coding performance is optimized at an E:I ratio of approximately 6.5:1 (in a biophysically plausible range), with values above (below) that number suffering from lack of diversity in the inhibitory (excitatory) cell population. (B) Population activity density (1 − population sparsity) for a sparse coding model (see Methods) is minimized at nearly the same optimal E:I ratio as the coding fidelity. (C) Lastly, a metabolic energy consumption measure [45] (see Methods) reveals minimal metabolic energy consumption at nearly the same E:I ratio as the coding fidelity and population density.

https://doi.org/10.1371/journal.pcbi.1009642.g002

Efficient coding models seek a parsimonious representation of sensory inputs in the excitatory neural activity in addition to an accurate encoding. To quantify this parsimony, we plot the density of activity of excitatory neurons in the sparse coding model (Fig 2B), as measured by population density, an additive inverse of the commonly used modified Treves-Rolls (TR) metric [44] that quantifies population sparsity (see Methods). Notably, the population activity density is minimized (i.e., population sparsity is maximized) at approximately the same E:I ratio that optimizes reconstruction fidelity. At low E:I ratios, the stimulus representation is not rich enough to admit a sparse representation of natural scene statistics with available receptive fields of excitatory cells. With high E:I ratios, the available inhibition is insufficient to achieve sparse population activity in the excitatory cells.
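As a concrete reference for this measure, the Treves-Rolls population sparsity and the density plotted in Fig 2B can be sketched as follows. This uses one common "modified" normalization (Vinje-Gallant style) so the value lies in [0, 1]; the paper's exact variant is specified in its Methods, and the convention for a silent population here is our assumption.

```python
import numpy as np

def treves_rolls_sparsity(rates):
    """Population sparsity of nonnegative activity: 1 = one active unit, 0 = uniform."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    if not np.any(r):
        return 1.0  # assumed convention for an entirely silent population
    activity = (r.mean() ** 2) / np.mean(r ** 2)  # Treves-Rolls measure, in (0, 1]
    return (1.0 - activity) / (1.0 - 1.0 / n)     # normalization to [0, 1]

def population_density(rates):
    """Activity density = 1 - population sparsity (the quantity in Fig 2B)."""
    return 1.0 - treves_rolls_sparsity(rates)
```

A uniformly active population scores a sparsity of 0 (density 1), while a population with a single active neuron scores 1 (density 0), matching the intuition that sparser codes concentrate activity in fewer cells.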

A common rationale for the efficient coding hypothesis (including sparse coding models) is that efficient codes may reduce the metabolic cost of the neural activity [4649]. While decreasing the mean firing rate of excitatory neurons would decrease the metabolic cost of producing action potentials in those cells, it is not clear which network architecture minimizes the total metabolic energy consumption when accounting for the cost of supporting the non-sparse activity of the inhibitory interneurons [9, 50, 51]. We quantify and plot (Fig 2C) the total metabolic energy cost of the network (see Methods). Once again, the optimal E:I ratio achieving minimal energy consumption for different sparsity constraints is approximately the same E:I ratio that optimizes reconstruction fidelity and excitatory population sparsity.
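A minimal sketch of an activity-based energy measure of this general kind follows, assuming a simple per-spike cost for each subpopulation. The coefficients and the functional form are illustrative placeholders, not the exact measure from [45] used in this paper's Methods.

```python
import numpy as np

def network_energy(e_rates, i_rates, cost_e=1.0, cost_i=1.0):
    """Activity-based metabolic cost: per-spike costs summed over both populations.

    cost_e / cost_i are illustrative per-spike cost coefficients; a full
    accounting would also include synaptic and resting costs.
    """
    return (cost_e * float(np.sum(np.abs(e_rates)))
            + cost_i * float(np.sum(np.abs(i_rates))))
```

The key point such a measure captures is that a sparser excitatory code does not automatically minimize total cost: the inhibitory activity required to enforce that sparsity enters the sum as well.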

We find that the optimal E:I ratios for all three performance measures (reconstruction error, population sparsity, metabolic energy consumption) are consistent for a given sparsity level (λ), underscoring the existence of a clear optimal E:I ratio that is robust to the choice of optimality criterion. We observe that increasing the sparsity level (λ) leads to a higher optimal E:I ratio in all three performance measures (Fig 2). Crucially, as we will show in detailed analysis later, we find that the model-optimal E:I ratios fall within the relatively narrow ranges observed in biology for species whose cortical sizes vary 1000-fold (Fig 4A). Although it appears counterintuitive that a model with greater sparsity achieves optimal performance with fewer inhibitory interneurons (i.e., a higher optimal E:I ratio), we elaborate the underlying reasons below by examining the detailed inhibitory synaptic structure.

While networks optimized for different sparsity levels have different optimal E:I cell type ratios, it is unclear if either the synaptic distribution (a structural measure) or the total amount of inhibitory activity (a functional measure) change as well. To understand potential structural changes, we first examined the structural nature of the inhibitory interactions in the recurrent network at different sparsity levels (λ) and optimal E:I ratios. We observe that there are systematic changes in the distribution of weights for Inhibitory→Excitatory connections (Fig 3B) as λ changes. In particular, lower sparsity levels (λ) corresponding to lower optimal E:I ratios result in inhibitory synapse distributions that have heavier tails and higher kurtosis (Fig 3C). Therefore, at lower E:I ratios when there are relatively more inhibitory interneurons in the circuit, the individual interneurons have more targeted projections to deliver inhibition more selectively to shape excitatory activity (Fig 3A and B.i and B.iii in S1 Text).
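The tail-heaviness comparison underlying Fig 3C can be made on any sample of I→E weights with a standard excess-kurtosis estimate. The standalone sketch below (the function name and the synthetic weight samples are our own, not the paper's learned weights) illustrates that a heavy-tailed distribution, i.e., a few strong targeted synapses among many weak ones, scores higher than a diffuse Gaussian-like one.

```python
import numpy as np

def excess_kurtosis(w):
    """Sample excess kurtosis; heavy-tailed (targeted) weight distributions score > 0."""
    w = np.asarray(w, dtype=float)
    z = (w - w.mean()) / w.std()
    return float(np.mean(z ** 4) - 3.0)

# Illustrative synthetic "weight" samples: Laplace (heavy-tailed: a few strong
# synapses, many weak ones) vs. Gaussian (diffuse, broadly similar strengths).
rng = np.random.default_rng(0)
targeted = rng.laplace(size=200_000)
diffuse = rng.standard_normal(200_000)
```

For these sample sizes the estimates land near the theoretical values (excess kurtosis 3 for the Laplace distribution, 0 for the Gaussian), so the targeted sample reliably scores higher.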

Fig 3. Structure and function of model inhibition change with sparsity.

(A) An illustration visualizing changing weight distributions from the perspective of an inhibitory interneuron. As sparsity (λ) increases, the proportion of stronger (solid lines) I→E projections increases and the number of weaker (dashed lines) I→E projections dwindles. (B) Estimated probability density functions for the inhibitory-to-excitatory connection weights in the optimal computational models at different sparsity levels reveal that an increasing fraction of inhibitory synapses are stronger as sparsity increases. (C) Estimated kurtosis vs. sparsity quantifies the changes visible in the distributions, demonstrating that inhibition is more targeted and less global at lower sparsity levels with smaller E:I ratios. (D) With increasing sparsity (corresponding to higher optimal E:I ratios), the inhibitory subpopulation’s mean activity level declines (p < 10^−8, significant after accounting for multiple comparisons; except for λ = 0.1 and λ = 0.2) and becomes less diverse, exhibiting a lower standard deviation (p < 10^−8, significant after accounting for multiple comparisons). (E) Despite the changes in inhibitory structure and function due to changes in sparsity level (and optimal E:I cell type ratio), the changes to inhibitory synaptic distributions and firing rates counteract each other so that the total inhibitory influence in the network remains constant and the circuit maintains balance between the recurrent excitatory and inhibitory activity.

https://doi.org/10.1371/journal.pcbi.1009642.g003

Functionally, the total amount of inhibitory influence in a circuit is a combination of the spiking activity in the inhibitory interneurons and the total strengths of the synapses from inhibitory to excitatory neurons. We next examined the inhibitory activity in the recurrent network at different sparsity levels (λ) and optimal E:I ratios. We observe that lower λ, corresponding to lower optimal E:I ratios, results in higher average activity levels per cell (p < 10^−8 using a t-test, significant after accounting for multiple comparisons; except for λ = 0.1 and λ = 0.2) and more diverse inhibition reflected by higher standard deviations (p < 10^−8 using an F-test, significant after accounting for multiple comparisons) across the relatively larger inhibitory subpopulation (Fig 3D). Despite significant changes in the synaptic structure and firing rates of inhibitory interneurons as λ (and the E:I cell type ratio) changes, the total amount of inhibitory influence in the network does not change substantially (Fig 3E and Fig C in S1 Text). Specifically, as λ increases, the reduction in inhibitory subpopulation size and firing rates is offset by the broader tuning of the inhibitory synapses so that the balance between total excitation and inhibition in the network remains relatively constant in a stable regime (Fig B.ii in S1 Text).

While theoretical modeling often assumes that the sparsity level of an efficient coding model is an unknown parameter that can be fit to data, the analysis above predicts that optimal efficient coding networks should have E:I ratios correlated with population sparsity (Fig 4). Unfortunately, despite sporadic characterizations of population sparsity reported in the literature (with different data types and analysis methods), we lack a comparative analysis of population sparsity across species. In new analyses of recent publicly available datasets comprising large-scale V1 electrophysiology recordings, we evaluated population sparsity in studies of 56 mice [52], 2–3 non-human primates (macaques) [53, 54], and 1 cat [55] featuring natural visual stimuli (movies, images). The similarly low sparsity levels observed in monkeys and cats (E:I = 4–4.3:1), as well as their contrast with higher sparsity levels in mice (E:I = 5.7–9:1), are consistent with the predictions of the efficient coding model in this study. Specifically, using a hierarchical bootstrap procedure [56] (see Methods and Supplementary Methods and Fig E in S1 Text) to compare population sparsity for different species, we observed that mice have much higher population sparsity (lower density) than monkeys and cats when viewing natural movies (Fig 4C, p_bootstrap < 10^−8). Similarly, mice exhibit higher population sparsity than monkeys (Fig 4B, p_bootstrap = 0.01966) in response to natural images.
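A hierarchical bootstrap of the kind cited ([56]) resamples at the subject level and then at the trial level before comparing group statistics, which respects the nesting of trials within subjects. The sketch below is a generic two-level, one-sided version with illustrative names; the paper's exact procedure is described in its Supplementary Methods.

```python
import numpy as np

def hierarchical_bootstrap_diff(groups_a, groups_b, n_boot=10_000, seed=0):
    """One-sided hierarchical bootstrap comparing mean population sparsity.

    groups_a, groups_b : lists of 1-D arrays, one array of per-trial sparsity
                         values per subject (hierarchy: subjects, then trials)
    Returns the bootstrap probability that mean(group_a) <= mean(group_b).
    """
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_boot):
        means = []
        for groups in (groups_a, groups_b):
            # Level 1: resample subjects with replacement
            idx = rng.integers(len(groups), size=len(groups))
            # Level 2: resample trials with replacement within each drawn subject
            vals = np.concatenate([rng.choice(groups[i], size=len(groups[i]))
                                   for i in idx])
            means.append(vals.mean())
        if means[0] <= means[1]:
            count += 1
    return count / n_boot
```

For two clearly separated groups the returned probability approaches 0 (or 1 for the reversed comparison), playing the role of the p_bootstrap values reported above.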

Fig 4. Model predictions vs. experimental data.

(A) (Left Y and Bottom X axes) Optimal E:I ratio based on normalized reconstruction error (see Fig D in S1 Text for other performance measures) as a function of model sparsity constraint λ is depicted by the solid line (mean) with variability (± standard error) denoted by the shaded band. (Right Y and Top X axes) The population sparsity (TR) measure computed for electrophysiology data from experimental studies in mice [52], non-human primates (macaques) [53, 54] and cats [55] is shown (mean (markers) ± standard error (horizontal error bars)) as a function of observed E:I ratio ranges in biology (vertical error bars). Unfilled markers represent natural images and filled markers represent natural movies. (B, C) Interspecies comparisons. Statistical significance of hypotheses based on the model prediction (i.e., higher E:I ratio in biology corresponds to higher population sparsity), examined via inter-species population sparsity comparisons with all available data using hierarchical bootstrapping. * and *** represent p_bootstrap < 0.05 and p_bootstrap < 0.0001, respectively. (B) For natural images, mice (E:I = 5.7–9:1) exhibit higher population sparsity compared to monkeys (E:I = 4–4.3:1), p_bootstrap = 0.01966. (C) For natural movies, mice (E:I = 5.7–9:1) exhibit higher population sparsity than both monkeys (E:I = 4–4.3:1) and cats (E:I = 4:1), p_bootstrap < 10^−8 for both, which is significant after accounting for multiple comparisons.

https://doi.org/10.1371/journal.pcbi.1009642.g004

Discussion

Using only a sparse coding model for early vision and a constraint on the volume (using total number of neurons as a surrogate), we show the emergence of optimal quality and efficiency of stimulus encoding at E:I ratios consistent with the narrow range observed in biology across species whose cortical sizes vary 1000 fold. Increasing the E:I ratio improves the representational capacity of the E cell subpopulation through the potential for greater receptive field diversity [15, 43], but at the expense of reducing the ability of the I cells to produce accurate circuit computations to implement the encoding rule. Decreasing the E:I ratio has an opposite effect, increasing the I cells available to improve computational accuracy for the encoding rule at the expense of the representational capacity of the E cell subpopulation, whose receptive field diversity shrinks, diminishing its ability to represent rich sensory statistics.

This model makes several predictions that are testable with comparative electrophysiology experiments. The primary result of this study predicts that the optimal E:I ratio is directly correlated with population sparsity, such that sparser population activity in a species will correspond to a higher E:I cell type ratio (Fig 4). In secondary results, this model also predicts that species with higher sparsity levels will have inhibitory interneuron subpopulations with both lower average firing rates that are more concentrated around the mean and lower kurtosis of the synaptic distribution than species with lower sparsity levels. These predictions are notable because it is rare for computational theories to make specific and measurable predictions about the relationship between functional and morphological properties of neural systems.

The result that networks with a higher level of population sparsity in the excitatory subpopulation are optimized with fewer inhibitory neurons (i.e., higher E:I ratio) may appear counter-intuitive given the apparent need for increased inhibition to achieve higher sparsity. However, a closer look at the specific structure in the inhibitory synaptic distribution (see Methods and S1 Text) provides some insight into this result. Models having higher population sparsity learn to represent natural stimuli differently from models at lower population sparsity. Specifically, in models with higher population sparsity, the smaller inhibitory subpopulation contains cells that have relatively lower firing rates and global synaptic connections, indicating inhibition that is more broadly tuned and less selective than in models with lower population sparsity. This model prediction is consistent with the contrast observed in experimental results from cats (E:I = 4:1) [57] and mice (E:I = 5.7–9:1) [58]. In contrast, models at lower population sparsity have inhibitory interneurons with relatively higher firing rates and synapses that are targeted to specific excitatory sub-populations (Fig 3). We note that while we discuss inhibitory interneurons generally here, we have not attempted to correspond the inhibitory components of the model to a specific genetic subtype of inhibitory interneuron. Future experimental tests of the predictions from this model can and should address the empirical question of which inhibitory interneuron subtypes are the best fit to the inhibitory influences of this model.

To perform a preliminary evaluation of this model prediction with data that is currently available, we analyzed population sparsity in area V1 of mice [52], non human primates (macaques) [53, 54] and cats [55] using publicly available electrophysiology data sets. We found that the population sparsity trends in experimental data are consistent with the global trends predicted by the model. Specifically, for a given stimulus type, species with higher E:I ratios demonstrated higher population sparsity levels. To our knowledge this is the first comparative analysis of population sparsity across species, providing valuable insight for future computational and theoretical work beyond the specific predictions of this model.

Despite this apparent agreement between experimental data sets and model predictions, the predicted correlation between optimal E:I ratio and population sparsity is challenging to thoroughly evaluate empirically because the literature currently lacks the necessary reports to provide a substantive comparative analysis of population sparsity between species. Large scale population recordings necessary to evaluate sparsity have only become possible relatively recently, and comparability of the very limited publicly available experimental data from existing studies is hampered by variability arising from differences such as the number of subjects (1 cat, 2–3 monkeys, 56 mice), recording methodology, experimental conditions (e.g., type and quantity of anaesthesia administered), brain area, number and type of neurons, stimuli, methodological considerations (e.g., counting cells, spike sorting techniques) and analysis parameters (e.g., window size substantially influences sparsity measures). The data we analyzed come from experiments whose design was not aimed at facilitating comparisons like those made in this study, and experiments that control for these sources of variability may allow for more robust evaluation of our (and future) model predictions. For example, high inter-subject variability gives rise to large error bars for sparsity estimation in monkeys looking at natural images in Fig 4A, because one of the three monkeys differs from the rest. As another example, population recordings analyzed in this study feature differing experimental conditions, with lightly anesthetized mice [52] compared to heavily anesthetized and paralyzed monkeys [53, 54] and cats [55]. Since anesthesia is known to depress neural activity [5961], we anticipate population sparsity for monkeys/cats is elevated. This bias would make it more difficult to observe the significant differences in sparsity level reported in this study, so it is unlikely to be a major confound in our analysis. 
However, further studies that explore population level activity in different sensory areas or under different experimental conditions may support/refute whether our model predictions apply more generally.

To illustrate the challenges with making comparative meta-analyses from data that was not collected for that purpose, we note that in addition to the data supporting the model predictions above, we have also encountered a limited number of contrasting exceptions that have known confounds that highlight the subtleties in such comparative analyses. For example, one study [62] captures V1 responses to natural stimuli in ferret and reports population sparsity (TR = 0.42) much lower than cats and monkeys despite a higher E:I ratio of 5:1 [63]. However, this study self-identifies a critical methodological issue that likely resulted in overestimated firing rates due to the use of multi-unit signals instead of isolated single units to compute sparseness, deflating the estimated population sparsity. For another example, [64] captures population sparsity in mouse V1 using spike trains estimated from calcium imaging and reports a lower population sparsity (TR = 0.45–0.55) than a recent calcium imaging study [65] (TR = 0.81), as well as the results from the analysis of mouse electrophysiology data presented in this paper. Closer examination of this inconsistency reveals that [65] features specific targeting of excitatory neurons only while [64] does not employ cell-specific targeting, which can deflate population sparsity estimates due to the elevated firing rates of inhibitory interneurons [9, 50, 51]. The confounding effects present in these two conflicting examples from the literature illustrate a number of important methodological issues to be carefully addressed in future experimental work that aims to perform a conclusive comparative analysis.

The results of this study represent an early step toward understanding the connection between optimal coding rules and the diversity of sensory cortical structure in mammals. We expect that additional verifiable predictions will be possible when more relevant biological details are introduced into the models. For example, our analysis does not make distinctions between different kinds of inhibitory interneurons, and future work may consider their relative contributions when evaluating the trade-off between computational accuracy and representational capacity. Similarly, modeling thalamic input into inhibitory cells may offer greater insight into the role of inhibition beyond modulating computation performed by the excitatory sub-population.

Finally, we note that the performance curves (Fig 2) are asymmetric, with performance degrading very quickly at E:I ratios higher than the optima. While normative models can never ensure they are capturing all constraints that drive evolutionary or developmental goals for a system, this asymmetry indicates that the constraints considered here are more robust to decreasing E:I ratios than to increasing E:I ratios. This prediction is consistent with the (limited) currently available morphological data (Table 1), which shows that the distribution of E:I ratios across species is asymmetric and skewed to smaller values around the mode. Additional morphological studies on animal models not listed in Table 1 may provide additional support or refutation of this prediction. More broadly, we expect that close interplay between computational and experimental studies will further advance our ability to merge functional and physical constraints to better understand the relationship between the information processing in the brain and its structure.

Methods

Sparse coding model of visual computation

Among neural coding models instantiating the efficient coding hypothesis, we concentrate on the sparse coding model [37] that aims to minimize the number of simultaneously active neurons for each stimulus. This model is sufficient to explain the emergence of classical and nonclassical response properties in V1 [37, 42, 66] and is consistent with recent electrophysiological experiments [67–69]. Furthermore, the sparse coding model can be implemented in recurrent network architectures with varying degrees of biophysical plausibility [38, 42, 70–72], including distinct inhibitory interneuron populations [14, 15].

Specifically, in the sparse coding model, a set of neurons encodes an image intensity field I(x, y) through the vector of activities a = [a1, a2, …] (i.e., firing rates) by minimizing the so-called energy function:

E = \sum_{\mu=1}^{M} \left[ \left\| I^{\mu}(x, y) - \sum_{i} a_i^{\mu} \phi_i(x, y) \right\|_2^2 + \lambda \sum_{i} |a_i^{\mu}| \right], (2)

where the activity of each neuron ai is associated with a stimulus feature ϕi(x, y) (similar to a receptive field), and μ = 1…M sums over all images in a training set. This energy function uses the scalar parameter λ ∈ [0.0004, 0.4] to balance the preservation of stimulus information (measured by the mean-squared reconstruction error in the first term) with the efficiency of the representation (measured by the sum of the activity magnitudes in the second term). We choose the L1 norm for quantifying the efficiency of the representation since it is known to promote sparsity and is (analytically and computationally) tractable. Higher values of λ encourage more sparsity and lower values prioritize the fidelity of the stimulus encoding. As has been shown in the past, optimizing the feature set ϕi(x, y) for this coding rule using a corpus of natural images [37] will produce a set of features that resemble the measured receptive fields in primary visual cortex [37, 42].
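
The energy function of Eq (2) is straightforward to evaluate numerically. The following is a minimal numpy sketch of that computation; the function name and the vectorized conventions (image patch flattened into a vector I, dictionary stored as the columns of a matrix Phi) are our illustration, not code from the original study.

```python
import numpy as np

def sparse_coding_energy(I, Phi, a, lam):
    """Sparse coding energy (Eq 2) for a single image patch.

    I   : (P,) vectorized image patch (e.g., P = 256 for 16x16 pixels)
    Phi : (P, N_E) dictionary; column i is a vectorized feature phi_i
    a   : (N_E,) neural activities (firing rates)
    lam : sparsity trade-off parameter lambda
    """
    reconstruction_error = np.sum((I - Phi @ a) ** 2)  # fidelity term
    sparsity_penalty = lam * np.sum(np.abs(a))         # L1 efficiency term
    return reconstruction_error + sparsity_penalty
```

For a training set, the total energy is simply this quantity summed over all M patches.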

Dynamical system implementation of the sparse coding model

To encode a specified image, we consider a recurrent dynamical circuit model [70] that provably solves the optimization in Eq (2) [73, 74] (including alternative sparsity penalties [75]) in non-spiking or spiking [71, 76, 77] network architectures. Specifically, the system dynamics for this encoding model are:

\tau \dot{u}(t) = \Phi^T I - u(t) - W a(t), \qquad a(t) = T_\lambda(u(t)), (3)

where I is the vectorized version of the stimulus, Φ is a matrix with a vectorized version of the dictionary element ϕi(x, y) in the ith column, the vector u contains internal state variables (e.g., membrane potentials), the vector a contains external activations (e.g., spike rates) of excitatory neurons that represent the stimulus, the matrix W governs the connectivity between the neurons (requiring inhibitory interneurons for implementation), τ is the system time constant, and Tλ(⋅) is a pointwise nonlinear activation function (i.e., a soft-thresholding function).

When the recurrent influences in the network are governed by W = G − D = Φ^TΦ − D, where G = Φ^TΦ is a Grammian matrix and D is the identity matrix, then the network above is guaranteed to converge to the solution of the sparse coding objective function above [70]. In this case, the required connectivities between the excitatory cells (the principal cells encoding the stimulus) must be mediated by a combination of direct excitatory synapses (negative elements of G) and a local population of inhibitory interneurons (positive elements of G). Deviations from this network structure may result in more efficient implementations (e.g., requiring fewer inhibitory neurons), but will have the consequence of only approximately solving the desired coding objective.
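
As a concrete illustration, the dynamics of Eq (3) with W = Φ^TΦ − D can be simulated by simple Euler integration. This is our own minimal numpy sketch; the function names and the integration parameters (n_steps, dt, tau) are illustrative choices rather than values from the study.

```python
import numpy as np

def soft_threshold(u, lam):
    """Pointwise soft-thresholding nonlinearity T_lambda."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca_encode(I, Phi, lam, n_steps=500, dt=0.01, tau=0.1):
    """Euler integration of the recurrent dynamics in Eq (3).

    I   : (P,) vectorized stimulus
    Phi : (P, N_E) dictionary matrix
    Returns the converged activities a (the sparse code for I).
    """
    N_E = Phi.shape[1]
    W = Phi.T @ Phi - np.eye(N_E)   # recurrent connectivity G - D
    drive = Phi.T @ I               # feedforward input to each neuron
    u = np.zeros(N_E)               # internal states (membrane potentials)
    for _ in range(n_steps):
        a = soft_threshold(u, lam)
        u += (dt / tau) * (drive - u - W @ a)
    return soft_threshold(u, lam)
```

For an orthonormal dictionary the recurrent term vanishes and the code converges to the soft-thresholded feedforward drive, which is a useful sanity check on any implementation.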

We seek to form a circuit model that approximates the ideal dynamical system above as closely as possible under a fixed size for the inhibitory interneuron population implementing G. To reflect the disynaptic connections onto an inhibitory population and back to the excitatory population, consider the factorization of this connectivity matrix using the singular value decomposition (SVD): G = UΣVT. If we consider only the positive entries in this representation as in [14], each column of V contains the synaptic weights of the connections onto a single inhibitory cell, the corresponding element in the diagonal matrix Σ represents a dendritic gain term, and the corresponding column of U represents the synaptic weights from that inhibitory cell back onto the population of excitatory principal cells. Following previous work [14], we can use the truncated SVD to find the closest approximation (in terms of the Frobenius norm) to G with a specified rank, which corresponds to specifying the size of the inhibitory population.

Experimental and computational studies have reported that, depending upon factors such as location, timing and magnitude, PSPs arriving at the dendritic tree can produce sub-linear, supra-linear or linear gain at the soma [78, 79]. Interpreting Σ as a gain term enables us to incorporate the biologically realistic notion of dendritic gain, arising from multiple projections from an inhibitory interneuron to an excitatory neuron, into an otherwise abstract circuit model limited to representing a single projection. Under this interpretation, we estimate the activity of inhibitory interneurons as b = Va.
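
The truncated-SVD construction of the inhibitory population can be sketched as follows. The helper names are ours; note that because the columns of V hold the synaptic weights onto each inhibitory cell, applying V transposed to the excitatory rates implements the estimate b = Va used in the text.

```python
import numpy as np

def inhibitory_circuit(Phi, N_I):
    """Best rank-N_I approximation of G = Phi^T Phi via truncated SVD.

    Returns (U_r, S_r, V_r): column j of V_r holds the synapses onto
    inhibitory cell j, S_r[j] is its dendritic gain, and column j of
    U_r holds the synapses back onto the excitatory population.
    """
    G = Phi.T @ Phi
    U, S, Vt = np.linalg.svd(G)        # singular values sorted descending
    return U[:, :N_I], S[:N_I], Vt[:N_I, :].T

def inhibitory_activity(V_r, a):
    """Estimated interneuron activity for excitatory rates a."""
    return V_r.T @ a
```

The rank-N_I product U_r diag(S_r) V_r^T is the Frobenius-optimal approximation to G, so shrinking N_I (fewer inhibitory cells) degrades the recurrent computation gracefully.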

Implementation of a constraint on the total number of neurons

For this study, we represent 16x16 pixel image patches using N = NI + NE = 1200 total neurons to correspond to a fixed volume constraint (implicitly assuming approximately constant volume per neuron). For each E:I ratio tested, we trained a dictionary on natural images [37], optimized for NE excitatory cells. After training the dictionary, we implemented the dynamical system described above with the best approximation to the ideal circuit dynamics using NI inhibitory cells.

In addition to evaluating the model at different E:I ratios, we also trained and evaluated models under different sparsity constraints (λ). For a given sparsity constraint (λ) and E:I ratio, we evaluate the network over an image patch database [37] using three different performance measures.

Performance measures

The first performance measure quantifies the coding fidelity of the model for the reconstruction of an image I encoded by the model. The stimulus reconstruction error is formulated as:

\mathrm{Error} = \left\| I - \Phi a \right\|_2^2. (4)

The second performance measure is population sparsity, using the modified Treves-Rolls (TR) metric [44]. TR scores lie between 0 and 1, with 1 being the highest sparsity. We computed model sparsity using the excitatory neuron firing rates (ai, i = 1, …, NE). Existing literature on experimental evidence for sparse activity in the cortex [50] indicates that the typically small inhibitory interneuron sub-population (ai, i = NE+1, …, N) is far more active than the excitatory neurons owing to its role in modulating the activity of the entire circuit. Thus sparsity is not expected to be a feature of this sub-population, and these neurons are not included in the TR metric:

S_{TR} = \frac{1 - \left( \sum_{i=1}^{N_E} a_i / N_E \right)^2 \Big/ \left( \sum_{i=1}^{N_E} a_i^2 / N_E \right)}{1 - 1/N_E}. (5)

We define Population Density (or Population Activity Density) as the complement of population sparsity:

\mathrm{Density} = 1 - S_{TR}. (6)

The TR metric is sensitive to the bin sizes used to evaluate spike trains, and smaller bin sizes lead to higher estimates of sparsity. This consideration does not affect the analysis of model activity (a), which is interpreted as a fixed firing rate. However, the inherent variability of spike trains in experimental data means that the choice of bin size does affect the population sparsity computation. In this study, a bin size of 100 ms is used for natural images, natural movies and spontaneous activity. The analysis for natural images is bound to a 100 ms bin size due to a 106 ms trial duration constraint in the monkey experimental data [53]. A direct comparison between the population sparsity of the model and of experimental data is not practical given the sensitivity of the TR metric to scaling, since the dynamic ranges of the model coefficients and of neural firing rates are very different.
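
The modified TR metric of Eq (5) and its density complement reduce to a few lines of numpy. This sketch is ours; the convention of returning maximal sparsity for a silent population is our choice for handling the otherwise-undefined all-zero case.

```python
import numpy as np

def treves_rolls_sparsity(rates):
    """Modified Treves-Rolls population sparsity (Eq 5).

    rates : firing rates of the N_E excitatory neurons in one time bin.
    Returns a value in [0, 1], with 1 being the highest sparsity.
    """
    r = np.asarray(rates, dtype=float)
    n = r.size
    if np.all(r == 0):
        return 1.0  # convention: a silent population is maximally sparse
    A = (r.mean() ** 2) / np.mean(r ** 2)   # activity ratio
    return (1.0 - A) / (1.0 - 1.0 / n)      # normalized to [0, 1]

def population_density(rates):
    """Population (activity) density: the complement of sparsity (Eq 6)."""
    return 1.0 - treves_rolls_sparsity(rates)
```

A single active neuron yields sparsity 1, while uniform activity across the population yields sparsity 0, matching the stated range of the metric.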

The third performance measure is an estimate of the metabolic energy consumption in sparse coding models constrained to a fixed total number of neurons. We compute this measure using metabolic energy consumption models for rodents and primates [45, 80], which are grounded in physiological and anatomical studies. The models estimate the metabolic energy consumption (ATP molecules/gm-minute) for cortical gray matter by aggregating estimates for the granular processes involved in its functioning. The processes include pumping out Na+ entering during signaling, glutamatergic signaling, glutamate recycling, post-synaptic actions of glutamate, pre-synaptic Ca2+ fluxes and glial cell activity. While the energy consumption associated with inhibition is thought to be somewhat less than that of excitation [45], we approximate the energy consumption of spiking activity as being equal in all neuron types due to the relatively small prevalence of inhibitory neurons and synapses in the population [45]. We have not included energy consumption due to glial cells because of their relatively small fraction of energy usage [81] and lack of a central role in the current modeling study.

For our study, we compute the metabolic energy consumption of sparse coding models constrained to a fixed total number of neurons using the rodent metabolic energy consumption model, which has two main components. The first component represents the energy expended to maintain resting potentials (3.42 × 10^8 ATP molecules/s-neuron), and the second represents energy spent to sustain action potentials at a given rate (7.1 × 10^8 ATP molecules/neuron-spike × firing rate (Hz)). These estimates are used to compare the performance of a model at different E:I ratios, and they are only weakly affected by whether the rodent or the primate metabolic energy consumption model is used:

E_{\mathrm{ATP}} = N \times 3.42 \times 10^8 + 7.1 \times 10^8 \times \sum_{i=1}^{N} r_i, (7)

where r_i is the firing rate (Hz) of neuron i and the total is in ATP molecules per second.
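
The two-component rodent energy budget described above (resting cost per neuron plus a per-spike cost scaled by firing rate) can be sketched directly; the function name is ours and the constants are the values quoted in the text.

```python
def metabolic_energy(n_neurons, firing_rates):
    """Metabolic energy estimate (Eq 7, rodent model of Attwell & Laughlin).

    n_neurons    : total number of neurons N (excitatory + inhibitory)
    firing_rates : per-neuron firing rates in Hz
    Returns ATP molecules consumed per second by the population.
    """
    RESTING_ATP_PER_S = 3.42e8   # maintain resting potential, per neuron
    ATP_PER_SPIKE = 7.1e8        # sustain one action potential
    return n_neurons * RESTING_ATP_PER_S + ATP_PER_SPIKE * sum(firing_rates)
```

Because the resting term scales with N and the spiking term with total activity, sparser codes at a fixed N directly lower the second term.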

Normalization of performance measures

Models with different sparsity constraints (λ) produce deviations against different baselines for reconstruction error, population sparsity/density and metabolic energy consumption. To compare different models, a common baseline is required. We normalize each of the measures above as a relative increase, expressed as a percentage of the difference between the value at an E:I ratio of 1:1 and the minimal value observed across all E:I ratios evaluated for a given model:

\mathrm{Normalized}(x) = 100 \times \frac{x - x_{\min}}{x_{1:1} - x_{\min}}, (8)

where x_{1:1} is the value of the measure at an E:I ratio of 1:1 and x_min is its minimum across all evaluated E:I ratios. While the normalization makes visualization easier, it does not change the qualitative results.
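
Reading Eq (8) as a rescaling of each measure's curve across E:I ratios, the normalization can be sketched as below (the function name and argument layout are our illustration):

```python
import numpy as np

def normalize_measure(values, ratios):
    """Normalize one performance measure across E:I ratios (Eq 8).

    values : the measure evaluated at each E:I ratio for one model (one lambda)
    ratios : the corresponding E:I ratios as floats (1.0 means 1:1)
    Returns the relative increase as a percentage of the gap between the
    1:1 baseline and the minimum across all ratios.
    """
    values = np.asarray(values, dtype=float)
    baseline = values[list(ratios).index(1.0)]  # value at E:I = 1:1
    vmin = values.min()
    return 100.0 * (values - vmin) / (baseline - vmin)
```

The minimum across ratios maps to 0% and the 1:1 baseline to 100%, putting curves from models with different λ on a common scale.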

Inter-species comparisons of experimental population sparsity

We computed the Population Sparsity (TR metric) for electrophysiology data sets from non-human primates (macaques) [53, 54], mice [52] and cats [55] that include natural images and natural movies as stimulus types. Neural recordings from each study can be viewed as multi-level data sets, with differences in the numbers of subjects, trials and neurons across them that can be represented as a hierarchy. For each trial in each data set, we computed a population sparsity value. To test the model prediction that higher optimal E:I ratios correspond to greater population sparsity against experimental data from different species, we implemented a hierarchical bootstrap procedure that is more conservative in controlling for Type-I errors with multi-level data sets than traditional paired tests [56]. For each species and stimulus type, we run the bootstrap 10,000 times, generating estimates of average population sparsity. We used the resulting distributions to test the hypotheses framed by the model predictions. The hierarchical organization of the bootstrap procedure for each species and stimulus type is described in detail in Supplementary Methods and Fig E in S1 Text.
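
The resampling logic of a hierarchical bootstrap can be sketched for a simple two-level (subject, then trial) hierarchy; the actual procedure (Fig E in S1 Text) has dataset-specific levels, and the function name, data layout and seed here are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def hierarchical_bootstrap_mean(data, n_boot=10_000):
    """Two-level hierarchical bootstrap of average population sparsity.

    data : dict mapping subject -> list of per-trial sparsity values.
    Resamples subjects with replacement, then trials within each chosen
    subject, and returns n_boot bootstrap estimates of the grand mean.
    """
    subjects = list(data)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        chosen = rng.choice(subjects, size=len(subjects), replace=True)
        subject_means = []
        for s in chosen:
            trials = np.asarray(data[s])
            resampled = rng.choice(trials, size=trials.size, replace=True)
            subject_means.append(resampled.mean())
        estimates[b] = np.mean(subject_means)
    return estimates
```

Comparing the resulting bootstrap distributions between species (e.g., the fraction of resamples in which one species' mean exceeds another's) gives the conservative test described above.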

Supporting information

S1 Text.

Figure A: Estimation of Bias in Statistical Analysis. Figure B: Structure of Recurrent Inhibition. Figure C: Recurrent Excitation vs Inhibition (Normalized Activity Profiles). Figure D: Model Predictions (all measures) vs Biology. Figure E: Hierarchical Bootstrap.

https://doi.org/10.1371/journal.pcbi.1009642.s001

(PDF)

Acknowledgments

We appreciate feedback from Bilal Haider, Josh Siegle and Adam Kohn on drafts of this manuscript.

References

  1. Hofman MA. On the evolution and geometry of the brain in mammals. Progress in Neurobiology. 1989;32(2):137–158. pmid:2645619
  2. Douglas RJ, Martin KA, Whitteridge D. A canonical microcircuit for neocortex. Neural Computation. 1989;1(4):480–488.
  3. DeFelipe J, Alonso-Nanclares L, Arellano JI. Microstructure of the neocortex: comparative aspects. Journal of Neurocytology. 2002;31(3-5):299–316. pmid:12815249
  4. Harris KD, Shepherd GM. The neocortical circuit: themes and variations. Nature Neuroscience. 2015;18(2):170–181. pmid:25622573
  5. Miller KD. Canonical computations of cerebral cortex. Current Opinion in Neurobiology. 2016;37:75–84. pmid:26868041
  6. Hirsch JA, Martinez LM, Pillai C, Alonso JM, Wang Q, Sommer FT. Functionally distinct inhibitory neurons at the first stage of visual cortical processing. Nature Neuroscience. 2003;6(12):1300–1308. pmid:14625553
  7. El-Boustani S, Sur M. Response-dependent dynamics of cell-specific inhibition in cortical networks in vivo. Nature Communications. 2014;5:5689. pmid:25504329
  8. Atallah BV, Bruns W, Carandini M, Scanziani M. Parvalbumin-expressing interneurons linearly transform cortical responses to visual stimuli. Neuron. 2012;73(1):159–170. pmid:22243754
  9. Haider B, Häusser M, Carandini M. Inhibition dominates sensory responses in the awake cortex. Nature. 2013;493(7430):97–100. pmid:23172139
  10. Haider B, McCormick DA. Rapid neocortical dynamics: cellular and network mechanisms. Neuron. 2009;62(2):171–189. pmid:19409263
  11. Haider B, Schulz DP, Häusser M, Carandini M. Millisecond coupling of local field potentials to synaptic currents in the awake visual cortex. Neuron. 2016;90(1):35–42. pmid:27021173
  12. Adesnik H. Layer-specific excitation/inhibition balances during neuronal synchronization in the visual cortex. Journal of Physiology. 2018;596(9):1639–1657. pmid:29313982
  13. Adesnik H, Bruns W, Taniguchi H, Huang ZJ, Scanziani M. A neural circuit for spatial summation in visual cortex. Nature. 2012;490(7419):226–231. pmid:23060193
  14. Zhu M, Rozell CJ. Modeling biologically realistic inhibitory interneurons in sensory coding models. PLoS Computational Biology. 2015;11(7):e1004353. pmid:26172289
  15. King PD, Zylberberg J, DeWeese MR. Inhibitory interneurons decorrelate excitatory cells to drive sparse code formation in a spiking model of V1. Journal of Neuroscience. 2013;33(13):5475–5485. pmid:23536063
  16. Litwin-Kumar A, Rosenbaum R, Doiron B. Inhibitory stabilization and visual coding in cortical circuits with multiple interneuron subtypes. Journal of Neurophysiology. 2016;115(3):1399–1409. pmid:26740531
  17. Herculano-Houzel S, Watson CR, Paxinos G. Distribution of neurons in functional areas of the mouse cerebral cortex reveals quantitatively different cortical zones. Frontiers in Neuroanatomy. 2013;7:35. pmid:24155697
  18. Jardim-Messeder D, Lambert K, Noctor S, Pestana FM, de Castro Leal ME, Bertelsen MF, et al. Dogs have the most neurons, though not the largest brain: trade-off between body mass and number of neurons in the cerebral cortex of large carnivoran species. Frontiers in Neuroanatomy. 2017;11:118. pmid:29311850
  19. Turner EC, Young NA, Reed JL, Collins CE, Flaherty DK, Gabi M, et al. Distributions of cells and neurons across the cortical sheet in old world macaques. Brain, Behavior and Evolution. 2016;88(1):1–13. pmid:27547956
  20. Winer JA, Larue DT. Populations of GABAergic neurons and axons in layer I of rat auditory cortex. Neuroscience. 1989;33(3):499–515. pmid:2636704
  21. Ouellet L, de Villers-Sidani E. Trajectory of the main GABAergic interneuron populations from early development to old age in the rat primary auditory cortex. Frontiers in Neuroanatomy. 2014;8:40. pmid:24917792
  22. Braitenberg V, Schüz A. Cortex: Statistics and Geometry of Neuronal Connectivity. 2nd ed. Springer-Verlag; 1998.
  23. Beaulieu C. Numerical data on neocortical neurons in adult rat, with special reference to the GABA population. Brain Research. 1993;609(1–2):284–292. pmid:8508310
  24. Peters A, Kara DA. The neuronal composition of area 17 of rat visual cortex. II. The nonpyramidal cells. Journal of Comparative Neurology. 1985;234(2):242–263. pmid:3988984
  25. Meyer HS, Schwarz D, Wimmer VC, Schmitt AC, Kerr JN, Sakmann B, et al. Inhibitory interneurons in a cortical column form hot zones of inhibition in layers 2 and 5A. Proceedings of the National Academy of Sciences. 2011;108(40):16807–16812.
  26. Prieto JJ, Peterson BA, Winer JA. Morphology and spatial distribution of GABAergic neurons in cat primary auditory cortex (AI). Journal of Comparative Neurology. 1994;344(3):349–382.
  27. Gabbott PL, Somogyi P. Quantitative distribution of GABA-immunoreactive neurons in the visual cortex (area 17) of the cat. Experimental Brain Research. 1986;61(2):323–331. pmid:3005016
  28. Somogyi P. Synaptic organization of GABAergic neurons and GABAA receptors in the lateral geniculate nucleus and visual cortex. Houston: Portfolio Publishing; 1989.
  29. Li J, Schwark HD. Distribution and proportions of GABA-immunoreactive neurons in cat primary somatosensory cortex. Journal of Comparative Neurology. 1994;343(3):353–361. pmid:7517965
  30. Binzegger T, Douglas RJ, Martin KA. A quantitative map of the circuit of cat primary visual cortex. Journal of Neuroscience. 2004;24(39):8441–8453. pmid:15456817
  31. Sherwood CC, Raghanti MA, Stimpson CD, Bonar CJ, de Sousa AA, Preuss TM, et al. Scaling of inhibitory interneurons in areas V1 and V2 of anthropoid primates as revealed by calcium-binding protein immunohistochemistry. Brain, Behavior and Evolution. 2007;69(3):176–195. pmid:17106195
  32. Hendry SH, Schwark HD, Jones EG, Yan J. Numbers and proportions of GABA-immunoreactive neurons in different areas of monkey cerebral cortex. Journal of Neuroscience. 1987;7(5):1503–1519. pmid:3033170
  33. Collins CE, Airey DC, Young NA, Leitch DB, Kaas JH. Neuron densities vary across and within cortical areas in primates. Proceedings of the National Academy of Sciences. 2010;107(36):15927–15932. pmid:20798050
  34. Varshney LR, Sjöström PJ, Chklovskii DB. Optimal information storage in noisy synapses under resource constraints. Neuron. 2006;52(3):409–423. pmid:17088208
  35. Barlow HB. Possible principles underlying the transformations of sensory messages. In: Rosenblith WA, editor. Sensory Communication. MIT Press; 1961. p. 217–234.
  36. Földiak P. Forming sparse representations by local anti-Hebbian learning. Biological Cybernetics. 1990;64(2):165–170. pmid:2291903
  37. Olshausen BA, Field DJ. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature. 1996;381(6583):607–609. pmid:8637596
  38. Zylberberg J, Murphy JT, DeWeese MR. A sparse coding model with synaptically local plasticity and spiking neurons can account for the diverse shapes of V1 simple cell receptive fields. PLoS Computational Biology. 2011;7(10):e1002250. pmid:22046123
  39. Olshausen BA, Field DJ. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research. 1997;37(23):3311–3325. pmid:9425546
  40. Zhu M, Rozell CJ. Visual nonclassical receptive field effects emerge from sparse coding in a dynamical system. PLoS Computational Biology. 2013;9(8):e1003191. pmid:24009491
  41. Rozell CJ, Johnson DH, Baraniuk RG, Olshausen BA. Sparse coding via thresholding and local competition in neural circuits. Neural Computation. 2008;20(10):2526–2563. pmid:18439138
  42. Rehn M, Sommer FT. A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields. Journal of Computational Neuroscience. 2007;22(2):135–146. pmid:17053994
  43. Olshausen BA. Highly overcomplete sparse coding. In: Human Vision and Electronic Imaging XVIII. vol. 8651. International Society for Optics and Photonics; 2013. p. 86510S.
  44. Vinje WE, Gallant JL. Sparse coding and decorrelation in primary visual cortex during natural vision. Science. 2000;287(5456):1273–1276. pmid:10678835
  45. Attwell D, Laughlin SB. An energy budget for signaling in the grey matter of the brain. Journal of Cerebral Blood Flow and Metabolism. 2001;21(10):1133–1145. pmid:11598490
  46. Olshausen BA, Field DJ. Sparse coding of sensory inputs. Current Opinion in Neurobiology. 2004;14(4):481–487. pmid:15321069
  47. Niven JE, Laughlin SB. Energy limitation as a selective pressure on the evolution of sensory systems. Journal of Experimental Biology. 2008;211(11):1792–1804. pmid:18490395
  48. Baum EB, Moody J, Wilczek F. Internal representations for associative memory. Biological Cybernetics. 1988;59(4):217–228.
  49. Charles AS, Yap HL, Rozell CJ. Short term memory capacity in networks via the restricted isometry property. Neural Computation. 2014;26:1198–1235. pmid:24684446
  50. Barth AL, Poulet JFA. Experimental evidence for sparse firing in the neocortex. Trends in Neurosciences. 2012;35(6):345–355. pmid:22579264
  51. Hasenstaub A, Shu Y, Haider B, Kraushaar U, Duque A, McCormick DA. Inhibitory postsynaptic potentials carry synchronized frequency information in active cortical networks. Neuron. 2005;47(3):423–435. pmid:16055065
  52. Siegle JH, Jia X, Durand S, Gale S, Bennett C, Graddis N, et al. Data from "A survey of spiking activity reveals a functional hierarchy of mouse corticothalamic visual areas"; 2019. bioRxiv. Available from: https://www.biorxiv.org/content/early/2019/10/16/805010.
  53. Kohn A, Coen-Cagli R. Data from "Multi-electrode recordings of anesthetized macaque V1 responses to static natural images and gratings."; 2015. CRCNS.org.
  54. Kohn A, Smith MA. Data from "Utah array extracellular recordings of spontaneous and visually evoked activity from anesthetized macaque primary visual cortex (V1)."; 2016. CRCNS.org.
  55. Blanche T. Data from "Multi-neuron recordings in primary visual cortex."; 2009. CRCNS.org.
  56. Saravanan V, Berman GJ, Sober SJ. Application of the hierarchical bootstrap to multi-level data in neuroscience; 2019.
  57. Nowak LG, Sanchez-Vives MV, McCormick DA. Lack of orientation and direction selectivity in a subgroup of fast-spiking inhibitory interneurons: cellular and synaptic mechanisms and comparison with other electrophysiological cell types. Cerebral Cortex. 2008;18(5):1058–1078. pmid:17720684
  58. Kerlin AM, Andermann ML, Berezovskii VK, Reid RC. Broadly tuned response properties of diverse inhibitory neuron subtypes in mouse visual cortex. Neuron. 2010;67(5):858–871. pmid:20826316
  59. Antkowiak B, Helfrich-Forster C. Effects of small concentrations of volatile anesthetics on action potential firing of neocortical neurons in vitro. Anesthesiology. 1998;88(6):1592–1605. pmid:9637654
  60. Antkowiak B. Different actions of general anesthetics on the firing patterns of neocortical neurons mediated by the GABAA receptor. Anesthesiology. 1999;91(2):500–511.
  61. Lewis LD, Weiner VS, Mukamel EA, Donoghue JA, Eskandar EN, Madsen JR, et al. Rapid fragmentation of neuronal networks at the onset of propofol-induced unconsciousness. Proceedings of the National Academy of Sciences. 2012;109(49):E3377–E3386. pmid:23129622
  62. Weliky M, Fiser J, Hunt RH, Wagner DN. Coding of natural scenes in primary visual cortex. Neuron. 2003;37(4):703–718. pmid:12597866
  63. Peduzzi JD. Genesis of GABA-immunoreactive neurons in the ferret visual cortex. Journal of Neuroscience. 1988;8(3):920–931. pmid:3346729
  64. Froudarakis E, Berens P, Ecker AS, Cotton RJ, Sinz FH, Yatsenko D, et al. Population code in mouse V1 facilitates readout of natural scenes through increased sparseness. Nature Neuroscience. 2014;17(6):851. pmid:24747577
  65. Yu Y, Stirman JN, Dorsett CR, Smith SL. Mesoscale correlation structure with single cell resolution during visual coding; 2018. bioRxiv.
  66. Zhu M, Rozell CJ. Visual nonclassical receptive field effects emerge from sparse coding in a dynamical system. PLoS Computational Biology. 2013;9(8):e1003191. pmid:24009491
  67. Haider B, Krause MR, Duque A, Yu Y, Touryan J, Mazer JA, et al. Synaptic and network mechanisms of sparse and reliable visual cortical activity during nonclassical receptive field stimulation. Neuron. 2010;65(1):107–121. pmid:20152117
  68. Vinje WE, Gallant JL. Sparse coding and decorrelation in primary visual cortex during natural vision. Science. 2000;287(5456):1273–1276. pmid:10678835
  69. Wolfe J, Houweling AR, Brecht M. Sparse and powerful cortical spikes. Current Opinion in Neurobiology. 2010;20:306–312. pmid:20400290
  70. Rozell CJ, Johnson DH, Baraniuk RG, Olshausen BA. Sparse coding via thresholding and local competition in neural circuits. Neural Computation. 2008;20(10):2526–2563. pmid:18439138
  71. Shapero S, Rozell CJ, Hasler P. Configurable hardware integrate and fire neurons for sparse approximation. Neural Networks. 2013;45(0):134–143. pmid:23582485
  72. Hu T, Genkin A, Chklovskii DB. A network of spiking neurons for computing sparse representations in an energy-efficient way. Neural Computation. 2012;24(11):2852–2872. pmid:22920853
  73. Balavoine A, Romberg JK, Rozell CJ. Convergence and rate analysis of neural networks for sparse approximation. IEEE Transactions on Neural Networks and Learning Systems. 2012;23(9):1377–1389. pmid:24199030
  74. Balavoine A, Rozell CJ, Romberg JK. Convergence of a neural network for sparse approximation using the nonsmooth Łojasiewicz inequality. In: International Joint Conference on Neural Networks (IJCNN); 2013. p. 1–8.
  75. Charles AS, Garrigues P, Rozell CJ. A common network architecture efficiently implements a variety of sparsity-based inference problems. Neural Computation. 2012;24(12):3317–3339. pmid:22970876
  76. Shapero S, Charles AS, Rozell CJ, Hasler P. Low power sparse approximation on reconfigurable analog hardware. IEEE Journal on Emerging and Selected Topics in Circuits and Systems. 2012;2(3):530–541.
  77. Shapero S, Zhu M, Hasler J, Rozell C. Optimal sparse approximation with integrate and fire neurons. International Journal of Neural Systems. 2014;24(05):1440001. pmid:24875786
  78. Schiller J, Major G, Koester HJ, Schiller Y. NMDA spikes in basal dendrites of cortical pyramidal neurons. Nature. 2000;404(6775):285. pmid:10749211
  79. Polsky A, Mel BW, Schiller J. Computational subunits in thin dendrites of pyramidal cells. Nature Neuroscience. 2004;7(6):621. pmid:15156147
  80. Lennie P. The cost of cortical computation. Current Biology. 2003;13(6):493–497. pmid:12646132
  81. Wong-Riley MT. Cytochrome oxidase: an endogenous metabolic marker for neuronal activity. Trends in Neurosciences. 1989;12(3):94–101. pmid:2469224