
Mixed Signal Learning by Spike Correlation Propagation in Feedback Inhibitory Circuits

Abstract

The brain can learn and detect mixed input signals masked by various types of noise, and spike-timing-dependent plasticity (STDP) is a candidate synaptic-level mechanism for this ability. Because sensory inputs typically carry spike correlations, and local circuits have dense feedback connections, input spikes cause the propagation of spike correlations through lateral circuits; however, it is largely unknown how this secondary correlation generated by lateral circuits influences learning through STDP, or whether it is beneficial for achieving efficient spike-based learning from uncertain stimuli. To explore these questions, we construct models of feedforward networks with lateral inhibitory circuits and study how propagated correlations influence STDP learning and what kind of learning algorithm such circuits implement. We derive analytical conditions under which neurons detect minor signals with STDP, and we show that, depending on the origin of the noise, different correlation timescales are useful for learning. In particular, we show that non-precise spike correlation is beneficial for learning in the presence of cross-talk noise. We also show that, by considering excitatory and inhibitory STDP at lateral connections, the circuit can acquire a lateral structure optimal for signal detection. In addition, we demonstrate that the model performs blind source separation in a manner similar to the sequential sampling approximation of the Bayesian independent component analysis algorithm. Our results provide a basic understanding of STDP learning in feedback circuits by integrating analyses from both dynamical systems and information theory.

Author Summary

In natural environments, although sensory inputs are often highly mixed with one another and obscured by noise, animals can detect and learn discrete signals from this mixture. For example, humans easily detect the mention of their names from across a noisy room, a phenomenon known as the cocktail party effect. Spike-timing-dependent plasticity (STDP) is a learning mechanism observed ubiquitously in the brain across various species and is considered to be a neural basis of such learning; however, it is still unclear how STDP enables efficient learning from uncertain stimuli and whether spike-based learning offers benefits beyond those provided by standard machine learning methods for signal decomposition. To begin to answer these questions, we conducted analytical and simulation studies examining the propagation of spike correlations in feedback neural circuits. We show that non-precise spike correlation is useful for handling noise during the learning process. Our results also suggest that neural circuits make use of stochastic membrane dynamics to approximate computationally complex Bayesian learning algorithms, advancing our understanding of the principles of stochastic computation in the brain.

Introduction

Neurons receive inputs from a large number of other neurons encoding a variety of information about various signals. Despite the diversity and variability of input spike trains, neurons can learn and represent specific information during developmental processes and according to specific task requirements. Spike-timing-dependent plasticity (STDP) [1,2] is a candidate mechanism of neural learning. Extensive studies have revealed the type of information that a single neuron can learn through STDP [3–7]; however, the type of information that a population of interacting neurons learns through STDP has not yet been determined. Understanding this extension from a single neuron to a population of neurons is crucial because a single neuron learns and represents only a limited amount of the information that may be transmitted to it from thousands of inputs.

Among neural interactions, lateral inhibition is a basic interaction widely observed in various regions, such as the olfactory bulb [8], visual cortex [9], somatosensory cortex [10], and entorhinal cortex [11]. Previous theoretical results showed that neural circuits with lateral inhibition enhance signal detection [12,13] and improve learning performance [14–16]. Several simulation studies further revealed that neurons acquire receptive fields [17–19] or spike patterns [20] through STDP when lateral inhibition is introduced; yet, those studies were limited to simplified cases for which a large population of independent neurons was suggested to be sufficient [5,21,22]. Therefore, it remains unclear whether lateral inhibition plays a crucial role in STDP learning; in particular, the spike-level effects of lateral inhibition remain elusive. Moreover, recent experimental results suggest that animals learn and discriminate mixed olfactory signals [23–25] or auditory signals masked by noise [26,27], but it is still unknown how feedback interactions contribute to such learning.

Here, by considering a simple feedback network model of spiking neurons, we investigated the algorithm inherent to STDP in neural circuits containing feedback. We analyzed the propagation of spike correlations through inhibitory circuits and revealed how such secondary correlations influence STDP learning at both feedforward and feedback connections. We discovered that the timescale of spike correlation preferable for learning depends on whether the noise is independent of any signal (random noise) or generated from the mixing of signals (cross-talk noise). We also found that excitatory and inhibitory STDP cooperatively shape the lateral circuit structure, making it suitable for signal detection. We further found a possible link between stochastic membrane dynamics and the sampling process necessary for a neural approximation of the learning algorithm of Bayesian independent component analysis (ICA). We applied our findings by demonstrating that STDP implements a spike-based solution in neural circuits for the cocktail party problem [26,28,29].

Results

Model

We constructed a network model with three feedforward layers, as shown in Fig 1A (see Neural dynamics in Methods for details). The external source layer represents the external environment or neural activity in sensory systems. The external layer provides common inputs to the input layer and thereby induces correlations among the neurons in the input layer. The input layer shows rate-modulated Poisson firing based on events at the external layer and on external noise, which is approximated by a constant firing rate rio. Spike activity in the input layer then projects to the output layer, which also receives inhibitory feedback from the lateral layer. Neurons in the lateral layer are excited by inputs from the output layer. We assumed that all neurons in the input and output layers are excitatory, whereas lateral-layer neurons are inhibitory. Although excitatory lateral interactions also exist in the sensory cortex, they are typically sparse [30] and weak [10] compared with inhibitory interactions; we therefore concentrated on the latter. For analytical tractability, the neurons in the output and lateral layers were modeled as linear Poisson neurons. We first studied synaptic plasticity at the feedforward connections (connections from the input layer to the output layer) while fixing the lateral connections (i.e., connections from the output layer to the lateral layer and from the lateral layer to the output layer). For STDP, we used pairwise log-STDP (Fig 1B) [31], which replicates the experimentally observed long-tailed synaptic weight distribution [32,33].
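
The architecture can be summarized in a few lines of code. Below is a minimal sketch of one simulation step of the three-layer cascade, assuming Euler discretization with a 1 ms step and linear Poisson firing; PSP kernels, synaptic delays, and plasticity are omitted, and all layer sizes and weight values are illustrative rather than the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-3                            # time step (s); illustrative
n_in, n_out, n_inh = 400, 20, 20     # layer sizes; illustrative

W_X = np.full((n_out, n_in), 0.1)    # input -> output (plastic in the text)
W_Y = np.full((n_inh, n_out), 1.0)   # output -> lateral (E-to-I), fixed here
W_Z = np.full((n_out, n_inh), 1.0)   # lateral -> output (I-to-E), fixed here

def step(x_rate, z_prev):
    """One step of the cascade: input spikes drive output neurons, which
    excite the lateral layer; the lateral spikes from the previous step
    feed back as inhibition onto the output layer."""
    x = (rng.random(n_in) < x_rate * dt).astype(float)              # input spikes
    u = W_X @ x - W_Z @ z_prev                                      # output potential
    y = (rng.random(n_out) < np.clip(u, 0.0, None) * dt).astype(float)
    z = (rng.random(n_inh) < np.clip(W_Y @ y, 0.0, None) * dt).astype(float)
    return x, y, z
```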

Fig 1. Description of the model.

(A) Schematic figure of the model. (B) Spike-timing-dependent synaptic weight change in log-STDP (spike-timing-dependent plasticity). (C) Normalized temporal cross-correlogram of input neurons receiving common sources (gray line), and kernel functions of plasticity propagated by feedforward correlation (blue line) and feedback correlation (green line).

https://doi.org/10.1371/journal.pcbi.1004227.g001

We considered the case in which information is encoded in the correlated activity of input neurons [34,35], and we fixed the average firing rate of all input neurons at a constant value νoX (see Tables 1 and 2 for the lists of variables and parameters). If the firing rate of input neuron i is modulated by the external events sμ(t) with response probability q, then the common inputs from the external layer induce a temporal correlation h(τ) proportional to the autocorrelation of the response kernel φ(t) (Eq (1); see Eqs (14) and (24) in Methods for details). Here, φ(t) is parameterized by θt, which controls the timescale of the spike correlations (gray line in Fig 1C). For the kernel function, we used the gamma distribution with shape parameter kg = 3 in order to reproduce the broad spike correlations typically observed in cortical neurons [36,37]. The synaptic weight dynamics under STDP are written in terms of fd(w) and fp(w), the synaptic weight dependences of LTD and LTP (long-term depression/potentiation), respectively. By averaging this equation over time and ensemble (see Average synaptic weight velocity in Methods for details), the weight change of the feedforward connections WX can be approximated as the sum of a direct term and a feedback term (Eq (2)), where g1X and g2X are scalar coefficients, C is the correlation matrix, and E is the identity matrix (see Eqs (25)–(30) for the derivation). The first term describes the synaptic weight change directly caused by the input spike correlation and can be rewritten as the convolution of the temporal correlation with the correlation kernel function χX1 (Eq (3)), where F(w,s) = Fd(w,-s) if s < 0, F(w,s) = Fp(w,s) otherwise, and εX is the EPSP curve of input neurons (see Eqs (15) and (31) in Methods). By deconvolving G1X(w), we can separate the effect of the intrinsic network property χX1 from that of the input correlation h(τ) in STDP-based learning. Owing to causality, the LTP/LTD balance, and the dendritic delay, χX1 typically becomes LTP-dominant around τ = 0 (blue line in Fig 1C; we set w = woX), so that g1X takes positive values, which enables coincidence-based learning [4,5,38]. The second term of Eq (2), which is of particular interest in this model, describes how the input correlation influences STDP learning at feedforward connections through lateral inhibition (Eq (4)), where D = 2dXd + dY + dZ, and εY and εZ are the EPSP/IPSP curves of the output/inhibitory neurons, respectively. This term primarily causes LTD because its sign flips through lateral inhibition (green line in Fig 1C). Previous simulation studies showed that lateral inhibition has critical effects on excitatory STDP learning [17–19]; however, it has not been well studied how a secondary correlation generated through the lateral circuits influences STDP at feedforward connections, and it is still largely unknown how lateral inhibition functions with various stimuli in different neural circuits. For example, the correlation kernel of the feedback term exhibits a delay as the signal propagates through the inhibitory circuit; yet, we do not know how much delay is permitted for effective learning, or whether realistic synaptic delays satisfy such a condition. Furthermore, it is also unknown what information a circuit can learn if there are several mixed signals with different amplitudes, for which symmetry-breaking learning [5,39] is not valid. Therefore, using theoretical analysis and simulation, we first investigated the properties of the inhibitory kernel in STDP learning.

Lateral inhibition enhances minor source detection by STDP

In Eq (2), if lateral inhibition is negligible (i.e., g2X/g1X = 0), all output neurons acquire the principal component of the response probability matrix Q, and the other information is neglected [7,40,41]. On the other hand, if lateral inhibition is effective, different output neurons may acquire different components of the external structure. We first examined this point in a simple network model with two independent external sources (Fig 2A). In the model, each external source drives an independent subgroup of input neurons (we defined those input neurons as A-neurons and B-neurons), which project excitatory inputs to all of the output neurons. Here, we assume that source A drives its input neurons with a higher probability than source B (qA = 0.6, qB = 0.5), so that input neurons driven by source A show higher correlations (cA = 0.36) than those driven by source B (cB = 0.25). In matrix form, the first two rows of Q correspond to A-neurons and B-neurons, and the third row represents the response probabilities of the background neurons in the input layer (gray triangles in Fig 2A; note that C = QQt). We refer to this as the minor source detection task below. For the lateral connections, we assumed here that both excitatory-to-inhibitory (E-to-I) and inhibitory-to-excitatory (I-to-E) connections are well organized, such that inhibition works only mutually between the two output neuron groups (Fig 2A; blue lines are E-to-I and red lines are I-to-E connections; see also Eq (30) in Methods). The origin of these structured lateral connections is discussed later. When the network is excited by inputs from the external sources, the excitatory postsynaptic potential (EPSP) sizes of the feedforward connections WX change according to the STDP rule. Initially, in all output neurons, synaptic weights from A-neurons (blue triangles in Fig 2A) become larger because A-neurons are more strongly correlated with one another than B-neurons are. However, as learning proceeds, one of the output neuron groups becomes selective for the minor source B (Fig 2B). After 30 min, the network has successfully learned both sources. If we consider the peristimulus time histogram (PSTH) of the average membrane potential of output neurons aligned to external events, both neuron groups initially show weak responses to both correlation events, with relatively higher depolarization for source A than for source B (Fig 2C left). After 10 min of learning, both neuron groups show relatively stronger initial responses to source A, but group 1 shows a hyperpolarization soon after the initial response (Fig 2C middle). As a result, synaptic weights from A-neurons to group 1 become weaker, and group 1 neurons eventually become selective for the minor source B (Fig 2C right). The mean cross-correlation (see Cross-correlation in Methods for details) between the external sources and the population activity of output neurons is maximized when the delay is approximately 10–15 ms (Fig 2E). If we fix the delay at 14 ms, the cross-correlation gradually increases as the network learns both sources (Fig 2D). The same argument holds if mutual information is used for performance evaluation (green lines in Fig 2D and 2E). Interestingly, the network detects the minor source better when it is learned together with a highly correlated source than when it is learned with another minor source (Fig 2F), because a highly correlated opponent source causes strong lateral inhibition on the output neurons, which enhances minor source learning. Similar results are also obtained for conductance-based leaky integrate-and-fire (LIF) neurons (S1 Fig).
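
The minor source detection task itself is easy to reproduce. The following sketch draws, for each external event, which input neurons respond, using only the response probabilities given above (qA = 0.6, qB = 0.5); the group size and event count are illustrative. It also checks numerically that the within-group co-activation probability matches cA = qA² = 0.36.

```python
import numpy as np

rng = np.random.default_rng(1)
q_A, q_B = 0.6, 0.5        # response probabilities from the text
n_per_group = 100          # illustrative group size

def source_driven_responses(n_events):
    """For each event of a source, every neuron in the corresponding input
    group responds independently with probability q, so the within-group
    pairwise co-activation is q**2, giving c_A = 0.36 and c_B = 0.25
    (consistent with C = Q Q^t)."""
    resp_A = rng.random((n_events, n_per_group)) < q_A
    resp_B = rng.random((n_events, n_per_group)) < q_B
    return resp_A, resp_B

# Sanity check: empirical co-activation of two A-neurons ~ q_A**2 = 0.36
resp_A, _ = source_driven_responses(20_000)
print((resp_A[:, 0] & resp_A[:, 1]).mean())
```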

Fig 2. Lateral inhibition enables minor source detection by spike-timing-dependent plasticity (STDP) through membrane hyperpolarization.

(A) Schematic figure of the simplified model. SA and SB (on the left side) are the sources, which project to subsets of input neurons (colored triangles). Gray triangles are background neurons, black triangles (on the right) are output neurons, and red circles are inhibitory neurons. (B) Development of synaptic weights. Thick lines are mean synaptic weights from A-neurons (blue), B-neurons (red), and Background-neurons (orange) to each output neuron group. Thin lines are traces of individual synaptic weights. The gray bar shows the period from which panel C is calculated. (C) Peristimulus time histograms (PSTHs) of membrane potentials averaged within output neuron groups. T = 0 indicates the timing of events at the external layer. The three panels are calculated from the data at t = 0–1 min, 7–8 min, and 29–30 min. (D) Development of the mean cross-correlation and mutual information between external sources and the population activity of output neurons for the simulation depicted in panels B and C. (E) Delay dependence of the mean cross-correlation and mutual information. Both values were calculated from five simulations. (F) Cross-correlation between the output group that detected the minor source and the minor source activity for various response probabilities qB with fixed qA (= 0.6). When neither output group detected the minor source, the larger of the values calculated for the two output groups was used. Throughout the study, error bars represent the standard deviation calculated from five simulations, unless otherwise indicated.

https://doi.org/10.1371/journal.pcbi.1004227.g002

Lateral inhibition should be strong, fast, and sharp

To investigate how and when the network can acquire multiple sources represented by correlated inputs, we further analyzed the model above (see Mean-field approximation of a two-source model in Methods for details). Because both the output excitatory neurons and the lateral inhibitory neurons are bundled into groups, in the mean-field approximation we can reduce the M excitatory populations and N inhibitory populations to two representative output neurons and two inhibitory neurons. Similarly, input neurons can be bundled into three groups (A-neurons, B-neurons, and Background-neurons). In addition, we assumed that the synaptic connections from Background-neurons to output neurons are fixed, because they showed little weight change in the simulation (orange lines in Fig 2B). In this approximation, by inserting Eq (32) into Eq (29), the mean synaptic weight changes of the feedforward connections follow Eq (5) for output groups μ = 1,2 and sources ν = A,B. The first two terms of Eq (5) represent correlation-based learning, and the last term is the homeostatic effect intrinsic to STDP [5]. G1X and G2X are coefficients determined by the synaptic delays, the EPSP/IPSP (inhibitory postsynaptic potential) shapes, and the correlation structure, as shown in Eqs (3) and (4). By solving the self-consistency condition (Eq (34) in Methods), the firing rates of the inhibitory neurons are approximated by the expression in Eq (6).

We estimated the nullclines by calculating the lines on which the mean weight velocities of Eq (5) vanish for ν = A or B. As a result, we found that when the mutual inhibition is weak (wI = 10), the system has only one stable point, at which w1A is larger than w1B (Fig 3A left). At this point, w2A is also larger than w2B (w2A = 9.64, w2B = 3.60; not shown in the figure), which means that both output neuron groups are specialized for the major source A (we call this state a winner-take-all state or T-state); however, if the inhibition is moderately strong (wI = 21.5), two new stable fixed points and two unstable fixed points appear in the system (Fig 3A middle). At the stable point on the left, neuron group 1 picks up source B while neuron group 2 picks up source A (w2A = 12.52, w2B = 2.87). On the right-hand side, neuron group 1 selects source A while neuron group 2 selects source B (we denote these two states as winners-share-all states or S-states below). At the stable point in the middle, both groups detect source A (w1A = w2A = 9.47, w1B = w2B = 3.61). Note that, because of the mutual inhibition, the synaptic weight from A-neurons is smaller when both groups learn A than when only group 1 learns A. For strong inhibition (wI = 40.0), the stable point in the middle disappears, and the system is stable only when the two neuron groups detect different sources (Fig 3A right). Simulation results confirm this analysis: strong inhibition indeed causes a winners-share-all state in which multiple neuron groups survive the competition [15], whereas the network tends to show winner-take-all learning when the inhibition is weak (Fig 3B). We measured the degree of winners-share-all/winner-take-all behavior with the specialization index wSI defined in Eq (7); if w'SI = 0, we set wSI = 0. If the two output groups are specialized for different sources, wSI becomes positive, whereas if the two groups are specialized for the same source, wSI becomes negative. When the synaptic delay in the lateral connections is small, only S-states are stable, whereas at longer delays, both S-states and T-states are stable; in the simulation, the network typically settles into the latter state in the bistable regime (Fig 3C). Moreover, if we change the shape of the IPSP curve while keeping τZB = 5τZA, only the S-states are stable for steep IPSP curves (i.e., when both τZA and τZB are small), whereas T-states also become stable for slower IPSPs (Fig 3D). Therefore, both the analytical and simulation studies indicate that lateral inhibition should be strong, fast, and sharp to detect the higher-order correlation structure. Moreover, lateral inhibition does not need to be pathologically strong, because a moderate I/E balance is sufficient to cause multistability.
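
The nullcline analysis in Fig 3A can be reproduced numerically once the mean-field velocities are available. The sketch below is deliberately generic: the functional form of dw/dt and all constants are stand-ins (the paper's coefficients G1X and G2X follow from Eqs (3)–(5), and the nonlinear weight dependence that creates multistability is omitted); the point is the grid-based search for nullclines and fixed points.

```python
import numpy as np

def dw(w1A, w1B, w_I):
    """Illustrative planar reduction of the mean-field dynamics of Eq (5):
    correlation-driven growth opposed by lateral inhibition (scaled by a
    mutual-inhibition weight w_I) and a homeostatic decay. All forms and
    constants here are stand-ins, not the paper's derived coefficients."""
    cA, cB, G1, G2, h = 0.36, 0.25, 1.0, 0.05, 0.3
    d1A = G1 * cA * w1A - G2 * w_I * cB * w1B - h * w1A
    d1B = G1 * cB * w1B - G2 * w_I * cA * w1A - h * w1B
    return d1A, d1B

# Locate nullclines and fixed points as near-zero velocities on a grid,
# as in Fig 3A, for the three inhibition strengths discussed in the text.
w = np.linspace(0.0, 15.0, 301)
W1A, W1B = np.meshgrid(w, w, indexing="ij")
for w_I in (10.0, 21.5, 40.0):
    d1A, d1B = dw(W1A, W1B, w_I)
    fixed = (np.abs(d1A) < 1e-2) & (np.abs(d1B) < 1e-2)  # crude fixed-point mask
    print(f"w_I = {w_I}: {int(fixed.sum())} grid cells near a fixed point")
```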

Fig 3. Lateral inhibition is strong, fast, and sharp.

(A) Nullclines of the average synaptic weight changes at different inhibitory amplitudes wZ = 0.1, 0.215, 0.4. The inset in the middle graph is a magnified view of the boxed area. (B) Specialization indices wSI for various inhibitory weights. Positive wSI indicates the winners-share-all state, whereas negative wSI indicates the winner-take-all state. Blue lines are analytical estimations, and cyan squares are the results of simulations. Vertical lines correspond to the values at which the nullclines in panel A are calculated. (C) The same graphs for various synaptic delays. The average synaptic delay of both the lateral excitatory ((dYmin+dYmax)/2) and inhibitory ((dZmin+dZmax)/2) connections was changed, while the variability was kept at dYmax − dYmin = dZmax − dZmin = 1.0 ms. (D) IPSP rise time dependence. The inset shows IPSP curves at {τZA, τZB} = {0.5, 2.5} (gray line), {1.5, 7.5} (dark gray line), and {2.5, 12.5} (black line).

https://doi.org/10.1371/journal.pcbi.1004227.g003

Optimal correlation timescale changes depending on the noise source

In the previous section, we revealed the effects of network properties for a fixed input correlation structure; however, actual neurons show correlations with various timescales depending on the brain region [37,42] and the characteristics of the stimuli [43,44], and it is largely unknown how different timescales influence correlation-driven learning. Therefore, we next considered the effect of the correlation timescale, especially on noise tolerance. In our model, input neurons respond to external sources with the input kernel φ(t) (Fig 4A left), so the correlation between input neurons i and l is given by Eq (1). By changing the parameter θt, we studied the effect of the correlation timescale on learning. The correlation is precise when θt is small, whereas it becomes broad at large values of θt (Fig 4A right, Fig 4B). Because STDP causes homeostatic plasticity that does not depend on correlation, as shown in the third term of Eq (5), in a more precise approximation Eq (2) should be written with an additional homeostatic term (Eq (8)). We first calculated g1X and g2X at various θt. Both g1X and g2X become smaller for larger θt, but the decrease in g2X is slower than that in g1X; as a result, κ = g2X/g1X becomes larger for a longer correlation timescale (Fig 4C). This means that a longer temporal correlation is more suitable for the detection of multiple components. This is indeed confirmed in the simulation (Fig 4D). When θt = 0.5 and the minor component is only slightly weaker than the major one (cA = 0.36, cB = 0.25), the minor component is no longer detectable. On the other hand, at θt = 2.0, the minor component is detectable even if the strength of the induced correlation is less than half that of the major one (cA = 0.36, cB = 0.16). At θt = 4.0, g1X becomes so small that even the major signal is not fully detectable.
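
The dependence of the correlation width on θt can be checked directly from the gamma response kernel with shape parameter kg = 3 described above. The sketch below computes the kernel autocorrelation, which is taken here to be proportional to the input cross-correlation (an assumption of this sketch, following the reading of Eq (1) above), and reports its full width at half maximum for the three θt values used in Fig 4.

```python
import numpy as np
from math import factorial

def gamma_kernel(t, theta, k=3):
    """Gamma-distribution response kernel phi(t) with shape k (k_g = 3)."""
    out = t ** (k - 1) * np.exp(-t / theta) / (factorial(k - 1) * theta ** k)
    return np.where(t >= 0, out, 0.0)

dt = 0.1                                   # ms
t = np.arange(0.0, 100.0, dt)
for theta in (0.5, 2.0, 4.0):              # theta_t values used in Fig 4
    phi = gamma_kernel(t, theta)
    h = np.correlate(phi, phi, mode="full") * dt    # kernel autocorrelation
    lags = (np.arange(h.size) - (phi.size - 1)) * dt
    fwhm = np.ptp(lags[h > 0.5 * h.max()])          # full width at half maximum
    print(f"theta_t = {theta}: correlation FWHM ~ {fwhm:.1f} ms")
```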

Fig 4. Optimal correlation timescale changes depending on noise characteristics.

(A) Response kernels of input neurons to external events (left) and the cross-correlation among input neurons responding to the same source, calculated from simulated data (right), for three different correlation timescale parameters θt. (B) Raster plots of input neurons for various θt. Only 100 correlated neurons are plotted, although there are 400 input neurons in total. (C) Analytically calculated correlation kernels g1X and g2X (left) and their ratio g2X/g1X (right). (D) Specialization index wSI for various response probabilities qB with fixed qA = 0.6. Lines represent wSI at analytically estimated stable points, and dotted squares represent simulation results. (E) Raster plots of the two types of noise. The upper panel shows random noise, whereas the lower panel depicts crosstalk noise. In both panels, the first 100 neurons respond primarily to the cyan source, and the next 100 neurons respond to the purple source. For random noise, the noise (black dots) is independent of the signals, whereas the crosstalk noise (purple dots in the lower half, cyan dots in the upper half) is correlated with the signal for the other population. (F, G) The effects of random noise (F) and crosstalk noise (G) at various correlation timescales.

https://doi.org/10.1371/journal.pcbi.1004227.g004

Similar results hold for crosstalk noise. In the model above, noise is provided through the spontaneous Poisson firing of input neurons as random noise (Fig 4E top; black dots are spikes caused by random noise). In reality, however, there would also be crosstalk noise among input spike trains caused by the interference of external sources. We implemented this crosstalk noise by introducing non-diagonal components into the response probability matrix, where qS is the response probability to the preferred signal and qN is that to the non-preferred signal (Fig 4E bottom). We refer to this as the noisy source detection task below. To make a clear comparison, in the simulation of random noise we kept qN = 0 and changed the spontaneous firing rate of the input neurons (rio) to modify the noise intensity, whereas in the simulation of crosstalk noise we removed random noise (i.e., rio = 0) and changed qN. For random noise, a smaller θt enables better learning because a large g1X competes with the homeostatic force (Fig 4F). By contrast, for crosstalk noise, the performance is better at θt = 2.0 than at θt = 0.5 because strong lateral inhibition suppresses crosstalk noise (Fig 4G). For small noise, the network performs slightly better at θt = 0.5 than at θt = 2.0, but the difference is almost negligible. Therefore, to cope with crosstalk noise, the spike correlation needs to be broad, whereas a narrow spike correlation is better for random noise. We note that qualitatively the same arguments hold for the exponential kernel (S3D and S3E Fig); however, the ratio of the two coefficients (i.e., κe = ge2X/ge1X) is typically smaller for this kernel than for the kernel used throughout this study (S3B and S3C Fig vs. Fig 4D), because lateral inhibition is less effective with a highly peaked spike correlation (S3A Fig).
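
For concreteness, the two noise types can be generated as follows. This sketch builds a two-group response probability matrix with crosstalk on the off-diagonal, using the qS/qN parameterization above; the group count and rate values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
q_S, q_N = 0.6, 0.2     # preferred / non-preferred response probabilities
r_o = 2.0               # spontaneous rate (Hz), used only for random noise

# Response probability matrix with crosstalk on the off-diagonal
# (rows: input groups, columns: sources).
Q = np.array([[q_S, q_N],
              [q_N, q_S]])

def group_response_probs(source_events):
    """source_events: (n_events, 2) binary source activity. Returns the
    per-event response probability of each input group. With q_N > 0, a
    group also responds to its non-preferred source (crosstalk noise);
    setting q_N = 0 and r_o > 0 instead yields source-independent random
    noise, matching the comparison described in the text."""
    return np.clip(source_events @ Q.T, 0.0, 1.0)
```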

Excitatory and inhibitory STDP cooperatively shape structured lateral connections

To this point, we have considered a network already clustered into two assemblies that inhibit one another (as in Fig 5A left). This means that the network somehow knows a priori that the number of external sources is two; however, in reality, a randomly connected network should also learn such information. To test this idea, we introduced STDP-type synaptic plasticity in lateral excitatory connections and feedback inhibitory connections and investigated how different STDP rules cause different structures in the circuit.

Fig 5. Lateral connection structuring by excitatory and inhibitory spike-timing-dependent plasticity (STDP).

(A) Schematic figures of connections between the output layer and the lateral layer. In the simulation, each layer consists of 20 neurons. (B) The effect of crosstalk noise on different lateral structures. Analytical results are shown as bold lines, and results from simulations are shown as dotted lines. (C) Minor source detection with different lateral structures. Because the specialization index is not well defined for a network with random lateral connections, the average synaptic weights from source A onto the output neurons that prefer source A were measured instead. (D) Synaptic weight development at three types of connections. In the left and right columns, panels show the synaptic weights of excitatory/inhibitory synapses projecting to neuron group 1 (top) and group 2 (bottom). In the middle column, panels correspond to excitatory synapses projecting from neuron group 1 (top) and group 2 (bottom). In all panels, thin lines indicate the development of individual synapses, thick lines represent average weights onto output neurons, and colors indicate A-neurons (blue), B-neurons (red), and Background-neurons (orange). (E, F) Performance of networks with different lateral structures in noisy signal detection (E) and minor signal detection (F). Here (and only here), a pre-learned network is used to investigate responses to various inputs.

https://doi.org/10.1371/journal.pcbi.1004227.g005

We first checked whether structured lateral connections were helpful for learning. For comparison, we also considered a model with random lateral connections in which all output neurons and inhibitory neurons are randomly connected with probability 0.5 (Fig 5A middle). When lateral connections are random, the mean-field equations are modified as shown in Eq (9).

We separated the lateral connections into two groups as in the previous case, but this approximation is legitimate only when the two input sources are symmetric (i.e., qA = qB). In other cases, neurons are often organized into two groups with different population sizes. In such cases, to evaluate performance, we measured the average weights from source A onto the output neurons receiving stronger inputs from A-neurons than from B-neurons or Background-neurons. With randomly connected lateral inhibition, learning performance dropped significantly in both noisy source detection (Fig 5B) and minor source detection (Fig 5C); thus, clustered connectivity is indeed advantageous for learning.

We next investigated whether such a structure can be learned using STDP rules. We first introduced Hebbian STDP for both E-to-I and I-to-E connections. With these learning rules, the lateral connections successfully learn a mutual inhibition structure (Fig 5D); however, this learning is achievable only when the hidden external structure can be learned starting from random lateral connections (magenta lines in Fig 5B and 5C; note that the orange points are hidden by the magenta points because they behave similarly in noisy cases), that is, when crosstalk noise is low or when the two sources have similar amplitudes. Nevertheless, once a structure is acquired in an easy setting (qN = 0 or qA = qB), the network outperforms the network with random lateral connections in both noisy source detection (Fig 5E) and minor source detection (Fig 5F). In Fig 5E, we evaluated the performance of noisy source detection by first conducting STDP learning at qN = 0; we then froze STDP and performed simulations at various noise levels qN. Similarly, in the minor source detection task depicted in Fig 5F, we first performed STDP learning with qA = qB = 0.6 and then evaluated the performance for smaller qB. STDP can also generate similar lateral connection structures when the total number of input sources is larger than two (S2A and S2B Fig). Therefore, STDP at lateral connections helps signal detection by efficiently organizing the connection structure.

We next studied the analytical conditions for learning the clustered structure (see Analytic approach for STDP in lateral and inhibitory connections in Methods for details). The synaptic weight dynamics of the lateral excitatory and inhibitory connections are approximately given by Eq (10). Both equations represent indirect effects of the input correlation propagated into the lateral circuit. From a linear analysis, we can expect that when g1Y is positive, E-to-I connections tend to become feature selective (see Eq (35) in Methods): each inhibitory neuron receives stronger inputs from one of the output neuron groups and, as a result, shows a higher firing rate for the corresponding external signal. On the other hand, if g1Z is positive, I-to-E connections are organized in a reciprocal form in which one of the reciprocal connections is enhanced and the other is suppressed (see Eq (36) in Methods). We can evaluate the feature selectivity of inhibitory neurons with the index ϕY defined in Eq (11), where ΩYA and ΩYB are the sets of excitatory neurons responding preferentially to sources A and B, respectively. Indeed, when the LTD time window is narrow, the analytically calculated g1Y tends to take negative values (green line in Fig 6A), and the E-to-I connections organized in the simulation are not feature selective (blue points in Fig 6A). By contrast, for a long LTD time window (i.e., when LTD is weakly spike-timing dependent), g1Y tends to take positive values, and E-to-I connections become clustered. In the simulation, WZ is also plastic, but as shown in Eq (10), the effect of WZ on the plasticity of WY is negligible to first order.

Fig 6. Correlation propagation shapes lateral connection structure.

(A) Comparison between the feature selectivity ϕY (blue dots) calculated from simulation results and the analytically calculated correlation kernel function g1Y (green line) for lateral excitatory connections. The thin green horizontal line represents g1Y = 0. (B) Comparison between the degree of mutual inhibition ϕZ (blue dots) calculated from the simulation and the analytically calculated correlation kernel g1Z (green line) for lateral inhibitory connections. Negative g1Z is correlated with a high degree of mutual inhibition, as expected (see Methods). (C) Ratio of output neurons tuned for the minor source in a minor source detection task under Hebbian and anti-Hebbian inhibitory spike-timing-dependent plasticity.

https://doi.org/10.1371/journal.pcbi.1004227.g006

Similarly, for I-to-E connections, we measure the degree of mutual inhibition (non-reciprocity) with the index ϕZ defined in Eq (12).

When LTD is strongly spike-timing dependent, g1Z is negative, and the ϕZ calculated from the simulation data tends to be large (Fig 6B), which means that the inhibitory connections are organized such that inhibition functions as mutual inhibition between the excitatory neuron groups. Note that the organized wiring patterns are not a pure product of the pre-post causality of STDP but an effect of spike correlations propagating through the lateral inhibitory circuits. If the structural plasticity were caused merely by pre-post causality, both ϕY and ϕZ would decrease as the inhibitory population grows (with the total synaptic weight maintained), because the causal effect becomes weaker as each synaptic weight becomes smaller [45]; however, in our simulations, both quantities generally increased for larger inhibitory populations (S2C Fig).
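
Eqs (11) and (12) define these indices precisely; the sketch below uses plausible stand-in definitions for illustration: a selectivity index that grows when each inhibitory neuron is dominated by one output group, and a non-reciprocity index that grows when strong E-to-I weights pair with weak return I-to-E weights. Both forms are hypothetical, not the paper's exact definitions.

```python
import numpy as np

def phi_Y(W_Y, group_A, group_B):
    """Stand-in for the feature selectivity index of Eq (11): for each
    inhibitory neuron, the normalized imbalance between its total E-to-I
    weights from A-preferring and B-preferring output neurons, averaged
    over inhibitory neurons (hypothetical form)."""
    wa = W_Y[:, group_A].sum(axis=1)
    wb = W_Y[:, group_B].sum(axis=1)
    return np.mean(np.abs(wa - wb) / (wa + wb))

def phi_Z(W_Y, W_Z):
    """Stand-in for the mutual inhibition (non-reciprocity) index of
    Eq (12): high when strong E-to-I weights pair with weak return I-to-E
    weights, i.e., when W_Y and the transpose of W_Z are anti-correlated
    across neuron pairs (hypothetical form)."""
    return -np.corrcoef(W_Y.ravel(), W_Z.T.ravel())[0, 1]
```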

Hebbian inhibitory STDP at lateral connections is not always beneficial for learning. For example, in minor source detection with Hebbian inhibitory STDP, a slightly minor source is not detectable, whereas under anti-Hebbian STDP a small number of neurons still detect the minor source, because reciprocal connections from strong-source-responsive inhibitory neurons to strong-source-responsive output neurons suppress synaptic weight development for the stronger source (Fig 6C).

Neural Bayesian ICA and blind source separation

Our results to this point have revealed that correlation-based STDP learning combined with lateral inhibition can successfully detect signals from mixed inputs masked by noise. To confirm that this mechanism is indeed effective in realistic tasks, we applied it to blind source separation. We first examined the conditions under which the network could capture external sources. We extended the previous network to include four independent sources mixed at the input layer (Fig 7A). In this application, we used structured lateral connections, because learning a clustered structure is difficult with noisy stimuli, as shown in the preceding section. The response probability matrix Q and correlation matrix C are structured such that the principal components of matrix Q (i.e., the eigenvectors of C) are {1, 1, 1, 1}, {-1, 0, 1, 0}, {0, -1, 0, 1}, and {-1, 1, -1, 1}. Because the first-order approximation of the synaptic weight dynamics is driven by the correlation matrix C (Eq (2)), we might expect the synaptic weight vectors to converge to these principal eigenvectors; however, this was not the case in our simulations, even when we took into account the non-negativity of synaptic weights (see Fig 7B, where we renormalized the principal vectors to the region between 0 and 1). Instead, each weight vector converged to a column of the response probability matrix Q (Fig 7B; the left panel is the projection onto the first two dimensions, and the right panel is the projection onto the other two dimensions). This result implies that the network extracts independent sources, rather than principal components, from multiple intermixed inputs.

Fig 7. With lateral inhibition, spike-timing-dependent plasticity (STDP) mimics Bayesian independent component analysis (ICA).

(A) Schematic figure of the model with four sources. (B) Synaptic weight development in input neuron space. Arrows qA to qD are the response probability vectors of the four sources, and PC1 to PC4 are the normalized principal components of the correlation matrix C. Lines represent traces of the average synaptic weight from each input group to the output group that learned the corresponding source during the learning process. (C) Comparison of performance among the ideal observer, Bayesian ICA learning, and STDP learning. (D) LTP/LTD time window of Bayesian ICA learning. (E) Behavior of the log-membrane potentials (colored lines) in the STDP model and the estimated log-posteriors (black lines) in the Bayesian ICA algorithm for the same stimuli. Vertical lines represent the timings of external events. Log-membrane potentials are normalized to align their mean and variance with the corresponding log-posteriors.

https://doi.org/10.1371/journal.pcbi.1004227.g007

We next evaluated the performance of hidden external source detection, especially its tolerance to crosstalk noise. To this end, we compared the performance of the model with that of the Bayesian ICA algorithm, in which the independence of external sources is treated as a prior [46,47]. In the algorithm, the learned mixing matrix may converge to its Bayesian optimal value estimated from a stream of inputs. Although we cannot directly assess the optimality of cross-correlations, if the mixing matrix is accurately estimated, the external activity is also well inferred; thus, we can use the mean cross-correlation as a measure of the optimality of learning. In terms of the discretized input activity X, the external source activity S, and the prior information I, we can express the conditional probability of the estimated response probability matrix (see Bayesian ICA in Methods for details). This means that even if no prior information is given for the estimated matrix itself, the posterior still depends on the prior given for S. If we introduce a prior under which each external source follows an independent Bernoulli process, then stochastic gradient descent on the posterior yields an online update rule for the estimated matrix.

We approximated this Bayesian ICA algorithm by sequentially sampling source activity instead of calculating the integral over all possible combinations in the estimation of the log-posterior of the response probability matrix Q. In this approximation, the learning rule of the estimated response probability matrix obeys Eq (13), where Y is the sampled sequence and pik(Y1:k-1) is the sample-based approximation of pik in the previous equation. This rule has spike-timing and weight dependence similar to those seen in STDP (Fig 7D). Although the performance of STDP is much worse than in the ideal case (when the true Q is given), it is similar to that of the sample-based learning algorithm discussed above (Fig 7C). Therefore, the network detects independent sources if crosstalk noise is not too large. We further studied the responses of the models to the same inputs and found that the logarithm of the average membrane potential well approximates the log-posterior estimated in Bayesian ICA, even in the absence of a stimulus (Fig 7E). This result suggests that, in the STDP model, expected external states are naturally sampled through membrane dynamics generated by the interplay of feedforward and feedback inputs.
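
The sequential-sampling approximation can be sketched as follows. The generative model (noisy-OR Bernoulli), the prior probability, and the final moment-matching update are all assumptions chosen to keep the example short; only the overall structure — sample each source in turn given the sources already sampled, then take an online step on the estimate of Q — reflects the scheme behind Eq (13).

```python
import numpy as np

rng = np.random.default_rng(3)

def log_lik(x, s, Q_hat):
    """Bernoulli log-likelihood of binary inputs x under a noisy-OR
    generative model with binary source state s. The noisy-OR form is an
    assumption made for this sketch, not the paper's exact model."""
    p = 1.0 - np.prod(1.0 - Q_hat * s[None, :], axis=1)   # P(x_i = 1 | s)
    p = np.clip(p, 1e-6, 1.0 - 1e-6)
    return float(np.sum(x * np.log(p) + (1 - x) * np.log(1 - p)))

def ica_step(Q_hat, x, p_prior=0.1, eta=0.01):
    """One online step: sequentially sample each source conditioned on the
    sources already sampled, then nudge Q_hat toward the observed
    co-activation (a moment-matching stand-in for the gradient step)."""
    n_src = Q_hat.shape[1]
    s = np.zeros(n_src)
    for k in range(n_src):                    # sequential sampling of s_k
        lp = np.empty(2)
        for v in (0, 1):
            s[k] = v
            lp[v] = log_lik(x, s, Q_hat) + np.log(p_prior if v else 1 - p_prior)
        s[k] = float(rng.random() < 1.0 / (1.0 + np.exp(lp[0] - lp[1])))
    Q_hat += eta * (x[:, None] - Q_hat) * s[None, :]
    return np.clip(Q_hat, 0.0, 1.0), s
```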

We finally performed the blind source separation task using the same network as shown in Fig 7A. We created "sensory" inputs by mixing four artificially created auditory sequences (Fig 8A and S1 Auditory File). In the auditory cortex, the various frequency components of a sound, particularly the high-frequency components, are represented by specific neurons typically organized in a tonotopic map [48], whereas low-frequency components are expected to be perceived as changes in sound pressure. Furthermore, populations of neurons in the primary auditory cortex are known to synchronize the relative timing of their spikes during auditory stimuli and to provide correlated spike inputs to higher cortical areas, in which the auditory scene is fully analyzed and perceived [49,50]. We modeled these features by assuming that each input neuron i has a preferred frequency fi and that auditory inputs are provided as time-dependent response probabilities determined by aqh(fi) and aql(t), where aqh(f) is the spectrum of auditory source q (left panel of Fig 8C) and aql(t) is the temporal change of the sound pressure (black lines in Fig 8B). In this representation, each sound source is represented by the correlated spikes of a neural population (right panel of Fig 8C). Even if signals have overlapping frequency components {aqh(f)}q, blind separation is possible as long as the {aql(t)}q are independent and have rising profiles sharp enough to cause spike correlations. After learning, the four output neuron groups successfully detected the changes in sound pressure of the four original auditory signals (colored lines in Fig 8B) by correctly identifying the input neurons that encoded each signal. Therefore, STDP rules implemented in a feedforward neural network with lateral inhibition serve as a spike-based solution to the blind source separation (cocktail party) problem.
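
The encoding assumed in this task can be sketched as follows: each neuron's time-dependent response probability combines a spectral weight aqh(fi) with a shared temporal envelope aql(t), so neurons tuned to the same source fire in a correlated manner. All arrays below are hypothetical stand-ins generated for illustration, not the actual auditory data.

```python
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_sources, T = 400, 4, 10_000   # illustrative sizes; T bins of 1 ms

# Hypothetical stand-ins for the quantities named in the text:
# a_h[q, i] ~ spectrum of source q at neuron i's preferred frequency f_i,
# a_l[q, t] ~ temporal change of the sound pressure of source q.
freqs = np.linspace(200.0, 8000.0, n_neurons)          # preferred frequencies f_i
a_h = rng.random((n_sources, n_neurons)) ** 4          # sparse-ish spectra
onsets = (rng.random((n_sources, T)) < 0.002).astype(float)
decay = np.exp(-np.arange(200) / 50.0)                 # sound-pressure envelope
a_l = np.array([np.convolve(o, decay)[:T] for o in onsets])

# Time-dependent response probability per neuron and time bin: a shared
# envelope across neurons tuned to the same source yields correlated spikes.
p = np.clip(0.05 * a_h.T @ a_l, 0.0, 1.0)              # (n_neurons, T)
spikes = rng.random(p.shape) < p
```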

Fig 8. Blind source separation by spike-timing-dependent plasticity (STDP).

(A) Four original auditory signals (top four traces) and one mixed signal (bottom). (B) Amplitudes of the original signals (black lines) and those estimated from output firing rates (colored lines). (C) Spectra of the auditory sources aqh(f) (left) and raster plots of input neuron activity (right). Colors were probabilistically assigned based on the expected sources. All panels were calculated from the 30'00"–30'10" portion of the auditory signals and simulation.

https://doi.org/10.1371/journal.pcbi.1004227.g008

Discussion

By analytically investigating the propagation of input correlations through feedback circuits, we revealed how lateral inhibition influences plasticity at feedforward connections. We showed that a population of neurons can learn multiple signals with different strengths or mixing levels. In addition, we found that for learning from signals corrupted with random noise, the timescale of the input correlations needs to be in the millisecond range, whereas for crosstalk noise a broader timescale is preferable, which may explain why the spike correlations of cortical neurons often exhibit a large jitter (approximately 10 ms) [36,37]. We also investigated the functional roles of STDP at lateral excitatory and inhibitory connections and demonstrated that Hebbian STDP shapes the lateral structure to improve signal detection performance. Our results also suggest that anti-Hebbian plasticity is helpful for learning minor sources, implying that different STDP rules at lateral connections induce different algorithms at the feedforward connections. Furthermore, we derived an STDP-like online learning rule by considering an approximation of Bayesian ICA with sequential sampling. This result suggests that lateral inhibition adjusts the membrane potentials of postsynaptic neurons so that their spiking processes accurately perform sequential sampling. We also demonstrated that this mechanism is applicable to the blind source separation of auditory signals.

Noise characteristics and correlation timescales

Simultaneously recorded neurons in close proximity often show correlated spiking, yet the precision of these correlations varies across brain regions. Neurons in the lateral geniculate nucleus show strong spike correlations [42,51], while correlations in V1 [36,52] or higher visual areas [37] are less precise. Our results indicate the interesting possibility that these differences may reflect the different characteristics of the noise with which the various cortical areas need to contend. At an early stage of sensory processing, the major noise component may be environmentally produced background noise from various sources; thus, precise spike correlation is beneficial at this stage for noise reduction during signal detection and learning (Fig 4F). By contrast, in higher sensory cortices, crosstalk noise accumulated through signal propagation in circuits may form the primary noise source, so less precise spike correlation is preferable (Fig 4G). It would be intriguing to examine whether lower and higher cortical areas similarly change the strength of spike correlations in other sensory modalities.

STDP in E-to-I and I-to-E connections

It is known that both glutamatergic synapses on inhibitory neurons [53,54] and GABAergic synapses on excitatory neurons [55,56] show STDP, and that STDP at E-to-I connections plays an important role in developmental plasticity [57]; however, the detailed properties of these forms of plasticity are still debated [58,59] and are reportedly highly dependent on inhibitory cell type [60], neuromodulators [61], and brain region [58]. We showed that in a feedback circuit, Hebbian inhibitory STDP favored winner-take-all behavior, while anti-Hebbian inhibitory STDP tended to cause winners-share-all behavior (see Fukai and Tanaka 1997 for winner-share-all) among excitatory neurons (Fig 6C). This result indicates that different inhibitory STDP rules impose different functions on excitatory STDP, suggesting that a neural circuit may select an inhibitory STDP rule optimal for a specific purpose or learning strategy, which may differ across regions and be modified by neuromodulators. A recent study showed that inhibitory plasticity can even directly influence plasticity at the excitatory synapses of the postsynaptic neuron [62]. In such cases, algorithm selection would play an even more important role than it does for the standard STDP implemented in our model.

Recently, inhibitory neurons in rodent hippocampal CA1 were shown to display context-dependent changes in activity rate during a spatial learning task, in association with activity rate changes in excitatory cells [63], and the authors suggested STDP at E-to-I synapses as a candidate mechanism for this change. Our results examining E-to-I STDP reproduce this configuration of inhibitory cells modulated by plasticity at feedforward excitatory connections (Fig 5D, S2A and S2B Fig). In our model, although inhibitory neurons receive no direct projections from the input sources, as excitatory neurons learn a specific input source (Fig 5D, left panel), inhibitory neurons acquire feature selectivity through Hebbian STDP at the synaptic connections from those excitatory neurons (Fig 5D, middle panel). Furthermore, our results indicate an important function of these feature-selective inhibitory neurons: once an adequate circuit structure is learned and the inhibitory connections are organized into a feature-selective pattern, the network can still robustly detect signals even if the input becomes noisy or faint (Fig 5E and 5F). This robustness would be useful for spatial learning, as contextual information is often uncertain.

STDP and Bayesian ICA

Our results indicate that STDP in a lateral inhibition circuit mimics Bayesian ICA [46,47]. First, output neurons were able to detect hidden external sources without capturing the principal components (Fig 7B). Previous results suggest that, for a single output neuron, an additional homeostatic competition mechanism is necessary to detect an independent component [7,22]. In addition, when information is coded by firing rate, homeostatic plasticity is critically important, because STDP by itself does not mimic Bienenstock-Cooper-Munro learning [18]. In our model, however, information was encoded by correlation, and mutual inhibition naturally induced intercellular competition, so that intracellular competition through homeostatic plasticity was unnecessary. Moreover, our analytical results suggest the reason that independent sources are detected. To perform principal component analysis with neural units, the synaptic weight change needs to follow a rule of the form ΔW ∝ y xT - LT[y yT] W, where LT[·] denotes the lower-triangular part of a matrix [64,65]. This LT operation protects the principal components from modification by higher-order components through the lateral interaction. In our model, however, because all output neurons receive the same number of inhibitory inputs (Eq (2)), all neurons are decorrelated from one another and develop into independent components.

Recently, it was shown that STDP can perform Bayesian optimal learning [66,67]. In the model used by those authors, the synaptic weight matrix is treated as a hyperparameter and estimated by maximum likelihood from the input spike trains. By contrast, in the Bayesian ICA framework, the mixing matrix (corresponding to the synaptic weight matrix) is treated as a probabilistic variable. Under this framework, deriving the stochastic gradient descent rule requires an integral over all possible source activities in the past; however, as shown in Fig 7C, the stochastic learning was performed well by employing an approximation with sequential sampling. Moreover, we naturally derived an adequate LTP time window from the response kernel of input neurons to external events (Fig 7D). We also found that STDP self-organized a lateral circuit structure that performed better than random global inhibition (Fig 5E and 5F). Mathematically, to sample from a probability distribution, we first need to calculate the occurrence probability of each state; in a neural model, however, the membrane potentials of output neurons approximately represent these occurrence probabilities through their dynamics. In machine learning, integration over possible source activities is often approximated using Markov chain Monte Carlo (MCMC) sampling [68]. Interestingly, recent studies showed that a recurrent network can perform MCMC sampling [69,70], suggesting that our network might perform more accurate sampling in the presence of recurrent excitatory connections.

Suboptimality of STDP

Previous theoretical results suggest that STDP can modulate synaptic weights in a way that optimizes information transmission between pre- and postsynaptic neurons [71,72]. In the Bayesian ICA framework, blind source separation can be formulated as an optimization problem, but in this case the problem itself is ill-defined because optimality does not guarantee the true solution. In addition, local minima are often unavoidable for online learning rules. Nevertheless, the problems faced by the brain are often ill-defined, and suboptimality is inevitable [73]. Because we performed both nonlinear-dynamics-based and machine-learning-based analyses, we can offer some insights into the origins of local minima in stochastic gradient descent learning. In the initial state, synaptic weights are typically homogeneously distributed, and this state is often locally stable. As a result, the homogeneous stable point is more likely to be selected during learning (Fig 2C and 2D) than the non-homogeneous, more desirable points; introducing additional noise, however, may change this situation. Indeed, in Fig 4B and Fig 7C, the performance of the model was improved by adding a small amount of noise to the input activities, although the improvement was not significant. Because a large amount of noise is harmful to computation and stable learning, the benefit of noise addition is highly limited, and the brain may recruit other mechanisms for near-optimal learning.

Neural mechanism of blind source separation

Humans and nonhuman animals can detect a specific auditory sequence within a mixed, noisy auditory stimulus, a phenomenon often called the cocktail party effect. The mechanism underlying the cocktail party effect remains elusive [26,28,29], although several solutions have been proposed [74,75]. An effective solution to this problem is ICA [76–78], and neural implementations of the algorithm have been studied by several authors [14,18,79,80]. Our study extended these results through a rigorous analytical treatment of biologically plausible STDP learning in spiking neurons, and our analyses enabled us to discover interesting functions of correlation coding. Moreover, by explicitly modeling inhibitory neurons, we found that STDP at E-to-I and I-to-E connections cooperatively organizes a lateral structure suitable for blind source separation. In addition, we successfully extended a previous model for the formation of static visual receptive fields [18,19] to a more dynamic model of an auditory blind source separation task. In realistic auditory scene analysis, the frequency spectrum of acoustic signals is first analyzed in the cochlea, where each frequency component is a mixture of sound components from independent sources. Components belonging to the same source may then be separated and integrated by downstream auditory neurons for the perception of the original signal. These frequency components can be considered the mixed signals of an ICA problem [81]; thus, even if signals are mixed in frequency space, blind separation is still achievable provided the signal amplitudes are temporally independent. In the neural implementation of the problem, if two frequencies are commonly activated by the same signal, the neurons representing those frequencies show spike correlations in the presence of that signal; thus, the learning process is naturally achieved by STDP. These results indicate an active role for spike correlation and STDP in efficient biological learning.

Methods

Model

Neural dynamics.

Based on a previous study [7], we constructed a network model with one external layer and three layers of neurons (Fig 1A). The first layer is the external layer, which corresponds to external stimuli or the sensory system's response to those stimuli. For simplicity, we approximated the activity of external sources by a Poisson process with the constant rate νSo; the activity sμ(t) of external source μ at time t is thus a Poisson spike train with rate νSo (see Table 1 for the list of variables). Neurons in the input layer fire spikes in response to activity in the external layer. Assuming a rate-modulated Poisson process, the spiking activity of input neuron i follows the instantaneous firing rate given in Eq (14), where rio is the spontaneous firing rate, qiμ is the response probability for the hidden external source μ, and φ(t) is the response kernel for each external event. In most theoretical studies, cross-correlations are modeled as an exponential decay or a delta function [5,38], but here we used a response kernel that produces broader correlations (Fig 4A right), because the correlations actually observed in the cortex are usually not sharply peaked [36,37]. For instance, for an exponential kernel, correlations show a peaked distribution even if the timescale parameter θt is several milliseconds (S3A Fig). Because of the common inputs from the external layer, input neurons show highly correlated activity, which enables population coding of the hidden structure. Although we explicitly assumed the presence of the external layer here, the analytical results can also be applied to an arbitrary realization of a spatiotemporal correlation.
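
A minimal sketch of this input model, with the rate of Eq (14) built from a gamma response kernel (shape kg = 3): the external source fires as a Poisson process, each event adds a kernel-shaped bump scaled by the response probability q to the input neuron's rate, and spikes are then drawn from the resulting inhomogeneous Poisson process. The rates and duration are illustrative values.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(5)
dt, T = 0.1, 50_000                  # step (ms) and number of bins
theta_t, k_g = 2.0, 3                # kernel timescale and shape (text values)
r_o, nu_S, q = 0.002, 0.005, 0.6     # rates in spikes/ms (2 Hz, 5 Hz); resp. prob.

t = np.arange(0.0, 50.0, dt)
phi = t ** (k_g - 1) * np.exp(-t / theta_t) / (factorial(k_g - 1) * theta_t ** k_g)

events = (rng.random(T) < nu_S * dt).astype(float)    # external source s_mu(t)
drive = np.convolve(events, phi, mode="full")[:T]     # sum of kernel bumps
rate = r_o + q * drive                                # Eq (14)-style rate
spikes = rng.random(T) < rate * dt                    # rate-modulated Poisson
```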

Output neurons are modeled with the Poisson neuron model [5,38,45], in which the membrane potential of neuron j at time t is given by Eq (15), where wjiX and wjkZ scale the EPSPs/IPSPs of input currents from input neuron xi and lateral neuron zk, respectively; εX and εZ are the corresponding EPSP/IPSP convolution kernels; and dijX and djkZ are the synaptic delays of the feedforward excitatory and feedback inhibitory connections. For feedforward excitatory connections, the synaptic delay dijX is given by the sum of the axonal delay dija and the dendritic delay dijd, whereas for inhibitory connections we assume, for simplicity, that the delay is purely axonal. The output neuron fires as a Poisson process with instantaneous rate gE(uj(t)). Similarly, inhibitory neurons in the lateral layer show Poisson firing based on the membrane potentials {uIk}k = 1,…,N defined in Eq (16), with EPSP amplitudes wYkj of the lateral connections, convolution kernel εY, and synaptic delays dYkj. The synaptic delay of the excitatory lateral connection is also approximated as purely axonal. The spiking activity of the inhibitory neurons is given by a Poisson process with rate gI(uIk(t)). For analytical tractability, we use the linear response curves gE(u) = u and gI(u) = u.
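
A sketch of the membrane potential computation of Eq (15) for a single output neuron, assuming a double-exponential PSP shape for the kernels (an illustrative choice for εX and εZ, with made-up time constants): excitatory and inhibitory spike trains are filtered by the PSP kernel, delayed per synapse, weighted, and summed.

```python
import numpy as np

def psp_kernel(t, tau_rise=1.0, tau_decay=5.0):
    """Double-exponential PSP curve normalized to unit peak (an assumed
    shape for eps_X and eps_Z; time constants are illustrative)."""
    k = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
    return np.where(t >= 0, k / k.max(), 0.0)

def membrane_potential(xs, zs, w_X, w_Z, d_X, d_Z, dt=0.1):
    """u_j(t) for one output neuron in the spirit of Eq (15): delayed,
    PSP-filtered excitation minus delayed, PSP-filtered inhibition.
    xs: (n_in, T) and zs: (n_inh, T) binary spike arrays;
    w_X, w_Z: per-synapse weights; d_X, d_Z: per-synapse delays (ms)."""
    eps = psp_kernel(np.arange(0.0, 50.0, dt))
    T = xs.shape[1]
    u = np.zeros(T)
    for w, x, d in zip(w_X, xs, d_X):          # excitatory terms
        shift = int(round(d / dt))
        u[shift:] += w * np.convolve(x, eps)[: T - shift]
    for w, z, d in zip(w_Z, zs, d_Z):          # inhibitory terms
        shift = int(round(d / dt))
        u[shift:] -= w * np.convolve(z, eps)[: T - shift]
    return u
```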

Synaptic plasticity.

For most of this study, we focused on synaptic plasticity at the feedforward connections $W^X$, with fixed lateral synaptic weights $W^Y$ and $W^Z$. When the spike timings at the cell bodies of the pre- and postsynaptic neurons are $t_{\mathrm{pre}}$ and $t_{\mathrm{post}}$, the spike timings at the synaptic site are $t^s_{\mathrm{pre}} = t_{\mathrm{pre}} + d^a_{ji}$ and $t^s_{\mathrm{post}} = t_{\mathrm{post}} + d^d_{ji}$, with axonal and dendritic delays $d^a_{ji}$ and $d^d_{ji}$. For every pair of $t^s_{\mathrm{pre}}$ and $t^s_{\mathrm{post}}$, the synaptic weight change is given by

(17)

For the synaptic weight dependence of STDP, we considered a pairwise log-STDP [31] in which LTP/LTD follows Eq (18), where $\xi$ is a Gaussian random variable. The logarithmic weight dependence closely replicates experimentally observed synaptic weight distributions [32,33] and has been suggested to play an important role in memory modulation [82]. The analytical treatment below is applicable to other types of synaptic weight dependence; however, under additive STDP (i.e., $f_p(w) = C_p$ and $f_d(w) = C_d$), the mean-field equation typically has no stable fixed point, and under multiplicative STDP, in which LTD depends linearly rather than logarithmically on the synaptic weight, strong correlation is often necessary to induce salient LTP [31]. The coefficients $C_p = 1$ and $C_d$ are chosen so that total LTP and LTD are balanced around the reference synaptic weight.
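Because the exact functional forms of Eq (18) are not legible in this text, the sketch below uses illustrative log-STDP-style weight dependencies in the spirit of [31]: LTP decays for strong synapses while LTD grows only logarithmically above a reference weight $w_o$. The Gaussian noise term $\xi$ and the paper's exact window parameters are omitted; all constants are placeholders.

```python
import numpy as np

def f_p(w, w0=1.0, Cp=1.0):
    """LTP amplitude: decays for strong synapses (illustrative form)."""
    return Cp * np.exp(-w / w0)

def f_d(w, w0=1.0, Cd=0.5, alpha=5.0):
    """LTD amplitude: linear below w0, logarithmic above (illustrative)."""
    w = np.asarray(w, dtype=float)
    return Cd * np.where(w <= w0, w / w0,
                         1.0 + np.log1p(alpha * (w / w0 - 1.0)) / alpha)

def stdp_delta(w, dt_spike, eta=0.01, tau_p=0.017, tau_d=0.034):
    """Pairwise weight change for dt_spike = t_post - t_pre (seconds)."""
    if dt_spike >= 0.0:                                 # pre before post: LTP
        return eta * f_p(w) * np.exp(-dt_spike / tau_p)
    return -eta * f_d(w) * np.exp(dt_spike / tau_d)     # post before pre: LTD
```

The qualitative point is that LTD outgrows LTP above $w_o$ and vice versa below it, so weights are attracted toward a long-tailed distribution around the reference weight.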

The STDP rules at E-to-I and I-to-E connections are defined similarly. For simplicity, we assume that their synaptic delays are solely axonal and that the change in synaptic weight does not depend on the current weight. To maintain the balance between LTP and LTD, the LTP and LTD coefficients of the E-to-I connections are chosen accordingly, and likewise for the I-to-E connections. We also set the constant (initial) synaptic weights to $w^Y_o = 50.0$ and $w^Z_o = 25.0$, and bounded the synaptic weights at $w^Y_{\max} = 100.0$ and $w^Z_{\max} = 50.0$. With this normalization, the total lateral inhibition initially takes the same value as in the non-plastic model. The time windows are defined as $\tau^Y_p = \tau^Y_d = \tau^Z_p = \tau^Z_d = 20.0$ ms.

In Fig 6C, anti-Hebbian STDP was calculated analogously for $Q = Y$ or $Z$. Similarly, the correlation-detector type of STDP in S2 Fig was defined in the corresponding way, and the anti-correlation detector was obtained by changing the sign of the above equations.

Leaky Integrate-and-Fire (LIF) model.

In the main text, we performed all simulations with the linear Poisson model for analytical purposes, but we also confirmed the results with a conductance-based LIF model (S1 Fig). In the LIF model, the membrane potentials of excitatory neurons follow $\tau^E_m \dot{u}_j = -(u_j - V_L) - g^{EE}_j (u_j - V_E) - g^{EI}_j (u_j - V_I)$, where $g^{EE}_j$ and $g^{EI}_j$ are excitatory and inhibitory conductances driven by the spike timings $t^s_i$ and $t^s_k$ of input neuron $i$ and lateral neuron $k$, respectively. An analogous equation with membrane time constant $\tau^I_m$ holds for inhibitory neurons in the lateral layer.

In addition to the excitatory inputs from the output layer, we added random inhibitory inputs to the inhibitory neurons as Poisson processes with a fixed firing rate $r^{II}_o$. A neuron fires when its membrane potential exceeds the threshold $V_{th}$ and immediately enters a refractory period, during which the membrane potential stays at $V_{ref}$ for 1 ms after spiking. Plasticity was implemented for $w^X_{ji}$ in the same manner as in the Poisson model. Parameters were chosen as $V_L = -70.0$, $V_E = 0.0$, $V_I = -80.0$, $V_{ref} = -60.0$, $V_{th} = -50.0$ mV; $\tau^E_m = 20.0$, $\tau^I_m = 10.0$, $\tau^{EE}_s = 5.0$, $\tau^{EI}_s = 2.5$, $\tau^{IE}_s = 4.0$, $\tau^{II}_s = 5.0$ ms; $w^X_o = 0.001$, $w^I_o = 0.008$, $w^L_o = 1.0$, $r^{II}_o = 1000.0$ Hz, $w^{II}_o = 0.005$, $C_d = 1.8 C_p \tau^X_p / \tau^X_d$, and $\alpha = 50.0$. All other parameters were the same as in the Poisson model (Table 2).
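A single-neuron sketch of the conductance-based LIF dynamics described above, using the listed reversal potentials, time constants, and refractory handling. The pooled input rates and conductance jump sizes are illustrative choices (not the table values), picked so that the isolated neuron fires.

```python
import numpy as np

dt, T = 0.05, 500.0                      # ms
VL, VE, VI = -70.0, 0.0, -80.0           # reversal potentials (mV)
Vth, Vref = -50.0, -60.0                 # threshold / reset (mV)
tau_m, ts_ee, ts_ei = 20.0, 5.0, 2.5     # time constants (ms)
n = int(T / dt)

rng = np.random.default_rng(1)
exc = rng.random(n) < 2.0 * dt           # pooled excitatory input, 2 events/ms
inh = rng.random(n) < 0.5 * dt           # pooled inhibitory input, 0.5 events/ms

V, gE, gI, refr, spikes = VL, 0.0, 0.0, 0.0, []
for t in range(n):
    gE += -gE * dt / ts_ee + 0.05 * exc[t]   # conductance jumps on input spikes
    gI += -gI * dt / ts_ei + 0.02 * inh[t]
    if refr > 0.0:                           # 1 ms refractory clamp at Vref
        refr -= dt
        V = Vref
        continue
    V += (-(V - VL) - gE * (V - VE) - gI * (V - VI)) * dt / tau_m
    if V >= Vth:
        spikes.append(t * dt)
        V, refr = Vref, 1.0
print(f"{len(spikes) / (T / 1000.0):.1f} Hz output rate")
```

With these illustrative rates, the mean excitatory conductance is large enough that the effective equilibrium potential sits above threshold, reproducing the high-conductance regime discussed below.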

In the LIF model, synaptic weights develop in a manner similar to that in the linear Poisson model, although the change occurs more rapidly (Fig 1B, S1A Fig). Both cross-correlation and mutual information behave as they do in the Poisson model, but the performance is slightly better, possibly because the subthreshold dynamics are deterministic (Fig 1D and 1E, S1B and S1C Fig). However, the membrane potentials respond differently to correlation events (S1D Fig): output neurons are constantly in high-conductance states, so correlation events immediately cause spikes; as a result, membrane potentials drop to $V_{ref}$, and the average potential decreases. Interestingly, after neuron groups come to detect different signals, a preferred signal initially causes hyperpolarization due to firing, whereas a non-preferred signal subsequently causes hyperpolarization due to lateral inhibition (Fig 1D right). The PSTH of firing shows behavior similar to that of the membrane potential in the Poisson model (Fig 1C and S1E Fig). This is natural because, in the linear Poisson model, the firing rate depends linearly on the membrane potential, whereas in the LIF model the relationship between the average membrane potential and the firing rate is highly nonlinear.

Bayesian ICA.

If discretized with bin size $\Delta t$, the time series of the external source activity is written as $S = \{s^\mu_k\}$, and the input activity becomes $X = \{x^k_i\}$. Therefore, for prior information $I$, the joint probability of the sources $S$ and the estimated response probability matrix $Q$ can be written down.

Therefore, by considering the marginal probability, we obtain

(19)

By considering maximum likelihood estimation for a given prior $P[S|I]$, $Q$ can be estimated optimally [46,47]. In our problem setting, we assume that the external signals are independent and that input neurons respond to the signals through a Bernoulli process. The log-likelihood then becomes

(20)

By taking the gradient for gradient descent, we obtain the update rule.

Evaluating this update requires an integral over all possible combinations of past source states; however, such a calculation is computationally difficult and incompatible with neural computation. Instead, we used sequential sampling of the source states $y_k$, which are randomly drawn from Eq (21). Note that in the above equations, $x_k$ is given as a fixed value, not a random variable. Under this sample-based approximation, the stochastic gradient descent follows

(22)

For Fig 7C, we discretized the activity of the hidden sources and input neurons with 5 ms bins and performed learning with a learning rate $\eta_{SGD} = 0.001$. Cross-correlation was evaluated using the sample sequence $Y$. For the ideal case, we performed sequential sampling from the true response probability matrix $Q$.
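Since the exact likelihood (Eq 20) and sampling distribution (Eq 21) are not reproduced in this text, the sketch below assumes a noisy-OR Bernoulli observation model, $P[x_i = 1 \mid y] = 1 - \prod_\mu (1 - Q_{i\mu} y^\mu)$, and realizes the sequential-sampling-plus-stochastic-gradient scheme under that assumption. Dimensions, the source prior, and the generative parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, K = 3, 20, 5000            # sources, input neurons, time bins
eta, prior = 0.001, 0.1          # SGD rate (as in Fig 7C) and source prior

# Ground truth: independent binary sources, noisy-OR Bernoulli inputs
Q_true = rng.random((N, M)) * 0.3
S = (rng.random((K, M)) < prior).astype(float)
X = (rng.random((K, N)) <
     1 - np.prod(1 - Q_true[None] * S[:, None, :], axis=2)).astype(float)

def loglik(x, y, Q):
    p = np.clip(1 - np.prod(1 - Q * y, axis=1), 1e-6, 1 - 1e-6)
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

Q = np.full((N, M), 0.05)        # estimated response probabilities
for k in range(K):
    y = np.zeros(M)
    for mu in range(M):          # sequential (Gibbs-like) sampling of sources
        y[mu] = 1.0
        l1 = loglik(X[k], y, Q) + np.log(prior)
        y[mu] = 0.0
        l0 = loglik(X[k], y, Q) + np.log(1 - prior)
        y[mu] = float(rng.random() < 1.0 / (1.0 + np.exp(l0 - l1)))
    # Stochastic gradient ascent of the log-likelihood given the sample y
    prod = np.prod(1 - Q * y, axis=1)
    p = np.clip(1 - prod, 1e-6, 1 - 1e-6)
    dp_dQ = y[None, :] * prod[:, None] / (1 - Q * y)
    Q += eta * (X[k] / p - (1 - X[k]) / (1 - p))[:, None] * dp_dQ
    Q = np.clip(Q, 1e-4, 0.999)
```

Each time bin thus contributes one sampled source configuration and one small gradient step, mirroring the online, sample-based approximation described in the text.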

If $y^\mu_{k-k'} = 1$ and $y^\mu_{k-k''} = 0$ for all other nearby $k'' (\neq k')$, and if $q_{i\nu} = 0$ for all $\nu (\neq \mu)$, then the LTP at the connection $q_{i\mu}$ caused by an output spike $y^\mu_{k-k'} = 1$ for $x^k_i = 1$ is written as

(23)

In the absence of an input spike ($x^k_i = 0$), an output spike $y^\mu_{k-k'} = 1$ causes a net LTD. Therefore, this learning rule has weight and temporal dependencies similar to those of STDP. In Fig 7D, we plotted the resulting LTP and LTD components for different response probabilities (= 0.1, 0.3, 0.5).

Blind source separation.

In the blind source separation task, we created the original sources by composing high-frequency and low-frequency components separately. First, the high-frequency spectrum of signal $q$ was defined as a sum over the characteristic frequencies $f^{h,i}_q$ of signal $q$ and their harmonics $k f^{h,i}_q$, with a standard deviation set around each peak. The low-frequency components were given directly as an exponential oscillation with characteristic frequency $f^{l,i}_q$ and delay $\delta^{l,i}_q$. Combining these two components gives the amplitude of the mixed sound.

Summation over frequency $f$ was performed using 400 representative values that correspond to the tuned frequencies of the input neurons.

In the neural implementation, input neurons were stimulated with a response probability scaled by $q_o = 0.05$.

In the simulated example, the high-frequency components were defined by $f^h_q$ = {{523.3, 784.0}, {587.4, 880.0}, {650.0, 830.6}, {698.5, 932.4}}, $a^h_q$ = {{0.6, 0.4}, {0.3, 0.7}, {0.5, 0.5}, {0.9, 0.3}}, $b^h_q$ = {{1.0, 0.5, 0.2, 0.1}, {1.0, 0.5, 0.3, 0.2}, {1.0, 0.1, 1.0, 0.8}, {1.0, 0.8, 0.1, 0.1}}, and $\sigma^{h,f}_o$ = 20 Hz, where each sub-list corresponds to one of the four sources. Similarly, for the low-frequency components, we used $f^l_q$ = {{0.4, 5.0, 10.0, 40.0, 88.0}, {0.6, 6.0, 8.0, 42.0, 86.0}, {0.2, 4.0, 7.5, 44.0, 84.0}, {0.3, 6.0, 7.0, 46.0, 82.0}}, $a^l_q$ = {{0.3, 0.4, 0.2, 0.5, 0.5}, {0.25, 0.5, 0.2, 0.5, 0.5}, {0.24, 0.3, 0.4, 0.5, 0.5}, {0.61, 0.2, 0.2, 0.5, 0.5}}, $\delta^l_q$ = {{1.0, 0.25, 0.65, 0.17, 0.01}, {3.0, 0.12, 0.32, 0.13, 0.02}, {7.8, 0.55, 0.40, 0.11, 0.03}, {4.5, 0.22, 0.71, 0.07, 0.05}}, $\beta_l$ = 5.0, and $Z_l$ = 27.24. We chose $f_{\min}$ = 500 Hz and $f_{\max}$ = 4,500 Hz, and $\delta^f_q$ was randomly selected from 0 to $1/f_{\min}$. Fig 8A was generated by performing Fourier transformations with 25 ms sliding windows every 2.5 ms.

Details of the simulation.

Simulations were performed using the Runge-Kutta method with a 0.05 ms time step. Initial synaptic weights were randomly chosen around $w^Q_o$ for $Q = X, Y, Z$ using a Gaussian random variable $\xi$. Similarly, synaptic delays were set using a random variable $\xi$ drawn uniformly from [0,1].

Analytical consideration of synaptic weight dynamics

Correlation among input neurons.

Because input neurons receive common inputs from external sources, we define the cross-correlation among input neurons as $C_{il}(s) \equiv \langle x_i(t) x_l(t-s) \rangle - \langle x_i(t) \rangle \langle x_l(t) \rangle$, which satisfies Eq (24); under the conditions stated there, $C_{il}(s)$ reduces to a closed-form expression determined by the response kernel.
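For reference, an empirical estimator of $C_{il}(s)$ from binned 0/1 spike trains could look as follows; the bin width and lag range are left to the caller, and the names are illustrative.

```python
import numpy as np

def cross_correlation(xi, xl, max_lag):
    """C_il(s) = <x_i(t) x_l(t-s)> - <x_i><x_l> from 0/1 binned spike trains."""
    xi, xl = np.asarray(xi, float), np.asarray(xl, float)
    base = xi.mean() * xl.mean()          # product of marginal rates per bin
    lags = np.arange(-max_lag, max_lag + 1)
    C = np.empty(lags.size)
    for n, s in enumerate(lags):
        if s >= 0:                        # pair x_i(t) with x_l(t - s)
            C[n] = np.mean(xi[s:] * xl[:xl.size - s]) - base
        else:
            C[n] = np.mean(xi[:s] * xl[-s:]) - base
    return lags, C
```

Applied to two input neurons driven by the same external source, this estimator recovers the broad correlation profile that the response kernel was chosen to produce.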

Average synaptic weight velocity.

The synaptic weight dynamics defined above can be rewritten as Eq (25). By taking an average over a short period of time and using the stochasticity of the Poisson process, the expected synaptic weight change follows

Therefore, by calculating the cross-correlation between pre-spikes $x_i$ and post-spikes $y_j$, the synaptic weight dynamics can be estimated analytically. Because the spike probability depends linearly on the membrane potential in our model, the cross-correlation follows

Using the definition of the cross-correlation among input neurons given above, the first term is written as

(26)

This result is consistent with that in previous studies [5,38,45]. The analysis can be extended to the cross-correlation between an input neuron and a lateral inhibitory neuron as

(27)

In theory, the expansion over lateral connections should be performed infinitely many times to obtain the exact solution, but at each expansion the delay caused by the synaptic delays $d^Z + d^Y$ and the EPSP/IPSP rise times accumulates, so the effect on the correlation rapidly becomes small, especially when the original input cross-correlation $C(t)$ is narrow. Even if $C(t)$ is broad, the effect on learning is bounded by the STDP time window. Therefore, higher-order terms practically influence the weight dynamics only through firing rates, and by applying this approximation, the last term can be obtained. In general, $\nu^Z_n$ is not analytically calculable, but it can be estimated by considering the balanced condition, which gives the second term. If we then denote Eq (28), the average synaptic weight dynamics satisfy Eq (29). The first two terms are Hebbian terms that depend on the correlation through $\Gamma^X_1$ and $\Gamma^X_2$, whereas the remaining terms are homeostatic. In all terms, the synaptic weight dependence is caused primarily by $w^X_{ji}$ and not by other synapses.

By inserting the explicit representation of the correlation into the equation above, $\Gamma^X_1$ and $\Gamma^X_2$ can be rewritten as Eq (30), where $t'' = r + q + r' - s + 2d^X_d + d^Z + d^Y$. Note that $G^X_1$ and $G^X_2$ do not depend on any neuron indices except through their synaptic weight dependence, and so the two values can be considered basic constants that determine how correlation shapes learning.

If we ignore the homeostatic terms, the synaptic weight dynamics can be written in matrix form, where the element-wise (Hadamard) product is defined as $(A \cdot B)_{ij} = A_{ij} B_{ij}$. In particular, if we approximate $G^X_1$ and $G^X_2$ by their values at the reference weight, $g^X_1 \equiv G^X_1(w^X_o)$ and $g^X_2 \equiv G^X_2(w^X_o)$ (or if the weight dependence is negligible, as in additive STDP), the dynamics simplify further.

The correlation kernel $\chi^X_1$ was derived from Eq (31); the second correlation kernel $\chi^X_2$ was calculated in a similar way.

Mean-field approximation of a two-source model.

If the correlation structure $C(s)$ is simply organized, further analytical treatment is possible. In the two-source model shown in Fig 2A, lateral connections are structured non-reciprocally, and the EPSP/IPSP sizes are constant. The synaptic weight matrices are written as

(32)

Therefore, the original $L \times M$ differential equations can be reduced to the $2 \times 2$ equations of representative neurons:

(33)

The firing rates of inhibitory neurons can be approximated as

(34)

Therefore, by solving these simultaneous equations for $\nu^Z_1$ and $\nu^Z_2$, we obtain their steady-state values.

This analytical approach is applicable only when the synaptic weight change is sufficiently slow relative to the neural dynamics. Moreover, because we ignored the variance of the synaptic weights, its numerical accuracy is limited.

Analytic approach for STDP in lateral and inhibitory connections.

Using calculations similar to those above, the synaptic weight development of the lateral connections is given by Eq (35). The meaning of these equations becomes clear when the correlation propagation is summarized in diagrams (S2D i–iii Fig). In the diagrams, blue wavy lines represent intrinsic correlations, and arrows represent synaptic connections. To estimate how a blue correlation influences STDP at a red arrow, we need to determine all the major trajectories along which the correlation reaches the pre- and postsynaptic neurons. In the linear Poisson framework, the propagation of a correlation along a given trajectory is calculated simply by integrals, as above. From this diagram, we can safely assume that $g^Y_2$ and $g^Y_3$ are negligible compared with $g^Y_1$, because trajectories (ii) and (iii) involve secondary correlations and additional synaptic delays. Under this approximation, with one additional simplifying assumption, we obtain the following.

Therefore, $(1, -1, -1, 1)$ is an eigenvector of the transition matrix, with eigenvalue $A_L - B_L$. Because the component along this eigenvector grows at a rate proportional to $g^Y_1$, when $g^Y_1$ is positive, the E-to-I connections are more likely to be structured such that the inhibitory neurons become feature selective; if that value is negative, such a structure may not be obtained. Note that $(1, -1, -1, 1)$ is not the principal eigenvector in this simple analysis, because the matrix has eigenvalues $\{A_L + B_L, A_L + B_L, A_L - B_L, A_L - B_L\}$ with corresponding eigenvectors $(1, 1, 0, 0)$, $(0, 0, 1, 1)$, $(1, -1, 0, 0)$, and $(0, 0, 1, -1)$.
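This eigensystem can be verified numerically. The sketch below assumes the transition matrix has the block structure implied by the quoted eigenvectors (an assumption consistent with, but not stated in, the text), with illustrative values for $A_L$ and $B_L$.

```python
import numpy as np

A, B = 0.8, 0.3                        # illustrative values of A_L, B_L
M = np.array([[A, B, 0, 0],            # block structure implied by the
              [B, A, 0, 0],            # quoted eigenvectors
              [0, 0, A, B],
              [0, 0, B, A]])

v = np.array([1, -1, -1, 1])
print(M @ v)                           # equals (A - B) * v
print(np.sort(np.linalg.eigvals(M)))   # {A-B, A-B, A+B, A+B}
```

Since $A_L + B_L > A_L - B_L$ for positive $B_L$, the symmetric modes $(1,1,0,0)$ and $(0,0,1,1)$ dominate, which is why $(1,-1,-1,1)$ is not principal.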

Similarly, for the inhibitory connections,

(36)

We approximated the dynamics with only two terms because the third term is negligible (S2D iv–vi Fig). If we additionally assume a simplifying condition and $g^Z_2 = 0$, the synaptic weight change takes a simple form: if $g^Z_1$ is positive, reciprocal connections are enhanced (i.e., inhibitory connections onto neurons coding a similar feature are strengthened), whereas for negative $g^Z_1$, inhibitory connections develop non-reciprocally (i.e., lateral connections act as mutual inhibition between output excitatory neuron groups).

We have restricted our consideration to Hebbian STDP, but the properties of STDP at E-to-I and I-to-E connections are still debated [58,59]. Although it is difficult to study all combinations of STDP rules, we can still provide analytical insight by investigating the behaviors of $g^Y_1$ and $g^Z_1$. S2E Fig shows the behaviors of four different types of STDP, indicating that the anti-correlation-detector type of E-to-I STDP [53] tends to produce non-feature-selective lateral connections, and that under the anti-coincidence-detector type of I-to-E STDP [55], mutual-inhibition structures would be preferred. However, the implications of our analytical method are limited, and further study will be necessary to fully understand the functions of the various types of STDP.

Evaluation of the performance

Cross-correlation.

We evaluated the performance by measuring the mean cross-correlation between the external sources and the population activity of the output neurons. For a time bin $\Delta t = 10$ ms, the binned activity of source $\mu$ is defined as $s^\mu_k$, and, similarly, the population activity of output neuron group $\nu$ is $y^\nu_k$, where $\Omega^Y_\nu$ is the set of output neurons coding source $\nu$. The cross-correlation between these binned activities is defined in the standard way from their means and variances, and we used $T_c = 10$ ms for the analysis. Because the correspondence between sources and output groups is arbitrary, the learned correlation is evaluated as the maximum over all $p!$ assignments $\Psi$ between sources and output groups; for example, when $p = 2$, both possible assignments are compared. Although, in reality, supervised or reinforcement learning would be necessary to perform this readout, for simplicity we did not implement readout neurons explicitly. In Fig 2F, we plotted the correlation for the minor source B.

For the models with randomly connected lateral inhibition and (e+i) STDP, we defined output neuron $j$ as belonging to $\Omega^Y_\mu$ if it satisfied a selectivity criterion with threshold $\alpha_{th} = 1.5$, and the cross-correlation was calculated based on the resulting $\Omega^Y_\mu$.
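A sketch of the permutation-maximized readout correlation described above: since the source/output-group correspondence is arbitrary, every assignment is scored and the best is kept. Function and variable names are illustrative.

```python
import numpy as np
from itertools import permutations

def readout_correlation(S, Y):
    """Mean source/output correlation, maximized over group assignments.

    S, Y : (p, n_bins) arrays of binned source and output-group activities.
    """
    p = S.shape[0]

    def corr(a, b):
        a, b = a - a.mean(), b - b.mean()
        return (a @ b) / np.sqrt((a @ a) * (b @ b))

    # Score every assignment psi of output groups to sources; keep the best
    return max(np.mean([corr(S[mu], Y[psi[mu]]) for mu in range(p)])
               for psi in permutations(range(p)))
```

For $p = 2$ this reduces to comparing the identity assignment with the swapped one, as described in the text.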

Mutual information.

Based on the discretized activities $s^\mu_k$ of the hidden external sources and $y^\nu_k$ of the output neurons, we defined binary state variables. The joint states at time $k$ are then given by the binary patterns across sources and output groups, and the probability that the external state takes one particular value is computed from its empirical frequency, where the indicator $[X]$ takes the value 1 if the statement $X$ is true and 0 otherwise. Mutual information is then defined in the usual way from the joint and marginal state probabilities.
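A sketch of this mutual-information estimate: binary activity patterns are encoded as integer states, and $I$ is computed from the empirical joint and marginal state probabilities. This is an illustrative implementation, not the authors' code.

```python
import numpy as np

def mutual_information(S, Y):
    """I(S;Y) in bits from binary state arrays.

    S, Y : (n_bins, p) binary arrays of source and output-group states.
    """
    # Encode each p-dimensional binary pattern as a single integer state
    sid = S.astype(int) @ (2 ** np.arange(S.shape[1]))
    yid = Y.astype(int) @ (2 ** np.arange(Y.shape[1]))
    I = 0.0
    for s in np.unique(sid):
        ps = np.mean(sid == s)                   # marginal P[state = s]
        for y in np.unique(yid):
            py = np.mean(yid == y)               # marginal P[state = y]
            pj = np.mean((sid == s) & (yid == y))  # joint probability
            if pj > 0:
                I += pj * np.log2(pj / (ps * py))
    return I
```

Perfect detection of independent sources drives the joint distribution toward the diagonal, maximizing this quantity.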

Supporting Information

S1 Fig. Simulations with the leaky integrate-and-fire model.

(A) Synaptic weight development at the feedforward connections. (B) Cross-correlation and mutual information calculated for various delays; both values were averaged over five independent simulations. (C) Development of the two measures in (B) for the simulation shown in (A). (D) PSTH of the membrane potential calculated for the gray areas in (A). (E) Peristimulus time histogram (PSTH) of the firing probability for the same simulation.

https://doi.org/10.1371/journal.pcbi.1004227.s001

(EPS)

S2 Fig. Spike-timing-dependent plasticity (STDP) at lateral connections shapes network structure.

(A, B) Synaptic weight development when the number of external inputs is three (A) or four (B). Thick lines represent averages over all synapses, and thin lines represent individual synaptic weights. Colors indicate the detected sources for output neurons (left) and inhibitory neurons (middle right). (C) Relationship between the number of inhibitory neurons and the lateral structure. (D) Diagrams of correlation propagation: i–iii correspond to lateral excitatory connections, and iv–vi to feedback inhibitory connections. (E) Analytical results for various types of STDP.

https://doi.org/10.1371/journal.pcbi.1004227.s002

(EPS)

S3 Fig. The effects of noise in the model with exponential correlation kernel.

(A) Cross-correlations among input neurons responding to the same source, calculated from simulated data for three different correlation timescale parameters $\theta_t$. Note that in Fig 3 we used $\theta_t$ = 0.5, 2.0, and 4.0 ms, whereas here we used $\theta_t$ = 1.0, 3.0, and 5.0 ms. (B, C) The correlation kernels $g^{eX}_1$ and $g^{eX}_2$ (B) and their ratio $g^{eX}_1/g^{eX}_2$ (C), calculated from Eq (30) with the exponential kernel. (D, E) The effects of random noise (D) and cross-talk noise (E) at various correlation timescales.

https://doi.org/10.1371/journal.pcbi.1004227.s003

(EPS)

S1 Auditory File. Demonstration of blind source separation.

0’00”–0’43”: Independent auditory signals (10 s each); 0’44”–0’54”: Mixed auditory signal; 0’55”–1’38”: Decoded independent signals.

https://doi.org/10.1371/journal.pcbi.1004227.s004

(MP3)

Acknowledgments

We thank Matthieu Gilson and Florence Kleberg for helpful discussions and comments on the manuscript.

Author Contributions

Conceived and designed the experiments: NH TF. Performed the experiments: NH. Analyzed the data: NH. Wrote the paper: NH TF.

References

  1. Markram H, Lübke J, Frotscher M, Sakmann B. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science. 1997;275: 213–215. pmid:8985014
  2. Bi GQ, Poo MM. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J Neurosci. 1998;18: 10464–10472. pmid:9852584
  3. Gerstner W, Kempter R, van Hemmen JL, Wagner H. A neuronal learning rule for sub-millisecond temporal coding. Nature. 1996;383: 76–81. pmid:8779718
  4. Song S, Miller KD, Abbott LF. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat Neurosci. 2000;3: 919–926. pmid:10966623
  5. Gütig R, Aharonov R, Rotter S, Sompolinsky H. Learning input correlations through nonlinear temporally asymmetric Hebbian plasticity. J Neurosci. 2003;23: 3697–3714.
  6. Legenstein R, Naeger C, Maass W. What can a neuron learn with spike-timing-dependent plasticity? Neural Comput. 2005;17: 2337–2382. pmid:16156932
  7. Gilson M, Fukai T, Burkitt AN. Spectral analysis of input spike trains by spike-timing-dependent plasticity. PLoS Comput Biol. 2012;8: e1002584. pmid:22792056
  8. Arevian AC, Kapoor V, Urban NN. Activity-dependent gating of lateral inhibition in the mouse olfactory bulb. Nat Neurosci. 2008;11: 80–87. pmid:18084286
  9. Lee S-H, Kwan AC, Zhang S, Phoumthipphavong V, Flannery JG, Masmanidis SC, et al. Activation of specific interneurons improves V1 feature selectivity and visual perception. Nature. 2012;488: 379–383. pmid:22878719
  10. Adesnik H, Scanziani M. Lateral competition for cortical space by layer-specific horizontal circuits. Nature. 2010;464: 1155–1160. pmid:20414303
  11. Couey JJ, Witoelar A, Zhang S-J, Zheng K, Ye J, Dunn B, et al. Recurrent inhibitory circuitry as a mechanism for grid formation. Nat Neurosci. 2013;16: 318–324. pmid:23334580
  12. Amari S. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern. 1977;27: 77–87. pmid:911931
  13. Wiechert MT, Judkewitz B, Riecke H, Friedrich RW. Mechanisms of pattern decorrelation by recurrent neuronal circuits. Nat Neurosci. 2010;13: 1003–1010. pmid:20581841
  14. Molgedey L, Schuster HG. Separation of a mixture of independent signals using time delayed correlations. Phys Rev Lett. 1994;72: 3634–3637. pmid:10056251
  15. Fukai T, Tanaka S. A simple neural network exhibiting selective activation of neuronal ensembles: from winner-take-all to winners-share-all. Neural Comput. 1997;9: 77–97. pmid:9117902
  16. Bartsch AP, van Hemmen JL. Combined Hebbian development of geniculocortical and lateral connectivity in a model of primary visual cortex. Biol Cybern. 2001;84: 41–55. pmid:11204398
  17. Wenisch OG, Noll J, van Hemmen JL. Spontaneously emerging direction selectivity maps in visual cortex through STDP. Biol Cybern. 2005;93: 239–247. pmid:16195915
  18. Savin C, Joshi P, Triesch J. Independent component analysis in spiking neurons. PLoS Comput Biol. 2010;6: e1000757. pmid:20421937
  19. King PD, Zylberberg J, DeWeese MR. Inhibitory interneurons decorrelate excitatory cells to drive sparse code formation in a spiking model of V1. J Neurosci. 2013;33: 5475–5485.
  20. Masquelier T, Guyonneau R, Thorpe SJ. Competitive STDP-based spike pattern learning. Neural Comput. 2009;21: 1259–1276. pmid:19718815
  21. Masquelier T, Guyonneau R, Thorpe SJ. Spike timing dependent plasticity finds the start of repeating patterns in continuous spike trains. PLoS One. 2008;3: e1377. pmid:18167538
  22. Clopath C, Büsing L, Vasilaki E, Gerstner W. Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nat Neurosci. 2010;13: 344–352. pmid:20098420
  23. Wilson RI, Mainen ZF. Early events in olfactory processing. Annu Rev Neurosci. 2006;29: 163–201. pmid:16776583
  24. Niessing J, Friedrich RW. Olfactory pattern classification by discrete neuronal network states. Nature. 2010;465: 47–52. pmid:20393466
  25. Rokni D, Hemmelder V, Kapoor V, Murthy VN. An olfactory cocktail party: figure-ground segregation of odorants in rodents. Nat Neurosci. 2014;17: 1225–1232. pmid:25086608
  26. McDermott JH. The cocktail party problem. Curr Biol. 2009;19: R1024–1027. pmid:19948136
  27. Rodgers CC, DeWeese MR. Neural correlates of task switching in prefrontal cortex and primary auditory cortex in a novel stimulus selection task for rodents. Neuron. 2014;82: 1157–1170. pmid:24908492
  28. Cherry EC. Some Experiments on the Recognition of Speech, with One and with Two Ears. J Acoust Soc Am. 1953;25: 975–979.
  29. Haykin S, Chen Z. The Cocktail Party Problem. Neural Comput. 2005;17: 1875–1902. pmid:15992485
  30. Hofer SB, Ko H, Pichler B, Vogelstein J, Ros H, Zeng H, et al. Differential connectivity and response dynamics of excitatory and inhibitory neurons in visual cortex. Nat Neurosci. 2011;14: 1045–1052. pmid:21765421
  31. Gilson M, Fukai T. Stability versus neuronal specialization for STDP: long-tail weight distributions solve the dilemma. PLoS One. 2011;6: e25339. pmid:22003389
  32. Song S, Sjöström PJ, Reigl M, Nelson S, Chklovskii DB. Highly Nonrandom Features of Synaptic Connectivity in Local Cortical Circuits. PLoS Biol. 2005;3: e68. pmid:15737062
  33. Buzsáki G, Mizuseki K. The log-dynamic brain: how skewed distributions affect network operations. Nat Rev Neurosci. 2014;15: 264–278. pmid:24569488
  34. von der Malsburg C. The Correlation Theory of Brain Function. In: Domany E, van Hemmen JL, Schulten K, editors. Models of Neural Networks. Physics of Neural Networks. New York: Springer; 1994. pp. 95–119.
  35. Kumar A, Rotter S, Aertsen A. Spiking activity propagation in neuronal networks: reconciling different perspectives on neural coding. Nat Rev Neurosci. 2010;11: 615–627. pmid:20725095
  36. Lampl I, Reichova I, Ferster D. Synchronous Membrane Potential Fluctuations in Neurons of the Cat Visual Cortex. Neuron. 1999;22: 361–374. pmid:10069341
  37. Bair W, Zohary E, Newsome WT. Correlated Firing in Macaque Visual Area MT: Time Scales and Relationship to Behavior. J Neurosci. 2001;21: 1676–1697. pmid:11222658
  38. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks. I. Input selectivity—strengthening correlated input pathways. Biol Cybern. 2009;101: 81–102. pmid:19536560
  39. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks. II. Input selectivity—symmetry breaking. Biol Cybern. 2009;101: 103–114. pmid:19536559
  40. Oja E. Simplified neuron model as a principal component analyzer. J Math Biol. 1982;15: 267–273. pmid:7153672
  41. Van Rossum MCW, Turrigiano GG. Correlation based learning from spike timing dependent plasticity. Neurocomputing. 2001;38–40: 409–415.
  42. Alonso J-M, Usrey WM, Reid RC. Precisely correlated firing in cells of the lateral geniculate nucleus. Nature. 1996;383: 815–819. pmid:8893005
  43. Mainen ZF, Sejnowski TJ. Reliability of spike timing in neocortical neurons. Science. 1995;268: 1503–1506. pmid:7770778
  44. Kohn A, Smith MA. Stimulus Dependence of Neuronal Correlation in Primary Visual Cortex of the Macaque. J Neurosci. 2005;25: 3661–3673. pmid:15814797
  45. Kempter R, Gerstner W, van Hemmen JL. Hebbian learning and spiking neurons. Phys Rev E. 1999;59: 4498–4514.
  46. Roberts SJ. Independent component analysis: source assessment and separation, a Bayesian approach. IEE Proc Vis Image Signal Process. 1998;145: 149–154.
  47. Knuth KH. A Bayesian approach to source separation. 2002. arXiv:physics/0205032.
  48. Schreiner CE, Read HL, Sutter ML. Modular Organization of Frequency Integration in Primary Auditory Cortex. Annu Rev Neurosci. 2000;23: 501–529. pmid:10845073
  49. deCharms RC, Merzenich MM. Primary cortical representation of sounds by the coordination of action-potential timing. Nature. 1996;381: 610–613. pmid:8637597
  50. Atencio CA, Schreiner CE. Spectrotemporal Processing Differences between Auditory Cortical Fast-Spiking and Regular-Spiking Neurons. J Neurosci. 2008;28: 3897–3910. pmid:18400888
  51. Dan Y, Alonso J-M, Usrey WM, Reid RC. Coding of visual information by precisely correlated spikes in the lateral geniculate nucleus. Nat Neurosci. 1998;1: 501–507. pmid:10196548
  52. Toyama K, Kimura M, Tanaka K. Cross-correlation analysis of interneuronal connectivity in cat visual cortex. J Neurophysiol. 1981;46: 191–201. pmid:6267211
  53. Lu J, Li C, Zhao J-P, Poo M, Zhang X. Spike-timing-dependent plasticity of neocortical excitatory synapses on inhibitory interneurons depends on target cell type. J Neurosci. 2007;27: 9711–9720.
  54. Fino E, Paille V, Deniau J-M, Venance L. Asymmetric spike-timing dependent plasticity of striatal nitric oxide-synthase interneurons. Neuroscience. 2009;160: 744–754. pmid:19303912
  55. Woodin MA, Ganguly K, Poo M. Coincident pre- and postsynaptic activity modifies GABAergic synapses by postsynaptic changes in Cl- transporter activity. Neuron. 2003;39: 807–820. pmid:12948447
  56. Haas JS, Nowotny T, Abarbanel HDI. Spike-timing-dependent plasticity of inhibitory synapses in the entorhinal cortex. J Neurophysiol. 2006;96: 3305–3313. pmid:16928795
  57. Yazaki-Sugiyama Y, Kang S, Câteau H, Fukai T, Hensch TK. Bidirectional plasticity in fast-spiking GABA circuits by visual experience. Nature. 2009;462: 218–221. pmid:19907494
  58. Lamsa KP, Kullmann DM, Woodin MA. Spike-timing dependent plasticity in inhibitory circuits. Front Synaptic Neurosci. 2010;2: 8. pmid:21423494
  59. Vogels TP, Froemke RC, Doyon N, Gilson M, Haas JS, Liu R, et al. Inhibitory synaptic plasticity: spike timing-dependence and putative network function. Front Neural Circuits. 2013;7: 119. pmid:23882186
  60. Nissen W, Szabo A, Somogyi J, Somogyi P, Lamsa KP. Cell Type-Specific Long-Term Plasticity at Glutamatergic Synapses onto Hippocampal Interneurons Expressing either Parvalbumin or CB1 Cannabinoid Receptor. J Neurosci. 2010;30: 1337–1347. pmid:20107060
  61. Huang S, Huganir RL, Kirkwood A. Adrenergic gating of Hebbian spike-timing-dependent plasticity in cortical interneurons. J Neurosci. 2013;33: 13171–13178. pmid:23926270
  62. Wang L, Maffei A. Inhibitory plasticity dictates the sign of plasticity at excitatory synapses. J Neurosci. 2014;34: 1083–1093.
  63. Dupret D, O'Neill J, Csicsvari J. Dynamic reconfiguration of hippocampal interneuron circuits during spatial learning. Neuron. 2013;78: 166–180. pmid:23523593
  64. Sanger TD. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Netw. 1989;2: 459–473.
  65. Oja E. Neural networks, principal components, and subspaces. Int J Neural Syst. 1989;1: 61–68.
  66. Nessler B, Pfeiffer M, Buesing L, Maass W. Bayesian Computation Emerges in Generic Cortical Microcircuits through Spike-Timing-Dependent Plasticity. PLoS Comput Biol. 2013;9: e1003037. pmid:23633941
  67. Habenschuss S, Puhr H, Maass W. Emergence of Optimal Decoding of Population Codes Through STDP. Neural Comput. 2013;25: 1371–1407. pmid:23517096
  68. Moussaoui S, Brie D, Mohammad-Djafari A, Carteret C. Separation of Non-Negative Mixture of Non-Negative Sources Using a Bayesian Approach and MCMC Sampling. IEEE Trans Signal Process. 2006;54: 4133–4145.
  69. Buesing L, Bill J, Nessler B, Maass W. Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons. PLoS Comput Biol. 2011;7: e1002211. pmid:22096452
  70. Petrovici MA, Bill J, Bytschok I, Schemmel J, Meier K. Stochastic inference with deterministic spiking neurons. 2013. arXiv:1311.3211.
  71. Toyoizumi T, Pfister J-P, Aihara K, Gerstner W. Optimality Model of Unsupervised Spike-Timing-Dependent Plasticity: Synaptic Memory and Weight Distribution. Neural Comput. 2007;19: 639–671. pmid:17298228
  72. Hennequin G, Gerstner W, Pfister J-P. STDP in Adaptive Neurons Gives Close-To-Optimal Information Transmission. Front Comput Neurosci. 2010;4: 143. pmid:21160559
  73. Beck JM, Ma WJ, Pitkow X, Latham PE, Pouget A. Not Noisy, Just Wrong: The Role of Suboptimal Inference in Behavioral Variability. Neuron. 2012;74: 30–39. pmid:22500627
  74. von der Malsburg C, Schneider W. A neural cocktail-party processor. Biol Cybern. 1986;54: 29–40. pmid:3719028
  75. Asari H, Pearlmutter BA, Zador AM. Sparse Representations for the Cocktail Party Problem. J Neurosci. 2006;26: 7477–7490. pmid:16837596
  76. Comon P. Independent component analysis, A new concept? Signal Process. 1994;36: 287–314.
  77. Bell AJ, Sejnowski TJ. An Information-Maximization Approach to Blind Separation and Blind Deconvolution. Neural Comput. 1995;7: 1129–1159. pmid:7584893
  78. Amari S. Natural Gradient Works Efficiently in Learning. Neural Comput. 1998;10: 251–276.
  79. Jutten C, Herault J. Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture. Signal Process. 1991;24: 1–10.
  80. Girolami M, Fyfe C. Extraction of independent signal sources using a deflationary exploratory projection pursuit network with lateral inhibition. IEE Proc Vis Image Signal Process. 1997;144: 299–306.
  81. Smaragdis P. Blind separation of convolved mixtures in the frequency domain. Neurocomputing. 1998;22: 21–34.
  82. Hiratani N, Fukai T. Interplay between short- and long-term plasticity in cell-assembly formation. PLoS One. 2014;9: e101535. pmid:25007209