
Multiplexing rhythmic information by spike timing dependent plasticity

  • Nimrod Sherf ,

    Roles Investigation, Software, Writing – original draft

    sherfnim@post.bgu.ac.il

    Affiliations Physics Department, Ben-Gurion University of the Negev, Beer-Sheva, Israel, Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel

  • Maoz Shamir

    Roles Conceptualization, Investigation, Writing – original draft

    Affiliations Physics Department, Ben-Gurion University of the Negev, Beer-Sheva, Israel, Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel, Department of Physiology and Cell Biology Faculty of Health Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel

Abstract

Rhythmic activity has been associated with a wide range of cognitive processes including the encoding of sensory information, navigation, and the transfer of information, among others. Rhythmic activity in the brain has also been suggested to be used for multiplexing information. Multiplexing is the ability to transmit more than one signal via the same channel. Here we focus on frequency division multiplexing, in which different signals are transmitted in different frequency bands. Recent work showed that spike-timing-dependent plasticity (STDP) can facilitate the transfer of rhythmic activity downstream along the information processing pathway. However, STDP is also known to generate strong winner-take-all like competition between subgroups of correlated synaptic inputs. This competition between different rhythmicity channels, induced by STDP, may prevent the multiplexing of information, thus raising doubts as to whether STDP is consistent with the idea of multiplexing. This study explores whether STDP can facilitate the multiplexing of information across multiple frequency channels, and if so, under what conditions. We address this question in a modelling study, investigating the STDP dynamics of two populations synapsing downstream onto the same neuron in a feed-forward manner. Each population was assumed to exhibit rhythmic activity, albeit in a different frequency band. Our theory reveals that the winner-take-all like competition between the two populations is limited, in the sense that different rhythmic populations will not necessarily fully suppress each other. Furthermore, we found that for a wide range of parameters the network converged to a solution in which the downstream neuron responded to both rhythms. Yet, the synaptic weights themselves did not converge to a fixed point, but rather remained dynamic. These findings imply that STDP can support the multiplexing of rhythmic information, and demonstrate how functionality (multiplexing of information) can be retained in the face of continuous remodeling of all synaptic weights. The constraints on the types of STDP rules that can support multiplexing provide a natural test for our theory.

Author summary

Spike timing dependent plasticity (STDP) quantifies the change in synaptic efficacy as a function of the temporal relationship between pre- and post-synaptic firing. STDP can be viewed as a microscopic unsupervised learning rule, and a wide range of such microscopic learning rules have been described empirically. Since there is no supervisor in unsupervised learning (which would provide the system with its goal), theoreticians have struggled with the question of the possible computational roles of the various STDP rules. Previous studies have focused on the possible contribution of STDP to the spontaneous development of spatial structure. However, the rich temporal repertoire of reported STDP rules has largely been ignored. Here we studied the contribution of STDP to the development of temporal structure. We show how STDP can shape synaptic efficacies to facilitate the transfer of rhythmic information downstream and to enable the multiplexing of information across different frequency channels. Our work emphasizes the relationship between the temporal structure of the STDP rule and the rhythmic activity it can support.

Introduction

Neuronal oscillations have been described and studied for more than a century [1–14]. Rhythmic activity in the central nervous system has been associated with: attention, learning, encoding of external stimuli, consolidation of memory and motor output [8–10, 12, 14–23]. Rhythmic activity has also been suggested to support multiplexing in the central nervous system [24, 25]. Multiplexing is the ability to transmit two or more different signals via the same channel. The two main forms of multiplexing are: (i) Time division multiplexing, in which different time slots are allocated for the transmission of the different signals, e.g., distributing information for different clients by the same server. (ii) Frequency division multiplexing, in which different signals are transmitted via different frequency bands, e.g., the allocation of different frequency bands for different radio stations.

Various forms of multiplexing have been proposed to be utilized by the brain [24–34]. Caruso et al. [32] suggested that time division multiplexing is used in the auditory system to represent different objects. They demonstrated that some neurons shift their response from one object to another in time (similar to a fluctuating focus of attention) in a manner that correlated with behaviour. Frequency division multiplexing has been suggested by Teng & Poeppel [30] (they suggested other mechanisms as well) to be utilized by the auditory system for encoding different features of the same auditory object in the theta and gamma channels; thus, frequency division multiplexing may also be used for binding. Here, we focus on frequency division multiplexing. Our aim is to study a mechanism that can shape synaptic connectivity to facilitate frequency division multiplexing.

The transfer of even a single oscillatory signal downstream is not necessarily trivial and requires some mechanism to prevent destructive interference and maintain the rhythmic component [35]. Recently, it has been suggested that synaptic plasticity, and especially spike-timing-dependent plasticity (STDP), can provide such a mechanism. STDP can be thought of as a generalization of Hebb’s rule that neurons that “fire together wire together” [36] to the temporal domain. In STDP, the amount of potentiation and depression depends on the temporal relation between the pre- and post-synaptic firing [2, 37–44]. Luz and Shamir [35] analyzed the characteristics of the STDP rule that will enable the transfer of a single frequency channel downstream.

Multiplexing requires the transfer of more than one frequency channel. However, STDP has been shown to generate a winner-take-all like competition between subgroups of correlated pools of neurons [45–49]. Consequently, one may expect that the transfer of one frequency channel will suppress the other, thus raising serious doubts as to whether STDP is consistent with multiplexing in the brain. Can STDP develop the capacity for transmitting rhythmic activity in more than one frequency band spontaneously, and facilitate multiplexing of information?

We address this question here in the framework of a modelling study. Below, we define the network architecture and the STDP learning rule. We then derive a mean-field approximation for the STDP dynamics in the limit of slow learning rate for a threshold-linear Poisson downstream neuron model. Analysing the STDP dynamics yields constraints on the STDP rules that enable multiplexing. Next, we test the generalisation of our understanding beyond the simplified analytical toy model using numerical simulations. Finally, we summarize our results and discuss how STDP can yield robustness of function in the face of constant synaptic remodelling.

Results

The pre-synaptic populations

We model a system of two excitatory populations of N neurons, each responding to a different feature of an external stimulus, Fig 1. The external stimulus is characterized by two feature variables, D1 and D2, to which populations 1 and 2 respond, respectively. The response of each population is further assumed to be rhythmic, albeit at a different frequency, representing the different features of the stimulus.

Fig 1. Network architecture.

A schematic description of the network architecture showing two pre-synaptic populations, each oscillating at a different frequency. The output of these pre-synaptic neurons serves as a feed-forward input to a single post-synaptic neuron.

https://doi.org/10.1371/journal.pcbi.1008000.g001

The spiking activity of neuron k ∈ {1, …N} in population η ∈ {1, 2}, ρη,k(t) = Σi δ(t − ti,η,k) (where ti,η,k are the spike times), is a doubly stochastic process. Given the ‘intensity’ Dη of feature η ∈ {1, 2} of the external stimulus, the spiking activity, ρη,k(t), follows independent inhomogeneous Poisson process statistics with a mean rate (mean over the Poisson distribution given the intensity variables D1 and D2) that is given by: ⟨ρη,k(t)⟩ = Dη[1 + γ cos(νη t − ϕη,k)], (1) where νη is the angular frequency of oscillations for neurons in population η, γ is the modulation to the mean ratio of the firing rate, and ϕη,k is the preferred phase of the kth neuron from population η. The preferred phases are assumed to be evenly spaced on the ring. Thus, it is convenient to think of the neurons in each population as organized on a ring according to their preferred phases of firing.

As the intensity parameters, Dη, represent features of the external stimulus, they fluctuate on a timescale which is typically longer than the characteristic timescale of the neural response. For simplicity we assumed that D1 and D2 are independent random variables with identical distributions: ⟨Dη⟩ = D, ⟨DηDξ⟩ = D²(1 + σ²δηξ), (2) where 〈…〉 denotes averaging with respect to the neuronal noise and stimulus statistics. The essence of multiplexing is to enable the transmission of different information channels; hence, the assumption of independence of D1 and D2 represents fluctuations of different features. This assumption also drives the winner-take-all competition between the two populations. We further assume that the stimulus changes on a slower timescale than the neural responses; for example, a typical timescale for the stimulus is on the order of 1s, whereas the neural response follows the stimulus within roughly 10ms and the neurons fire at about 10Hz.
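
To make the input model concrete, the rates of Eq (1) can be generated in a few lines of Python/NumPy. This is a minimal sketch, not the authors' code; the 11 and 14 Hz frequencies and γ = 1 are example values taken from the simulations below, and the intensity values are placeholders.

```python
import numpy as np

N = 120                                    # neurons per population
freqs = np.array([11.0, 14.0])             # oscillation frequencies in Hz (example values)
nu = 2 * np.pi * freqs                     # angular frequencies nu_eta
gamma = 1.0                                # modulation-to-mean ratio
D = np.array([10.0, 10.0])                 # intensities D1, D2 in Hz (placeholders)

t = np.arange(0.0, 1.0, 1e-3)              # 1 s of time at 1 ms resolution
phi = np.linspace(0, 2 * np.pi, N, endpoint=False)   # preferred phases, evenly spaced on the ring

# Eq (1): mean rate of neuron k in population eta; result has shape (2, N, len(t))
rate = D[:, None, None] * (1.0 + gamma * np.cos(nu[:, None, None] * t[None, None, :]
                                                - phi[None, :, None]))
```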

As we are interested in studying multiplexing, we assume that the two populations are synapsing in a feed-forward manner onto the same downstream neuron.

The downstream neuron model

Spike time correlations are the driving force of STDP dynamics [39, 45, 46, 50, 51]. Correlated pairs are more likely to affect the firing of the downstream (post-synaptic) neuron, and as a result, to modify their synaptic connections [45, 46, 49]. To analyze the STDP dynamics we need a simplified model for the post-synaptic firing that will enable us to compute the pre-post cross-correlations, and in particular, their dependence on the synaptic weights.

Following past studies [35, 39, 48, 51–53], the post-synaptic neuron is modeled as a linear Poisson neuron with a characteristic delay d > 0. The mean firing rate of the post-synaptic neuron at time t, rpost(t), is given by rpost(t) = (1/N) Ση Σk wη,k ρη,k(t − d), (3) where wη,k is the synaptic weight of the kth neuron of population η.
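
As a sketch, the readout of Eq (3) amounts to a delayed, weighted sum of the pre-synaptic spike trains. The 1/N normalization below follows the reconstruction above and should be treated as an assumption.

```python
import numpy as np

def downstream_rate(spikes, w, dt, delay_steps):
    """Linear Poisson readout (reconstructed Eq (3)):
    r_post(t) = (1/N) * sum_{eta,k} w_{eta,k} * rho_{eta,k}(t - d).

    spikes : bool array, shape (2, N, T) -- pre-synaptic spike trains on a time lattice
    w      : array, shape (2, N)         -- synaptic weights, confined to [0, 1]
    """
    n_pop, N, _ = spikes.shape
    rho = spikes / dt                              # delta-train estimate of rho(t) per bin
    rho_d = np.roll(rho, delay_steps, axis=-1)     # apply the transmission delay d
    rho_d[..., :delay_steps] = 0.0                 # no pre-synaptic drive before the delay
    return np.einsum('ek,ekt->t', w, rho_d) / N
```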

Temporal correlations & order parameters

The utility of the linear neuron model is that the pre-post correlations are given as a linear combination of the correlations of the pre-synaptic populations. The cross-correlation between pre-synaptic neurons at time difference Δt is given by: (4) where δηξ, the Kronecker delta function, equals 1 when η = ξ and 0 otherwise.

The correlation between the jth neuron in population 1 and the post-synaptic neuron can therefore be written as (5) in which δ(t) is the Dirac delta function, and the correlations are determined by the global order parameters w̄η and w̃η, where w̄η is the mean synaptic weight of population η and w̃η is the magnitude of its first Fourier component. For N ≫ 1 these parameters are defined as follows: w̄η = (1/N) Σk wη,k, (6) and w̃η = (1/N) Σk wη,k e^{i(ϕη,k − ψη)}. (7) The phase ψη is determined by the condition that w̃η is real and non-negative. Note that the coupling between the two populations is only expressed through the last term of the correlation function, Eq (5).
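
The order parameters of Eqs (6) and (7) are easy to compute from a weight vector. A sketch, assuming the 1/N normalization used in the reconstruction above.

```python
import numpy as np

def order_parameters(w, phi):
    """Mean weight (Eq 6) and first Fourier component with its phase (Eq 7).

    w, phi : arrays of shape (N,) -- synaptic weights and preferred phases of one population.
    Returns (w_bar, w_tilde, psi), with w_tilde real and non-negative by construction.
    """
    w_bar = w.mean()
    z = np.mean(w * np.exp(1j * phi))    # complex first Fourier component
    psi = np.angle(z)                    # phase chosen so that w_tilde = |z| >= 0
    return w_bar, np.abs(z), psi
```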

The STDP rule

Following [46, 50, 51] we model the synaptic modification Δw following either a pre- or post-synaptic spike as: Δw = λ[f+(w)K+(Δt) − f−(w)K−(Δt)]. (8) The STDP rule, Eq (8), is written as a sum of two processes: potentiation (+, increase in the synaptic weight) and depression (−, decrease). We further assume a separation of variables by writing each process as a product of a weight dependent function, f±(w), and a temporal kernel, K±(Δt). The term Δt = tpost − tpre is the time difference between pre- and post-synaptic spiking. Here we assumed, for simplicity, that all pairs of pre and post spike times contribute additively to the learning process via Eq (8). Note, however, that the temporal kernels of the STDP rule, K±(Δt), have a finite support. Here we normalized the kernels, ∫K±(Δt)dΔt = 1. The parameter λ is the learning rate. It is assumed that the learning process is slower than the neuronal spiking activity and the timescale of changes in the external stimulus. Thus, the synaptic weights are relatively fixed on timescales characterizing changes in the external stimulus and the neural response. Here, we used synaptic weight dependent functions of the form of [46]: f+(w) = (1 − w)^μ, (9) f−(w) = αw^μ, (10) where α > 0 is the relative strength of depression and μ ∈ [0, 1] controls the non-linearity of the learning rule. The functions f±(w) ensure that the synaptic weights are confined to the region w ∈ [0, 1]. Gütig and colleagues [46] showed that the relevant parameter regime for the emergence of a non-trivial structure is α > 1 and small μ. Gütig and colleagues also showed that the limit of μ = 0, termed the additive model, enhances the competitive nature of STDP dynamics, whereas the limit of μ = 1, termed the linear model, greatly suppresses it.
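
The weight dependence of Eqs (9)-(10) and the pairwise update of Eq (8) translate directly into a few lines of code. The sketch below is illustrative, not the authors' implementation; the kernel functions K± are passed in as callables evaluated at Δt.

```python
import numpy as np

def f_plus(w, mu):            # potentiation weight dependence, Eq (9): vanishes as w -> 1
    return (1.0 - w) ** mu

def f_minus(w, mu, alpha):    # depression weight dependence, Eq (10): vanishes as w -> 0
    return alpha * w ** mu

def delta_w(w, dt_pre_post, K_plus, K_minus, lam, mu, alpha):
    """Pairwise additive update, Eq (8): each pre/post spike pair contributes
    lambda * [f_+(w) K_+(Dt) - f_-(w) K_-(Dt)], with Dt = t_post - t_pre."""
    return lam * (f_plus(w, mu) * K_plus(dt_pre_post)
                  - f_minus(w, mu, alpha) * K_minus(dt_pre_post))
```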

Empirical studies reported a large repertoire of temporal kernels for STDP rules [37, 38, 4042, 44, 5456]. Here we focus on two families of STDP rules: 1. A temporally asymmetric kernel [37, 41, 44, 55]. 2. A temporally symmetric kernel [38, 42, 44, 56].

For the temporally asymmetric kernel we use the exponential model, Fig 2a: K±(Δt) = (1/τ±) e^{∓Δt/τ±} Θ(±Δt), (11) where Δt = tpost − tpre, Θ(x) is the Heaviside function, and τ± is the characteristic timescale of the potentiation (+) or depression (−). We assume that τ− > τ+, as typically reported.

Fig 2. The STDP rules.

The temporal kernels, K(Δt) = K+(Δt) − K−(Δt), of the STDP rules are shown as a function of the pre-post spike time difference, Δt = tpost − tpre, for (a) the asymmetric learning rule, Eq (11), with τ− = 50ms, and (b) the symmetric learning rule, Eq (12), with τ+ = 20ms. Different values of τ+ in (a) and τ− in (b) are depicted by color as shown in the legend.

https://doi.org/10.1371/journal.pcbi.1008000.g002

For the temporally symmetric learning rule we use a difference of Gaussians model, Fig 2b: K±(Δt) = (1/√(2πτ±²)) exp(−Δt²/(2τ±²)), (12) where τ± is the temporal width. In this case, the order of firing is not important; only the absolute time difference matters. We further assume, in both models, that τ+ < τ−, as is typically reported.
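
Both kernel families, as reconstructed in Eqs (11) and (12), can be written compactly. A sketch; the default time constants are the values quoted in the figures (τ+ = 20ms, τ− = 50ms).

```python
import numpy as np

def K_asym(dt, tau_p=0.020, tau_m=0.050):
    """Temporally asymmetric (exponential) kernels, Eq (11); each normalized to unit area."""
    K_plus = np.where(dt > 0, np.exp(-dt / tau_p) / tau_p, 0.0)    # post after pre: potentiation
    K_minus = np.where(dt < 0, np.exp(dt / tau_m) / tau_m, 0.0)    # pre after post: depression
    return K_plus, K_minus

def K_sym(dt, tau_p=0.020, tau_m=0.050):
    """Temporally symmetric (difference of Gaussians) kernels, Eq (12)."""
    K_plus = np.exp(-dt**2 / (2 * tau_p**2)) / np.sqrt(2 * np.pi * tau_p**2)
    K_minus = np.exp(-dt**2 / (2 * tau_m**2)) / np.sqrt(2 * np.pi * tau_m**2)
    return K_plus, K_minus
```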

STDP dynamics in the limit of slow learning

Due to noisy neuronal activities, the learning dynamics is stochastic. However, in the limit of a slow learning rate, λ → 0, the fluctuations become negligible and one can obtain deterministic dynamic equations for the (mean) synaptic weights (see [50] for a detailed derivation): (13) where η = 1, 2 and (14) In Methods we derive the dynamics of the global order parameters using the correlation structure induced by the rhythmic activity, Eq (5). Note that in the dynamics of the order parameters, Eq (26), w̃1 does not appear explicitly in the dynamics of w̃2, and vice-versa. This results from the linearity of the post-synaptic neuron model we chose, Eq (3).
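
In this slow-learning limit, each weight drifts according to the overlaps of the STDP kernels with the pre-post cross-correlation, weighted by f±(w). The Euler-step sketch below only illustrates that structure; the correlation array corr is a hypothetical input (in the model it is given by Eq (5)), and this is not the authors' integration scheme.

```python
import numpy as np

def mean_field_step(w, corr, Kp, Km, dlag, lam, mu, alpha, dt_learn):
    """One Euler step of the slow-learning weight dynamics (Eq (13)-like form):
    dw/dt = lambda * [f_+(w) * I_+ - f_-(w) * I_-].

    corr     : pre-post cross-correlation sampled on a lattice of lags (hypothetical input)
    Kp, Km   : STDP kernels sampled on the same lag lattice
    dlag     : lag spacing used for the Riemann-sum approximation of the overlaps
    """
    I_plus = np.sum(Kp * corr) * dlag          # overlap with the potentiation kernel
    I_minus = np.sum(Km * corr) * dlag         # overlap with the depression kernel
    dw = lam * ((1.0 - w) ** mu * I_plus - alpha * w ** mu * I_minus)
    return np.clip(w + dw * dt_learn, 0.0, 1.0)   # weights remain confined to [0, 1]
```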

The homogeneous solution, Winner-take-all and multiplexing.

Fig 3 shows three example results of simulating the STDP dynamics in the limit of slow learning, Eqs (25) and (26). Panels a & b show the dynamics of the synaptic weights of both populations, color coded by their preferred phase of firing. Panel c depicts the spectrum of the downstream (post-synaptic) neuron firing. Initially, as the synaptic weights are random, the input to the downstream neuron has almost no rhythmic component. In the example of Fig 3a–3c the synaptic weights of both populations converge to a homogeneous solution, in which all the synaptic weights from input neurons of different preferred phases and different populations are the same. As a result, the input to the downstream neuron has no rhythmic component and its activity shows no peak at any non-trivial frequency. The homogeneous solution is expected to be stable for large μ [46].

Fig 3. Three examples: Homogeneous solution, winner-take-all and multiplexing.

Simulation results of the STDP dynamics in the limit of slow learning and a linear Poisson downstream neuron, Eq (13), are shown for three example cases. (a)-(c) The homogeneous solution, using μ = 0.1 and α = 1.05. (d)-(f) Winner-take-all competition, using μ = 0.001 and α = 1.1. (g)-(i) Multiplexing, using μ = 0.01 and α = 1.05. Panels a, b, d, e, g and h show the synaptic weights as a function of time. Each trace depicts the dynamics of a single synaptic weight. The synapses (traces) are differentiated by color according to the preferred phases of the corresponding pre-synaptic neurons, see legend. Panels c, f and i show the spectrogram of the downstream neuron. The horizontal dashed white lines depict the location of the rhythmic channels. In addition to the rhythmic channel (f) and rhythmic channels (i) one can identify sidebands in the spectrograms (f & i). Note that: 1) These additional bands are attenuated by about 100dB relative to the rhythmic channels. 2) They reflect the ongoing STDP dynamics; hence, they do not appear in c, and freezing the STDP abolishes these additional bands (results not shown). In all these examples the asymmetric learning rule, Eq (11), was used. The initial conditions of the synaptic weights were random. The synaptic weights at time zero were independent and identically distributed uniformly on [0, 1]. The parameters that were used in these examples are: ν1 = 11Hz, ν2 = 14Hz, N = 120, λ = 0.001, γ = 1, D = 10Hz, σ = 0.8, d = 10ms, τ+ = 20ms and τ− = 50ms.

https://doi.org/10.1371/journal.pcbi.1008000.g003

In the example of Fig 3d–3f rhythmic activity is transferred. As can be seen from Fig 3e, the homogeneous solution is not stable and the STDP dynamics develops a preference in the synaptic weights of population 2 for certain phases over others. This allows the transmission of the rhythmic signal downstream. However, the STDP dynamics induces a winner-take-all competition and population 2 fully suppresses population 1; hence, only one rhythmic signal is transmitted downstream and multiplexing is not enabled. The example of Fig 3g–3i depicts the desired scenario of multiplexing. In this case the homogeneous solution is unstable and both populations develop a phase preference, thus enabling the transmission of both rhythmic signals downstream.

The above examples differ in the parameters that define their respective STDP rules. Below we aim to obtain insight into the conditions that allow multiplexing. Eq (23) describes high dimensional coupled non-linear dynamics for the synaptic weights. Studying the development of rhythmic activity is, thus, not a trivial task. To this end, we take an indirect approach. We study the stability of the homogeneous solution, wη,i = w* for all i, in which the rhythmic activity does not propagate downstream, Fig 3a–3c. Specifically, we investigate the conditions in which the homogeneous solution is unstable, and the STDP dynamics can evolve to a solution that has the capacity to transmit rhythmic information in both channels downstream. A complete derivation of the stability analysis can be found in Methods.

The homogeneous solution.

The symmetry of the STDP dynamics, Eq (23), with respect to rotation guarantees the existence of a uniform solution, wη,j(t) = w*(t), for all j ∈ {1, …N} and η = 1, 2. Solving the fixed point equation for the homogeneous solution yields (15) where (16) Due to the scaling of X± with N, αc is not expected to be far from 1. From symmetry: (17)

Substituting the homogeneous solution into the post-synaptic firing rate equation, Eq (3) yields (18) Thus, in the homogeneous solution, the post-synaptic neuron will fire at a constant rate in time and the rhythmic information will not be relayed downstream.
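
The cancellation of the rhythm in the homogeneous solution follows directly from the evenly spaced preferred phases: the oscillatory terms of Eq (1) sum to zero for any uniform weight profile. A short numerical check of this statement:

```python
import numpy as np

# With uniform weights w*, the rhythmic input cancels because the preferred phases are
# evenly spaced on the ring: sum_k cos(nu*t - phi_k) = 0 for any t (N > 1).
N = 120
phi = np.linspace(0, 2 * np.pi, N, endpoint=False)
nu = 2 * np.pi * 11.0            # e.g. an 11 Hz channel (example value)
t = 0.137                        # arbitrary time point
print(np.cos(nu * t - phi).sum())   # ~0, up to floating point error
```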

Stability of the homogeneous solution.

Performing standard stability analysis, we consider small fluctuations around the homogeneous fixed point, wη,j = w* + δwη,j, and expand to first order in the fluctuations: (19) where M is the stability matrix. Analysis of the stability matrix yields four prominent eigenvalues, see Methods. The first represents fluctuations in the uniform direction, in which all the synapses potentiate or depress together; this mode is always stable, Fig 4a. Furthermore, the uniform eigenvalue provides a stabilizing term in the other modes of fluctuation. We distinguish two regimes according to the relative strength of depression, α. For α < αc, the uniform solution is expected to remain stable. For α > αc, structure may emerge for sufficiently low values of μ, Fig 4a. Thus, in order for the STDP dynamics to develop structure, whether winner-take-all or multiplexing, the relative strength of depression, α, must be sufficiently high and μ must be low.

Fig 4. The uniform and winner-take-all eigenvalues.

(a) The uniform eigenvalue is shown as a function of μ, for different values of α/αc, depicted by color. (b) The competitive eigenvalue of the winner-take-all mode, λWTA, is shown as a function of μ, for different values of α/αc, depicted by color. (Inset) Enlarged section of the figure, showing the eigenvalue corresponding to the choice of parameters of Fig 8a–8d, depicted by a blue asterisk. (c) The maximal value of λWTA in the interval μ ∈ [0, 1] is shown as a function of α. Note that for α ≥ αc, the maximum is obtained on the boundary μ = 0. The eigenvalues were computed for the asymmetric STDP rule, Eq (11). Unless stated otherwise, the following parameters were used: σ = 0.6, D = 10Hz, N = 120, τ− = 50ms, τ+ = 20ms and d = 10ms.

https://doi.org/10.1371/journal.pcbi.1008000.g004

The second eigenvalue, λWTA, represents a winner-take-all like mode of fluctuations, in which synapses in one population suppress the other. Clearly, a winner-take-all competition will prevent multiplexing. Typically, for sufficiently high values of α and low values of μ the winner-take-all competitive mode will become unstable; hence, suppressing the option of multiplexing, Fig 4b and 4c (see Methods for complete analysis).

The last two prominent eigenvalues (η = 1, 2) correspond to the rhythmic modes, see Eq (36) in Methods. The rhythmic eigenvalue of population η represents fluctuations in the synaptic weights of that population in which a phase preference is developed, enabling the transmission of rhythmic activity at (angular) frequency νη. Examining the rhythmic eigenvalues reveals that they are composed of a sum of two terms, see Methods. The first term is similar to λWTA and can become positive (unstable) for sufficiently large values of α and low values of μ, and is largely independent of the temporal structure of the STDP rule. The second term depends on the rhythmic activity and the temporal structure of the STDP rule. It is this second term that can enable the rhythmic modes to develop while the competitive winner-take-all mode is suppressed.

Figs 5 and 6 show the dependence of the rhythmic eigenvalues on the various parameters that govern the STDP dynamics for the temporally asymmetric, Eq (11), and the temporally symmetric, Eq (12), learning rules, respectively. The temporal structure of the STDP rule, Eqs (11) and (12), as well as the delay, d, and the modulation depth, γ, determine the frequency dependence of the rhythmic eigenvalue, whereas increasing α and decreasing μ shifts the curve up, increasing the range of unstable frequencies.

Fig 5. The rhythmic eigenvalue for the asymmetric learning rule, Eq (11).

(a)-(d) The rhythmic eigenvalue is shown as a function of frequency for different values of: the delay, d, in (a), relative strength of depression, α/αc, in (b), μ in (c), and of the potentiation time constant, τ+, in (d)—as depicted by color. The black (11Hz) and purple (14Hz) stars in (d) show the eigenvalues with the parameters used in the simulations as shown in Fig 8a–8d. (e) and (f) The phase diagram of the system showing regions of different types of solutions in the plane of [τ+, τ−] in (e) and the plane of [μ, α] in (f), as determined by the signs of the rhythmic eigenvalues and λWTA. The abbreviations to the right of the panels are: NR—non rhythmic, R—rhythmic (multiplexing) and WTA—winner-take-all. Unless stated otherwise, the parameters used in this figure are: γ = 1, σ = 0.6, D = 10Hz, N = 120, τ− = 50ms, τ+ = 20ms, μ = 0.01, α = 1.1 and d = 10ms. In (d) μ = 0.011.

https://doi.org/10.1371/journal.pcbi.1008000.g005

Fig 6. The rhythmic eigenvalue for the symmetric learning rule, Eq (12).

(a)-(d) The rhythmic eigenvalue is shown as a function of frequency for different values of: the delay, d, in (a), relative strength of depression, α/αc, in (b), μ in (c), and of the potentiation time constant, τ+, in (d)—as depicted by color. The blue (11Hz) and red (14Hz) stars in (d) show the eigenvalues with the parameters used in the simulations shown in S4a–S4d Fig. (e) and (f) The phase diagram of the system showing regions of different types of solutions in the plane of [τ+, τ−] in (e) and the plane of [μ, α] in (f), as determined by the signs of the rhythmic eigenvalues and λWTA. The abbreviations to the right of the panels are: NR—non rhythmic, R—rhythmic (multiplexing) and WTA—winner-take-all. Unless stated otherwise, the parameters used in these figures are: γ = 1, σ = 0.6, D = 10Hz, N = 120, τ− = 50ms, τ+ = 20ms, μ = 0.01, α = 1.1 and d = 10ms. In (d) α = 1.05 and μ = 0.011; in (e) μ = 0.005 was taken.

https://doi.org/10.1371/journal.pcbi.1008000.g006

The above analysis provides intuition as to the required conditions for multiplexing. Essentially, one expects that if the competitive eigenvalue is stable, λWTA < 0, and both rhythmic eigenvalues are unstable, then the synaptic weights will evolve towards a state that allows the transfer of both rhythms downstream. This is summarized in the phase diagram of the system, which depicts the different types of behaviour as a function of the parameters that characterize the STDP rule, panels e & f of Figs 5 and 6. Note, however, that the computation of these phase diagrams is based on a local analysis of the homogeneous solution (stability in the winner-take-all direction and instability of the rhythmic modes). To study the non-local behaviour, a numerical investigation is required.

Fig 7 depicts the results of a numerical simulation of the STDP dynamics with λWTA ≈ −0.003 < 0 and with both rhythmic eigenvalues unstable. Fig 7a and 7b show the dynamics of the synaptic weights. Initially, the synaptic weights were homogeneous (up to a small noise component, see caption) and no rhythm was transmitted downstream. However, through a process of spontaneous symmetry breaking both populations developed a phase preference, thus enabling multiplexing.

Fig 7. Multiplexing as a limit cycle solution.

Simulation results of the STDP dynamics in the limit of slow learning and a linear Poisson downstream neuron, Eq (13). (a) and (b) The synaptic weights are shown as a function of time, for populations 1 and 2 in (a) and (b), respectively. Each trace depicts the dynamics of a single synaptic weight. The synapses (traces) are differentiated by color according to the preferred phases of the pre-synaptic neurons, see legend. (c) The dynamics of the order parameters: the mean, w̄η, and the magnitude of the first Fourier component, w̃η, are shown for populations 1 and 2, see legend. (d) The dynamics of the phases, ψ1 and ψ2, is shown in red and pink, respectively, as a function of time. The synaptic weights at time zero were random, stochastically independent with identical uniform distribution on [0.45, 0.55]. The parameters used in this simulation are: N = 120, λ = 0.001, γ = 1, σ = 0.6, D = 10Hz, τ− = 50ms, τ+ = 20ms, μ = 0.01, α = 1.05 and d = 10ms.

https://doi.org/10.1371/journal.pcbi.1008000.g007

Examining the dynamics of the order parameters, one can see how the rhythmic components, w̃1 and w̃2, evolve in time, rising from zero towards a fixed point value, Fig 7c. In contrast with the order parameters, w̄η and w̃η, the synaptic weights themselves do not reach a fixed point; rather, they remain dynamic. How can the order parameters remain fixed while the entire synaptic population is constantly changing? The solution to this puzzle is provided by examining the dynamics of ψη, see Eq (7), the phases of the rhythmic inputs, Fig 7d. As can be seen from the figure, the phases, ψ1 and ψ2, drift on the circle with constant velocities. Thus, the STDP dynamics converges to a limit cycle solution, in which the synaptic weight profile remains fixed relative to its phase, wη(ϕ, t) = wη(ϕ − ψη(t)), while the phases themselves drift in time, ψη(t) = ψη(0) + vηt. Qualitatively similar results can be obtained for the temporally symmetric STDP rule, Eq (12), see S3 Fig in Supporting information.

Conductance based downstream neuron model

The above analysis relies on the choice of a linear neuron model, Eq (3), which results in the lack of an explicit interaction term between the two rhythmic components, w̃1 and w̃2, see Eq (26) in Methods. Non-linearity in the response of the downstream neuron to its inputs will generate an interaction between the two rhythms. This interaction may increase the competition between the two rhythms and prevent multiplexing. How robust are our results with respect to a non-linear response of the downstream neuron? This issue is addressed below by studying the STDP dynamics with a conductance based Hodgkin-Huxley type model for the downstream neuron.

We used the conductance based model of Shriki et al. [57] for the downstream neuron. This choice was motivated by the ability to control the degree of non-linearity of the neuron’s response to its inputs. Often the response of a neuron is quantified using an f-I curve, which maps the frequency (f) of the neuronal spiking response to a certain level of injected current (I). In the Shriki model [57], a strong transient potassium A-current yields a threshold linear f-I curve to a good approximation. Thus, by adjusting the strength of the A-current we can control the ‘linearity’ of the f-I curve of the downstream neuron.

Fig 8 presents the results of simulating the temporally asymmetric STDP dynamics with a conductance based downstream neuron. In Fig 8a–8d the downstream neuron is characterized by a strong A-current. In this regime, the f-I curve of the downstream neuron is well approximated by a threshold-linear function (see Fig 1 in [57]). Consequently, it is reasonable to expect that our analytical results will hold in the limit of a slow learning rate. Indeed, even though the initial conditions of all the synapses are uniform, the uniform solution loses its stability, and a structure that shows phase preference emerges, Fig 8a and 8b. After about half an hour of simulation time, the STDP dynamics of each sub-population converges to an approximately periodic solution. The order parameters w̄η and w̃η appear to converge to a fixed point, Fig 8c, while the phases ψ1 and ψ2 continue to drift with a relatively fixed velocity, Fig 8d. For the specific choice of parameters used in this simulation, the competitive winner-take-all eigenvalue is stable, see inset of Fig 4b, whereas the rhythmic eigenvalues are unstable, see stars in Fig 5d.

Fig 8. STDP dynamics with a conductance based downstream neuron.

Results of two numerical simulations of the STDP dynamics with a conductance based downstream neuron are presented: (a)-(d) using a downstream neuron with a linear f-I curve, and (e)-(h) using a downstream neuron with a non-linear f-I curve, see Details of the numerical simulations in Methods. (a), (b), (e) and (f) The synaptic weights are shown as a function of time for population 1 (in a and e) and population 2 (in b and f). The different traces show the dynamics of different synapses colored by the preferred phase of their pre-synaptic neuron, see legend. (c) and (g) The dynamics of the order parameters: the mean, w̄η, and first Fourier component, w̃η, are shown as a function of time for both populations, see legend. (d) and (h) The dynamics of the phases, ψ1 and ψ2, is shown as a function of time in red and pink, respectively. Here we used the temporally asymmetric STDP rule, Eq (11). Additional parameters are: N1 = N2 = 120, α = 1.1 and μ = 0.011. The learning rate λ in the non-linear case is 5 times larger than in the linear case. For further details see Methods.

https://doi.org/10.1371/journal.pcbi.1008000.g008

Fig 8e–8h depict the STDP dynamics for a non-linear downstream neuron. To this end we used the Shriki model [57] with no A-current. As can be seen from the figure, the system converges to a dynamical solution that shows some degree of similarity with the linear case (Fig 8a–8d). Specifically, the system relaxes to a dynamical solution that enables the transmission of both rhythms, i.e., multiplexing. However, in the non-linear case the order parameters w̄η and w̃η do not converge to a fixed point but fluctuate around some mean value. Qualitatively similar results can be obtained for the temporally symmetric STDP rule, Eq (12), see S4 Fig in Supporting information.

Discussion

Rhythmic activity in the brain has fascinated and puzzled neuroscientists for more than a century. Nevertheless, the utility of rhythmic activity remains enigmatic. One explanation frequently put forward is that of the multiplexing of information. Our work provides some measure of support for this hypothesis from the theory of unsupervised learning.

We studied the computational implications of a microscopic learning rule, namely STDP, in the absence of a reward or a teacher signal. Previous work showed that STDP generates a strong winner-take-all like competition between subgroups of correlated neurons, thus effectively eliminating the possibility of multiplexing [46]. Our work demonstrates that rhythmic activity does not necessarily generate competition between different rhythmic signals. Moreover, we found that under a wide range of parameters STDP dynamics will spontaneously develop the capacity for multiplexing.

Not every learning rule, i.e., choice of parameters that describes the STDP update rule, will support multiplexing. This observation provides a natural test for our theory. Clearly, if multiplexing has evolved via a process of STDP, then the STDP rule must exhibit instability with respect to the rhythmic modes and stability against fluctuations in the winner-take-all direction. These constraints serve as basic predictions of our theory.

Khamechian and colleagues [28] suggested a model in which visual information from the ventral and dorsal pathways is transmitted to the prefrontal cortex by means of two well separated rhythms. Specifically, they proposed that the ventral pathway conveys information to the prefrontal cortex in the gamma band (40-70 Hz), whereas information from the dorsal pathway is relayed in the high gamma band (180-220 Hz). As the frequency bands are so segregated, does this necessarily imply that two different STDP rules with different temporal characteristics are used, one for each stream? No. Interestingly, Khamechian and colleagues showed that for the dorsal stream to transmit information efficiently, the distribution of preferred phases must change from uniform to a non-uniform distribution. This is consistent with the homogeneous solution, implying that the rhythmic eigenvalue of the high gamma band is stable (as well as the competitive eigenvalue). Alternatively, the preferred phases of dorsal neurons can fluctuate from trial to trial, yielding a homogeneous correlation structure on the timescale of plasticity. In this case, STDP will not develop a rhythmic solution regardless of the plasticity rule. Thus, to pursue our theory empirically, one has first to characterize the distribution of preferred phases of the upstream population, and especially the stability of the relative phases over time. Second, one has to establish multiplexing: is there a subgroup of neurons that receives and responds to both streams of information? Or is there a segregation of signals also at the level of the downstream population? Third, one has to characterize the STDP rule and compute its eigenvalues.

In our work we have made several simplifying assumptions to facilitate the analysis:

  1. We limited the discussion to multiplexing of only two rhythms. Is our theory general? Can STDP support multiplexing of more than two rhythms?
  2. We assumed the two populations are symmetric. This reduced the number of free parameters (e.g., N, D, σ and so on). Will asymmetry generate a winner-take-all competition in which the ‘stronger’ population suppresses the other; thus, preventing multiplexing?
  3. We further assumed symmetry within each population. Specifically, we assumed that the preferred phases of the neurons in each population are evenly spaced on the ring. However, if the preferred phases of the neurons are not distributed uniformly, then even a homogeneous solution will transmit the rhythm downstream. In that case, can multiplexing emerge in a trivial manner irrespective of the STDP rule?

STDP can support multiplexing of more than two rhythms, as illustrated in S1 Text in the Supporting information, which shows the multiplexing of three rhythms. Can STDP support the multiplexing of 10, 100, or 1000 rhythms? Is there a capacity limit? We believe that the nature of rhythmic activity in the brain limits the number of channels that can be efficiently multiplexed. This is due to the fact that rhythmic activity in the brain is relatively wide band, and efficient multiplexing requires the different signals to be well segregated in frequency. Consequently, we believe that if frequency division multiplexing is used in the brain, then the number of multiplexed signals is small.

Question number 2 is addressed in S2 Text, where we show that fine tuning of the symmetry between the two populations is not required in order to obtain multiplexing. Regarding question number 3, while it is true that a non-uniform phase distribution will make the transmission of rhythmic activity easier, it will not abolish the winner-take-all like competition between the two signals. Furthermore, by selectively potentiating certain phases and depressing others, STDP has the ability to amplify the transmitted rhythmic component. This issue and the question of transmitting information about the phase of the rhythmic activity are beyond the scope of the current work and will be addressed elsewhere.

In Luz & Shamir [35], due to the underlying U(1) symmetry, the system converged to a limit cycle solution. For a single population, the order parameter w̃ will drift on the ring |w̃| = const with a constant velocity. Here, in the linear neuron model, in the limit of slow learning one expects the system to converge to the product space of two limit cycles. As there is no reason to expect that the ratio of the drift velocities of the two populations will be rational, the order parameters w̃1 and w̃2 will, most likely, cover the torus uniformly. Nevertheless, the reduced dynamics of each population will exhibit a limit cycle. This intuition relies on the lack of interaction between w̃1 and w̃2 in the dynamics of the order parameters, Eq (26). Essentially, the interaction between the two populations is mediated solely via their mean components, w̄1 and w̄2. However, introducing non-linearity to the response properties of the post-synaptic neuron will induce an interaction between the modes. Similarly, for any finite learning rate, λ ≠ 0, the rhythmic modes will not be orthogonal and consequently will be correlated.

A post-synaptic neuron with a non-linear f-I curve and a finite learning rate is expected to induce an interaction between the two populations. Consequently, for a finite learning rate the order parameters w̃1 and w̃2 will not be confined to a torus and will fluctuate around (in contrast with on) the ring. Traces of this behaviour can be seen in the fact that the drift velocity in the numerical simulations is not constant (compare Fig 8d and 8h) and that the global order parameters w̄η and w̃η do not converge to a fixed point, but continue to fluctuate around some mean value (compare Fig 8c and 8g). Thus, the system converges to a strange attractor around the torus. Nevertheless, in this example, the post-synaptic neuron responds to both rhythms; hence, multiplexing does not require a linear neuron.

Synaptic weights in the central nervous system are highly volatile and demonstrate high turnover rates as well as considerable size changes that correlate with the synaptic weight [58–65]. How can the brain retain functionality in the face of these considerable changes in synaptic connectivity? Our work demonstrates how functionality, in terms of retaining the ability to transmit downstream rhythmic information in several channels, can be retained even when the entire synaptic population is modified throughout its entire dynamic range. Here, robustness of function is ensured by the dynamics of the global order parameters.

Methods

STDP dynamics of the order parameters

Using the correlation structure, Eqs (5) and (14) yields (20) where K̃±(ν) are the Fourier transforms of the STDP kernels, (21)–(22): K̃±(ν) = ∫ K±(t) e^{iνt} dt. Note that for our specific choice of kernels, K̃±(0) = 1, by construction.
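
For the two kernel families used here, these Fourier transforms have simple closed forms, which is what drives the frequency dependence analyzed below. A sketch, assuming the e^{iνt} convention of the reconstructed Eqs (21)–(22).

```python
import numpy as np

def K_tilde(nu, tau_p=0.020, tau_m=0.050, kind='asym'):
    """Fourier transforms of the normalized STDP kernels (so that K~(0) = 1).

    Asymmetric exponential kernels give complex, algebraically decaying transforms;
    symmetric Gaussian kernels give real transforms that decay exponentially fast in nu."""
    if kind == 'asym':
        Kp = 1.0 / (1.0 - 1j * nu * tau_p)     # transform of (1/tau) e^{-t/tau} Theta(t)
        Km = 1.0 / (1.0 + 1j * nu * tau_m)     # transform of (1/tau) e^{+t/tau} Theta(-t)
    else:
        Kp = np.exp(-(nu * tau_p) ** 2 / 2)    # transform of a normalized Gaussian
        Km = np.exp(-(nu * tau_m) ** 2 / 2)
    return Kp, Km
```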

The dynamics of the synaptic weights can be written in terms of the order parameters, w̄η and w̃η (see Eqs (6) and (7)). In the continuum limit, Eq (13) becomes (23) where (24) Integrating Eq (23) over ϕ yields the dynamics of the order parameters, (25) (26) where (27) Note that in Eq (26), w̃2 does not appear explicitly in the dynamics of w̃1, and vice versa.

Analysis of the stability matrix

Below we analyze the stability matrix with respect to fluctuations around the homogeneous solution, M, Eq (19). Using Eqs (13) and (20), the fluctuations can be written as (28) with (29) Without loss of generality, taking η = 1, ξ = 2 yields (30) In the homogeneous fixed point, Eq (15): (31) where (32)

Studying Eq (30), the stability matrix, M, has four prominent eigenvalues. Two are in the subspace of the uniform directions of the two populations, and two are in the directions of the rhythmic modes. As the uniform modes of fluctuations span an invariant subspace of the stability matrix, M, we can study the restricted matrix defined by: (33)

The restricted matrix has two eigenvectors: the symmetric mode, in which the mean weights of the two populations fluctuate together, and the anti-symmetric mode, in which they fluctuate in opposite directions. The corresponding eigenvalues are (34) (35)

The first eigenvector represents the uniform mode of fluctuations and its eigenvalue is always negative, Fig 4a; hence, the homogeneous solution is always stable with respect to uniform fluctuations. Furthermore, the uniform eigenvalue serves as a stabilizing term in the other modes of fluctuations. We distinguish two regimes: α < αc and α > αc. For α < αc, the uniform solution is expected to remain stable, whereas for α > αc structure may emerge for sufficiently low values of μ, Fig 4a.

The second eigenvalue represents a winner-take-all like mode of fluctuations, in which synapses in one population suppress the other, and will prevent multiplexing. For α > αc, limμ→0+ λWTA = 2(αc − 1). For the temporally asymmetric learning rule, Eq (11), X = 0; consequently αc > 1. In the temporally symmetric difference of Gaussians STDP model, Eq (12), αc > 1 if and only if τ+ < τ−, which is the typical case. For α < αc, in the limit of small μ, the divergence of the stabilizing term stabilizes fluctuations in this mode. In this case (α < αc), λWTA reaches its maximum at an intermediate value of μ ∈ (0, 1), Fig 4b. For a small range of α < αc, α ≈ αc, this maximum can be positive, see Fig 4c. Fig 4b depicts λWTA as a function of μ for different values of α, shown by color. Note that λWTA depends on the temporal structure of the STDP rule solely via the values of αc and X; however, its sign is independent of X. As can be seen in the figure, for α > αc > 1, λWTA is a decreasing function of μ and the homogeneous solution loses stability in the competitive winner-take-all direction in the limit of small μ. Note that the competitive eigenvalue is not identical in the asymmetric and symmetric rules, as the value of X differs between them; however, λWTA behaves qualitatively the same in both types of STDP rules.

The rhythmic modes are eigenvectors of the stability matrix M with eigenvalues given by Eqs (36) and (37). The first two terms on the right hand side of Eq (36) contain the stabilizing term, and their dependence on α and μ is similar to that of λWTA; compare with Eq (35). The last term depends on the real part of the Fourier transform of the delayed STDP rule at the specific frequency of oscillations, νη. This last term can destabilize the system in a direction that enables the propagation of rhythmic activity downstream while keeping the competitive WTA mode stable, depending on the interplay between the rhythmic activity and the temporal structure of the STDP rule. When this last term is destabilizing, the rhythmic eigenvalue is an increasing function of the modulation to the mean ratio, γ. If in addition α > αc > 1, then it is an increasing function of σ as well.

In the low frequency limit, the rhythmic eigenvalue depends on the characteristic timescales τ+, τ−, and d only via αc. For large frequencies, the Fourier transforms of the kernels decay to zero and the STDP dynamics loses its sensitivity to the rhythmic activity. Consequently, the resultant modulation of the synaptic weight profile, w̃η, will become negligible; hence, effectively, rhythmic information will not propagate downstream even if the rhythmic eigenvalue is unstable [35]. Thus, the intermediate frequency region is the most relevant for multiplexing.

In the case of the temporally symmetric kernel, Eq (12), the value of the rhythmic eigenvalue is given by (38). Fig 6a shows the rhythmic eigenvalue as a function of the oscillation frequency for different values of the delay, d, as depicted by color. Since typically τ+ < τ−, for finite ν the Fourier transform of the depression kernel decays faster than that of the potentiation kernel. Consequently, the rhythmic eigenvalue will be dominated by the potentiation term, except for the very low frequency range. Typical values for the delay, d, are 1-10ms, whereas typical values for the characteristic timescales of the STDP, τ±, are tens of ms. As a result, the specific value of the delay, d, does not affect the rhythmic eigenvalue much, and the system becomes unstable in the rhythmic direction over an intermediate range of frequencies.

Increasing the relative strength of the depression, α, strengthens the stabilizing term; however, Δf(w*) scales approximately linearly with α such that the rhythmic eigenvalue is elevated, widening the frequency range in which it is unstable, Fig 6b. Similarly, for α > αc > 1, increasing μ strengthens the stabilizing term and reduces the frequency range in which the rhythmic eigenvalue is unstable, Fig 6c. Decreasing the characteristic timescale of potentiation, τ+, increases the frequency region with an unstable rhythmic eigenvalue; however, when τ+ becomes comparable to the delay, d, the oscillatory dependence of the eigenvalue on ν is revealed, Fig 6d.

In the case of the temporally asymmetric kernel, Eq (11), the value of the rhythmic eigenvalue is given by (39). The main difference between the temporally symmetric and asymmetric rules is that, due to the discontinuity of the asymmetric STDP kernel, the Fourier transforms of the kernels decay algebraically rather than exponentially fast with ν. As a result, the phase of the Fourier transform plays a more central role in controlling the stability of the rhythmic eigenvalue. As above, since typically τ+ < τ−, the potentiation term dominates at finite frequencies. Fig 5a shows the rhythmic eigenvalue for different values of d. The dashed vertical lines depict the frequencies, ν*, at which the potentiation term changes its sign. As can be seen from the figure, for this choice of parameters the upper cutoff of the central frequency range in which the rhythmic eigenvalue is unstable is dominated by ν*, which is governed by the delay.

The effects of the parameters α and μ show similar trends as for the symmetric STDP rule. Specifically, increasing μ or decreasing α, in general, shrinks the region in which fluctuations in the rhythmic direction are unstable, Fig 5b and 5c. Increasing the characteristic timescale of potentiation, τ+, beyond that of the depression makes the depression term more dominant, Fig 5d. In this case the lower frequency cutoff will be dominated by the change of sign of the depression term, i.e., by the corresponding angular frequency ν*.

Details of the numerical simulations

The conductance based model of the downstream neuron.

We used the conductance based model of Shriki et al. [57]. The model is fully defined and studied in [57]. Here, we briefly describe the model dynamics, as used in our numerical simulations. The dynamics of the membrane potential is given by: cm dV/dt = −IL − INa − IK − IA − Ση gη(V − Eη), (40) where the leak current is given by IL = gL(V − EL). INa and IK are the sodium and potassium currents, respectively, and are given by INa = gNa m∞³ h (V − ENa) and IK = gK n⁴ (V − EK). The relaxation equations of the gating variables x = h, n are dx/dt = (x∞ − x)/τx. The time independent functions x∞ (for x = h, n, m) and τx are: x∞ = αx/(αx + βx) and τx = 0.1/(αx + βx), with αm = − 0.1(V + 30)/(exp(− 0.1(V + 30)) − 1), βm = 4 exp(− (V + 55)/18), αh = 0.07 exp(− (V + 44)/20), βh = 1/(exp(− 0.1(V + 14)) + 1), αn = − 0.01(V + 34)/(exp(− 0.1(V + 34)) − 1) and βn = 0.125 exp(− (V + 44)/80).

The A-current, IA, that linearizes the f-I curve is given by IA = gA a∞³ b (V − EK), where a∞ = 1/(exp(−(V + 50)/20) + 1) and db/dt = (b∞ − b)/τA. The time independent function b∞ is given by b∞ = 1/(exp((V + 80)/6) + 1), with the voltage independent time constant τA.

The term gη is the total conductance of the pre-synaptic population η and can be written as follows: (41) Here, Nη is the number of neurons in population η, wη,j is the synaptic weight of the jth neuron from population η, and [y]+ = max(0, y). The sth spike of the jth neuron is denoted by tη,j,s. We used Sη = 1000/Nη and τη = 5ms (see [35, 46]).

The membrane capacity is cm = 0.1μF/cm2. The leak conductance is gL = 0.05mS/cm2; the sodium and potassium conductances follow [57]. For the conductance based downstream neuron model with a linear f-I curve we used the following parameters: gA = 20mS/cm2 and τA = 20ms. For the conductance based downstream neuron with a non-linear f-I curve we took gA = 0. The sodium reversal potential is ENa = 55mV and the synaptic reversal potential is Eη = Eexc = 0mV; the potassium and leak reversal potentials follow [57].
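
For completeness, the membrane dynamics described above can be integrated with a simple Euler scheme. The sketch below is not the authors' simulation code: the values of gNa, gK, EK and EL are illustrative placeholders in the spirit of [57] and are not stated in the text, whereas cm, gL, gA, τA and the synaptic reversal potential follow the values given here.

```python
import numpy as np

def rates(V):
    """Voltage dependent rate functions of the gating variables (V in mV, rates in 1/ms)."""
    am = -0.1 * (V + 30) / (np.exp(-0.1 * (V + 30)) - 1)
    bm = 4 * np.exp(-(V + 55) / 18)
    ah = 0.07 * np.exp(-(V + 44) / 20)
    bh = 1 / (np.exp(-0.1 * (V + 14)) + 1)
    an = -0.01 * (V + 34) / (np.exp(-0.1 * (V + 34)) - 1)
    bn = 0.125 * np.exp(-(V + 44) / 80)
    return (am, bm), (ah, bh), (an, bn)

def simulate(g_syn, dt=0.01, T=1000.0,
             cm=0.1, gL=0.05, gA=20.0, tauA=20.0,            # values from the Methods text
             gNa=100.0, gK=40.0, EL=-65.0, EK=-80.0,          # PLACEHOLDERS in the spirit of [57]
             ENa=55.0, Esyn=0.0):
    """Euler integration of a Shriki-type conductance based neuron.

    g_syn(t) -> total excitatory synaptic conductance (mS/cm^2) at time t (ms)."""
    n_steps = int(T / dt)
    V, h, n, b = -70.0, 1.0, 0.0, 0.0        # arbitrary initial conditions
    Vs = np.empty(n_steps)
    for i in range(n_steps):
        (am, bm), (ah, bh), (an, bn) = rates(V)
        m_inf = am / (am + bm)                               # instantaneous activation
        h_inf, tau_h = ah / (ah + bh), 0.1 / (ah + bh)
        n_inf, tau_n = an / (an + bn), 0.1 / (an + bn)
        a_inf = 1 / (np.exp(-(V + 50) / 20) + 1)
        b_inf = 1 / (np.exp((V + 80) / 6) + 1)

        I_ion = (gL * (V - EL)
                 + gNa * m_inf**3 * h * (V - ENa)
                 + gK * n**4 * (V - EK)
                 + gA * a_inf**3 * b * (V - EK)
                 + g_syn(i * dt) * (V - Esyn))
        V += dt * (-I_ion) / cm                              # Eq (40)
        h += dt * (h_inf - h) / tau_h
        n += dt * (n_inf - n) / tau_n
        b += dt * (b_inf - b) / tauA
        Vs[i] = V
    return Vs
```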

Modeling pre-synaptic activity.

Pre-synaptic activities were modeled by independent inhomogeneous Poisson processes, with a time dependent mean firing rate given by Eq (1), with γ = 1. Every second of simulation time, D1 and D2 were independently sampled from a uniform distribution with a minimum of 7Hz and a maximum of 13Hz, D = (7 + U(0, 6))Hz. Each pre-synaptic neuron spiked according to an approximate Bernoulli process, with a spiking probability of p = rΔt in each time bin, where r is the mean firing rate (Eq (1)) and Δt = 1ms. The number of pre-synaptic neurons in each population was N = 120.
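
A minimal sketch of this spike-generation procedure (illustrative only; the 11 Hz channel below is the example frequency used in Fig 3):

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1e-3                                            # 1 ms bins, as in the text
t = np.arange(1000) * dt                             # one 1 s epoch
phi = np.linspace(0, 2 * np.pi, 120, endpoint=False) # preferred phases of one population

for epoch in range(5):                               # D1, D2 are resampled every second
    D = 7.0 + rng.uniform(0.0, 6.0, size=2)          # D = (7 + U(0, 6)) Hz
    # rates of population 1 (11 Hz channel, gamma = 1), Eq (1)
    rate = D[0] * (1 + np.cos(2 * np.pi * 11.0 * t[None, :] - phi[:, None]))
    spikes = rng.random(rate.shape) < rate * dt      # Bernoulli approximation, p = r*dt
```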

STDP.

The learning rate in the simulations with a linear f-I curve is λ = 0.01, Fig 8a–8d and S4a–S4d Fig. In the non-linear cases the learning rate is λ = 0.05, Fig 8e–8h and S4e–S4h Fig. In both the linear and non-linear cases we used μ = 0.011. In order to update the synaptic weights we relied on the separation of time scales between the synaptic dynamics and the membrane potential dynamics of Eq (40); hence, the synaptic weights were updated every 1s of simulation.

  • Asymmetric learning rule: The ratio of depression to potentiation is α = 1.1, and the characteristic decay times were chosen to be τ+ = 20ms and τ− = 50ms.
  • Symmetric learning rule: Here we chose α = 1.05; furthermore, based on our analysis, we chose the ratio of decay times to be ∼10: τ+ = 5ms, τ− = 50ms.

Initial conditions for all neurons were uniform; i.e., wη(ϕ, t = 0) = 0.5, η = 1, 2.

Supporting information

S1 Fig. Multiplexing three signals for the temporally asymmetric STDP rule.

Simulation results of the STDP dynamics in the limit of slow learning and a linear Poisson downstream neuron, Eq (13), with three input signals. (a), (b) and (c) The synaptic weights are shown as a function of time, for populations 1, 2 and 3, respectively. Each trace depicts the dynamics of a single synaptic weight. The synapses (traces) are differentiated by color according to the preferred phases of their pre-synaptic neurons, see legend. The initial conditions of the synaptic weights were random with uniform distribution on the interval [0, 1]. The parameters used in this simulation are: N = 120, λ = 0.001, D = 10Hz, σ = 0.81, γ = 0.9. We simulated the temporally asymmetric STDP rule, Eq (11), with τ− = 50ms, τ+ = 5ms, μ = 0.01, α = 1.01. The delay of the downstream neuron was d = 10ms.

https://doi.org/10.1371/journal.pcbi.1008000.s003

(TIF)

S2 Fig. Multiplexing asymmetric signals for the temporally asymmetric STDP rule.

Simulation results of the STDP dynamics in the limit of slow learning and a linear Poisson downstream neuron, Eq (13), for two asymmetric signals. (a) and (b) The synaptic weights are shown as a function of time, for populations 1 and 2, respectively. Each trace depicts the dynamics of a single synaptic weight. The synapses (traces) are differentiated by color according to the preferred phases of their pre-synaptic neurons, see legend. The initial conditions of the synaptic weights were random with uniform distribution on the interval [0, 1]. The parameters used in this simulation are: N = 120, λ = 0.001, A1 = A2 = 10Hz, D1 = 15Hz, D2 = 10Hz. We simulated the temporally asymmetric STDP rule, Eq (11), with: τ− = 50ms, τ+ = 20ms, μ = 0.01, α = 1.05. The delay of the downstream neuron was d = 10ms.

https://doi.org/10.1371/journal.pcbi.1008000.s004

(TIF)

S3 Fig. Multiplexing as a limit cycle solution, symmetric learning rule.

Simulation results of the STDP dynamics in the limit of slow learning and a linear Poisson downstream neuron, Eq (13). (a) and (b) The synaptic weights are shown as a function of time, for populations 1 and 2 in (a) and (b), respectively. Each trace depicts the dynamics of a single synaptic weight. The synapses (traces) are differentiated by color according to the preferred phases of their pre-synaptic neurons, see legend. (c) The dynamics of the order parameters: the mean, w̄η, and the magnitude of the first Fourier component, w̃η, are shown for populations 1 and 2, see legend. (d) The dynamics of the phases, ψ1 and ψ2, is shown in red and pink, respectively, as a function of time. The parameters used in this simulation are: N = 120, λ = 0.001, γ = 1, σ = 0.6, D = 10Hz, τ− = 50ms, τ+ = 5ms, μ = 0.001, α = 1.05 and d = 10ms.

https://doi.org/10.1371/journal.pcbi.1008000.s005

(TIF)

S4 Fig. STDP dynamics with conductance based downstream neuron for the temporally symmetric STDP rule.

Results of two numerical simulations of the STDP dynamics with a conductance based downstream neuron are presented: (a)-(d) using a downstream neuron with a linear f-I curve, and (e)-(h) using a downstream neuron with a non-linear f-I curve, see Details of the numerical simulations in Methods. (a), (b), (e) and (f) The synaptic weights are shown as a function of time for population 1 (in a and e) and population 2 (in b and f). The different traces show the dynamics of different synapses colored by the preferred phase of the pre-synaptic neuron, see legend. (c) and (g) The dynamics of the order parameters: the mean, w̄η, and first Fourier component, w̃η, are shown as a function of time for both populations, see legend. (d) and (h) The dynamics of the phases, ψ1 and ψ2, is shown as a function of time in red and pink, respectively. Here we used the temporally symmetric STDP rule, Eq (12). Additional parameters are: α = 1.05 and μ = 0.011. The learning rate λ in the non-linear case is 5 times larger than in the linear case. Further details of the numerical simulations appear in Methods.

https://doi.org/10.1371/journal.pcbi.1008000.s006

(TIF)

References

  1. Coenen A, Fine E, Zayachkivska O. Adolf Beck: A forgotten pioneer in electroencephalography. Journal of the History of the Neurosciences. 2014;23(3):276–286. pmid:24735457
  2. Haas L. Hans Berger (1873-1941), Richard Caton (1842-1926), and electroencephalography. J Neurol Neurosurg Psychiatry. 2003;74(1):9–9. pmid:12486257
  3. Steriade M, Deschenes M, Domich L, Mulle C. Abolition of spindle oscillations in thalamic neurons disconnected from nucleus reticularis thalami. Journal of neurophysiology. 1985;54(6):1473–1497. pmid:4087044
  4. Buzsaki G, Horvath Z, Urioste R, Hetke J, Wise K. High-frequency network oscillation in the hippocampus. Science. 1992;256(5059):1025–1027. pmid:1589772
  5. Gray CM. Synchronous oscillations in neuronal systems: mechanisms and functions. Journal of computational neuroscience. 1994;1(1-2):11–38. pmid:8792223
  6. Bragin A, Engel J Jr, Wilson CL, Fried I, Buzsáki G. High-frequency oscillations in human brain. Hippocampus. 1999;9(2):137–142. pmid:10226774
  7. Burgess AP, Ali L. Functional connectivity of gamma EEG activity is modulated at low frequency during conscious recollection. International Journal of Psychophysiology. 2002;46(2):91–100. pmid:12433386
  8. Buzsaki G. Rhythms of the Brain. Oxford University Press; 2006.
  9. Shamir M, Ghitza O, Epstein S, Kopell N. Representation of time-varying stimuli by a network exhibiting oscillations on a faster time scale. PLoS computational biology. 2009;5(5):e1000370. pmid:19412531
  10. Buzsáki G, Freeman W. Editorial overview: brain rhythms and dynamic coordination. Current opinion in neurobiology. 2015;31:v–ix. pmid:25700995
  11. Ray S, Maunsell JH. Do gamma oscillations play a role in cerebral cortex? Trends in cognitive sciences. 2015;19(2):78–85. pmid:25555444
  12. Bocchio M, Nabavi S, Capogna M. Synaptic plasticity, engrams, and network oscillations in amygdala circuits for storage and retrieval of emotional memories. Neuron. 2017;94(4):731–743. pmid:28521127
  13. Proskovec AL, Wiesman AI, Heinrichs-Graham E, Wilson TW. Beta Oscillatory Dynamics in the Prefrontal and Superior Temporal Cortices Predict Spatial Working Memory Performance. Scientific Reports. 2018;8(1):8488. pmid:29855522
  14. Taub AH, Perets R, Kahana E, Paz R. Oscillations synchronize amygdala-to-prefrontal primate circuits during aversive learning. Neuron. 2018;97(2):291–298. pmid:29290553
  15. Engel AK, König P, Kreiter AK, Schillen TB, Singer W. Temporal coding in the visual cortex: new vistas on integration in the nervous system. Trends in neurosciences. 1992;15(6):218–226. pmid:1378666
  16. Singer W, Gray CM. Visual feature integration and the temporal correlation hypothesis. Annual review of neuroscience. 1995;18(1):555–586. pmid:7605074
  17. Engel AK, Fries P, Singer W. Dynamic predictions: oscillations and synchrony in top–down processing. Nature Reviews Neuroscience. 2001;2(10):704.
  18. Fries P. A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends in cognitive sciences. 2005;9(10):474–480. pmid:16150631
  19. Jensen O, Kaiser J, Lachaux JP. Human gamma-frequency oscillations associated with attention and memory. Trends in neurosciences. 2007;30(7):317–324. pmid:17499860
  20. Knyazev GG. Motivation, emotion, and their inhibitory control mirrored in brain oscillations. Neuroscience & Biobehavioral Reviews. 2007;31(3):377–395.
  21. Hobson JA, Pace-Schott EF. The cognitive neuroscience of sleep: neuronal systems, consciousness and learning. Nature Reviews Neuroscience. 2002;3(9):679. pmid:12209117
  22. Engel AK, Fries P. Beta-band oscillations—signalling the status quo? Current opinion in neurobiology. 2010;20(2):156–165.
  23. Storchi R, Bedford RA, Martial FP, Allen AE, Wynne J, Montemurro MA, et al. Modulation of fast narrowband oscillations in the mouse retina and dLGN according to background light intensity. Neuron. 2017;93(2):299–307. pmid:28103478
  24. Hoppensteadt FC, Izhikevich EM. Thalamo-cortical interactions modeled by weakly connected oscillators: could the brain use FM radio principles? Biosystems. 1998;48(1-3):85–94. pmid:9886635
  25. 25. Lisman JE, Idiart MA. Storage of 7+/-2 short-term memories in oscillatory subcycles. Science. 1995;267(5203):1512–1515. pmid:7878473
  26. 26. Peter A, Uran C, Klon-Lipok J, Roese R, Van Stijn S, Barnes W, et al. Surface color and predictability determine contextual modulation of V1 firing and gamma oscillations. Elife. 2019;8:e42101. pmid:30714900
  27. 27. Lankarany M, Al-Basha D, Ratté S, Prescott SA. Differentially synchronized spiking enables multiplexed neural coding. Proceedings of the National Academy of Sciences. 2019;116(20):10097–10102.
  28. 28. Khamechian MB, Kozyrev V, Treue S, Esghaei M, Daliri MR. Routing information flow by separate neural synchrony frequencies allows for “functionally labeled lines” in higher primate cortex. Proceedings of the National Academy of Sciences. 2019;116(25):12506–12515.
  29. 29. Akam T, Kullmann DM. Oscillatory multiplexing of population codes for selective communication in the mammalian brain. Nature Reviews Neuroscience. 2014;15(2):111–122. pmid:24434912
  30. 30. Teng X, Poeppel D. Theta and Gamma Bands Encode Acoustic Dynamics over Wide-ranging Timescales. bioRxiv. 2019; p. 547125.
  31. 31. Diamond ME. Perceptual uncertainty. PLOS Biology. 2019;17(8):1–7.
  32. 32. Caruso VC, Mohl JT, Glynn C, Lee J, Willett SM, Zaman A, et al. Single neurons may encode simultaneous stimuli by switching between activity patterns. Nature communications. 2018;9(1):1–16.
  33. 33. Panzeri S, Brunel N, Logothetis NK, Kayser C. Sensory neural codes using multiplexed temporal scales. Trends in neurosciences. 2010;33(3):111–120. pmid:20045201
  34. 34. Naud R, Sprekeler H. Sparse bursts optimize information transmission in a multiplexed neural code. Proceedings of the National Academy of Sciences. 2018;115(27):E6329–E6338.
  35. 35. Luz Y, Shamir M. Oscillations via Spike-Timing Dependent Plasticity in a Feed-Forward Model. PLoS computational biology. 2016;12(4):e1004878. pmid:27082118
  36. 36. Hebb DO. The organization of behavior: A neuropsychological theory. Psychology Press; 2005.
  37. 37. Bi Gq, Poo Mm. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. Journal of neuroscience. 1998;18(24):10464–10472. pmid:9852584
  38. 38. Woodin MA, Ganguly K, ming Poo M. Coincident Pre- and Postsynaptic Activity Modifies GABAergic Synapses by Postsynaptic Changes in Cl− Transporter Activity. Neuron. 2003;39(5):807—820. https://doi.org/10.1016/S0896-6273(03)00507-5.
  39. 39. Morrison A, Diesmann M, Gerstner W. Phenomenological models of synaptic plasticity based on spike timing. Biological cybernetics. 2008;98(6):459–478. pmid:18491160
  40. 40. Markram H, Lübke J, Frotscher M, Sakmann B. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science. 1997;275(5297):213–215. pmid:8985014
  41. 41. Froemke RC, Tsay IA, Raad M, Long JD, Dan Y. Contribution of individual spikes in burst-induced long-term synaptic modification. Journal of neurophysiology. 2006;95(3):1620–1629. pmid:16319206
  42. 42. Shouval HZ, Bear MF, Cooper LN. A unified model of NMDA receptor-dependent bidirectional synaptic plasticity. Proceedings of the National Academy of Sciences. 2002;99(16):10831–10836.
  43. 43. Dudek SM, Bear MF. Homosynaptic long-term depression in area CA1 of hippocampus and effects of N-methyl-D-aspartate receptor blockade. In: How We Learn; How We Remember: Toward An Understanding Of Brain And Neural Systems: Selected Papers of Leon N Cooper. World Scientific; 1995. p. 200–204.
  44. 44. Abbott LF, Nelson SB. Synaptic plasticity: taming the beast. Nature Neuroscience. 2000;3(11):1178–1183. pmid:11127835
  45. 45. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks III: Partially connected neurons driven by spontaneous activity. Biological Cybernetics. 2009;101(5-6):411. pmid:19937071
  46. 46. Gütig R, Aharonov R, Rotter S, Sompolinsky H. Learning input correlations through nonlinear temporally asymmetric Hebbian plasticity. Journal of Neuroscience. 2003;23(9):3697–3714. pmid:12736341
  47. 47. Morrison A, Diesmann M, Gerstner W. Phenomenological models of synaptic plasticity based on spike timing. Biological cybernetics. 2008;98(6):459–478. pmid:18491160
  48. 48. Song S, Miller KD, Abbott LF. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature neuroscience. 2000;3(9):919. pmid:10966623
  49. 49. Shamir M. Theories of rhythmogenesis. Current opinion in neurobiology. 2019;58:70–77. pmid:31408837
  50. 50. Luz Y, Shamir M. The effect of STDP temporal kernel structure on the learning dynamics of single excitatory and inhibitory synapses. PloS one. 2014;9(7):e101109. pmid:24999634
  51. 51. Luz Y, Shamir M. Balancing feed-forward excitation and inhibition via Hebbian inhibitory synaptic plasticity. PLoS computational biology. 2012;8(1):e1002334. pmid:22291583
  52. 52. Kempter R, Gerstner W, Hemmen JLv. Intrinsic stabilization of output rates by spike-based Hebbian learning. Neural computation. 2001;13(12):2709–2741. pmid:11705408
  53. 53. Kempter R, Gerstner W, Van Hemmen JL. Hebbian learning and spiking neurons. Physical Review E. 1999;59(4):4498.
  54. 54. Sjöström PJ, Turrigiano GG, Nelson SB. Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron. 2001;32(6):1149–1164. pmid:11754844
  55. 55. Zhang LI, Tao HW, Holt CE, Harris WA, Poo Mm. A critical window for cooperation and competition among developing retinotectal synapses. Nature. 1998;395(6697):37. pmid:9738497
  56. 56. Nishiyama M, Hong K, Mikoshiba K, Poo MM, Kato K. Calcium stores regulate the polarity and input specificity of synaptic modification. Nature. 2000;408:584–588. pmid:11117745
  57. 57. Shriki O, Hansel D, Sompolinsky H. Rate models for conductance-based cortical neuronal networks. Neural computation. 2003;15(8):1809–1841. pmid:14511514
  58. 58. Mongillo G, Rumpel S, Loewenstein Y. Intrinsic volatility of synaptic connections—a challenge to the synaptic trace theory of memory. Current opinion in neurobiology. 2017;46:7–13.
  59. 59. Attardo A, Fitzgerald JE, Schnitzer MJ. Impermanence of dendritic spines in live adult CA1 hippocampus. Nature. 2015;523(7562):592. pmid:26098371
  60. 60. Grutzendler J, Kasthuri N, Gan WB. Long-term dendritic spine stability in the adult cortex. Nature. 2002;420(6917):812. pmid:12490949
  61. 61. Zuo Y, Lin A, Chang P, Gan WB. Development of long-term dendritic spine stability in diverse regions of cerebral cortex. Neuron. 2005;46(2):181–189. pmid:15848798
  62. 62. Holtmaat AJ, Trachtenberg JT, Wilbrecht L, Shepherd GM, Zhang X, Knott GW, et al. Transient and persistent dendritic spines in the neocortex in vivo. Neuron. 2005;45(2):279–291. pmid:15664179
  63. 63. Holtmaat A, Wilbrecht L, Knott GW, Welker E, Svoboda K. Experience-dependent and cell-type-specific spine growth in the neocortex. Nature. 2006;441(7096):979. pmid:16791195
  64. 64. Loewenstein Y, Kuras A, Rumpel S. Multiplicative dynamics underlie the emergence of the log-normal distribution of spine sizes in the neocortex in vivo. Journal of Neuroscience. 2011;31(26):9481–9488. pmid:21715613
  65. 65. Loewenstein Y, Yanover U, Rumpel S. Predicting the dynamics of network connectivity in the neocortex. Journal of Neuroscience. 2015;35(36):12535–12544. pmid:26354919