
Oscillations via Spike-Timing Dependent Plasticity in a Feed-Forward Model

  • Yotam Luz ,

    yotam.luz@gmail.com

    Affiliations: Department of Physiology and Cell Biology, Faculty of Health Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel; Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel

  • Maoz Shamir

    Affiliations: Department of Physiology and Cell Biology, Faculty of Health Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel; Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel; Department of Physics, Faculty of Natural Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel

Abstract

Neuronal oscillatory activity has been reported in relation to a wide range of cognitive processes including the encoding of external stimuli, attention, and learning. Although the specific role of these oscillations has yet to be determined, it is clear that neuronal oscillations are abundant in the central nervous system. This raises the question of the origin of these oscillations: are the mechanisms for generating these oscillations genetically hard-wired or can they be acquired via a learning process? Here, we study the conditions under which oscillatory activity emerges through a process of spike timing dependent plasticity (STDP) in a feed-forward architecture. First, we analyze the effect of oscillations on STDP-driven synaptic dynamics of a single synapse, and study how the parameters that characterize the STDP rule and the oscillations affect the resultant synaptic weight. Next, we analyze STDP-driven synaptic dynamics of a pre-synaptic population of neurons onto a single post-synaptic cell. The pre-synaptic neural population is assumed to be oscillating at the same frequency, albeit with different phases, such that the net activity of the pre-synaptic population is constant in time. Thus, in the homogeneous case in which all synapses are equal, the post-synaptic neuron receives constant input and hence does not oscillate. To investigate the transition to oscillatory activity, we develop a mean-field Fokker-Planck approximation of the synaptic dynamics. We analyze the conditions causing the homogeneous solution to lose its stability. The findings show that oscillatory activity appears through a mechanism of spontaneous symmetry breaking. However, in the general case the homogeneous solution is unstable, and the synaptic dynamics does not converge to a different fixed point, but rather to a limit cycle. We show how the temporal structure of the STDP rule determines the stability of the homogeneous solution and the drift velocity of the limit cycle.

Author Summary

Oscillatory activity in the brain has been described in relation to many cognitive states and tasks, including the encoding of external stimuli, attention, learning and consolidation of memory. However, without tuning of the synaptic weights to the preferred phase of firing, the oscillatory signal may not be able to propagate downstream, due to destructive interference. Here we investigate how synaptic plasticity can facilitate the transmission of an oscillatory signal downstream along the information processing pathway in the brain. We show that basic synaptic plasticity rules that have been reported empirically are sufficient to generate the required tuning that enables the propagation of the oscillatory signal. In addition, our work presents a synaptic learning process that does not converge to a stationary state, but rather remains dynamic. We demonstrate how the functionality of the system, i.e., transmission of oscillatory activity, can be maintained in the face of constant remodeling of synaptic weights.

Introduction

It is generally believed that synaptic plasticity is the basis for learning and memory. According to Hebb's rule [1], which is considered the foundation for current views on learning and memory, the interaction strength between two neurons that are co-activated potentiates. This rule has been extended to the temporal domain by taking into account the effect of the causal relation of pre- and post-synaptic firing on the potentiation and depression of the synapse, which is known as spike-timing dependent plasticity (STDP). STDP has been identified in numerous systems in the brain, and a rich repertoire of causal relations has been described [2–12].

Considerable theoretical efforts have been devoted to investigating the possible computational implications of STDP [13–29]. STDP can be thought of as a process of unsupervised learning (for other views such as reward modulated STDP see [30], for example). It has been shown that certain STDP rules can give rise to the emergence of response selectivity at the level of the post-synaptic neuron [14, 15, 20], whereas other STDP rules can provide a homeostatic mechanism that balances the excitatory and inhibitory inputs to the cell [25, 29]. Furthermore, STDP in combination with other plasticity rules has been shown to lead to structure formation in networks [31, 32].

Although 'spike-timing' is explicitly emphasized by the term STDP, most theoretical studies have focused on very basic temporal structures of neuronal activity. Many studies, for example, assume that neuronal firing follows homogeneous Poisson process statistics and that the correlations are instantaneous in time (with a possible small time shift to reflect delayed reactions). However, neuronal oscillatory activity has been reported in such cognitive processes as the encoding of external stimuli, attention, learning and consolidation of memory [33]. Thus, although the specific role of these oscillations in the learning process remains to be determined, it is clear that neuronal oscillations are abundant in the central nervous system. This raises the question of the mechanisms that generate these oscillations: are they genetically hard-wired into the system or can they be acquired via a learning process?

The effect and possible computational role of oscillations on STDP has been addressed in several studies [34–42]. However, in all of these studies the oscillatory activity was either an inherent property of the neuron or inherited via feed-forward connections from inputs that were oscillating and had a clear preferred phase. A recent numerical study indicated that oscillations may emerge in a large-scale detailed thalamocortical model with STDP [43]. Nevertheless, it remains unclear whether STDP can give rise to the emergence of oscillatory activity by itself, and if so, under what conditions.

This paper is organized as follows. First, we define the STDP rule and the architecture of the feed-forward model network. Next, we examine the learning dynamics of a single plastic synapse onto a post-synaptic cell, for the case where both pre- and post-synaptic neurons are oscillating. We then investigate the emergence of oscillations in a post-synaptic cell as a result of STDP in a population of feed-forward inputs, each of which is oscillating, albeit in different phases, such that their net contribution has no defined oscillatory behavior. In this case, if the synaptic weights are uniform or random, then the total input to the post-synaptic neuron will have little or no oscillatory component. We analyze the stability of the homogeneous solution, and show that when the homogeneous solution is not stable the post-synaptic neuron begins to oscillate. However, the synaptic weights themselves do not converge to a fixed point, but rather to a limit cycle solution. Finally, we discuss our results and suggest an intuitive explanation for the limit cycle solution.

Results

The STDP rule

For convenience, we adopt the STDP formalism presented in [14, 25, 26]. Specifically, we explore the following STDP rule (1) where Δw denotes the modification of the weight w of a synapse connecting a pre- and a post-synaptic neuron, following either a pre- or post-synaptic spike, λ is the learning rate, and Δt = tpost − tpre is the time difference between the pre- and post-synaptic spikes. For simplicity, we assume that all pre-post spike time pairs contribute additively to synaptic plasticity, where the contribution of each such pair follows eq (1). The STDP rule is written as the sum of two processes: potentiation (+) and depression (−). We further assume a separation of variables, such that potentiation and depression are given as the product of the synaptic weight dependence f±(w) and the temporal kernel K±(Δt) of the STDP rule. Specifically, for the synaptic weight dependence we use the Gütig et al. [14] model (2) (3) where α characterizes the relative strength of depression and μ the non-linearity of the learning rule (see Fig 1A and 1B). Here, we focused on two families of temporal kernels for the STDP rule. One is a temporally asymmetric exponential rule of the form (4) where Θ(x) is the Heaviside step function, and τ± denote the characteristic timescales of the LTP and LTD branches of the rule, respectively. The parameter H controls the nature of the learning rule, with H = 1 for a “Hebbian” rule, as in Fig 1C (i.e., potentiating on the causal branch, that is, for Δt > 0, when the post-synaptic neuron fires after the pre-synaptic one), and H = −1 for an “Anti-Hebbian” STDP rule, see e.g., [12, 44].
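As a concrete illustration, the update rule of eqs (1)–(4) can be sketched in a few lines of Python. The helper names and parameter values are ours, and the weight dependence f+(w) = (1 − w)^μ, f−(w) = αw^μ is an assumption reconstructed from the Gütig et al. model and the fixed point condition quoted later in the text:

```python
import numpy as np

# Sketch of the STDP rule of eqs (1)-(4). The weight dependence
# f+(w) = (1 - w)**mu, f-(w) = alpha * w**mu follows the Gutig et al.
# model referenced in the text; helper names are our own.
def f_plus(w, mu):
    return (1.0 - w) ** mu                    # potentiation weight dependence

def f_minus(w, alpha, mu):
    return alpha * w ** mu                    # depression weight dependence

def K_exponential(dt, tau_p=0.020, tau_m=0.020, H=1):
    """Normalized exponential kernel; the H*dt > 0 branch is potentiating."""
    tau = tau_p if H * dt > 0 else tau_m
    return np.exp(-abs(dt) / tau) / tau

def delta_w(w, dt, lam=1e-3, alpha=1.0, mu=0.1,
            tau_p=0.020, tau_m=0.020, H=1):
    """Weight update for one pre-post spike pair, dt = t_post - t_pre."""
    if H * dt > 0:                            # causal branch (for H = +1)
        return lam * f_plus(w, mu) * K_exponential(dt, tau_p, tau_m, H)
    return -lam * f_minus(w, alpha, mu) * K_exponential(dt, tau_p, tau_m, H)
```

For a Hebbian rule (H = 1), a post-after-pre pair (Δt > 0) potentiates the synapse and a pre-after-post pair depresses it, as in Fig 1C; each kernel branch integrates to one, consistent with the normalization noted later in the text.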

Fig 1. Illustrations of the STDP rules studied in this paper.

(A,B) The weight dependence of the STDP rule, f+(w), eq (2), and f−(w), eq (3), with α = 1, shown for different values of μ in different colors. (C) The “temporally asymmetric exponential” temporal kernel, eq (4), with H = 1 (Hebbian rule) and τ+ = 20ms. Different values of τ− are shown in different colors. (D) The “difference of Gaussians” temporal kernel, eq (5), with τ+ = 20ms, τ− = 30ms; different values of T+ = T− ≔ T are shown in different colors.

https://doi.org/10.1371/journal.pcbi.1004878.g001

The second family of STDP rules considered here uses a difference of Gaussians of the form (5) as temporal kernels, where τ± and T± are the temporal widths and temporal shifts of the rule, respectively, see Fig 1D.
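A minimal sketch of the difference-of-Gaussians kernel of eq (5), assuming each lobe is a normalized Gaussian centered at the temporal shift T± with width τ± (our reading of Fig 1D; the lobe normalization is our assumption):

```python
import numpy as np

# Difference-of-Gaussians temporal kernel, eq (5). With T+ = T- = 0 and
# tau+ < tau- this is a "Mexican hat": positive near dt = 0, negative in
# the flanks. Lobe normalization and helper names are our assumptions.
def gaussian_lobe(dt, T, tau):
    return np.exp(-((dt - T) ** 2) / (2.0 * tau ** 2)) / (np.sqrt(2 * np.pi) * tau)

def K_dog(dt, tau_p=0.020, tau_m=0.030, T_p=0.0, T_m=0.0):
    """K(dt) = K+(dt) - K-(dt)."""
    return gaussian_lobe(dt, T_p, tau_p) - gaussian_lobe(dt, T_m, tau_m)
```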

Single synapse STDP

We first analyze the STDP-driven dynamics of a single synapse. We assume that both the pre- and post-synaptic neurons are oscillating at the same frequency ν, albeit with a possibly different phase φ = φpre − φpost. This is done in the limit of weak coupling, assuming that the modification of a single synapse has a marginal effect on the post-synaptic neuron's activity. We assume that the pre- and post-synaptic spiking activity can be described by inhomogeneous Poisson process statistics with an intensity parameter representing the instantaneous expected firing rate (6) where ρpre/post(t) denotes the spike train of the pre-/post-synaptic cell, represented as a linear combination of delta functions at the neuron's spike times; rpre/post(t) is the instantaneous firing rate of the pre-/post-synaptic cell; Dpre/post and Apre/post are the mean (DC) and amplitude modulation (AC) of the firing rate, respectively; and φpre/post is the phase of the pre-/post-synaptic neuron. The angular brackets 〈⋯〉 denote ensemble averaging.

The pre-post correlation structure is an important factor driving the synaptic dynamics. In the limit of weak coupling, the pre and post firing can be modeled as independent Poisson processes. Consequently, one obtains the following pre-post correlation structure (7) where (8)

Note that Γr ≤ 1/2, since the intensity parameter of a Poisson process must be non-negative (Ax ≤ Dx).
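The oscillating Poisson statistics of eq (6) can be sampled by fine time discretization; the sketch below (our own helper, with illustrative parameter values) also makes the non-negativity constraint A ≤ D explicit:

```python
import numpy as np

# Sample spikes from the oscillating intensity of eq (6),
# r(t) = D + A*cos(nu*t + phi), via one Bernoulli draw per small time bin.
# A <= D keeps the intensity non-negative (equivalently Gamma_r <= 1/2).
rng = np.random.default_rng(0)

def oscillating_spikes(D, A, nu, phi, duration, dt=1e-3):
    assert A <= D, "intensity must stay non-negative"
    t = np.arange(0.0, duration, dt)
    rate = D + A * np.cos(nu * t + phi)
    return t[rng.random(t.size) < rate * dt]   # bins that contain a spike

spikes = oscillating_spikes(D=10.0, A=10.0, nu=2 * np.pi * 10, phi=0.0,
                            duration=200.0)
mean_rate = spikes.size / 200.0                     # close to D = 10 Sp/sec
phase_locking = np.cos(2 * np.pi * 10 * spikes).mean()   # ~ A/(2D)
```

The sample mean of cos(νt) over the spikes estimates the modulation ratio A/(2D), i.e., the phase locking of the spike train to the oscillation.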

The fixed point of the learning dynamics.

In the limit of slow learning dynamics, λ→0, we obtain the “mean field” Fokker–Planck approximation for the synaptic dynamics (see e.g., [26] for more details) (9)

Using the pre-post correlation structure of eq (7) yields (10) where (11) where the K̃± are the absolute values of the Fourier transforms of the temporal kernels K±, and are real. Note that our kernel functions are normalized (eqs (4) and (5)), such that K̃±(0) = 1 by construction.

The fixed point of the synaptic dynamics obeys (12)

The solution of the fixed point equation yields (13) where the notation w*(φ) underscores the dependence of the fixed point solution on the relative phase of the pre and post neuronal mean activity. Additionally, eq (13) shows the different effects of the parameters that govern the STDP-driven synaptic dynamics on the fixed point solution.

Fig 2A1 shows w*(φ) for different values of μ (depicted in different colors) for the temporally asymmetric Hebbian (exponential) rule with τ+ = τ−. As can be seen from the figure, this rule favors negative phases, −π < φ = φpre − φpost < 0. In the additive case (i.e., for μ = 0), w*(φ) is a step function with discontinuities at φ ∈ {0, π}. Increasing μ results in a smoother profile but does not alter the width of the profile. The width of the profile is governed by the relative strength of the depression, α: decreasing α widens the profile (consider, for example, the width at half height, or the region in which the synaptic weights are larger than 0.5), whereas increasing α narrows it, Fig 2B1.

Fig 2. The asymptotic synaptic weight in the case of single synapse STDP.

The different panels show the resultant synaptic weight profile w*(φ), given by eqs (12) and (13), as a function of the pre-post phase difference, φ, for different sets of parameters. A1-E1 (left column) show the results for the exponential STDP kernel, as in Fig 1C. A2-E2 (right column) show the results for the difference of Gaussians STDP kernel, as in Fig 1D. The different colors show the variation of the asymptotic weight with respect to a single parameter of the STDP-driven synaptic dynamics. In A1 and A2 the different colors depict different values of μ. In B1 and B2 the different colors depict different values of the relative strength of depression, α. In C1 and C2 the different colors are for different time constants of the STDP rule, τ− and T, respectively. Panels D1 and D2 show the effect of the relative strength of the oscillatory activity, Γr, see eq (8). Panels E1 and E2 show the effect of the oscillation frequency. In all panels, unless stated otherwise in the legend, the following parameters were used: α = 1, μ = 0.1, ν = 2π·10Hz, and Γr = 0.5; for the exponential STDP kernel (left column) τ+ = τ− = 20ms was used, and for the Gaussian STDP kernel (right column) T± = 0, τ+ = 20ms, and τ− = 30ms were used.

https://doi.org/10.1371/journal.pcbi.1004878.g002

The structure of the profile, and in particular the ‘preferred phase’, is governed by the temporal kernel of the STDP rule. Whereas the temporally asymmetric Hebbian rule prefers negative phase differences (causal), the temporally symmetric rule prefers phase differences that are small in their absolute value, Fig 2A1 and 2A2.

To quantify the width of the profile, it is convenient to use the μ-invariant solution that results from the fact that for w*(φc) = 1/2 the ratio f+/f− is independent of μ. From eq (13), the μ-invariant solution φc is determined by (14)

Thus φc yields the phases in which the profile crosses w* = 1/2; hence, they can be used to characterize the “profile width”.

In the temporally asymmetric exponential rule one obtains (15)

For this rule with α = 1 and τ+ = τ−, the solutions are φc = 0, −π. As the ratio τ+/τ− is increased, both solutions increase and the profile rotates clockwise on the ring, but its width remains π, Fig 2C1.

For the Gaussian kernels one obtains (16)

In the case of a temporally shifted “Mexican hat” rule, i.e., T+ = T− ≔ T, the μ-invariant solutions are symmetric around −νT: φc = −νT ± β, with half width β, Fig 2C2. For this rule, α = 1 implies a half width of β = π/2.

From eq (14), for α = 1, the parameters that characterize the mean firing rate (DC) and modulation amplitude (AC) do not affect the profile width. However, the AC to DC ratio, Γr, will affect the circular variance of the synaptic weight profile for μ ≠ 0 (see eq (12)), Fig 2D1 and 2D2.

The Fourier transforms of the STDP rule are monotonically decreasing functions of the oscillation frequency ν. Increasing the frequency decreases K̃±(ν) and causes Q(φ) to converge towards 1. Hence, in the high frequency limit the profile becomes independent of the phase and converges to a uniform value, see Fig 2E1 and 2E2. For both types of temporal kernels studied here, the characteristic frequency for the decay is 1/τ. Thus, the synaptic dynamics loses its sensitivity to the oscillations when the relevant timescale of the STDP rule is on the order of the period of the oscillations. Note, however, that due to the discontinuity in the STDP profile of the exponential rule, the Fourier transform of its temporal kernel decays algebraically (i.e., as a power law for large ν) with the oscillation frequency, whereas the Fourier transform of the smooth Gaussian model decays exponentially fast with ν.
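The decay of the Fourier magnitudes can be checked numerically. The sketch below integrates the normalized one-sided exponential branch and compares it with the closed form |K̃(ν)| = 1/√(1 + (ντ)²), which decays algebraically; a Gaussian lobe of width τ would instead give the exponentially decaying magnitude exp(−(ντ)²/2). Helper names and the midpoint-rule integration are ours:

```python
import numpy as np

# Fourier magnitude of the one-sided exponential branch K(t) = exp(-t/tau)/tau,
# computed by a midpoint Riemann sum and compared against the analytic result
# |K(nu)| = 1/sqrt(1 + (nu*tau)**2): an algebraic (power-law) decay in nu.
tau = 0.020
dt = 2.5e-6
t = (np.arange(int(1.0 / dt)) + 0.5) * dt   # midpoints covering [0, 1] s
k = np.exp(-t / tau) / tau

def fourier_mag_exp(nu):
    re = np.sum(k * np.cos(nu * t)) * dt
    im = np.sum(k * np.sin(nu * t)) * dt
    return np.hypot(re, im)

mags = {nu: fourier_mag_exp(nu) for nu in (0.0, 2 * np.pi * 10, 2 * np.pi * 100)}
```

At ν = 0 the magnitude is 1 (the normalization K̃±(0) = 1), and it falls off only polynomially at high frequency, in contrast with the Gaussian kernel.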

Excitatory synapse population

We now consider how oscillatory activity affects the STDP-driven dynamics of a population of N excitatory synapses that project onto a single post-synaptic neuron in a feed-forward manner, see Fig 3. All the excitatory synapses obey the same STDP rule.

Fig 3. A schematic description of the network architecture showing the pre-synaptic population of oscillating neurons serving as feed-forward input to a single post-synaptic neuron.

https://doi.org/10.1371/journal.pcbi.1004878.g003

The isotropic “Ring Model”.

The spiking activity of the pre-synaptic population is modeled by independent inhomogeneous Poisson processes with instantaneous mean firing rates that oscillate in time. We further assume that all pre-synaptic neurons oscillate at the same angular frequency ν, with the same mean firing rate D and the same rate modulation A, albeit with phases that are evenly spaced on the ring (17) where 〈ρj(t)〉 is the mean firing rate of the j'th pre-synaptic neuron. Note that in this model the net activity of the pre-synaptic population is constant in time. Using the independent Poisson process statistics yields (18)
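The defining property of the ring model of eq (17), that the population-summed input is constant in time, follows from the evenly spaced phases: the sum of N equally spaced cosines vanishes. A quick numerical check, with illustrative parameter values of our choosing:

```python
import numpy as np

# Ring model of eq (17): N pre-synaptic rates r_j(t) = D + A*cos(nu*t + phi_j)
# with evenly spaced phases phi_j = 2*pi*j/N. The cosines cancel in the sum,
# so the net population rate equals N*D at every instant.
N, D, A, nu = 120, 10.0, 10.0, 2 * np.pi * 10
phases = 2 * np.pi * np.arange(N) / N

def net_rate(t):
    return float(np.sum(D + A * np.cos(nu * t + phases)))

samples = [net_rate(t) for t in np.linspace(0.0, 0.1, 50)]
```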

The post-synaptic neuron model.

We model the post-synaptic activity by a “delayed linear Poisson neuron”, which has been frequently used in studies on STDP, see e.g. [14, 18, 19, 25, 45]. The post-synaptic activity obeys Poisson process statistics with a mean instantaneous firing rate given by (19) where d > 0 is a characteristic delay of the post-synaptic neuron's response, and wj is the weight of the j'th synapse. The synaptic dynamics is driven by the overlap of the pre-post correlations and the STDP rule. In the linear Poisson model, eq (19), the pre-post correlations are determined by the pairwise correlation structure of the input layer and the synaptic weights (20)

For large N it is convenient to take the continuum limit, replacing the discrete sum over the phases by an integral over φ and replacing wj by the function w(φ): (21) where w̄ and w̃ are order parameters that describe the synaptic weight distribution, (22)

Thus, w̄ is the mean (DC component) of the synaptic weight profile w(φ) and w̃ its first Fourier component. The phase ψ is determined by the condition that w̃ is real and non-negative. We also refer to ψ, intuitively, as the “center of mass” of the weight profile.
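The order parameters of eq (22) can be computed for a discrete weight profile as the DC term and the first circular Fourier mode. The sign convention below (mode = ⟨w e^{iφ}⟩, with ψ its phase) is one common choice and may differ from the paper's exact definition:

```python
import numpy as np

# Order parameters for a discrete profile w_j at phases phi_j = 2*pi*j/N:
# w_bar is the DC component; the first circular Fourier mode gives w_tilde
# (its magnitude) and psi (its phase, the "center of mass" of the profile).
def order_parameters(w):
    n = w.size
    phi = 2 * np.pi * np.arange(n) / n
    mode = np.mean(w * np.exp(1j * phi))    # first Fourier component
    return w.mean(), np.abs(mode), np.angle(mode)

# A unimodal profile peaked at phi = pi/2 yields psi = pi/2.
n = 120
phi = 2 * np.pi * np.arange(n) / n
w = 0.5 + 0.25 * np.cos(phi - np.pi / 2)
w_bar, w_tilde, psi = order_parameters(w)
```

For this cosine profile the mode picks up exactly half of the modulation amplitude, so w̄ = 0.5, w̃ = 0.125, and ψ = π/2.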

The mean field Fokker-Planck equations.

In the limit of a slow learning rate, λ→0, one obtains (see e.g., [26] for a detailed derivation of the mean field equations) (23)

Using the spatiotemporal correlation structure induced by the oscillations, eq (21), we obtain (24) where we neglected the first term of eq (21) which is O(1/N) in the limit of large N. The terms F0 and F1 are given by (25)

Integrating eq (24) over φ (see e.g., Ben Yishai et al. [46] for details) we obtain the dynamics of the order parameters w̄, w̃, and ψ: (26) (27) where F̄x denotes the corresponding phase average of Fx (x = 0, 1).

The homogeneous synaptic steady state.

Due to the symmetry of the mean field eq (23) with respect to the phase φ, a uniform solution w(φ,t) = wh always exists. In this case w̃ = 0, and the steady state equation reduces to (28)

The trivial solution wh = 0 is of no biological interest. Given our definitions, a non-trivial homogeneous solution to eq (28) for μ > 0 obeys f+(wh) = f−(wh), yielding (29)

For such a uniform profile, the post-synaptic activity given by eq (19) is constant in time. Consequently, the post-synaptic cell will not oscillate in the homogeneous case.
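With the Gütig et al. weight dependence, the condition f+(wh) = f−(wh) reads (1 − w)^μ = αw^μ and can be solved in closed form; the sketch below is our reconstruction of that solution and verifies it numerically:

```python
# Non-trivial homogeneous fixed point: (1 - w)**mu = alpha * w**mu implies
# (1 - w)/w = alpha**(1/mu), hence w_h = 1 / (1 + alpha**(1/mu)). This
# closed form is our reconstruction of the condition stated in the text.
def w_homogeneous(alpha, mu):
    return 1.0 / (1.0 + alpha ** (1.0 / mu))

def fixed_point_residual(alpha, mu):
    w = w_homogeneous(alpha, mu)
    return (1.0 - w) ** mu - alpha * w ** mu   # should vanish at the fixed point

# alpha = 1 gives w_h = 1/2 for every mu; alpha > 1 pushes w_h below 1/2.
```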

Stability of the homogeneous steady state.

To study the stability of the homogeneous solution, we consider an arbitrary (though small) fluctuation w(φj) = wh + δw(φj) and expand the synaptic dynamics around the homogeneous fixed point to leading order in the fluctuations. The stability of the fixed point is determined by the eigenvalues of the stability matrix M. Due to the symmetry of the problem, it is sufficient to study the stability with respect to fluctuations in the uniform direction and in the direction of the first Fourier mode; the uniform direction and the first Fourier mode are eigenvectors of the stability matrix, with m0 and m1 as their corresponding eigenvalues. We obtain (30) and (31)

At the homogeneous fixed point (which obeys f+(wh) = f−(wh) = (1 − wh)^μ) one obtains m0 < 0 for μ > 0. Thus, for μ > 0, the homogeneous fixed point is always stable with respect to fluctuations in the uniform direction (see also [14]).

In contrast, the homogeneous fixed point is not always stable with respect to fluctuations in the direction of the first Fourier mode. Namely, (32)

The first Fourier eigenvalue m1 is the sum of two terms. The first is a stabilizing term, m0/2, that is linear in μ in the limit of small μ (for α ≥ 1; note that for α > 1, wh → 0 in the limit of small μ, whereas for α = 1, wh = 1/2 for all μ). The second term scales with (A/D)^2, and its sign is determined by the Fourier components of the STDP kernels at the oscillation frequency. Thus, for α ≥ 1, if this term is positive, there exists a critical value of the non-linearity parameter, μc, such that the homogeneous fixed point loses its stability for μ < μc. As wh ≤ 1/2 for α ≥ 1 (applicable in the excitatory case; compare with Gütig et al. [14]), μc satisfies: (33)

Thus, for example, in the case of a “Mexican hat” type STDP rule, which is a temporally symmetric difference of Gaussians kernel with T+ = T− = 0 and τ+ < τ− (see eq (5)), an oscillation frequency that obeys |νd| < π/2 yields μc > 0 for any positive value of A (see eq (16)). In contrast, in this frequency regime, m1 is negative in the case of an inverted Mexican hat STDP rule, i.e., for τ− < τ+.

In the case of a temporally anti-symmetric STDP rule, that is, the exponential kernel of eq (4) with τ+ = τ− ≔ τ, the right hand side of eq (33) is, up to a positive factor, H sin(νd) (see eq (15)); hence, μc > 0 for νd < π in the case of a Hebbian rule, H = 1, whereas for the anti-Hebbian rule, H = −1, m1 is negative for νd < π, and the homogeneous state is expected to be stable. Note that in the above examples we did not fully describe all the regimes in which μc > 0.

Numerical simulations.

As the uniform solution loses its stability to fluctuations in the direction of the first Fourier mode, one might naively expect the synaptic dynamics to converge to another fixed point in which the post-synaptic neuron develops a phase preference and begins to oscillate.

Fig 4 shows a typical example of a numerical simulation of the synaptic dynamics using a conductance-based post-synaptic neuron (see Methods for details). The initial conditions in the simulations were uniform, w(φ, t = 0) = 1/2, for all φ. The uniform solution in this case is not stable: some synapses increase their weight and others decrease, and this process does not appear to converge.

Fig 4. Numerical simulation of STDP-driven synaptic dynamics of a population of 120 excitatory synapses providing feed-forward input onto a single conductance based post-synaptic neuron (see e.g. Fig 3 and Methods for details).

The pre-synaptic activity followed an inhomogeneous Poisson process with a time-varying intensity given by eq (17), using N = 120, D = 10Sp/sec, A = 10Sp/sec, ν = 2π·10Hz. We simulated the temporally anti-symmetric exponential STDP rule (eq (4) with τ± = 20ms), with α = 1.1, μ = 0.1, using a learning rate λ = 5·10^−4. (A and B) The synaptic weight profile w(φ, t) as a function of φ, shown for different times t = 0, 35,…80min in different colors. The solid lines show the temporal average of w(φ, t) on the interval [t, t+1min]. (C) Traces showing the dynamics of all 120 synapses, each in a different color: black for the 60th synapse (φ60 = π), magenta for the 120th synapse (φ120 = 2π), shades of blue for synapses 1–59, and shades of red for synapses 61–119. (D) The dynamics of the order parameter w̃ (eq (22)), shown in red (scaled ×100), and the oscillation amplitude of the post-synaptic activity, Apost (eq (6)), shown in black and measured in Sp/sec, binned in time-bins of 1 minute. (E) The dynamics of the order parameter ψ (eq (22)), shown in red (calculated in time-bins of 1 sec), and the oscillation phase of the post-synaptic activity, φpost (calculated in time-bins of 1 min). (F) The distribution of synaptic weights of all 120 synapses, calculated over the second half of the simulation (after convergence to a limit cycle solution). The vertical cyan line denotes the mean value of the distribution. (G) The characteristic shape of the propagating wave, calculated from the synaptic weights during the second half of the simulation. Error bars depict the standard error of the mean (calculated from measurements binned at 1sec during the second half of the simulation). The calculation of φpost used time-bins of 1 min. The vertical magenta line shows the mean phase difference between pre- and post-synaptic activity.

https://doi.org/10.1371/journal.pcbi.1004878.g004

Fig 4A shows the synaptic weight profile at different times. The red line depicts the weight profile averaged over the first minute; it is not identically flat, due to the inherent stochasticity of the learning dynamics. Such random fluctuations generate a small preference for a certain phase, which is then amplified by the synaptic dynamics (m1 > 0). As a result, after several minutes of STDP-driven dynamics the synaptic weight profile developed a pronounced phase preference. However, this profile was not stable, but rather drifted in time, see Fig 4B.

Fig 4C shows the dynamics of the entire synaptic population, which appears to oscillate. However, it is more convenient to view the limit cycle solution in terms of the order parameters, see Fig 4D and 4E. As can be seen from the figure, both w̄ and w̃ converge after about 60 minutes of simulation time. During this period, the synaptic weight profile develops a unimodal hill shape. The structure of this shape is stable; namely, its mean, width, amplitude modulation, etc., are fixed in time, see Fig 4F and 4G. However, its “center of mass”, ψ, drifts in time with a constant angular velocity, see Fig 4E.

The limit cycle solution.

To study the propagating wave solution we apply the limit-cycle ansatz (34) where W(φ) is the steady-state profile of the propagating wave, see e.g., Fig 4G, and V is its angular velocity. Under the limit cycle ansatz the order parameters obey (35) and one obtains (36)

Substituting eqs (35) and (36) into the dynamics of the order parameters, eqs (26) and (27), yields (37) (38)

From eq (38) one obtains that the drift velocity V scales linearly with the learning rate λ. Although eqs (37) and (38) appear to be a simple set of equations for the order parameters w̄ and w̃, solving them is not trivial, because F̄0 and F̄1 also depend on higher order Fourier modes of the synaptic weight profile. However, in the special case of “zero drift”, V = 0, one can obtain a closed-form set of equations that involves only the order parameters w̄ and w̃.

The case of zero drift velocity.

Zero drift velocity corresponds to a fixed point solution for the synaptic weights. In this case, the input to the post-synaptic neuron is a weighted sum of inhomogeneous Poisson processes whose intensities are each cosine modulated in time. Hence, the total input to the post-synaptic neuron is also cosine modulated. As a result, in the linear Poisson model, the mean rate of the post-synaptic neuron is cosine modulated, conforming to eq (6) with (39)

Consequently, the fixed point (zero drift) solution for w(φ) is given by eq (13) as (40)

Note that the approximation of neglecting terms that are O(1/N) in the transition to eq (24) conforms with the assumption of the “weak coupling limit” in eq (7). Eq (13) needs to be solved self-consistently; i.e., the post-synaptic rate parameters are determined by Q(φ) and, in turn, determine Q(φ). The self-consistent equations yield conditions on the modulation parameters D, A, and ν of eq (17) that yield zero drift.

Solution in the additive rule.

For simplicity, we focus in the following on the “additive STDP rule” (i.e., the case of μ = 0). For the additive rule the synaptic weight profile w(φ) is binary, see e.g. Fig 2A1 and 2A2. Hence, (41) where φc1, φc2 are the phases at which the transition occurs, given by eq (14). Consequently, the phase of the input profile is ψ = (φc1 + φc2)/2, and the self-consistent eq (40) reduces to a self-consistent condition on the phases (42) where we have taken, without loss of generality, the phase of the post-synaptic neuron to be zero. Thus, for example, the temporally anti-symmetric case (exponential kernel of eq (4) with τ+ = τ−) with α = 1 yields φc1 = −π and φc2 = 0, resulting in the condition νd = π/2 for zero drift. Fig 5A shows the drift velocity as a function of the oscillation frequency ν/(2π) for a conductance-based integrate-and-fire post-synaptic neuron. As can be seen from the figure, the drift velocity vanishes at a frequency f ≈ 29Hz, which is consistent with a delay d ≈ 8.6ms that can be measured numerically from the phase difference between the post-synaptic neuron and its input.

Fig 5. Numerical simulations of the STDP-driven synaptic dynamics of 1200 excitatory synapses in an isotropic “Ring Model” architecture, serving as feed-forward input onto a single conductance based post-synaptic neuron (see Methods for details).

The pre-synaptic activity followed eq (17), with N = 1200, D = 10Sp/sec, A = 10Sp/sec, and a varying angular frequency ν. Here we simulated the temporally anti-symmetric exponential STDP rule (eq (4) with τ± = 20ms), with α = 1, μ = 0, using a learning rate λ = 5·10^−3. (A) The angular velocity V of the order parameter ψ (eq (34), in revolutions per hour, RPH) plotted as a function of the oscillation frequency. (B, C and D) The synaptic weight profile as estimated numerically (calculated from the second half of the simulations, following convergence to the limit cycle), shown in red as a function of the pre-post phase difference; the weight profile and the post-synaptic phase drift at the same velocity. The black curve shows the zero drift solution of eq (40). The vertical dashed gray line depicts the order parameter, or “center of mass”, ψ0 of the zero drift solution; the dashed blue line shows ψ0 + νd; and the pink line shows the “center of mass” ψ of the weight profile of the limit cycle solution. The panels differ in the oscillation frequency of the pre-synaptic population (36 Hz for panel D).

https://doi.org/10.1371/journal.pcbi.1004878.g005

In the example of a temporally shifted Mexican hat learning rule (difference of Gaussians kernel with T+ = T− ≔ T), φc1,2 = −νT ± β; hence, ψ = −νT. Thus, the condition for zero drift velocity is T = d. Fig 6A shows the drift velocity as a function of T under this learning rule, for a conductance-based integrate-and-fire post-synaptic neuron. As seen from the figure, the drift velocity vanishes at a delay d ≈ 9ms.

Fig 6. Numerical results of simulations as in Fig 5, with pre-synaptic activity following eq (17), with N = 1200, D = 10Sp/sec, A = 10Sp/sec, ν = 2π·10Hz.

The simulated STDP rule was the temporally shifted “Mexican hat” kernel (as in Fig 1D) with varying T± = T, and with α = 1, μ = 0. (A) The angular velocity V of eq (34) (in Revolutions per Hour, RPH) as a function of the time shift T. (B, C and D) The same plots as in Fig 5 for 3 of the simulations, with T = 0, 9, 16ms, respectively. Here λ = 5·10−3 was used.

https://doi.org/10.1371/journal.pcbi.1004878.g006

Note that due to the non-linearity of the post-synaptic neuron and the conductance response of the synapses (see Methods), the estimated delay between pre- and post-synaptic activities is not exactly the same in the two examples, which differ mainly in the oscillation frequency. Nevertheless, under these zero drift conditions the steady state of the linear solution given by eq (41) provides a good approximation of the measured profile, see e.g., Figs 5C and 6C. Part of the reason this non-linearity has such a minor influence is that higher order Fourier modes of the post-synaptic activity are orthogonal to the pre-synaptic activity.

How can one understand intuitively the source of this drifting behavior? Consider the case of a temporally symmetric Mexican hat STDP rule (eq (5) with T+ = T− = T = 0, red line in Fig 1D). This rule potentiates synapses with a phase preference similar to that of the post-synaptic neuron, see e.g. Fig 2A2. Let us assume that by some mechanism of spontaneous symmetry breaking the synaptic weight profile develops a preferred phase, and without loss of generality assume ψ = 0, the vertical gray dashed line in Fig 6B. Consequently, the post-synaptic neuron will oscillate with a phase φpost = ψ + νd, the vertical blue dashed line in Fig 6B. This in turn will cause the synaptic dynamics to generate a ‘force’ that pulls the peak of the weight profile, ψ, towards φpost = ψ + νd, and hence induces a positive drift velocity.
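This chasing mechanism can be caricatured in a few lines: if the profile peak ψ relaxes toward a post-synaptic phase that always leads it by νd, ψ advances at a constant rate and never converges. The relaxation dynamics dψ/dt = k(φpost − ψ) below is our simplification, not eq (34):

```python
import math

def drifting_peak(k=1.0, nu=2 * math.pi * 10, d=9e-3, dt=1e-3, steps=1000):
    """Caricature of the drift: the profile peak psi relaxes toward the
    post-synaptic phase phi_post = psi + nu*d, which always leads it, so
    psi advances at the constant rate k*nu*d and never catches up."""
    psi = 0.0
    for _ in range(steps):
        phi_post = psi + nu * d           # post phase leads psi by nu*d
        psi += k * (phi_post - psi) * dt  # relaxation 'force' toward phi_post
    return psi

# After 1 s the peak has advanced by k*nu*d*t = nu*d ~ 0.565 rad
print(round(drifting_peak(), 4))  # 0.5655
```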

Discussion

STDP is known to be able to act as an unsupervised learning algorithm that learns prominent features of the input layer (pre-synaptic population) statistics, namely, the spatial structure of the correlations. Via a local synaptic STDP learning rule, spatial (or stimulus) selectivity can emerge [14, 15, 17, 20, 47–49]. Here, we focused on temporal aspects and showed how temporal selectivity may emerge. Specifically, we found that although the net activity of the input layer (pre-synaptic population) was constant in time, the homogeneous state was not always stable, and the post-synaptic neuron developed temporal phase preferences via a mechanism of spontaneous symmetry breaking. This instability depends on the sign of the real part of the Fourier transform of the STDP rule (potentiation minus depression), time shifted by the post-synaptic delay d, at the angular frequency ν (see eq (33)). However, in contrast to previous studies, we found that in many cases this selectivity is not static, but rather drifts in time.

One can view Fig 6B as a graphic illustration of a search for a self-consistent solution with zero drift. Assume that the post-synaptic neuron oscillates at some constant phase, without loss of generality φpost = 0. The STDP will shape the synaptic weight profile symmetrically around φpost according to eq (40), the black line in Fig 6B. Accordingly, the synaptic weight profile and the input to the post-synaptic neuron will have a phase of ψ = φpost = 0, the vertical dashed gray line in Fig 6B. However, the response of the post-synaptic neuron to its inputs dictates φpost = ψ + νd (shown by the vertical blue line), which is inconsistent with our initial assumption φpost = 0. In order to satisfy the self-consistency condition we need to introduce a temporal shift into the STDP rule (eq (5) with T+ = T− = T ≠ 0) that aligns the initial assumption of the post-synaptic phase at φpost = 0 with ψ + νd, Fig 6C. Further increases in the temporal shift T of the STDP rule will generate a ‘force’ that pulls the weight profile to the left and results in a negative drift velocity, see Fig 6D.

A quasi-periodic behavior of synaptic weights has been observed in the past. Gilson and colleagues [48] studied the effect of a general correlation structure on STDP-driven synaptic dynamics. Analyzing the homogeneous fixed point, they assumed that the synaptic dynamics will flow in the direction of the strongest spectral component (of the input layer correlations), and consequently will converge to a stable fixed point that reflects the structure of that spectral component. For the additive learning rule (i.e., μ = 0), or when the neuronal response covariance is negative, the existence of a stable fixed point is not guaranteed. Thus, it was shown that for the special case of additive STDP, synaptic dynamics may be dominated by eigenvalues with a large imaginary part, resulting in quasi-periodic behavior. This pathology disappears for any positive μ.

Here, the limit cycle solution is not a pathology but a robust feature of the synaptic dynamics (see, for example, the limit cycle solution with μ > 0 in Fig 4). The main difference from the work of Gilson and colleagues is that, typically, here there is no stable fixed point solution (note that the response covariance of input layer neurons can be negative). Even in the special case where a non-homogeneous fixed point exists, it is: first, not generic, as it requires a set of parameters that solves V = 0; and second, due to the inherent U(1) symmetry of the problem, only marginally stable.

Theoretical investigations of STDP typically derive non-linear equations for the dynamics of the synaptic weights. In general, non-linear dynamics is known to give rise to a wide range of behaviors, including convergence to a fixed point, line attractors, oscillatory activity, and chaos. However, to the best of our knowledge, the existing theoretical research describes STDP as a process that relaxes to a steady state fixed point (except for pathological cases). On the other hand, empirical findings show synaptic dynamics to be highly volatile and even chaotic [50–58]. This raises the question of how the central nervous system can retain its functionality in the face of constant remodeling of synaptic weights. Here we have shown an example in which synaptic weights did not converge to a stable fixed point, but rather remained dynamic. Yet, functionality, in this case the oscillating activity of the post-synaptic neuron, was maintained as an emergent property of global order parameters that converge to a stable fixed point.

Methods

Details of the numerical simulations

Online supporting information.

This manuscript is accompanied by a complete software package that was used throughout the study. This package is a Matlab set of scripts and utilities that includes all the source codes for numerical simulations that were used to produce the figures in this manuscript. It also contains all the scripts that generated the figures.

The conductance based integrate-and-fire model.

The learning dynamics, eq (1), were simulated using a conductance based integrate-and-fire post-synaptic cell. Details of the simulations are as in previous studies [25, 26]. Briefly, the membrane potential of the post-synaptic cell V(t) obeys (43) where Cm = 200pF is the membrane capacitance, Rm = 100MΩ is the membrane resistance, the resting potential is Vrest = −70mV, and the reversal potentials are EE = 0mV and EI = −70mV. An action potential is generated once the membrane potential crosses the firing threshold Vth = −54mV, after which the membrane potential is reset to the resting potential without a refractory period.
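The membrane equation, eq (43), is rendered only as a number in this version. The sketch below assumes the standard conductance-based form Cm dV/dt = (Vrest − V)/Rm + gE(EE − V) + gI(EI − V), which is consistent with the quoted parameters; it is a Python illustration, not the authors' Matlab code, and the constant excitatory drive is ours:

```python
# Euler integration of the conductance-based integrate-and-fire neuron,
# using the parameters quoted in the text. The form of the membrane
# equation is assumed (standard conductance-based LIF), and the constant
# conductance drive below is for illustration only.
CM = 200e-12            # membrane capacitance Cm [F]
RM = 100e6              # membrane resistance Rm [Ohm]
V_REST = -70e-3         # resting potential [V]
E_E, E_I = 0.0, -70e-3  # excitatory / inhibitory reversal potentials [V]
V_TH = -54e-3           # firing threshold [V]
DT = 1e-3               # Euler step [s]

def step(v, g_e, g_i):
    """One Euler step of Cm dV/dt = (Vrest - V)/Rm + gE(EE - V) + gI(EI - V);
    on crossing threshold, reset to Vrest and report a spike."""
    dv = ((V_REST - v) / RM + g_e * (E_E - v) + g_i * (E_I - v)) / CM
    v = v + DT * dv
    if v >= V_TH:
        return V_REST, True
    return v, False

# A constant 5 nS excitatory conductance depolarizes the cell past threshold:
v, spikes = V_REST, 0
for _ in range(200):
    v, fired = step(v, g_e=5e-9, g_i=0.0)
    spikes += fired
print(spikes > 0)  # True
```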

The synaptic conductances, gE and gI, are given by: (44) where X = E, I denotes the excitatory or inhibitory nature of the synapse, NX is the number of synapses, [t]+ ≡ max(t, 0), time is measured in seconds, and the sum runs over the spike times at synapse i. For the temporal characteristic of the α-shape response we chose τE = τI = 5ms. For the conductance coefficient we used the normalization factor of 1/N in eq (19), with SE = 1000/NE and SI = 400/NI, where NE, NI denote the number of excitatory and inhibitory pre-synaptic inputs, respectively.
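Since eq (44) also appears only as a number here, the α-shaped response can be sketched as follows; the unit-peak normalization (value 1 at t = τ) is our assumption, not necessarily the exact prefactor of eq (44):

```python
import math

def alpha_kernel(t, tau=5e-3):
    """Alpha-shaped synaptic response with [t]+ = max(t, 0); the unit-peak
    normalization (value 1 at t = tau) is an assumption, since the exact
    prefactor of eq (44) is not reproduced in the text."""
    tp = max(t, 0.0)
    return (tp / tau) * math.exp(1.0 - tp / tau)

print(alpha_kernel(5e-3))   # 1.0, the peak at t = tau
print(alpha_kernel(-1e-3))  # 0.0, causal response
```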

In order to estimate the post-synaptic membrane potential in eq (43), the software integrates the synaptic and leak currents using the Euler method with a step size of Δt = 1ms. The rationale for using such a coarse step size is given in our previous work [26].

Modeling pre-synaptic activity.

Throughout the simulations in this work, pre-synaptic activities were modeled by independent inhomogeneous Poisson processes with a modulated mean firing rate given by eq (17). To this end, each input was approximated by a Bernoulli process generating binary vectors defined over discrete time bins of Δt = 1ms. These vectors were then filtered by a discrete convolution with the α-shaped kernel (as defined by eq (44)), truncated at a length of 10τX (after which the kernel is zero for all practical purposes). In all simulations we used NI = 40, and NE ∈ {120, 1200} as indicated in the corresponding figures. Note that the synaptic conductances scale inversely with the number of synapses.
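The Bernoulli approximation can be sketched as follows, assuming eq (17) has the form ri(t) = D + A cos(νt + φi), which is consistent with the quoted D, A, ν and the phase-distributed population; the function and its defaults are ours:

```python
import math
import random

def bernoulli_spike_train(phi, duration=1.0, dt=1e-3,
                          D=10.0, A=10.0, nu=2 * math.pi * 10, seed=0):
    """Approximate an inhomogeneous Poisson process by a Bernoulli process:
    in each bin of width dt, emit a spike with probability r(t)*dt, where the
    rate r(t) = D + A*cos(nu*t + phi) is our assumed form of eq (17)."""
    rng = random.Random(seed)
    spikes, t = [], 0.0
    while t < duration:
        rate = D + A * math.cos(nu * t + phi)
        if rng.random() < rate * dt:
            spikes.append(t)
        t += dt
    return spikes

# The mean rate over a full cycle is D = 10 Sp/sec, so roughly 100 spikes
# are expected over 10 seconds:
train = bernoulli_spike_train(phi=0.0, duration=10.0)
print(len(train))
```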

Supporting Information

S1 Software. A Matlab set of scripts and utilities that includes all the source codes for numerical simulations that were used to produce the figures in this manuscript.

It also contains all the scripts that generated the figures.

https://doi.org/10.1371/journal.pcbi.1004878.s001

(ZIP)

Author Contributions

Conceived and designed the experiments: MS YL. Analyzed the data: YL. Contributed reagents/materials/analysis tools: YL. Wrote the paper: YL MS.

References

  1. Hebb DO. The organization of behavior; a neuropsychological theory. New York: Wiley; 1949.
  2. Bi G, Poo M. Synaptic modification by correlated activity: Hebb's postulate revisited. Annu Rev Neurosci. 2001;24:139–66. Epub 2001/04/03. pmid:11283308.
  3. Caporale N, Dan Y. Spike timing-dependent plasticity: a Hebbian learning rule. Annu Rev Neurosci. 2008;31:25–46. Epub 2008/02/16. pmid:18275283.
  4. Bell CC, Han VZ, Sugawara Y, Grant K. Synaptic plasticity in a cerebellum-like structure depends on temporal order. Nature. 1997;387(6630):278–81. Epub 1997/05/15. pmid:9153391.
  5. Woodin MA, Ganguly K, Poo MM. Coincident pre- and postsynaptic activity modifies GABAergic synapses by postsynaptic changes in Cl- transporter activity. Neuron. 2003;39(5):807–20. Epub 2003/09/02. pmid:12948447.
  6. Haas JS, Nowotny T, Abarbanel HD. Spike-timing-dependent plasticity of inhibitory synapses in the entorhinal cortex. Journal of neurophysiology. 2006;96(6):3305–13. Epub 2006/08/25. pmid:16928795.
  7. Vogels TP, Froemke RC, Doyon N, Gilson M, Haas JS, Liu R, et al. Inhibitory synaptic plasticity: spike timing-dependence and putative network function. Front Neural Circuits. 2013;7:119. Epub 2013/07/25. pmid:23882186; PubMed Central PMCID: PMC3714539.
  8. Bi GQ, Poo MM. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. The Journal of neuroscience: the official journal of the Society for Neuroscience. 1998;18(24):10464–72. Epub 1998/12/16. pmid:9852584.
  9. Magee JC, Johnston D. A synaptically controlled, associative signal for Hebbian plasticity in hippocampal neurons. Science. 1997;275(5297):209–13. ISI:A1997WC02200037. pmid:8985013
  10. Zhang LI, Tao HW, Holt CE, Harris WA, Poo MM. A critical window for cooperation and competition among developing retinotectal synapses. Nature. 1998;395(6697):37–44. ISI:000075722200038. pmid:9738497
  11. Dan Y, Poo MM. Spike timing-dependent plasticity: from synapse to perception. Physiol Rev. 2006;86(3):1033–48. pmid:16816145.
  12. Zilberter M, Holmgren C, Shemer I, Silberberg G, Grillner S, Harkany T, et al. Input specificity and dependence of spike timing-dependent plasticity on preceding postsynaptic activity at unitary connections between neocortical layer 2/3 pyramidal cells. Cereb Cortex. 2009;19(10):2308–20. pmid:19193711; PubMed Central PMCID: PMC2742592.
  13. Cateau H, Fukai T. A stochastic method to predict the consequence of arbitrary forms of spike-timing-dependent plasticity. Neural Comput. 2003;15(3):597–620. Epub 2003/03/07. pmid:12620159.
  14. Gutig R, Aharonov R, Rotter S, Sompolinsky H. Learning input correlations through nonlinear temporally asymmetric Hebbian plasticity. The Journal of neuroscience: the official journal of the Society for Neuroscience. 2003;23(9):3697–714. Epub 2003/05/09. pmid:12736341.
  15. Morrison A, Diesmann M, Gerstner W. Phenomenological models of synaptic plasticity based on spike timing. Biol Cybern. 2008;98(6):459–78. Epub 2008/05/21. pmid:18491160; PubMed Central PMCID: PMC2799003.
  16. Rubin J, Lee DD, Sompolinsky H. Equilibrium properties of temporally asymmetric Hebbian plasticity. Phys Rev Lett. 2001;86(2):364–7. pmid:11177832.
  17. Song S, Abbott LF. Cortical development and remapping through spike timing-dependent plasticity. Neuron. 2001;32(2):339–50. Epub 2001/10/31. S0896-6273(01)00451-2 [pii]. pmid:11684002.
  18. Song S, Miller KD, Abbott LF. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature neuroscience. 2000;3(9):919–26. WOS:000167177400019. pmid:10966623
  19. Kempter R, Gerstner W, van Hemmen JL. Hebbian learning and spiking neurons. Physical Review E. 1999;59(4):4498–514. WOS:000079834600094.
  20. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks III: Partially connected neurons driven by spontaneous activity. Biol Cybern. 2009;101(5–6):411–26. ISI:000272176000007. pmid:19937071
  21. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks IV: structuring synaptic pathways among recurrent connections. Biol Cybern. 2009;101(5–6):427–44. pmid:19937070.
  22. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks. I. Input selectivity—strengthening correlated input pathways. Biol Cybern. 2009;101(2):81–102. pmid:19536560.
  23. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks. II. Input selectivity—symmetry breaking. Biol Cybern. 2009;101(2):103–14. pmid:19536559.
  24. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks V: self-organization schemes and weight dependence. Biol Cybern. 2010;103(5):365–86. ISI:000284507700003. pmid:20882297
  25. Luz Y, Shamir M. Balancing feed-forward excitation and inhibition via Hebbian inhibitory synaptic plasticity. PLoS computational biology. 2012;8(1):e1002334. Epub 2012/02/01. pmid:22291583; PubMed Central PMCID: PMC3266879.
  26. Luz Y, Shamir M. The effect of STDP temporal kernel structure on the learning dynamics of single excitatory and inhibitory synapses. PloS one. 2014;9(7):e101109. Epub 2014/07/08. pmid:24999634; PubMed Central PMCID: PMC4085044.
  27. Abbott LF, Nelson SB. Synaptic plasticity: taming the beast. Nature neuroscience. 2000;3 Suppl:1178–83. Epub 2000/12/29. pmid:11127835.
  28. Kistler WM, van Hemmen JL. Modeling Synaptic Plasticity in Conjunction with the Timing of Pre- and Postsynaptic Action Potentials. Neural Comput. 2000;12(2):385–405. WOS:000084864300008. pmid:10636948
  29. Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W. Inhibitory Plasticity Balances Excitation and Inhibition in Sensory Pathways and Memory Networks. Science. 2011;334(6062):1569–73. ISI:000298091400057. pmid:22075724
  30. Fremaux N, Sprekeler H, Gerstner W. Reinforcement learning using a continuous time actor-critic framework with spiking neurons. PLoS computational biology. 2013;9(4):e1003024. pmid:23592970; PubMed Central PMCID: PMC3623741.
  31. Zheng P, Dimitrakakis C, Triesch J. Network self-organization explains the statistics and dynamics of synaptic connection strengths in cortex. PLoS computational biology. 2013;9(1):e1002848. pmid:23300431; PubMed Central PMCID: PMC3536614.
  32. Effenberger F, Jost J, Levina A. Self-organization in Balanced State Networks by STDP and Homeostatic Plasticity. PLoS computational biology. 2015;11(9):e1004420. pmid:26335425; PubMed Central PMCID: PMC4559467.
  33. Buzsáki G. Rhythms of the brain. Oxford; New York: Oxford University Press; 2006.
  34. Cateau H, Kitano K, Fukai T. Interplay between a phase response curve and spike-timing-dependent plasticity leading to wireless clustering. Phys Rev E Stat Nonlin Soft Matter Phys. 2008;77(5 Pt 1):051909. Epub 2008/07/23. pmid:18643104.
  35. Gerstner W, Kempter R, van Hemmen JL, Wagner H. A neuronal learning rule for sub-millisecond temporal coding. Nature. 1996;383(6595):76–81. Epub 1996/09/05. pmid:8779718.
  36. Gilson M, Burck M, Burkitt AN, van Hemmen JL. Frequency selectivity emerging from spike-timing-dependent plasticity. Neural Comput. 2012;24(9):2251–79. Epub 2012/06/28. pmid:22734488.
  37. Karbowski J, Ermentrout GB. Synchrony arising from a balanced synaptic plasticity in a network of heterogeneous neural oscillators. Phys Rev E Stat Nonlin Soft Matter Phys. 2002;65(3 Pt 1):031902. Epub 2002/03/23. pmid:11909104.
  38. Kerr RR, Burkitt AN, Thomas DA, Gilson M, Grayden DB. Delay selection by spike-timing-dependent plasticity in recurrent networks of spiking neurons receiving oscillatory inputs. PLoS computational biology. 2013;9(2):e1002897. Epub 2013/02/15. pmid:23408878; PubMed Central PMCID: PMC3567188.
  39. Lee S, Sen K, Kopell N. Cortical gamma rhythms modulate NMDAR-mediated spike timing dependent plasticity in a biophysical model. PLoS computational biology. 2009;5(12):e1000602. Epub 2009/12/17. pmid:20011119; PubMed Central PMCID: PMC2782132.
  40. Masquelier T, Hugues E, Deco G, Thorpe SJ. Oscillations, phase-of-firing coding, and spike timing-dependent plasticity: an efficient learning scheme. The Journal of neuroscience: the official journal of the Society for Neuroscience. 2009;29(43):13484–93. Epub 2009/10/30. pmid:19864561.
  41. Muller L, Brette R, Gutkin B. Spike-timing dependent plasticity and feed-forward input oscillations produce precise and invariant spike phase-locking. Front Comput Neurosci. 2011;5:45. Epub 2011/11/24. pmid:22110429; PubMed Central PMCID: PMC3216007.
  42. Pfister JP, Tass PA. STDP in Oscillatory Recurrent Networks: Theoretical Conditions for Desynchronization and Applications to Deep Brain Stimulation. Front Comput Neurosci. 2010;4. Epub 2010/08/31. pmid:20802859; PubMed Central PMCID: PMC2928668.
  43. Izhikevich EM, Edelman GM. Large-scale model of mammalian thalamocortical systems. Proceedings of the National Academy of Sciences of the United States of America. 2008;105(9):3593–8. ISI:000253846500074. pmid:18292226
  44. Roberts PD, Bell CC. Spike timing dependent synaptic plasticity in biological systems. Biol Cybern. 2002;87(5–6):392–403. pmid:12461629.
  45. Kempter R, Gerstner W, van Hemmen JL. Intrinsic stabilization of output rates by spike-based Hebbian learning. Neural Comput. 2001;13(12):2709–41. pmid:11705408.
  46. Ben-Yishai R, Hansel D, Sompolinsky H. Traveling waves and the processing of weakly tuned inputs in a cortical network module. J Comput Neurosci. 1997;4(1):57–77. Epub 1997/01/01. pmid:9046452.
  47. Bennett JE, Bair W. Refinement and Pattern Formation in Neural Circuits by the Interaction of Traveling Waves with Spike-Timing Dependent Plasticity. PLoS computational biology. 2015;11(8):e1004422. pmid:26308406; PubMed Central PMCID: PMC4550436.
  48. Gilson M, Fukai T, Burkitt AN. Spectral analysis of input spike trains by spike-timing-dependent plasticity. PLoS computational biology. 2012;8(7):e1002584. pmid:22792056; PubMed Central PMCID: PMC3390410.
  49. Kempter R, Gerstner W, von Hemmen JL. Hebbian learning and spiking neurons. Physical Review E. 1999;59(4):4498–514. ISI:000079834600094.
  50. Ajemian R, D'Ausilio A, Moorman H, Bizzi E. A theory for how sensorimotor skills are learned and retained in noisy and nonstationary neural circuits. Proceedings of the National Academy of Sciences of the United States of America. 2013;110(52):E5078–87. pmid:24324147; PubMed Central PMCID: PMC3876265.
  51. Loewenstein Y, Kuras A, Rumpel S. Multiplicative dynamics underlie the emergence of the log-normal distribution of spine sizes in the neocortex in vivo. The Journal of neuroscience: the official journal of the Society for Neuroscience. 2011;31(26):9481–8. pmid:21715613.
  52. Loewenstein Y, Yanover U, Rumpel S. Predicting the Dynamics of Network Connectivity in the Neocortex. The Journal of neuroscience: the official journal of the Society for Neuroscience. 2015;35(36):12535–44. pmid:26354919.
  53. Rokni U, Richardson AG, Bizzi E, Seung HS. Motor learning with unstable neural representations. Neuron. 2007;54(4):653–66. pmid:17521576.
  54. Holtmaat A, Svoboda K. Experience-dependent structural synaptic plasticity in the mammalian brain. Nature reviews Neuroscience. 2009;10(9):647–58. pmid:19693029.
  55. Moczulska KE, Tinter-Thiede J, Peter M, Ushakova L, Wernle T, Bathellier B, et al. Dynamics of dendritic spines in the mouse auditory cortex during memory formation and memory recall. Proceedings of the National Academy of Sciences of the United States of America. 2013;110(45):18315–20. pmid:24151334; PubMed Central PMCID: PMC3831433.
  56. Keck T, Mrsic-Flogel TD, Vaz Afonso M, Eysel UT, Bonhoeffer T, Hubener M. Massive restructuring of neuronal circuits during functional reorganization of adult visual cortex. Nature neuroscience. 2008;11(10):1162–7. pmid:18758460.
  57. Kasai H, Fukuda M, Watanabe S, Hayashi-Takagi A, Noguchi J. Structural dynamics of dendritic spines in memory and cognition. Trends in neurosciences. 2010;33(3):121–9. pmid:20138375.
  58. Minerbi A, Kahana R, Goldfeld L, Kaufman M, Marom S, Ziv NE. Long-term relationships between synaptic tenacity, synaptic remodeling, and network activity. PLoS biology. 2009;7(6):e1000136. pmid:19554080; PubMed Central PMCID: PMC2693930.