Fig 1.

Learning kernels produced by the rules of Hebb, Kosco, and Porr-Wörgötter.

Each graph was plotted by computing the connection-weight update resulting from inter-event delays Δt ranging in [−1.0, 1.0]. Events were represented by a cosine function ranging over (−π, +π), suitably scaled and shifted (see Section 1.1 in S1 Supporting Information for details).
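The procedure behind these plots can be sketched in Python for the plain Hebb rule. The event shape (a cosine bump scaled to [0, 1]) and the time grid are assumptions for illustration; only the procedure follows the caption: sweep the delay Δt and accumulate the Hebb update u_pre · u_post over time.

```python
import numpy as np

def event(t, width=np.pi):
    """Cosine event: a scaled/shifted cosine on (-width, width), zero elsewhere."""
    return np.where(np.abs(t) < width, 0.5 * (1.0 + np.cos(np.pi * t / width)), 0.0)

def hebb_kernel(delay, t=np.linspace(-5.0, 5.0, 10001)):
    """Total weight change of the plain Hebb rule (dw/dt = u_pre * u_post)
    for a pre-synaptic event at t = 0 and a post-synaptic event at t = delay."""
    u_pre, u_post = event(t), event(t - delay)
    return float(np.sum(u_pre * u_post) * (t[1] - t[0]))  # Riemann sum of the update

# One kernel point per inter-event delay, as in the figure.
delays = np.linspace(-1.0, 1.0, 41)
kernel = [hebb_kernel(d) for d in delays]
```

The Kosco and Porr-Wörgötter kernels would be obtained analogously by replacing the update u_pre · u_post with the corresponding rule (which involves signal derivatives).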


Fig 2.

Superposition of learning kernels of the G-DHL rule components.

The learning kernels considered correspond to different inter-event intervals, with events represented by a cosine function as in Fig 1. The kernels are indicated with pairs of letters referring respectively to the pre- and post-synaptic neuron, where ‘S’ refers to the signal ui, ‘P’ to the positive part of its derivative, and ‘N’ to the negative part. PS/SN kernels overlap, and so do SP/NS kernels.


Fig 3.

Examples of learning kernels generated by the G-DHL rule.

The signals contained events generated with a cosine function (as in Fig 1). In the examples, the G-DHL coefficients were set as follows (the rule names are arbitrary): Causal rule: σp,p = σp,n = σn,p = σn,n = ηp,s = ηn,s = 0, ηs,p = 1, ηs,n = −1. Anticausal rule: σp,p = σp,n = σn,p = σn,n = ηs,p = ηp,s = 0, ηs,n = 1, ηn,s = −1. Coincidence rule: ηs,p = ηs,n = ηp,s = ηn,s = 0, σp,p = σn,n = 1, σp,n = σn,p = −1. Flat-at-zero rule: σp,p = σn,n = ηs,p = ηs,n = ηp,s = ηn,s = 0, σp,n = −1, σn,p = 1.


Fig 4.

Different filters applied to the same neural signals detect different desired changes and produce different events on which the G-DHL rules can work.

The two columns of graphs refer to two different simulations. The simulations start from the same neural signals (top graphs) but use different filters (middle graphs), leading to a different synaptic update even though the same DHL rule is applied (bottom graphs). Top graphs: each graph represents two signals u1 and u2, each generated as an average of 4 cosine functions having random frequency (uniformly drawn in [0.1, 3]) and random amplitude (each cosine function was first scaled to (0, 1) and then multiplied by a random value uniformly drawn in (0, 1)). Middle graphs: events resulting from two different pairs of filters (left and right; these filters should not be confused with the analogous filters used within the G-DHL rule). Bottom graphs: step-by-step update of the connection weight (thin curve), and its level (bold curve), obtained in the two simulations by applying the Porr-Wörgötter DHL rule to the filtered signals.
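The signal-generation recipe in the top graphs translates directly into code. This is a minimal sketch; the seed and the time grid are assumptions, while the frequency and amplitude ranges follow the caption.

```python
import numpy as np

def random_signal(t, n_cosines=4, rng=np.random.default_rng(0)):
    """Signal built as the average of n cosines with random frequency
    (uniform in [0.1, 3]); each cosine is first scaled to (0, 1) and then
    multiplied by a random amplitude drawn uniformly in (0, 1)."""
    parts = []
    for _ in range(n_cosines):
        freq = rng.uniform(0.1, 3.0)
        amp = rng.uniform(0.0, 1.0)
        parts.append(amp * 0.5 * (1.0 + np.cos(freq * t)))  # cosine scaled to (0, 1)
    return np.mean(parts, axis=0)

t = np.linspace(0.0, 20.0, 2001)
u1, u2 = random_signal(t), random_signal(t)  # two independent draws, shared generator
```

Because each component lies in (0, 1) before averaging, the resulting signals are bounded in [0, 1].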


Fig 5.

Example of how eligibility traces allow the G-DHL rule to capture temporal interactions between events separated by a time gap.

Left: two neural signals, each exhibiting one event, and the related traces. The trace signals mi,t at time step t were computed numerically by applying a leaky-accumulator process to the initial signals ui,t as follows: mi,t = mi,t−1 + (Δt/τ) ⋅ (−mi,t−1 + ui,t−1), with Δt = 0.001 and τ = 1. Right: the connection weight resulting from applying a G-DHL rule component to the initial signals or to their memory traces.
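The leaky-accumulator update in the caption can be implemented in a few lines. The square test event below is an assumption used only to show how the trace outlasts the event and can bridge a time gap to a later event on the other neuron.

```python
import numpy as np

def leaky_trace(u, dt=0.001, tau=1.0):
    """Eligibility trace m obtained from signal u via the leaky accumulator:
    m[t] = m[t-1] + (dt / tau) * (-m[t-1] + u[t-1]),  with m[0] = 0."""
    m = np.zeros_like(u, dtype=float)
    for t in range(1, len(u)):
        m[t] = m[t - 1] + (dt / tau) * (-m[t - 1] + u[t - 1])
    return m

u = np.zeros(5000)
u[1000:1200] = 1.0   # a brief event on the neural signal
m = leaky_trace(u)   # the trace rises during the event and decays slowly after it
```

With τ = 1 and Δt = 0.001, the trace decays by a factor of 0.999 per step after the event, so a substantial residue survives long after the event has ended.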


Fig 6.

Learning kernels of the four G-DHL differential components for a pair of pre-/post-synaptic spikes.

The three columns of graphs refer to three different settings of the time constants τ1 and τ2 (middle column: τ1 = τ2). The four rows of graphs refer to the different G-DHL components: ‘p’ indicates the positive part of the eligibility-trace derivative and ‘n’ its negative part. Small gray circles indicate maximum synaptic changes.
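The split of the eligibility-trace derivative into its positive (‘p’) and negative (‘n’) parts can be sketched as follows. The smooth rise-and-fall bump standing in for a trace is a hypothetical example; the point is the split itself.

```python
import numpy as np

def pos_neg_parts(signal, dt=0.001):
    """Split the time derivative of a signal into its positive part [d/dt]+
    and negative part [d/dt]-, both returned as non-negative signals."""
    d = np.gradient(signal, dt)
    return np.maximum(d, 0.0), np.maximum(-d, 0.0)

t = np.arange(0.0, 2.0 * np.pi, 0.001)
trace = 0.5 * (1.0 - np.cos(t))   # a smooth rise-and-fall bump, as a stand-in trace
p, n = pos_neg_parts(trace)       # 'p' is active while rising, 'n' while falling
```

By construction the two parts never overlap (p · n = 0 pointwise) and their difference recovers the original derivative.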


Fig 7.

Learning kernels generated by the four mixed components of the G-DHL rule applied to a pair of pre-/post-synaptic spikes.

Graphs are plotted as in Fig 6, with ‘s’ indicating the eligibility-trace signal.


Fig 8.

Results of the model comparison and fitting procedures used to regress the classic STDP data set from Bi and Poo [25].

(a) Left: data points and regression results. Top-right graph: BIC values obtained using 1 to 8 G-DHL components. Bottom-right graph: sizes of the parameters of the selected components. (b) Left graph: data points and the exponential regression from [46] (reproduced from data). Right graph: G-DHL fit using the parameters in ‘a’.


Fig 9.

G-DHL components, and related factors, for Bi and Poo’s learning kernel.

(a) Components found by the G-DHL regression of Bi and Poo’s data set [25]. (b) Temporal profile of the factors of the components shown in ‘a’, plotted for the Δt that causes the maximum synaptic change.


Fig 10.

Different STDP data sets, representative of typical STDP learning kernels, fitted with the G-DHL rule.

Each group of graphs refers to one STDP class/subtype and shows: (1) left graph: data and fitting curve from the original article (in ‘a’ and ‘b’: reprinted with permission from respectively [71] and [72]; in ‘c’ and ‘d’: reproduced from data and graphs published in respectively [73] and [74]); (2) right graph: data and fitting curve obtained with the G-DHL regression; (3) top-central graph: learning curve suggested in [18] to capture the STDP kernel (reprinted with permission). When available, the G-DHL regression was based on the original data (graphs with a star: *), otherwise it used the data extrapolated from the published graphs: (a) data extrapolated from [71]; (b) data extrapolated from [72]; (c) original data from [73]; (d) data extrapolated from [74]. Section 3 in S1 Supporting Information presents more detailed data on the regressions as in Fig 8.


Fig 11.

Other STDP data sets and classes/types fitted with G-DHL.

Graphs plotted as in Fig 10 (graphs on the left in ‘a’ and ‘b’: reprinted with permission from respectively [76] and [77]; graphs on the left in ‘c’ and ‘d’: reproduced from data and graphs from respectively [78] and [79]). Right graphs: (a) data extrapolated from [76]; (b) original data from [77]; (c) original data from [78]; (d) original data from [79]. Section 3 in S1 Supporting Information presents more detailed data on the regressions as in Fig 8.


Table 1.

Summary of the regressions of the nine STDP data sets fitted with the G-DHL rule.

The table reports: the species and brain area from which the neurons were taken (Hip: hippocampus; VisCtx: visual cortex; EntCtx: entorhinal cortex; Tec: tectum); the reference where the data were published (Ref.); the parameters of the selected G-DHL model (i.e., the 2 or 3 parameters of the components chosen by the model-comparison technique); the type of pre- and post-synaptic neuron (Exc: excitatory; Inh: inhibitory); the class assigned to the STDP data set in the taxonomy of Caporale and Dan [18] (C.&D. classes); and the taxonomy we propose on the basis of the components found by the G-DHL regression. In our classes, ‘E’ and ‘I’ refer to the excitatory/inhibitory neurons involved, specifying the class, and the numbers refer to subtypes within the class.
