
General differential Hebbian learning: Capturing temporal relations between events in neural networks and the brain

Fig 4

Different filters applied to the same neural signals detect different desired changes and produce different events on which the G-DHL rules can work.

The two columns of graphs refer to two different simulations. The simulations start from the same neural signals (top graphs) but use different filters (middle graphs), leading to different synaptic updates even though the same DHL rule is applied (bottom graphs). Top graphs: each graph shows two signals u1 and u2, each generated as the average of 4 cosine functions with random frequencies (uniformly drawn from [0.1, 3]) and random amplitudes (each cosine function was first rescaled to (0, 1) and then multiplied by a random value uniformly drawn from (0, 1)). Middle graphs: events resulting from one pair of filters (left) and from a different pair of filters (right; these filters should not be confused with the analogous filters used within the G-DHL rule). Bottom graphs: the step-by-step update of the connection weight (thin curve) and the resulting weight level (bold curve), obtained in the two simulations by applying the Porr-Wörgötter DHL rule to the filtered signals.
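The signal-generation procedure in the caption can be sketched as follows. This is a minimal illustration, not the authors' code: the time grid, the learning rate `eta`, the interpretation of frequency as Hz, and the use of the raw signals in place of the filtered events are all assumptions; the differential Hebbian step (weight change proportional to one event times the time derivative of the other) is the generic form of such rules, not necessarily the exact Porr-Wörgötter implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_signal(t, n_cos=4, f_range=(0.1, 3.0)):
    """Average of n_cos cosine components, as described in the caption:
    random frequency uniform in f_range (assumed Hz), each cosine first
    rescaled to (0, 1), then multiplied by a random amplitude in (0, 1)."""
    components = []
    for _ in range(n_cos):
        f = rng.uniform(*f_range)
        c = np.cos(2.0 * np.pi * f * t)   # values in [-1, 1]
        c = (c + 1.0) / 2.0               # rescaled to (0, 1)
        c *= rng.uniform(0.0, 1.0)        # random amplitude
        components.append(c)
    return np.mean(components, axis=0)

t = np.linspace(0.0, 10.0, 1000)
u1 = make_signal(t)
u2 = make_signal(t)

# Hypothetical "events": here the raw signals stand in for the filtered
# versions shown in the middle graphs (the actual filters are not sketched).
e1, e2 = u1, u2

# Generic differential Hebbian step (assumed form): the weight change is
# proportional to e1 times the time derivative of e2.
eta = 0.01
dt = t[1] - t[0]
dw = eta * e1[1:] * np.diff(e2) / dt          # step-by-step update (thin curve)
w = np.concatenate(([0.0], np.cumsum(dw) * dt))  # weight level (bold curve)
```

Plotting `dw` and `w` against `t` reproduces the qualitative structure of the bottom graphs; applying different filters to `u1` and `u2` before the update, as in the two columns of the figure, would change the resulting weight trajectory.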


doi: https://doi.org/10.1371/journal.pcbi.1006227.g004