
Learning spatiotemporal signals using a recurrent spiking network that discretizes time

Fig 1

Model architecture.

(A) The recurrent network consists of inhibitory (blue) and excitatory (red) neurons, with sparse recurrent connectivity. A temporal backbone is established in the recurrent network after a learning phase. Inset: zoom into the recurrent network showing the macroscopic recurrent structure after learning, here for 7 clusters. The excitatory neurons in the recurrent network project all-to-all onto the read-out neurons; the read-out neurons are not interconnected. (B) All excitatory-to-excitatory connections are plastic under the voltage-based STDP rule (see Methods for details). The red lines mark spikes of neuron j (top) and neuron i (bottom). When neurons j and i are strongly coactive, they form bidirectional connections, strengthening both Wij and Wji. The connection Wij (from neuron j to neuron i) is unidirectionally strengthened when neuron j fires before neuron i. (C) The incoming excitatory weights in the recurrent network are L1-normalized, i.e. the sum of all incoming excitatory weights onto each neuron is kept constant. (D) Potentiation of the plastic read-out synapses depends linearly on the current weight, which gives the weights a soft upper bound.
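
Panel B refers to the voltage-based STDP rule defined in the Methods. As a rough guide to how such a rule operates, the sketch below implements a Clopath-style voltage-based update for a single synapse; every parameter name and value here is an illustrative assumption, not the paper's constants.

```python
# Illustrative sketch of a Clopath-style voltage-based STDP update for one
# synapse. All constants below are assumed for illustration; the paper's
# exact rule and parameter values are given in its Methods.

A_LTD, A_LTP = 1e-4, 1e-4                        # amplitudes (assumed)
THETA_MINUS, THETA_PLUS = -70.0, -45.0           # voltage thresholds, mV (assumed)
TAU_X, TAU_MINUS, TAU_PLUS = 15e-3, 10e-3, 7e-3  # filter time constants, s (assumed)
DT = 1e-4                                        # simulation time step, s (assumed)

def stdp_step(w, pre_spike, x_bar, u, u_minus, u_plus):
    """One Euler step of the plasticity rule for a single synapse.

    pre_spike: 1.0 if the presynaptic neuron fired this step, else 0.0
    x_bar:     low-pass filtered presynaptic spike train
    u:         postsynaptic membrane potential (mV)
    u_minus:   slow low-pass filter of u (gates depression)
    u_plus:    fast low-pass filter of u (gates potentiation)
    """
    # Update the low-pass filters.
    x_bar += DT / TAU_X * (-x_bar) + pre_spike
    u_minus += DT / TAU_MINUS * (u - u_minus)
    u_plus += DT / TAU_PLUS * (u - u_plus)

    # Depression: a presynaptic spike while the slow voltage filter is
    # depolarized above THETA_MINUS weakens the synapse.
    w -= A_LTD * pre_spike * max(u_minus - THETA_MINUS, 0.0)

    # Potentiation: continuous; requires a strongly depolarized membrane
    # and a depolarized fast filter, gated by the presynaptic trace
    # (the "pre fires before post" side of panel B).
    w += DT * A_LTP * x_bar * max(u - THETA_PLUS, 0.0) * max(u_plus - THETA_MINUS, 0.0)
    return w, x_bar, u_minus, u_plus
```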
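
The L1 normalization of panel C can be stated compactly: after each plasticity update, the incoming excitatory weights of every neuron are rescaled so that their sum stays at a fixed target. A minimal NumPy sketch, where the target value and the convention that rows index postsynaptic neurons are assumptions:

```python
import numpy as np

def l1_normalize_incoming(W, target_sum=1.0):
    """Rescale each row of W so that its L1 norm equals target_sum.

    Assumes W >= 0, with rows indexing postsynaptic neurons and columns
    indexing presynaptic neurons, so row i holds the incoming excitatory
    weights of neuron i. target_sum is an assumed constant, not the
    paper's value.
    """
    row_sums = W.sum(axis=1, keepdims=True)
    # Leave all-zero rows untouched to avoid division by zero.
    scale = np.divide(target_sum, row_sums,
                      out=np.ones_like(row_sums), where=row_sums > 0)
    return W * scale

# Example: after normalization every neuron's incoming weights sum to 1.
W = np.random.rand(100, 100)
W = l1_normalize_incoming(W, target_sum=1.0)
assert np.allclose(W.sum(axis=1), 1.0)
```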
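
Panel D's soft upper bound follows from making the potentiation step shrink linearly as the weight grows. A minimal sketch, with an assumed learning rate and maximum weight:

```python
def potentiate_soft_bound(w, eta=0.01, w_max=1.0):
    """Weight-dependent potentiation with a soft upper bound.

    The step size shrinks linearly as w approaches w_max, so repeated
    updates drive w toward w_max asymptotically rather than clipping it.
    eta and w_max are illustrative values.
    """
    return w + eta * (w_max - w)
```

Each application moves the weight a fixed fraction eta of its remaining distance to w_max, so the weight converges to the bound without overshooting and never needs a hard clip.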

doi: https://doi.org/10.1371/journal.pcbi.1007606.g001