Learning spatiotemporal signals using a recurrent spiking network that discretizes time

Fig 5

Scaling, robustness and time variability of the model.

(A) Change of the mean period of the sequential dynamics as the number of clusters grows, with (i) the total number of excitatory neurons kept constant (red line) and (ii) the total number of neurons increasing with cluster size (blue line). Error bars show one standard deviation. (B) Dynamics with varying levels of external excitatory input for four different cluster sizes and NE = 2400. The external input can modulate the period of the sequential dynamics by ∼10%. (C) Recall performance of the learned sequence ABCBA under synapse deletion for varying cluster sizes and NE = 30NC (computed over 20 repeats). The learning time depends on the cluster size: Δt = 960 s/NC. (D) The ABCBA sequence is learned with a network of 120 excitatory neurons connected in one large chain and read-out neurons with the maximum synaptic read-out strength increased to . The network is driven by a low external input (). When a single synapse is deleted at t = 500 ms, the dynamics break down and parts of the sequence are randomly activated by the external input. Top: spike raster of the excitatory neurons of the RNN. Bottom: spike raster of the read-out neurons. (E) Left: histogram of the variability of the period of the sequential activity of the RNN over 79 trials. Right: the standard deviation of the cluster activation time, σt, increases as the square root of the mean cluster activation time, μt (root mean squared error = 0.223 ms).
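The square-root scaling in panel E is what one expects if each cluster in the chain contributes an independent timing jitter: the variance of the cumulative activation time then grows linearly with the number of clusters traversed, so its standard deviation grows as √μt. The following sketch illustrates this (it is not the paper's simulation code; the cluster count, per-cluster dwell time, and jitter magnitude are arbitrary assumptions):

```python
import numpy as np

# Hedged illustration, NOT the paper's model: assume each cluster's activation
# adds an independent Gaussian dwell time, and check that the trial-to-trial
# standard deviation of the cumulative activation time grows as sqrt(mean time).
rng = np.random.default_rng(0)

n_clusters = 30      # assumed length of the cluster chain
mean_dwell = 10.0    # assumed mean dwell time per cluster (ms)
jitter_sd = 1.0      # assumed per-cluster timing jitter (ms)
n_trials = 2000

# Cumulative activation time of each cluster, per trial.
dwells = rng.normal(mean_dwell, jitter_sd, size=(n_trials, n_clusters))
times = np.cumsum(dwells, axis=1)

mu_t = times.mean(axis=0)     # mean activation time of each cluster
sigma_t = times.std(axis=0)   # its standard deviation across trials

# Independent jitters predict sigma_t = jitter_sd * sqrt(mu_t / mean_dwell),
# i.e. sigma_t proportional to sqrt(mu_t), as in panel E.
predicted = jitter_sd * np.sqrt(mu_t / mean_dwell)
print(np.max(np.abs(sigma_t - predicted)))
```

The residual printed at the end is small (it shrinks with the number of trials), confirming the √μt law under the independence assumption.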