
Figure 1.

Autoassociative memory with bounded synapses.

A. Memories are stored in the recurrent collaterals of a neural network. Five example synapses are shown, each in a different state (colors from panel C). B. During storage, a sequence of items (indexed backwards in time from the time of recall) induces changes to the internal states, and thus to the overt efficacies, of the recurrent synapses in the network. During retrieval, the dynamics of the network should identify the pattern to be recalled given a cue and the information in the synaptic efficacies. C. The cascade model of synaptic metaplasticity [17]. Colored circles are latent states corresponding to two different synaptic efficacies; arrows are state transitions (blue: depression, red: potentiation). Tables show different variants of mapping pre- and postsynaptic activations to depression (D) and potentiation (P) under the pre- and postsynaptically-gated learning rules. D. Left: the evolution of the expected distribution over synaptic states (stripe thickness is proportional to the probability of the corresponding state; see panel C for color code) after a potentiation event (marked by the vertical arrow) and the storage of random patterns in subsequent steps, together with the distribution of times at which this memory may need to be recalled (white curve). Middle: the time-averaged expected distribution of hidden synaptic states at the unknown time of recall of this memory. Right: the corresponding distribution over overt synaptic efficacies.
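
The cascade-style transitions sketched in panel C can be illustrated with a minimal simulation. The `(efficacy, depth)` state encoding, the geometric scaling factor `x`, and the exact transition probabilities below are assumptions for illustration; they follow the spirit of the cascade model of [17], not its precise parameterisation.

```python
import random

def potentiate(state, n, x=0.5):
    """One potentiation event for a cascade synapse.

    state = (efficacy, depth): efficacy is 0 (weak) or 1 (strong);
    depth runs 0..n-1, with deeper states more resistant to change.
    The probabilities (x ** depth) are illustrative assumptions.
    """
    efficacy, depth = state
    if efficacy == 0:
        # plasticity: switch to the shallowest strong state
        if random.random() < x ** depth:
            return (1, 0)
    elif depth < n - 1:
        # metaplasticity: same efficacy, one level more stable
        if random.random() < x ** (depth + 1):
            return (1, depth + 1)
    return state

def depress(state, n, x=0.5):
    """Mirror-image transitions for a depression event."""
    efficacy, depth = state
    if efficacy == 1:
        if random.random() < x ** depth:
            return (0, 0)
    elif depth < n - 1:
        if random.random() < x ** (depth + 1):
            return (0, depth + 1)
    return state
```

With `n = 1` this reduces to a simple two-state synapse; deeper cascades trade fast learning in shallow states against slow forgetting in deep ones.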

Figure 2.

Optimal recall.

A. Optimal neural transfer function: the total somatic current combines the recurrent contribution and a persistent external input corresponding to the recall cue. B. An example retrieval trial, from left to right: the pattern to be retrieved; the recall cue; the activity of a subset of neurons during retrieval; the final answer to the retrieval query, obtained by temporally averaging the activity of the population; and the evolution of the r.m.s. retrieval error over the course of the trial. C. Recall performance as a function of pattern age (blue). As a reference, performance when the age of the pattern is known to the network is also shown (black; see Text S3). The gray filled curve shows the distribution of retrieval times. D. Average performance as a function of the number of synapses per neuron, in fully connected networks of different sizes (blue) or in a sparsely connected network of fixed size with varying numbers of connections (red). E. Average performance as a function of average pattern age in fully connected networks of different sizes, for balanced patterns (black, gray) and sparse patterns (green), which differ in coding level. Dashed lines in panels C–E show, as a control, the performance of an optimised feed-forward network without synaptic plasticity (see main text for why this is a relevant upper bound on average recall error).
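
As a toy illustration of the retrieval procedure in panel B (not the paper's optimal transfer function), a Hopfield-style network can recover a stored ±1 pattern from a noisy, persistently presented cue, with the answer read out by temporally averaging activity and scored by r.m.s. error. All parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hopfield-style recall; the paper's optimal dynamics are more
# elaborate, but the ingredients match panel B: a persistent cue,
# recurrent dynamics, temporal averaging, and an r.m.s. error score.
N = 200
pattern = rng.choice([-1.0, 1.0], size=N)           # pattern to be retrieved
W = np.outer(pattern, pattern) / N                  # one stored pattern
np.fill_diagonal(W, 0.0)

cue = pattern * rng.choice([1.0, -1.0], size=N, p=[0.8, 0.2])  # 20% bits flipped
x = cue.copy()
avg = np.zeros(N)
for t in range(20):
    current = W @ x + 0.1 * cue     # recurrent contribution + persistent cue input
    x = np.sign(current)
    avg += x
answer = np.sign(avg)               # temporal average of population activity
rms_error = np.sqrt(np.mean((answer - pattern) ** 2))
```

With a single stored pattern the dynamics converge in one step; the interesting regime studied in the article arises when many patterns compete for the same bounded synapses.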

Figure 3.

The advantage of metaplastic synapses.

A. Recall performance for simple two-state versus metaplastic (multi-state cascade) synapses. B. Scaling of memory span, defined as the maximum age at which patterns can still be recalled reliably within the allowable error, for two-state (left) and metaplastic (right) synapses.

Figure 4.

Intrinsic plasticity.

A. Effects of different forms of IP (rows) for different forms of synaptic plasticity (columns). Recall performance is shown for different variants of each form of IP (bars), entailing different approximations of the exact (optimal) dynamics. Dashed lines show the control performance of an optimised feedforward network, as in Fig. 2C–E; solid lines show the performance of the exact dynamics; asterisks mark neural dynamics that are formally equivalent to the exact case. B. Recall performance as a function of pattern age with neuron-independent (black) and neuron-specific (blue) variants of IP for the postsynaptically-gated learning rule. The gray filled curve shows the distribution of pattern ages. C. Recall performance for an online implementation of the two forms of IP. D. Net change in excitability induced by the two forms of IP together, as a function of time since memory storage, for neurons that were either active (gray) or inactive (black) in the originally stored pattern. Lines correspond to different random sequences of consecutively stored patterns.
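
The online IP variants referred to in panel C can be thought of as activity-dependent threshold updates integrated over a window of recently stored patterns. The homeostatic form, names, and parameters below are assumptions for illustration, not the paper's derived update rule.

```python
import numpy as np

def ip_update(theta, activity_history, window=10, target=0.5, eta=0.2):
    """Online intrinsic-plasticity sketch: each neuron's threshold
    drifts so that its activity, averaged over the last `window`
    stored patterns, approaches a target level. The homeostatic form
    and all parameters are illustrative assumptions.

    theta: (N,) thresholds; activity_history: (T, N) array of 0/1 activity.
    """
    recent = np.asarray(activity_history, dtype=float)[-window:].mean(axis=0)
    return theta + eta * (recent - target)
```

The finite integration window mirrors the "online integration window of 10 patterns" used for the combined approximations in Fig. 6.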

Figure 5.

Dynamic feedback inhibition.

A. Statistics of inhibitory vs. excitatory currents for three example neurons during a recall trial. Blue and red: two neurons whose bits in the originally stored pattern were correctly recalled; gray: a neuron with high variability during the trial, corresponding to an incorrectly recalled bit. Individual dots correspond to different time steps within the same recall trial. B. Effect of replacing feedback inhibition with tonic inhibition of the same average level. C. Evolution of the mean population activity during retrieval when the network dynamics involve feedback (red) versus tonic (blue) inhibition. Lines correspond to different trials. D. Schematic view of inhibitory connectivity in the network. Pyramidal neurons sending or receiving monosynaptic excitation (disynaptic inhibition) to example neuron 2 (black) are colored red (blue). Blue circle: a local interneuron (not explicitly modeled) mediating the disynaptic lateral inhibition received by neuron 2. E/I overlap is measured as the fraction of presynaptic pyramidal neurons colored both blue and red; 0% is chance. E. Total somatic current (through recurrent connections) to an example cell in a sparsely connected network (20% connectivity) with full (x-axis) or partial (y-axis, colors) E/I overlap; different points correspond to different time steps. F. Recall performance as a function of E/I overlap. Asterisks in B and F indicate network configurations that are formally equivalent to the exact dynamics.
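
The contrast between panels B and C can be sketched as two ways of setting the inhibitory term in a recurrent network update: one tracking the instantaneous population activity (feedback), one fixed at a matched constant (tonic). The subtractive parameterisation below is an illustration, not the paper's exact dynamics.

```python
import numpy as np

def recurrent_step(x, W, cue, inhibition="feedback", g=1.0, tonic_level=None):
    """One update of a binary recurrent network with subtractive inhibition.

    'feedback' subtracts a signal proportional to the current mean
    population activity; 'tonic' subtracts a fixed constant (intended
    to match the average feedback level). Illustrative assumptions.
    """
    excitation = W @ x + cue
    if inhibition == "feedback":
        inh = g * x.mean()          # tracks population activity dynamically
    else:
        inh = tonic_level if tonic_level is not None else g * 0.5
    return (excitation - inh > 0).astype(float)
```

Because the feedback term depends on the evolving activity, it can stabilise the population level trial by trial, whereas tonic inhibition cannot compensate for fluctuations, which is the qualitative difference panels B and C quantify.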

Figure 6.

Combining different circuit motifs for approximately optimal retrieval.

Retrieval performance with individual approximations (left) and with all approximations combined (right), compared with a hypothetical scenario in which errors cumulate additively (middle). All networks are 50% sparsely connected. Dashed and solid lines show the performance of the control network and of the exact dynamics, respectively. Approximations used: online neuron-specific presynaptic (red) and postsynaptic (pink) IP with an online integration window of 10 patterns; 50% E/I overlap (yellow); all combined (blue); and with additional population oscillations (green, see also Fig. 7).

Figure 7.

Population oscillations.

A. Recall performance as a function of pattern age with optimal network dynamics without oscillations (blue, cf. Fig. 2C), with medium- (purple) or large-amplitude (red) oscillations, and with an artificial sampling algorithm (gray). B. Average recall performance with the artificial sampling algorithm (gray) and with different levels of amplitude modulation for network oscillations (amplitude 0.75 corresponds to the ‘medium oscillation’ in panel A). C. Average normalised population activity and response entropy at different phases during a cycle of a large-amplitude oscillation.
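
One simple way to realise the amplitude-modulated population oscillations compared in panels A and B is to modulate the network's inhibitory (or gain) level sinusoidally across a cycle. The functional form and parameters below are assumptions for illustration; `amplitude=0.75` merely echoes the 'medium oscillation' label of panel A.

```python
import numpy as np

def oscillating_inhibition(t, period=20, base=1.0, amplitude=0.75):
    """Inhibitory level at time step t, modulated sinusoidally over a
    cycle of the given period. The sinusoidal form and all parameter
    values are illustrative assumptions, not the paper's dynamics."""
    return base * (1.0 + amplitude * np.sin(2 * np.pi * t / period))
```

Phases of low inhibition let many candidate responses become active (high response entropy), while phases of high inhibition prune them, the pattern quantified in panel C.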

Figure 8.

Network flickering.

A. Hippocampal population dynamics during a single retrieval trial, reproduced from Ref. [46]. Correlations of the instantaneous population vector with the stereotypical responses of the network in the two contexts are shown (red vs. blue). Top: flickering (box) following the switching of visual cues at time 0 (green vertical line); bottom: spontaneous flickering (box) without external cue switching. B. Dynamics of population responses in the model, showing flickering (boxes) after cue switching (top) and spontaneously, without cue switching (bottom).
