
Fig 1.

Illustration of the hippocampus model.

The sensory input pattern xSI(t) is transformed to a binary representation through the mapping between SI and EC. DG then transforms the pattern xEC to a sparse representation xDG via the non-plastic pathway EC → DG which is subsequently propagated to CA3 via the plastic pathway DG → CA3. Here, N denotes the number of neurons in EC, and t denotes the time within the sequence. Dashed green arrows denote plastic pathways, while solid gray arrows denote fixed pathways, i.e. not plastic on the fast timescale of learning we consider in this work. The forward path represents the encoding path, while the backward path represents the decoding path. Prediction of the next pattern happens in CA3. For illustration purposes, example patterns in the corresponding subregions are shown.
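
As a concrete reading of this encoding path, the following minimal Python sketch may help. It is not the authors' implementation: the thresholding for SI → EC, the k-winners-take-all sparsification for EC → DG, and all sizes are illustrative assumptions.

import numpy as np

def encode(x_si, W_si_ec, W_ec_dg, k_dg=20):
    """Hypothetical encoding path SI -> EC -> DG.

    x_si    : real-valued sensory input pattern (e.g., a flattened image)
    W_si_ec : fixed mapping from SI to EC
    W_ec_dg : fixed (non-plastic) mapping from EC to DG
    k_dg    : number of active DG units; k-winners-take-all is an assumption
    """
    # SI -> EC: binarize the projected input (thresholding is assumed)
    x_ec = (W_si_ec @ x_si > 0).astype(float)
    # EC -> DG: sparsify by keeping only the k_dg most active units
    h = W_ec_dg @ x_ec
    x_dg = np.zeros_like(h)
    x_dg[np.argsort(h)[-k_dg:]] = 1.0
    return x_ec, x_dg

# Example with random fixed pathways (sizes are placeholders)
rng = np.random.default_rng(0)
N_SI, N_EC, N_DG = 784, 200, 400
W_si_ec = rng.normal(size=(N_EC, N_SI))
W_ec_dg = rng.normal(size=(N_DG, N_EC))
x_ec, x_dg = encode(rng.random(N_SI), W_si_ec, W_ec_dg)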


Fig 2.

Illustration of (a) storage and (b) retrieval of a pattern sequence in our model. Solid black arrows denote propagation of activities, while dashed green arrows denote hetero-association in the corresponding pathway. Notice that the very first pattern xCA3(t − 1) is picked randomly from the intrinsic sequence, that the cue is a potentially corrupted input pattern, and that later retrieved patterns may likewise be corrupted. During storage, SI via EC to DG serves as an encoder and EC to SI serves as a decoder, while the pathways from DG to CA3 and from CA3 to EC perform pure hetero-association. During retrieval, by contrast, SI to CA3 performs encoding, while CA3 via EC to SI performs decoding. If artificial stimuli are defined not by images but directly by activities in EC, the encoding and decoding between SI and EC are omitted.
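
The storage and retrieval loops described above could be sketched as follows, assuming a simple Hebbian outer-product rule for the plastic pathways and threshold dynamics for the intrinsic transition; all function names, the learning rate, and the update rule are illustrative assumptions, not the verified equations of the model.

import numpy as np

def hetero_associate(W, post, pre, lr=0.01):
    """Assumed Hebbian outer-product rule: associate `pre` with `post`."""
    return W + lr * np.outer(post, pre)

def store(seq_dg, seq_ca3, seq_ec, W_dg_ca3, W_ca3_ca3, W_ca3_ec):
    """Storage: hetero-associate DG(t) -> CA3(t), CA3(t-1) -> CA3(t),
    and CA3(t) -> EC(t) along the sequence."""
    for t in range(1, len(seq_ca3)):
        W_dg_ca3 = hetero_associate(W_dg_ca3, seq_ca3[t], seq_dg[t])
        W_ca3_ca3 = hetero_associate(W_ca3_ca3, seq_ca3[t], seq_ca3[t - 1])
        W_ca3_ec = hetero_associate(W_ca3_ec, seq_ec[t], seq_ca3[t])
    return W_dg_ca3, W_ca3_ca3, W_ca3_ec

def retrieve(cue_ca3, W_ca3_ca3, W_ca3_ec, n_steps=15, theta=0.0):
    """Retrieval: iterate the intrinsic CA3 transition from a (possibly
    corrupted) cue and decode each state to EC."""
    x, decoded = cue_ca3, []
    for _ in range(n_steps):
        x = (W_ca3_ca3 @ x > theta).astype(float)   # assumed threshold dynamics
        decoded.append((W_ca3_ec @ x > theta).astype(float))
    return decoded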


Fig 3.

Illustration of hetero-association and forgetting over time in the hippocampus model.

At time step t, the DG pattern, inferred via EC from the SI pattern, is hetero-associated with the CA3 pattern, which in turn is hetero-associated with the EC pattern (indicated by the green dashed arrows). The learned associations weaken over time (indicated by the increasing transparency of the orange arrows), leading to a degraded reconstruction (forgetting) in EC (indicated by the increasing blurring), which is stronger for remote patterns, like t − 6, than for more recent patterns, like t − 1. The figure illustrates those connections that are relevant to the current activity.
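
One simple way to obtain the weakening illustrated here is multiplicative weight decay applied at every hetero-association step; the sketch below makes this explicit (the decay factor and the rule itself are assumptions, not taken from the source).

import numpy as np

def hetero_associate_with_decay(W, post, pre, lr=0.01, decay=0.95):
    """Older associations fade geometrically with each new one: a pair
    stored k steps ago survives with strength decay**k, producing the
    recency gradient (t - 6 worse than t - 1) shown in the figure."""
    return decay * W + lr * np.outer(post, pre)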


Fig 4.

Visualization of MNIST sequences, read row-wise from left to right and top to bottom, so that the earliest pattern (index 1) is located at the top left and the latest pattern (index 200) at the bottom right.

(a) Sequence of input images as provided to SI. (b) Reconstruction of the same sequence encoded and directly decoded through SI → EC → SI (visualized EC ground truth). (c) Full intrinsic recall with 200 transitions for each pattern separately, which is visually very close to the reconstruction from the CA3 ground truth (CA3 → EC → SI, not shown). The patterns highlighted in green are examples of cues that have a doppelganger, highlighted in red, in the training sequence, whose subsequence could, as a result, be recalled instead, especially for early (weakly remembered) patterns.


Fig 5.

Illustration of the sequence relaxation scenarios in CA3.

The solid black arrows represent the transitions between the patterns of the intrinsic sequence. If a corrupted cue is given to CA3, the system (i) converges within a short transient to the right position in the intrinsic sequence if the cue is similar enough to the ground truth pattern (green dotted arrows), (ii) converges to a wrong position in the intrinsic sequence if the cue is more similar to another pattern in the intrinsic sequence (orange dashed arrows), or (iii) converges to a spurious intrinsic sequence if the cue is similar to a pattern within a spurious sequence (red dash-dotted arrows).
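
Given access to the ground truth patterns, the three outcomes could be classified with a toy overlap criterion like the one below; the cosine overlap measure, the match threshold, and the expected-position bookkeeping are all assumptions for illustration.

import numpy as np

def relaxation_outcome(trajectory, intrinsic_seq, cue_index, match_thresh=0.9):
    """Classify where a CA3 trajectory relaxed to (cases i-iii above).

    trajectory    : list of CA3 states produced from the corrupted cue
    intrinsic_seq : array of ground truth CA3 patterns, one row per step
    cue_index     : position in the intrinsic sequence the cue came from
    """
    final = trajectory[-1]
    # Normalized overlap of the final state with every stored pattern
    overlaps = intrinsic_seq @ final / (
        np.linalg.norm(intrinsic_seq, axis=1) * np.linalg.norm(final) + 1e-12)
    best = int(np.argmax(overlaps))
    if overlaps[best] < match_thresh:
        return "spurious sequence"                       # case (iii)
    # After len(trajectory) transitions the state should have advanced as far
    expected = (cue_index + len(trajectory)) % len(intrinsic_seq)
    return "correct position" if best == expected else "wrong position"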


Fig 6.

Illustration of the recalled subsequences for different cues and noise levels for the HC-model on the MNIST dataset for N = 200.

Each cue is propagated via pathway SI → EC → DG → CA3. The intrinsic transition is iterated 15 times; at each transition the current pattern in CA3 is decoded via pathway CA3 → EC → SI to visualize the reconstructed digit pattern in SI. The first row shows the visualized EC ground truth cue and subsequence, decoded to SI via pathway EC → SI. The second row shows the reconstructions in SI when the exact pattern is presented in EC, and the remaining two rows show the results when either 10% or 20% binary noise is added to the corresponding EC pattern.
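
The binary noise applied to the EC cues can be modeled by flipping a fixed fraction of bits, as in this minimal sketch (the flip-based noise model is an assumption).

import numpy as np

def add_binary_noise(x_ec, flip_fraction=0.1, rng=None):
    """Flip `flip_fraction` of the bits of a binary EC pattern."""
    if rng is None:
        rng = np.random.default_rng()
    x = x_ec.copy()
    idx = rng.choice(len(x), size=int(flip_fraction * len(x)), replace=False)
    x[idx] = 1.0 - x[idx]
    return x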


Fig 7.

Maximum of the correlation between each pattern and all the other patterns in subregions SI, EC, and DG for the HC-model on the MNIST dataset for N = 200.

For some of the peak values, the corresponding patterns are shown above the corresponding position (pattern index).
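
The quantity plotted here, each pattern's maximum correlation with all other patterns, can be computed for example as the off-diagonal maximum of the pairwise Pearson correlation matrix; a short sketch:

import numpy as np

def max_cross_correlation(patterns):
    """For each pattern, the maximum Pearson correlation with any other.

    patterns : array of shape (n_patterns, n_units), one row per pattern.
    """
    C = np.corrcoef(patterns)       # pairwise correlation matrix
    np.fill_diagonal(C, -np.inf)    # exclude self-correlation
    return C.max(axis=1)            # doppelganger score per pattern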


Fig 8.

Illustration of recalled subsequences for different pairs of cues that are highly correlated on the MNIST dataset for N = 200.

Cue (185) is an example of correct relaxation with 0% noise and of relaxation to a spurious sequence with 20% noise. Cue (9) is an example of relaxation to a wrong position (see Fig 5). Cue (3) does not have a doppelganger and recalls the correct subsequence, whereas Cue (4) has a doppelganger, which causes a more degraded transient phase, so that more time is needed to properly converge to the exact patterns of the intrinsic sequence.


Fig 9.

Illustration of the recalled subsequences when cued with novel patterns (test data) for the HC-model on the MNIST dataset for N = 200.

For the first pattern, the stored sequence is recalled starting from a similar stored pattern, while for the second pattern, there exists no similar ground truth pattern and a spurious sequence is recalled.


Fig 10.

Encoding, intrinsic dynamics, and decoding performance on the RAND-CORR dataset with (left) and without (right) DG.

The plots show the correlation between retrieved and ground truth patterns in CA3 or EC in solid blue. Additionally, a corresponding trend line is given in dashed orange, as well as a baseline in dash-dotted green. The baseline denotes the correlation between the retrieved patterns and the mean pattern of the entire sequence of ground truth patterns; it thus reflects the trivial solution in which the network just generated the mean pattern as an output. (a) Encoder. (b) Encoder without DG. (c) Intrinsic performance in CA3, where each pattern is encoded and the intrinsic dynamic is iterated once. (d) Intrinsic performance in CA3 excluding DG. (e) Decoder. In all plots, the input pattern is the ground truth, which also serves as the pattern compared against.
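
The baseline curve can be reproduced by correlating every retrieved pattern with the mean ground truth pattern, as in this sketch (variable names are assumptions):

import numpy as np

def baseline_correlation(retrieved, ground_truth):
    """Correlation of each retrieved pattern with the mean ground truth
    pattern, i.e., the score of a network that always outputs the mean."""
    mean_pattern = ground_truth.mean(axis=0)
    return np.array([np.corrcoef(r, mean_pattern)[0, 1] for r in retrieved])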


Fig 11.

Recall performance of a simplified HC-model (excluding DG subregion) on the RAND-CORR (left) and RAND (right) datasets.

(a,b) Each pattern is encoded and directly decoded without intrinsic dynamics, for RAND-CORR and RAND, respectively. (c,d) Each pattern is encoded, the intrinsic transition is iterated five times, and the corresponding pattern is decoded, for RAND-CORR and RAND, respectively. (e,f) Each pattern is encoded, the intrinsic transition is looped fully through (T transitions), arriving at pattern t again, and the corresponding pattern is decoded, for RAND-CORR and RAND, respectively.


Fig 12.

Illustration of hetero-association and forgetting over time in the standard framework.

At time step t, the current EC pattern (inferred from the corresponding SI pattern) is projected to CA3 via a fixed pathway EC → CA3 (here we simply used the identity mapping) and hetero-associated (indicated by the green dashed arrows) with the previous CA3 pattern. The learned associations weaken over time (indicated by the increasing transparency of the arrows), leading to a degraded reconstruction (forgetting) in CA3 and thus also in EC (indicated by the increasing blur).


Fig 13.

Recall performance in the standard framework, where uncorrelated CA3 patterns have been hetero-associated online, one pattern pair at a time, with a learning rate of (a) 0.01 and (b) 0.025. Shown is the network performance after a varying number of iterations through the recurrent CA3 dynamics (e.g., 5 transitions). Notice that the curves for 25, 100, and 200 transitions in (b) are almost perfectly overlaid by the curve for 500 transitions and thus not visible.
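
Online hetero-association of one pattern pair at a time, with the learning rates quoted above, might look like the following sketch; the plain outer-product rule and the threshold dynamics are assumptions, not the verified update of the source.

import numpy as np

def train_online(W_ca3, ca3_seq, lr=0.01):
    """One outer-product update per consecutive CA3 pattern pair; with the
    identity EC -> CA3 mapping, the CA3 patterns equal the EC patterns."""
    for t in range(1, len(ca3_seq)):
        W_ca3 = W_ca3 + lr * np.outer(ca3_seq[t], ca3_seq[t - 1])
    return W_ca3

def iterate_ca3(W_ca3, x, n_transitions=5, theta=0.0):
    """Run the recurrent CA3 dynamics for a given number of transitions."""
    for _ in range(n_transitions):
        x = (W_ca3 @ x > theta).astype(float)   # assumed threshold dynamics
    return x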


Fig 14.

Full intrinsic performance of the HC-model (excluding subregion DG) on RAND (a,b), MNIST (c,d), and RAND-CORR (e,f), with N = 200 and CA3 activity 10% (left column), and with N = 1000 and CA3 activity 3.2% (right column). Apart from the CA3 activity and N, the same setup has been used for all networks.


Fig 15.

Illustration of the replay process excluding the DG subregion.

The intrinsic sequence is looped through several times, where at each time step t the EC pattern is reconstructed from the intrinsic pattern via pathway CA3 → EC, and the forward pathway EC → CA3 is updated (indicated by the green dashed arrow) by hetero-associating the reconstructed EC pattern with the corresponding CA3 pattern. Through this process the quality of the EC → CA3 pathway increases, leading to a better recall performance without any external input from SI.
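
A sketch of this replay loop under the same assumed outer-product rule and threshold dynamics as in the earlier sketches; the loop count, the learning rate, and the sequence length T are illustrative.

import numpy as np

def replay(W_ec_ca3, W_ca3_ca3, W_ca3_ec, x_start, T=200, n_loops=3,
           lr=0.01, theta=0.0):
    """Retrain the EC -> CA3 pathway from internally generated activity.

    The intrinsic sequence (length T, assumed) is looped through n_loops
    times; no external input from SI is used."""
    for _ in range(n_loops):
        x = x_start
        for _ in range(T):
            x_ec = (W_ca3_ec @ x > theta).astype(float)   # decode CA3 -> EC
            W_ec_ca3 = W_ec_ca3 + lr * np.outer(x, x_ec)  # update EC -> CA3
            x = (W_ca3_ca3 @ x > theta).astype(float)     # next intrinsic state
    return W_ec_ca3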


Fig 16.

Recall performance of the simplified HC-model (excluding the DG subregion) on the RAND-CORR dataset (a) after retraining/replay, and (b) without replay.
