Table 1.

Description of mathematical symbols used in the main text.

Figure 1.

Model of environment.

Allocentric representation (left panel) and egocentric view (right panel). The agent (white triangle) occupies an allocentric location and heading direction, measured in degrees clockwise relative to the positive x-axis. The environment contains two inner walls and four boundary walls. The agent is equipped with whiskers that detect the minimum Euclidean distance to a wall. It is also equipped with a nose that detects the signal from an olfactory source placed in the south-west corner of the maze (white circle). The agent also has a retina that is fixed in orientation and always aligned with the direction of heading. The retina provides a one-dimensional visual input (displayed as a one-dimensional image in the right panel) spanning −45 to +45 degrees of visual angle around the heading direction and comprising a row of pixels.

Figure 2.

Generative model for spatial cognition.

The agent's dynamical model is embodied in the red arrows, and its observation model in the blue arrows. All of the agent's spatial computations are based on statistical inference in this same probabilistic generative model; the computations differ only in which variables are known (gray shading) and which the agent wishes to estimate. Sensory imagery: given a known initial state and virtual motor commands, the agent can generate sensory imagery. Decision making: given an initial state, a sequence of putative motor commands (e.g. a left turn), and sensory goals, the agent can compute the likelihood of attaining those goals under those commands. This computation requires a single sweep of forward inference. The agent can then repeat it for a second putative motor sequence (e.g. a right turn), and decide which turn to take based on the likelihood ratio. Model selection: here, the agent has made observations and computes the likelihood ratio under two different models of the environment. Planning: planning can be formulated as estimation of a density over actions given the current state and desired sensory states. This requires a forward sweep to compute the hidden states that are commensurate with the goals, and a backward sweep to compute the motor commands that will produce the required hidden-state trajectory.
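The forward sweep that produces sensory imagery can be sketched in a few lines: from a known initial state and a sequence of virtual motor commands, the dynamical model is applied recursively and the observation model maps each state to imagined sensory data. The functions `f` and `g` below are toy stand-ins for the agent's learned dynamical and observation models, not the paper's equations:

```python
import numpy as np

def sensory_imagery(x0, controls, f, g):
    """Roll the generative model forward: the dynamical model f (red arrows)
    produces a state trajectory from virtual motor commands, and the
    observation model g (blue arrows) produces the imagined sensory data."""
    x, states, imagery = x0, [], []
    for u in controls:
        x = f(x, u)             # dynamical model: next hidden state
        states.append(x)
        imagery.append(g(x))    # observation model: imagined sensation
    return np.array(states), np.array(imagery)

# toy stand-ins: 1-D additive dynamics, squared-distance observation
states, imagery = sensory_imagery(0.0,
                                  [1.0, 1.0, -0.5],
                                  f=lambda x, u: x + u,
                                  g=lambda x: x**2)
```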

Figure 3.

Visual imagery.

(A) Control sequence used to generate visual imagery for the ‘north-east’ trajectory. The input signals are acceleration and change in direction; these control signals update the agent's state according to equation 3. (B) The state variables speed and direction produced by the control sequence in A. (C) The location variables shown as a path (red curve); this is the ‘north-east’ trajectory. The state-variable time series in B and C were produced by integrating the dynamics in equation 3 using the local linearisation approach of equation 5. (D) Accuracy of the visual imagery produced by the agent, as compared to the sensory input that would have been produced by the environmental model. The figure shows the proportion of variance explained by the agent's model as a function of retinal angle, computed separately for the north-east (black), north-west (red), south-east (blue) and south-west (green) trajectories. Only activity in the centre of the retina is accurately predicted.
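Panel D's accuracy measure, the proportion of variance explained per retinal angle, can be computed as a minimal sketch; the array shapes and the toy data below are assumptions for illustration, not the paper's:

```python
import numpy as np

def variance_explained(y_true, y_pred):
    """Proportion of variance in the actual sensory input explained by the
    agent's imagery, computed independently for each retinal pixel.
    Both arrays have shape (time points, pixels)."""
    resid = y_true - y_pred
    return 1.0 - resid.var(axis=0) / y_true.var(axis=0)

rng = np.random.default_rng(0)
y_true = rng.normal(size=(500, 9))                        # actual retinal input
y_pred = 0.8 * y_true + 0.2 * rng.normal(size=(500, 9))   # imperfect imagery
r2 = variance_explained(y_true, y_pred)                   # one value per angle
```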

Figure 4.

Localisation.

Left: representative result from a single trial showing the true route computed using noiseless path integration (black curve), localisation with a noisy path integrator and no hippocampus (blue curve), and localisation with a noisy path integrator and a hippocampus (red curve). Right: boxplots of localisation error over trials, with medians indicated by red bars, box edges indicating the 25th and 75th percentiles, whiskers extending to the most extreme points not considered outliers, and outliers plotted as red crosses.
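The benefit of supplementing a noisy path integrator with sensory input can be illustrated by a one-dimensional precision-weighted fusion step. This is a generic Kalman-style update standing in for the paper's Bayesian correction, with all numbers purely illustrative:

```python
def fuse(mu_path, var_path, y_obs, var_obs):
    """Precision-weighted fusion of a noisy path-integration estimate with a
    sensory observation of position: the corrected estimate shifts towards
    the observation by a gain set by the relative uncertainties, and the
    posterior variance is smaller than either input variance alone."""
    k = var_path / (var_path + var_obs)   # gain: trust in the observation
    mu = mu_path + k * (y_obs - mu_path)  # corrected position estimate
    var = (1 - k) * var_path              # reduced uncertainty
    return mu, var

# equally uncertain path integral and observation -> estimate halfway between
mu, var = fuse(mu_path=2.0, var_path=1.0, y_obs=2.6, var_obs=1.0)
```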

Figure 5.

Decision making.

The task of decision making is to decide whether to make a left or a right turn (hence the question mark in the above graphic). Top Left: locations on the route of the ‘left turn’ or north-west trajectory (red curve). Top Right: the markers A, B, C, D and E denote locations on the ‘right turn’ or north-east trajectory at successive time points. Bottom: the log likelihood ratio (of north-east versus north-west) as a function of the number of time points along the trajectory.
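Accumulating a log likelihood ratio over time points can be sketched with a toy scalar observation model; the Gaussian sensory predictions for the two turns below are assumptions for illustration, not the paper's sensory model:

```python
import numpy as np

def gaussian_loglik(y, mu, sigma):
    """Log density of y under a Gaussian with mean mu and s.d. sigma."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (y - mu)**2 / (2 * sigma**2)

rng = np.random.default_rng(1)

# hypothetical scalar sensory predictions under the two candidate turns
mu_left, mu_right, sigma = 0.0, 1.0, 1.0
y = rng.normal(mu_right, sigma, size=200)   # data actually generated by 'right'

# running log likelihood ratio (right vs left); one increment per time point,
# so evidence for the correct turn grows as the trajectory unfolds
llr = np.cumsum(gaussian_loglik(y, mu_right, sigma)
                - gaussian_loglik(y, mu_left, sigma))
```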

Figure 6.

Model selection.

The task of model selection is for the agent to decide which environment it is in (hence the question mark in the above graphic). Top Left: north-east trajectory in maze 2. Top Right: north-east trajectory in maze 1. The mazes have differently coloured east and west walls. The markers on the trajectories (A, B, C, D and E) denote locations corresponding to successive time points. Bottom: the log likelihood ratio (of maze 1 versus maze 2) as a function of the number of time points along the trajectory. At n = 1000 the LogLR is approximately 3, which allows the agent to infer, with 95% probability, that it is located in maze 1 rather than maze 2.
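With equal priors, a log likelihood ratio maps to a posterior model probability via the logistic function, which is how a LogLR of about 3 yields roughly 95% confidence in maze 1. A minimal check (the function name is ours, not the paper's):

```python
import math

def posterior_model1(loglr, prior_odds=1.0):
    """Posterior probability of model 1 given the natural-log likelihood
    ratio of model 1 versus model 2; equal prior odds by default.
    Posterior odds = prior odds x likelihood ratio, then odds -> probability."""
    return 1.0 / (1.0 + math.exp(-loglr) / prior_odds)

p = posterior_model1(3.0)   # approximately 0.95, matching the caption
```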

Figure 7.

Route and motor planning.

Right: the planned route traced out by forward (red) and backward (green) inference. For forward inference we plot the location coordinates of the forward state estimates, and for backward inference those of the backward state estimates. The agent's starting location is marked by a white cross and the goal location by a white circle. Left: the estimated motor control sequence for producing the desired sensory goals. This sequence corresponds to the mean from backward inference, as described in the theory section on ‘Inference over Inputs’.
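The forward/backward structure can be caricatured with deliberately simple linear dynamics, where the forward sweep proposes a goal-consistent state trajectory and the backward sweep recovers the controls that realise it. The toy model x[t+1] = x[t] + u[t] is an assumption for illustration only:

```python
import numpy as np

def plan(x_start, x_goal, n_steps):
    """Toy planning sketch: the 'forward sweep' proposes a state trajectory
    consistent with the goal (here simply linear interpolation, standing in
    for forward inference), and the 'backward sweep' recovers the control
    sequence that produces it under x[t+1] = x[t] + u[t]."""
    states = np.linspace(x_start, x_goal, n_steps + 1)  # forward sweep
    controls = np.diff(states, axis=0)                  # backward sweep
    return states, controls

# plan a 2-D route from the origin to the goal location (3, 1) in 6 steps
states, controls = plan(np.array([0.0, 0.0]), np.array([3.0, 1.0]), n_steps=6)
```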

Figure 8.

Neuronal implementation.

The schematic indexes quantities by time and comprises control signals, path-integration hidden-state estimates, Bayesian state estimates, non-spatial sensory states, and predictions of non-spatial sensory states. During localisation, path integration in MEC combines the previous state estimate with a motor efference copy to produce a new state estimate, as described in equation 23. Bayesian inference in CA3-CA1 combines path integration with sensory input to produce an improved state estimate, as described in equation 24. LEC sends a prediction error signal to CA3-CA1. The computations underlying ‘sensory imagery’, ‘decision making’ and ‘model selection’ are discussed in the main text in the section on ‘Neural Implementation’. CA: Cornu Ammonis; LEC/MEC: Lateral/Medial Entorhinal cortex.
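One localisation step of the proposed circuit can be written schematically as prediction followed by error-driven correction; `f`, `g` and the correction gain below are generic stand-ins for the quantities in equations 23 and 24, not the paper's actual parameterisation:

```python
def localisation_step(x_prev, u, y, f, g, gain):
    """One schematic circuit step: MEC path integration combines the previous
    estimate with a motor efference copy, LEC supplies the sensory prediction
    error, and CA3-CA1 applies a gain to correct the state estimate."""
    x_pred = f(x_prev, u)         # path integration in MEC
    err = y - g(x_pred)           # sensory prediction error from LEC
    return x_pred + gain * err    # corrected estimate in CA3-CA1

# toy 1-D example: additive dynamics, identity observation map
x_new = localisation_step(x_prev=0.0, u=1.0, y=2.0,
                          f=lambda x, u: x + u,
                          g=lambda x: x,
                          gain=0.5)
```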

Figure 9.

Motor and route planning.

Route planning can be implemented using forward inference, in which sensory goals are instantiated in LEC (or in projections to it), and the recurrent circuitry produces state estimates, from both path integration and Bayesian estimation, that are consistent with those goals. Backward inference takes as input the result of the forward sweep; it produces improved estimates of the hidden states, via a backward recursion, together with estimates of the control signals. We propose that the prediction error is computed in MEC and propagated to CA3-CA1 for computation of the hidden-state estimates, and to prefrontal regions for computation of the control signals. See equation 34 for more details.
