Fig 1.

Embodied decision setup and active inference model.

(a) Experimental setup, shown over three consecutive discrete time steps. The agent controls a 3-DoF arm (the three blue segments), which starts at a home position (blue dot) equidistant from the two targets (red and green circles). The current cue is shown as a large purple dot, while the old cues are shown as smaller gray dots. In each trial, the agent has to reach, with its hand, the target it believes will contain more cues. The hand trajectory is shown as a thinner blue line. (b) Hybrid active inference model for embodied decisions. The model comprises four processes, numbered from 1 to 4. In the first process, discrete hidden states, encoding the probability that each target is the correct choice for the current trial, are iteratively inferred from discrete cues by inverting the cue likelihood matrix. In the second process, the hidden states generate a particular combination of discrete hand dynamics through the extrinsic likelihood matrix. Each hand dynamics o_{h,m} is associated with a continuous dynamics function in which the target positions are defined (see [36, 37] for more details). A forward message imposes a prior over the hand velocity (in Cartesian coordinates), while a backward message infers the corresponding Cartesian position of the hand, ready for kinematic and dynamic inversions. In the third process, for each continuous time step t, the current position and velocity of the hand (in a visual domain) are inferred from continuous observations via the corresponding likelihood functions. The inferred hand trajectory flows back toward the prior for action execution (see Section 4.2). At each discrete time step, the target probabilities are also inferred from the current motor trajectory. Finally, in the fourth process, the prior over the correct target is updated across trials, implementing habitual learning.
Of note, some readers may find the edge from the agent's belief about the latent state to an observation somewhat unconventional, since in many contexts observations are assumed to be generated from the true latent states rather than from the agent's beliefs about them. However, this formulation is standard in active inference studies, even when representing the agent's generative model, as in this figure.
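As a minimal sketch of the first process described above, the iterative inversion of the cue likelihood matrix can be written as a Bayesian belief update over the two targets. The matrix values and cue sequence below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

# Likelihood A[o, s]: probability of observing cue o given that target s
# is the correct choice. Values are illustrative, not the paper's.
A = np.array([[0.7, 0.3],   # cue pointing to target 1
              [0.3, 0.7]])  # cue pointing to target 2

def update_belief(belief, cue):
    """Invert the cue likelihood: posterior ∝ A[cue] * prior."""
    posterior = A[cue] * belief
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])   # uniform prior over the two targets
for cue in [0, 0, 1, 0]:        # example cue sequence (mostly target 1)
    belief = update_belief(belief, cue)
```

After the example sequence, the belief favors target 1, mirroring how the discrete hidden states accumulate cue evidence within a trial.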

Fig 2.

Evidence accumulation during (a) congruent trials, (b) neutral trials, and (c) incongruent trials.

The first row shows the dynamics of a sample trial for each condition: specifically, the first plot shows the discrete hidden states encoding the two target probabilities over continuous time; the second plot shows the cumulative sum of the cue observations; the third plot shows the distances between the hand and the two targets. Note that the discrete signals are maintained for a whole discrete period, producing a stepped behavior. The second row shows the agent's average trajectory (in dark blue) across 100 trials for each condition. A dotted line marking the minimum distance between the initial hand position and the left target is also displayed.
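The two signals plotted in the first row can be sketched as follows; the cue sequences and the holding period are hypothetical stand-ins for the congruent and incongruent conditions, not the paper's data:

```python
import numpy as np

# Signed cues: +1 favors the chosen target, -1 favors the other.
congruent = np.array([1, 1, 1, 1, 1])      # all cues agree
incongruent = np.array([-1, -1, 1, 1, 1])  # early cues disagree

def cumulative_evidence(cues):
    """Cumulative sum of cue observations, as in the second plot."""
    return np.cumsum(cues)

def stepped(signal, period):
    """Hold each discrete value for a whole period, producing the
    stepped traces visible in the continuous-time plots."""
    return np.repeat(signal, period)
```

For the incongruent sequence the cumulative evidence first dips negative before recovering, which is what delays commitment in those trials.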

Fig 3.

Motor planning with (a) a risky strategy (high urgency), (b) a medium strategy (medium urgency), and (c) a conservative strategy (low urgency). (d) First panel: dynamics of the discrete hidden state of the first target over discrete time, which in this case are the same for every strategy.

Second panel: dynamics of the discrete variable o_{t_1} of the first target. Third panel: dynamics of the discrete variable o_s of staying in position. The fourth panel shows the L2-norm of the belief over the hand velocity in continuous time t. The vertical dashed lines mark the movement onset for each strategy, defined as the crossing of a velocity threshold.
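The movement-onset criterion marked by the dashed lines can be sketched as the first threshold crossing of the velocity norm. The velocity profile and the threshold value below are made up for illustration:

```python
import numpy as np

def movement_onset(velocities, threshold):
    """Return the first continuous time step t at which the L2-norm of
    the believed hand velocity exceeds the threshold (None if never)."""
    norms = np.linalg.norm(velocities, axis=1)
    above = np.nonzero(norms > threshold)[0]
    return int(above[0]) if above.size else None

# Toy 2-D velocity profile that ramps up linearly over 50 steps.
t = np.arange(50)
vel = np.stack([0.02 * t, np.zeros_like(t, dtype=float)], axis=1)
onset = movement_onset(vel, threshold=0.5)  # crossed when 0.02*t > 0.5
```

A higher urgency effectively lowers (or reaches sooner) this threshold crossing, which is why the risky strategy starts moving earliest.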

Fig 4.

Interaction between urgency and speed of evidence accumulation.

(a) Fast evidence accumulation and low urgency to move. (b) Slow evidence accumulation and high urgency to move. (c) Although the movement dynamics look similar, the evolution of the hidden states and hand dynamics differs.

Fig 5.

Commitment during an incongruent trial, with (a) low, (b) medium, and (c) high distance between the two targets.

For each condition, the first plot shows the discrete hidden states; the second plot shows the L2-norms of the estimated and true hand velocities; the third plot shows the distances between the hand and the two targets. For each condition, the second row also shows the agent's final trajectories, in dark blue.

Fig 6.

Commitment during an incongruent trial, with (a) low, (b) medium, and (c) high distance between the two targets; continued from Fig 5.

(a–c) The panels show the direction and magnitude of the estimated velocity (in dark blue), and the potential trajectories needed to reach the first target (in red) and the second target (in green), for each condition. These potential trajectories are used to estimate which of the two targets is more likely to have generated the current trajectory. (d) The top panel shows the discrete hand dynamics o_{h,t_1} of the first target over discrete time; the middle panel shows the log evidence of the hand trajectory associated with the first target, on a logarithmic scale; the bottom panel shows the normalized log evidence of the first target.
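The trajectory-comparison step can be sketched as scoring each candidate trajectory by a Gaussian-error log evidence and then normalizing. The squared-error scoring, the precision weight, and the toy velocities below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def log_evidence(v_est, v_candidates, precision=1.0):
    """Precision-weighted negative squared error between the estimated
    velocity and each candidate (target-directed) trajectory."""
    return np.array([-precision * np.sum((v_est - v) ** 2)
                     for v in v_candidates])

def normalize_log_evidence(log_ev):
    """Softmax over log evidences -> posterior over candidate targets."""
    e = np.exp(log_ev - log_ev.max())  # subtract max for stability
    return e / e.sum()

v_est = np.array([1.0, 0.1])                 # current estimated velocity
v1 = np.array([1.0, 0.0])                    # toward the first target
v2 = np.array([-1.0, 0.0])                   # toward the second target
p = normalize_log_evidence(log_evidence(v_est, [v1, v2]))
```

Because the estimated velocity points roughly toward the first target, its normalized evidence dominates, which is the commitment effect shown in panel (d).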

Fig 7.

Commitment to an initially selected target resulting from motor inference.

Panels (a) and (b) compare two agents, without motor inference (a, k_h = 0) and with motor inference (b, k_h = 0.2), during a congruent trial. Panels (c) and (d) compare the same two agents, without motor inference (c, k_h = 0) and with motor inference (d, k_h = 0.2), during an incongruent trial. For each panel, the first plot shows the discrete hidden states; the second plot shows the discrete hand dynamics for reaching both targets and for staying in position; the third plot shows the L2-norm of the estimated hand velocity, compared with the three potential trajectories. Note that although the stay dynamics function is (initially) the closest to the actual trajectory, the related probability o_{h,s} decreases rapidly as soon as the predictions shift toward one of the targets, showing the top-down influence from choice to movement. The fourth (bottom) plot shows the agent's movement trajectories, in dark blue.

Fig 8.

(a) Speed-accuracy curves for four models: two serial models (decide-only, decide-then-act) and two active inference models (with and without motor inference).

The former two were sampled by varying the decision threshold over 500 trials, while the latter two were sampled by running 100 trials for different values of urgency (from medium-high to low urgency). The samples were then fitted with a curve. Motor inference was realized with k_h = 0.1. To allow the agents to complete the trials on time at low levels of urgency, we set the maximum trial duration to 630 time steps. (b) Pearson product-moment correlation coefficient between the belief over the hand trajectory and the probability of the correct choice, computed for two conditions (with and without motor inference), with 500 neutral trials per condition and different levels of urgency.
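The statistic in panel (b) is a standard Pearson product-moment correlation between two per-trial series. The sketch below uses synthetic series, fabricated solely to illustrate the computation, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-trial probability of the correct choice (500 trials).
p_correct = rng.uniform(0.5, 1.0, size=500)

# Synthetic belief over the hand trajectory, made to covary with it.
belief_traj = 0.8 * p_correct + 0.05 * rng.normal(size=500)

# Pearson product-moment correlation coefficient.
r = np.corrcoef(belief_traj, p_correct)[0, 1]
```

A correlation near 1 would indicate that the motor belief tracks the decision variable closely, which is the comparison the figure makes across urgency levels.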

Fig 9.

Statistical learning of the prior for the correct choice over 50 incongruent trials.

The correct choice is fixed for the first 10 trials and reversed for the next 40 trials. (a) Hand trajectories in equally spaced trials, during learning (top) and reversal learning (bottom). Dark blue trajectories represent early trials, while dark red trajectories represent late trials. Here, k_h = 0. (b) The five panels show the Dirichlet counts; the discrete priors; the time step of movement onset across trials; the discrete hidden state s_1 in 5 equally spaced trials during learning; and the same during reversal learning, over discrete time. The vertical dashed lines indicate the time step at which the reversal occurs.
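The habitual-learning process can be sketched as accumulating Dirichlet counts over the two choices and normalizing them into a discrete prior. The unit count increment per trial is an assumption for illustration:

```python
import numpy as np

counts = np.ones(2)   # flat initial Dirichlet counts over the two choices
priors = []

# 10 trials with target 1 correct, then 40 with target 2 (the reversal).
for correct in [0] * 10 + [1] * 40:
    counts[correct] += 1.0              # accumulate evidence for the outcome
    priors.append(counts / counts.sum())  # discrete prior = normalized counts

prior = priors[-1]
```

After the first 10 trials the prior strongly favors target 1; the accumulated counts then make the prior slow to flip after the reversal, reproducing the lag visible in the trajectories of panel (a).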
