
Table 1.

The core assumption of the LVOC model explains the learning effects observed in five different cognitive control experiments.


Fig 1.

Learning to control the allocation of attention.

a) Visual search task used by Lin et al. (2016). b) Human data from Experiment 1 of Lin et al. (2016). c) Predictions of the LVOC model. d) Fit of the Win-Stay Lose-Shift model. e) Fit of the Rescorla-Wagner model.


Fig 2.

The LVOC model captures learning in the paradigm by Krebs et al.

(a) People learn to exert more cognitive control on stimuli whose features predict that performance will be rewarded, which manifests in faster responses (b) and fewer errors (c).


Fig 3.

Metacognitive reinforcement learning captures the effect of reward on learning from experienced conflict observed by Braem et al. (2012).

a) Illustration of the Flanker task used by Braem et al. (2012). b) Human data from Braem et al. (2012). c) Fit of the LVOC model. d) Fit of the Rescorla-Wagner model. e) Fit of the Win-Stay Lose-Shift model.


Fig 4.

The LVOC model captures the finding that people learn to adjust their control intensity based on features that predict incongruency.

a) Color-Word Stroop paradigm by Bugg et al. (2008). b-c) The LVOC model captures that people learn to exploit features that predict incongruency to respond faster and more accurately on incongruent trials. d) Picture-Word Stroop paradigm by Bugg, Jacoby, and Chanani (2011). e-f) Like human participants, the LVOC model responds more quickly and accurately to novel exemplars from animal categories that it previously learned to associate with more frequent incongruent trials.


Table 2.

Model parameters used in the simulations of empirical findings.
