Abstract
Within predictive processing two kinds of learning can be distinguished: parameter learning and structure learning. In Bayesian parameter learning, parameters under a specific generative model are continuously being updated in light of new evidence. However, this learning mechanism cannot explain how new parameters are added to a model. Structure learning, unlike parameter learning, makes structural changes to a generative model by altering its causal connections or adding or removing parameters. Whilst these two types of learning have recently been formally differentiated, they have not been empirically distinguished. The aim of this research was to empirically differentiate between parameter learning and structure learning on the basis of how they affect pupil dilation. Participants took part in a within-subject computer-based learning experiment with two phases. In the first phase, participants had to learn the relationship between cues and target stimuli. In the second phase, they had to learn a conditional change in this relationship. Our results show that the learning dynamics were indeed qualitatively different between the two experimental phases, but in the opposite direction to what we had originally expected. Participants learned more gradually in the second phase than in the first phase. This might imply that participants built multiple models from scratch in the first phase (structure learning) before settling on one of these models. In the second phase, participants possibly just needed to update the probability distribution over the model parameters (parameter learning).
Citation: Rutar D, Colizoli O, Selen L, Spieß L, Kwisthout J, Hunnius S (2023) Differentiating between Bayesian parameter learning and structure learning based on behavioural and pupil measures. PLoS ONE 18(2): e0270619. https://doi.org/10.1371/journal.pone.0270619
Editor: Anthony C. Constantinou, Queen Mary University of London, UNITED KINGDOM
Received: June 13, 2022; Accepted: January 18, 2023; Published: February 16, 2023
Copyright: © 2023 Rutar et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: Raw, pre-processed data and all analyses scripts can be found here: https://doi.org/10.34973/t41p-hx94.
Funding: DR was supported by the Donders Centre of Cognition Grant (Understanding predictive processing in development: Modelling the generation of generative models) awarded to JK and SH. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Imagine you recently moved from one desert city in Australia to another. In your old neighbourhood all front yards were brown due to severe drought. However, in your new neighbourhood, to your surprise, all the front yards appear to be green. You wonder how this is possible since both cities have the same climate. You come up with potential reasons for this and decide that probably the new neighbourhood has just experienced a period of heavy rain. It is not until a week later that you see your neighbour replacing their lawn with a new one. Now, you realise that all the front yards in this town have artificial green lawns. At that moment you add a new parameter to your model that can explain the situation which was unexplainable under the old model.
What challenge does the example above pose? We make sense of the world through models and use them for generating predictions about the incoming sensory evidence ([1, 2]). When a model fails to adequately predict the sensory evidence, it needs to be adjusted. In the case above, a new model parameter needs to be added to account for the new observation. How does this happen? What mechanism allows for adding a new parameter to an existing model? This question has been one of the central questions in cognitive science, but it remains, at least empirically, understudied.
Here, we address this long-standing question within the predictive-processing framework, a popular and influential theoretical framework in computational cognitive neuroscience ([1, 3–7]). The predictive-processing framework aims to be a unifying framework for understanding the entirety of human cognition and behaviour from visual processing ([8–10]) and action [11] to mentalizing ([12, 13]). According to the theory, the brain embodies a hierarchical generative model that “aim[s] to capture the statistical structure of some set of observed inputs by tracking (one might say, by schematically recapitulating) the causal matrix responsible for that very structure” [3]. Based on this hierarchical model, the brain generates top-down predictions, which are compared to the incoming sensory input. The difference between the predicted and the actual sensory input, that is the prediction error, is computed. From a predictive processing perspective, minimising reducible prediction error is the primary goal of computations in the brain and occurs mainly as a result of learning ([1, 4, 14–16]).
Until recently, learning in predictive processing was cast as parameter learning, where parameters under a specific generative model are updated in light of new evidence using Bayes’ rule ([4, 15, 17]). Such a formalism is well suited for explaining how learning proceeds when the generative model contains all relevant parameters for a particular learning task. In other words, parameter learning can only ensue when the structure of a generative model is established. Unless we assume that learners are equipped from the start with the complete set of parameters that can explain every situation they will ever encounter, we need to explain how novel parameters are added to a generative model or removed from it ([18–20]). To account for this, a new type of learning has been proposed within predictive processing: structure learning ([2, 5, 15, 17, 21]). This type of learning changes the structure of a generative model by changing the number of parameters in a model or by altering their functional dependencies ([2, 5, 15, 17, 21]). Depending on the type of structural change, we can further differentiate between two types of structure learning: model reduction and model expansion. One can start from an overcomplete generative model and then eliminate redundant parameters (i.e., Bayesian model reduction) ([22]), or one can start with a crude model and then add new parameters or functional dependencies (i.e., model expansion) ([5, 15]), for example using non-parametric methods ([23, 24]); see [21], however, for a critique of non-parametric methods as explanations of certain aspects of structure learning. Structural changes occur if the addition or the removal of a parameter yields a larger marginal likelihood compared to the marginal likelihood of the structurally unchanged generative model ([5, 15, 17]).
Building on the formal distinction between the two learning mechanisms, the aim of this study was to investigate whether parameter learning and structure learning can be empirically distinguished. To investigate this, we created an experiment with two phases. Before the task, participants were presented with all model variables (i.e., the different predictive cues and the target stimulus) to ensure that they were familiar with the basic model structure prior to the experiment. In the first phase of the experiment, participants were expected to acquire the relationship between the cues and the target stimulus as sensory evidence was accumulating. In the second phase, the need for a structural change was induced by adding a new conditional dependency. In short, the first experimental phase was designed to elicit parameter learning and the second phase to trigger structure learning.
As a result of learning a more adequate model of the world, predictions should become better over time, and, simultaneously, uncertainty in each prediction should decrease ([3, 7]). Crucially, we expected that the two experimental phases of our task would lead to two distinct dynamics of this learning process. Gradual updating of the probability distributions of existing model parameters was expected to occur in the first phase, indicated by a gradual increase in predictive accuracy of the cue-target relationship. An abrupt change from incorrect to correct predictions, once a new model parameter has been added resulting from a conditional rule change, was expected to occur in the second phase (see section Hypotheses for a more detailed description).
Assuming we could instantiate such a learning trajectory in our task, we aimed to investigate the dynamics of a physiological correlate of information gain following the presentation of the outcome on each trial, for which there is no overt behavioural marker. Pupil dilation under constant luminance is a well-known indirect measure of the brain’s neuromodulatory arousal systems, including the noradrenergic locus coeruleus and the cholinergic basal forebrain ([25–29]). Subcortical arousal systems may be involved in transmitting internal uncertainty signals and (reward) prediction errors to circuits necessary for inference and action selection ([25, 30–37]). In line with this, several studies have shown that after a person is given feedback on the accuracy of a decision they just made, their pupil dilation scales with the amount of novel information gained as a result of the feedback ([16, 38–47]). Therefore, we reasoned that the target-locked pupil response in our current task might similarly reflect how informative the target was on each trial relative to the prior prediction made by the participant. We expected that target-locked pupil responses would gradually decrease in the first phase as participants learned the task contingencies over time and abruptly decrease in the second phase of the experiment once a new model parameter has been added resulting from a conditional rule change.
Method
Participants
Participants were recruited using Radboud University’s online recruitment system. The only restriction for participation was a minimum age of 16 years. Thirty-three healthy adults with normal or corrected-to-normal vision participated in our study. One participant was excluded for not following the instructions properly. Two participants were excluded due to equipment malfunction. The final sample consisted of 30 participants (24 women, aged 19–42 years, M = 23.3, SD = 4.5). The Ethics Committee of the Social Sciences Faculty at Radboud University approved the study, and all participants gave written informed consent. Participants received 15 euros for participating in the experiment.
Task and procedure
Task instructions.
All participants were instructed to sit approximately 50 cm from the screen and place their chin in a chin rest. Participants carried out a computer-based two-alternative forced-choice (2AFC) task on the expected orientation (left vs. right) of the target stimulus (Gabor patches, Fig 1A). The experiment consisted of two phases with 200 trials each (400 trials in total). It took 1.5 hours to complete the experiment and there were three breaks (two short breaks halfway through each phase and one longer break between the two phases) during the experiment. After each break, recalibration of the eye-tracker took place (see below). At the beginning of the experiment, participants were told that they would be presented with auditory and visual cues. The instructions were to use the cues to predict the orientation of the target stimulus in each trial. Before the start of the experiment, participants were presented with an example trial. They indicated their prediction by pressing either the left or right button on a button box for left or right orientation, respectively. Participants were instructed to press the button as soon as they thought they knew which target orientation would appear. At the end of the first phase, participants were told that the cue-target contingency would change (in the second phase) but not what the change would be. In the second phase, participants were similarly instructed to continue to use the cues to predict the orientation of the target stimulus.
(A) Trial structure of the behavioural task. Participants performed a 2AFC task on the expected orientation (left/right) of upcoming Gabor patches while pupil dilation was recorded. Each trial consisted of a fixation period, a cue period, a response window followed by a delay period, and finally a target period. The decision interval ranged from onset of the cue to the participant’s response. The target interval ranged from target onset into the subsequent inter-trial interval (3 s). The target served as feedback on the accuracy of participants’ predictions in the decision interval. (B) An illustration of one of the two counterbalanced cue-target mappings. The participants had to learn cue-target contingencies to accurately predict the orientation of the upcoming Gabor patch (target). Mapping 1 was defined as the visual cue-target pairs that occurred in 80% of trials in the first phase; Mapping 2 was defined as the visual cue-target pairs that occurred in 20% of trials in the first phase. Mappings were counterbalanced between participants (i.e., half of the participants received the square -> left, diamond -> right mapping in the 80% condition in phase 1). At the start of the second phase, the frequencies (80% vs. 20%) of the cue-target mappings were reversed for trials containing the auditory tone cue only. (C) Main hypotheses for the dynamics of accuracy and information gain following the target presentation over the course of the 2AFC task. The first 200 trials of the task represent the first phase in which a gradual increase in accuracy and a gradual decrease in the absolute value of information gain were expected (represented by exponential curves). Within the second phase (the last 200 trials), an abrupt increase in accuracy and an abrupt decrease in information gain were expected (represented by sigmoidal curves).
Trial structure and experimental stimuli.
Stimuli were isoluminant and the environmental illumination was the same for all participants. Stimuli were presented on a computer screen with a spatial resolution of 1920 × 1080 pixels. One trial lasted approximately nine seconds. Each trial consisted of a fixation period, a cue period, a response window followed by a delay period, and a target period. For all periods, except for the target period, a vertically oriented Gabor patch was presented. The target stimulus was a Gabor patch oriented to the left or to the right, with a spatial frequency of 0.033 and an opacity of 0.5. Each trial started with a fixation cross on the vertical Gabor patch, which was shown on the screen for 1500 ms. Afterward, the visual cue, which was either a square or a diamond, was presented in the middle of the screen for 1000 ms. In 50% of the trials the visual cue was paired with an auditory cue (tone), which was presented for 300 ms. In the other 50% of the trials, the auditory cue was absent. During the subsequent interval, the fixation cross and the vertically oriented Gabor patch were presented, and participants were asked to indicate their prediction about the upcoming orientation of the target by pressing a button. There was no maximum response window. After a button had been pressed, a delay period started with the vertical Gabor patch on the screen for an additional 3000 ms. After the delay period, the target stimulus (the Gabor patch tilted either left or right) was shown for 3000 ms. The durations of the response period and target window were chosen in order to avoid contamination of the pupil dilation response by a previous event. The delay period following the response window was sufficiently long to ensure that the pupil response to the target stimulus would not be contaminated by the motor response of the button press [39]. It is important to note that the target Gabor patch served as trial-by-trial feedback on the accuracy of participants’ cue-target predictions.
Task structure.
To disambiguate structure learning from parameter learning, our experimental paradigm needed to induce, after a phase of gradual learning, an “aha” moment in which participants suddenly realised a novel contingency. This required a paradigm that goes beyond conventional reversal learning (i.e., where contingencies simply change and the parameters encoding those contingencies are updated via parameter learning).
We devised a two-phase experimental paradigm during which participants first learned a simple model of cue-target mappings. In the second phase, we introduced a structural change in the cue-target mapping by adding a conditional dependency. Whereas in the first phase there was no interaction between the predictive validity of visual and auditory cues, in the second phase the predictive validity of visual cues depended on the presence of auditory cues, as the visual cue-target mappings were reversed for trials in which the auditory cue was present.
The design contained probabilistic cue-target mappings to introduce uncertainty in the predictions, simulating uncertainty that is inherent to perception in the real world. The visual and auditory cues predicted whether the target Gabor patch was tilted to the right or to the left with either an 80% or 20% probability (Fig 1B). Note that we define mapping 1 (M1) to correspond to the 80% visual cue-target pairs with respect to the first phase and mapping 2 (M2) to correspond to the 20% visual cue-target pairs with respect to the first phase. Cue-target mappings were counterbalanced between participants such that half of the participants saw the square followed by a right-oriented Gabor patch and a diamond followed by a left-oriented grating in 80% of the trials, and, for the other half of the participants, this mapping was reversed (i.e., square–> left and diamond–> right in the 80% condition). In the remaining 20% of the trials, the participants received the reversed cue-target mapping with respect to their 80% mapping condition.
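The two-phase design described above can be sketched as a trial generator. This is an illustrative reconstruction only; the function and variable names are ours, not taken from the authors' experiment code, and the sampling scheme (independent per-trial draws) is an assumption:

```python
import random

def opposite(orientation):
    """Return the other Gabor orientation."""
    return "left" if orientation == "right" else "right"

def generate_trials(n_per_phase=200, mapping_flipped=False, seed=0):
    """Sketch of the trial structure: 80%/20% probabilistic visual
    cue-target mappings, an auditory tone on 50% of trials, and the
    mapping frequencies reversed in phase 2 for tone trials only."""
    rng = random.Random(seed)
    # Mapping 1 (the 80% mapping in phase 1): square -> right, diamond -> left
    m1 = {"square": "right", "diamond": "left"}
    if mapping_flipped:  # counterbalanced across participants
        m1 = {"square": "left", "diamond": "right"}
    trials = []
    for phase in (1, 2):
        for _ in range(n_per_phase):
            cue = rng.choice(["square", "diamond"])
            tone = rng.random() < 0.5        # auditory cue present on 50% of trials
            high_freq = rng.random() < 0.8   # 80% (M1) vs. 20% (M2) mapping
            target = m1[cue] if high_freq else opposite(m1[cue])
            # Phase 2: reverse the mapping frequencies on tone trials only
            if phase == 2 and tone:
                target = opposite(target)
            trials.append({"phase": phase, "cue": cue,
                           "tone": tone, "target": target})
    return trials
```

Calling `generate_trials()` yields 400 trial dictionaries (200 per phase), which makes the conditional dependency concrete: in phase 2 the visual cue alone no longer determines the most likely target; the tone must be taken into account.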
Hypotheses
We first examined learning during the two experimental phases based on the behavioural responses. Participants had to indicate by a button press which Gabor patch orientation they predicted based on the cue(s). The first phase was designed to induce parameter learning, therefore, participants were expected to gradually learn the probabilistic relationship between the cues and the Gabor patch orientation (target). Post-decision sensory evidence, in this case the target stimulus, should improve future predictions and hence increase accuracy values for the high-frequency trials. We thus expected that in the first phase, predictive accuracy would show a gradual increase over time, illustrated by an exponential curve.
At the beginning of the second phase, participants were instructed that something had changed during this phase. We expected that participants would discover the new rule by integrating the now meaningful tone into their predictive models, resulting in structure learning. This novel model parameter should account for observations that could not be predicted correctly before the parameter was added. After an initial decrease (relative to the final accuracy in the first phase) in predictive accuracy in the second phase, the addition of a new model parameter should lead to an abrupt increase in predictive accuracy (i.e., an “aha” moment), illustrated by sigmoidal curves in Fig 1C. Fig 1C (top row) illustrates the expected accuracy on predictions during the 2AFC task. The learning curves for the tone and no tone trials may have differed during the second experimental phase as compared with the first due to the change in contingency. Therefore, we investigated the tone and no-tone trials separately in the main analysis.
After exploring the learning dynamics in behavioural data, we investigated how learning in the two experimental phases was reflected in the pupil data. With learning, participants are expected to become better at making cue-target predictions. As a consequence of sensory evidence accumulating over trials, the amount of novel sensory evidence needed to update current beliefs will become smaller over time. We hypothesised that pupil responses signalling information gain [16] would decrease for the high-frequency trials as a result of learning the cue-target contingencies. More specifically, we hypothesised that in the first phase, pupil responses would decrease gradually and then plateau, while in the second phase, pupil responses would abruptly decrease until they plateau again as soon as the change in the cue-target contingency is learned (see Fig 1C).
Data acquisition and analyses
Data acquisition and pre-processing.
Changes in pupil dilation were recorded using an SMI RED500 eye-tracker (SensoMotoric Instruments, Teltow/Berlin, Germany) with a sampling rate of 500 Hz. We analysed the pupil dilation data of the right eye for each participant. The timing of blinks and saccade events was not saved in the output of the eye-tracker; therefore, we did not attempt to categorize separate blink and saccade events for pre-processing purposes. Pre-processing was applied to the entire pupil dilation time series of each participant and consisted of: i) interpolation around missing samples (0.15 s before and after each missing period), ii) interpolation around blinks or saccade events based on spikes in the temporal derivative of the pupil time series (0.15 s before and after each blink or saccade period), iii) band-pass filtering (third-order Butterworth, passband: 0.01–6 Hz), iv) removing responses to nuisance events using multiple linear regression (missing periods and blink or saccade events were all categorized together as a single ‘nuisance’ event type; responses were estimated by deconvolution) [48], and v) the residuals of the nuisance regression were transformed to percent signal change with respect to the temporal mean of the time series.
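Steps iii and v of the pipeline above can be sketched in Python. This is a hedged illustration, not the authors' preprocessing code: the interpolation and nuisance-regression steps (i, ii, iv) are omitted, and the function name is ours:

```python
import numpy as np
from scipy import signal

def filter_and_normalise(pupil, fs=500.0):
    """Band-pass filter the pupil time series (third-order Butterworth,
    0.01-6 Hz passband) and convert to percent signal change with
    respect to the temporal mean, as described in steps iii and v."""
    sos = signal.butter(3, [0.01, 6.0], btype="bandpass", fs=fs,
                        output="sos")
    # Zero-phase filtering avoids shifting the pupil response in time
    filtered = signal.sosfiltfilt(sos, pupil)
    # Percent signal change relative to the mean of the raw series
    return filtered / np.mean(pupil) * 100.0
```

Note the zero-phase (`sosfiltfilt`) choice: forward-backward filtering keeps event-locked responses aligned with their triggering events, which matters when epochs are later cut around cue and target onsets.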
For each trial, intervals locked to the onset of the cue and the onset of the target were extracted from each participant’s pupil dilation time series (cue-locked and target-locked intervals, respectively). The cue-locked and target-locked intervals were baseline-corrected separately for each trial. The baseline pupil dilation was defined as the mean pupil dilation in the time window -0.5 to 0 seconds with respect to the cue or target onset for the cue-locked and target-locked intervals, respectively. The cue-locked pupil response was analysed for data quality purposes, while the target-locked pupil response was the main dependent variable of interest.
The temporal window of interest was independently defined as 1 to 2 seconds after the target onset based on the pupil’s canonical impulse response function ([48–51]). For each trial, a single value for the target-locked pupil response was computed as the mean pupil dilation within this temporal window of interest.
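The baseline correction and scalar extraction described above amount to two window averages per trial. A minimal sketch, with an illustrative interface (indexing into a continuous trace by target-onset sample):

```python
import numpy as np

def target_locked_scalar(pupil, target_idx, fs=500.0):
    """Single-trial target-locked pupil scalar: baseline is the mean
    over -0.5 to 0 s before target onset; the response is the mean
    over the 1 to 2 s window after onset, baseline-corrected."""
    baseline = pupil[target_idx - int(0.5 * fs):target_idx].mean()
    window = pupil[target_idx + int(1.0 * fs):target_idx + int(2.0 * fs)]
    return (window - baseline).mean()
```

For example, a trace that sits at 2 (percent signal change) during the baseline window and at 5 during the 1-2 s window yields a scalar response of 3.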
Trials were excluded if the reaction time (RT) was more than three standard deviations above the participant’s mean RT or lower than 200 ms (the minimal time needed for the necessary encoding and preparation of a motor response; [52–55]).
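This exclusion rule can be written as a boolean mask over the per-participant RT vector; a sketch (function name ours):

```python
import numpy as np

def keep_trial_mask(rts):
    """Keep trials whose RT is at most 3 SDs above the participant's
    mean RT and at least 200 ms. RTs are in seconds."""
    rts = np.asarray(rts, dtype=float)
    upper = rts.mean() + 3.0 * rts.std()
    return (rts <= upper) & (rts >= 0.2)
```

Note that the mean and SD here are computed including the to-be-excluded trials; whether the authors excluded iteratively instead is not stated, so a single pass is assumed.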
Data quality checks.
Behaviour. We expected that participants would learn the cue-target contingencies in both phases of the task, which would be reflected in higher accuracy and faster responses for high frequency trials as compared with low frequency trials. The effect of the visual cue-target mapping condition was expected to interact with the auditory cue condition and task phase, reflecting the reversal of the visual cue-target contingencies in the second phase during the tone trials only. These hypotheses were tested in two 3-way repeated measures ANOVAs, separately for accuracy (as percentage of correct trials) and RT with factors: cue-target mapping (M1 vs. M2), auditory cue (tone vs. no tone), and phase (first vs. second).
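The 2 × 2 × 2 repeated-measures design above can be reproduced in Python with statsmodels' `AnovaRM` (the authors ran these ANOVAs in JASP, so this is an equivalent sketch, not their pipeline); the accuracy values below are random placeholders:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: one cell mean per subject and factor combination
rng = np.random.default_rng(1)
rows = []
for subj in range(30):
    for mapping in ("M1", "M2"):
        for tone in ("tone", "no_tone"):
            for phase in ("first", "second"):
                rows.append({"subject": subj, "mapping": mapping,
                             "tone": tone, "phase": phase,
                             "accuracy": rng.normal(80.0, 5.0)})
df = pd.DataFrame(rows)

# 3-way repeated measures ANOVA: three main effects, three 2-way
# interactions, and the 3-way interaction of interest
res = AnovaRM(df, depvar="accuracy", subject="subject",
              within=["mapping", "tone", "phase"]).fit()
print(res.anova_table)
```

`AnovaRM` expects exactly one observation per subject per cell, which is why accuracy is first aggregated to cell means (percentage correct), matching the analysis described in the text.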
Target-locked pupil response time courses. The data acquisition quality was assessed with several analyses on the time courses of the pupil response to the cue and target presentation. The visual and auditory cues indicated to the participants that they should make a button press based on their prediction. A button press in the response phase was expected to evoke a motor-driven impulse response which should be reflected in the mean cue-locked pupil response ([39, 50, 56, 57]). In addition, we expected to see larger pupil dilation on average during tone trials as compared with no tone trials in the cue-locked pupil response, as auditory cues are known to be arousing ([58, 59]). The cue-locked effect of the tone was expected to return to baseline before the target was presented on screen. Finally, we expected to see larger pupil dilation on average during erroneous predictions as compared with correct predictions in the evoked target-locked pupil response ([39, 57, 60–65]).
Target-locked pupil response scalar averages. Using the scalar target-locked pupil response averages within our time window of interest, we expected to see an interaction between the cue-target mapping, auditory cue, and task phase in the target-locked pupil response. This was tested with a 3-way repeated measures ANOVA. In the first phase, the average pupil dilation for the M2 mapping was expected to be larger as compared with the M1 mapping, because low frequency trials (M2 in the first phase) should contain more errors overall. The direction of the mapping effect was expected to reverse in the second phase for tone trials only.
Main analyses.
Psychometric curve fits on accuracy data. To test our hypothesis concerning the dynamics of the target-orientation predictions, we assessed whether the range parameter, σ, of the psychometric curve fits differed between the first and the second phase. In a psychometric curve, accuracy is plotted against signal intensity [66]. We explored the resulting psychometric curves when the number of trials completed over time was taken as a proxy for signal intensity. The sigmoid function used for the psychometric curve fits is given in Eq 1.
We fit three parameters, μ, σ, and a0, to the individual participant’s response accuracy across all trials for the tone condition only (i.e., the frequency conditions were not differentiated), separately for the first and second phase of the experiment. We placed linear constraints on the curve fits so that σ could not exceed three times the value of μ. We bound both μ and σ so they could not exceed the number of trials in each phase of the experiment (range between 1 and 200 trials). The starting point, a0, was bound between 0 and 1. The parameters were determined by minimising the negative log-likelihood cost function.
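A sketch of this fit is given below. The exact form of Eq 1 is not reproduced in this text, so the three-parameter sigmoid here (rising from a0 toward 1 around trial μ) is an assumption, as are the starting values; the bounds and the σ ≤ 3μ linear constraint follow the description above:

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(x, mu, sigma, a0):
    """Assumed form of Eq 1: accuracy rises from a0 toward 1,
    with inflection point mu and range parameter sigma."""
    return a0 + (1.0 - a0) / (1.0 + np.exp(-(x - mu) / sigma))

def fit_psychometric(correct, n_trials=200):
    """Fit (mu, sigma, a0) to a participant's 0/1 accuracy sequence by
    minimising the negative log-likelihood of Bernoulli responses."""
    x = np.arange(1, n_trials + 1)

    def nll(params):
        mu, sigma, a0 = params
        p = np.clip(sigmoid(x, mu, sigma, a0), 1e-6, 1 - 1e-6)
        return -np.sum(correct * np.log(p)
                       + (1 - correct) * np.log(1 - p))

    res = minimize(nll, x0=[n_trials / 2, 20.0, 0.5],
                   bounds=[(1, n_trials), (1, n_trials), (0, 1)],
                   constraints=[{"type": "ineq",
                                 "fun": lambda p: 3 * p[0] - p[1]}],
                   method="SLSQP")
    return res.x  # mu, sigma, a0
```

Trial number stands in for signal intensity on the x-axis, as described above, so σ here measures over how many trials the transition from chance-like to asymptotic accuracy unfolds.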
If our hypotheses about the difference between parameter learning and structure learning are correct, then σ should be higher for the tone trials in the first phase as compared with the tone trials in the second phase.
Curve fits on target-locked pupil data. To test our main hypothesis concerning the target-locked pupil responses across the first and the second experimental phase, we assessed whether the range parameter, σ, of the sigmoid curve fits to the time course of the target-locked pupil dilation differed between phases. The target-locked pupil dilation is taken as a proxy of information gain within the two phases of the experiment. The logic is that sensory evidence for the cue-target contingencies is accumulated as the task progresses over time, and learning should be evident in a reduction in the amount of novel sensory evidence needed to update current beliefs as a function of trial order. The sigmoid function used for the curve fits on target-locked pupil responses is given in Eq 2.
We fit four parameters, μ, σ, a0 and G, of the above sigmoid function to the target-locked pupil dilation across the high-frequency trials (80%) for the tone condition only, comparing the first and second phases of the experiment. We differentiated between the high-frequency as compared with low-frequency trials, in order to fit only a single direction of the (expected) change in information gain ([39, 67]). The starting point of the curve is reflected in the a0 parameter. The inflection point of the curve is reflected in the μ parameter. The gain parameter, G, allowed for negative scaling of the curves given the nature of the pupil signal as dependent variable (i.e., percent signal change). The range parameter, σ, reflects the range over which the curve rises. A larger σ parameter is associated with a larger range over which the transition (i.e., from f(x) = a0 to f(x) = 1) takes place. We placed linear constraints on the curve fits so that σ could not exceed three times the value of μ. We bound both μ and σ so they could not exceed the number of trials in each phase of the experiment (range between 1 and 200 trials). The a0 and G parameters were not otherwise constrained or bounded. The parameters were determined by minimising the ordinary least squares cost function.
Our main hypothesis was that learning would extend across more trials in the first phase as compared with in the second phase, reflecting the difference between parameter learning and structure learning. Therefore, we expected that the σ would be larger for the high frequency tone trials in the first phase as compared with the high frequency tone trials in the second phase (note that the cue-target mappings were flipped for tone trials in the second phase; see Fig 1B). We furthermore expected the sign of the G to be negative in both phases, indicating a decreasing trend. A larger value of σ together with a negative gain parameter, G, thus reflects a more gradual reduction of target-locked pupil responses across trials. This analysis enables us to examine whether pupil dilation depended on the experimental phase and if it scaled with our hypotheses in Fig 1C. Descriptive statistics for all free parameters are presented in S2 Table in S1 File.
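The pupil curve fit parallels the psychometric fit but uses an ordinary-least-squares cost and adds the gain parameter G. As before, the exact form of Eq 2 is not reproduced here, so the four-parameter sigmoid below and the starting values are assumptions; only the bounds, the σ ≤ 3μ constraint, and the free a0 and G follow the description above:

```python
import numpy as np
from scipy.optimize import minimize

def pupil_sigmoid(x, mu, sigma, a0, G):
    """Assumed form of Eq 2: a sigmoid scaled by gain G (a negative G
    yields a decreasing curve), starting near a0."""
    return a0 + G / (1.0 + np.exp(-(x - mu) / sigma))

def fit_pupil(y, n_trials=200):
    """Fit (mu, sigma, a0, G) to the trial-by-trial target-locked
    pupil scalars by minimising the sum of squared errors (OLS)."""
    x = np.arange(1, n_trials + 1)

    def sse(params):
        return np.sum((y - pupil_sigmoid(x, *params)) ** 2)

    res = minimize(sse, x0=[n_trials / 2, 20.0, np.mean(y), -1.0],
                   bounds=[(1, n_trials), (1, n_trials),
                           (None, None), (None, None)],
                   constraints=[{"type": "ineq",
                                 "fun": lambda p: 3 * p[0] - p[1]}],
                   method="SLSQP")
    return res.x  # mu, sigma, a0, G
```

Under the hypothesis above, a fit to the first-phase tone trials should yield a larger σ (and negative G) than a fit to the second-phase tone trials.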
Software.
The prediction task was administered with Psychopy [68]. The behavioural and pupil data were processed with custom software using Python [69]. The evoked pupil responses were statistically assessed with a cluster-level permutation test as part of the MNE-Python package [70]. Repeated measures ANOVAs and Bayesian tests were carried out in JASP [71]. All data and code are publicly available (https://doi.org/10.34973/t41p-hx94).
Results
The current study aimed to compare predictive accuracy over trials during parameter learning and structure learning. Structure learning, unlike parameter learning, changes the structure of a generative model by altering causal connections between parameters or by adding and removing parameters in a model ([2, 5, 15, 17, 21]). Participants performed a 2AFC task on the expected orientation (left vs. right) of an upcoming Gabor patch (target) while pupil dilation was recorded (Fig 1A). The participants had to learn cue-target contingencies in order to accurately predict the orientation of the target. Cue-target contingencies changed at the start of the second phase in the following way: the cue-target mapping was reversed but only for trials containing the auditory tone cue. The target served as feedback on the accuracy of participants’ predictions in the decision interval.
Main effects and interactions in the cue-target prediction task
We first evaluated whether participants performed the 2AFC task as expected. Main effects and interactions between the visual cue-target mapping condition (M1 vs. M2), task phase (first vs. second), and the presence of the auditory cue (tone vs. no tone) were assessed in three independent 3-way repeated measures ANOVAs on the dependent variables of mean accuracy (Fig 2A), RT (Fig 2B), and target-locked pupil dilation (Fig 2C) (see Fig 3 for the analysis of evoked pupil responses). The ANOVA results are presented in Table 1, and relevant post hoc comparisons are presented in Fig 2. At the mean group level, a significant 3-way interaction was obtained between visual cue-target mapping condition, auditory cue condition, and task phase for accuracy and target-locked pupil response (but not for RT). Participants accurately predicted the cue-target contingencies in both phases of the experiment, illustrated by the main effect of visual cue-target mapping condition in the first phase and the mapping reversal in the second phase for tone trials only (Fig 2A). As expected, the target-locked pupil response was larger for the M2 trials as compared with the M1 trials during the first phase, and the presence of the auditory cue reversed the direction of the target-locked pupil response in the second phase for the tone trials (Fig 2C). S1 Fig in S1 File illustrates how the behaviour and target-locked pupil dilation changed as a function of time across 16 bins of trials (25 trials per bin). We confirmed that the target-locked pupil responses were “mirroring” the learning trajectory obtained in the accuracy of the behavioural responses. This correspondence between learning and pupil dilation was indicated by the presence of a negative monotonic relationship between the accuracy of predictions and the target-locked pupil response across these 16 trial bins for the tone trials (see Fig 1C for hypotheses, S1 File, and S2 Fig in S1 File).
Finally, we explored whether the target-locked pupil response, on average, differed between error and correct responses for each of the frequency conditions and experimental phases. The target-locked pupil response was sensitive to both predictive accuracy and cue-target frequency, but these factors did not interact (see S3 Fig and S1 Table in S1 File).
(A) Prediction accuracy, (B) mean RT, and (C) target-locked pupil dilation as a function of visual cue-target mapping condition (M1 vs. M2), the presence of the auditory cue (tone vs. no tone), and task phase (first vs. second). Results of the 3-way repeated measures ANOVAs are given in Table 1. Significance refers to post hoc t-tests: **p < .01, ***p < .001. Error bars, s.e.m. (N = 30). Note that the frequencies of the visual cue-target mappings change in the second phase for the tone trials.
Factors of interest were cue-target mapping condition (levels: M1 vs. M2), auditory cue (levels: tone vs. no tone), and task phase (levels: first vs. second). Accuracy data were percentage of correct trials; RT was analysed in seconds. *p < .05, **p < .01, ***p < .001.
The data quality of the pupil dilation measures was assessed with several checks before testing our hypotheses about the dynamics of target-locked pupil responses across the experimental phases. First, evoked pupil dilation was present in response to the (visual and auditory) cue onsets as expected in the decision phase, here reflecting both decision preparation and the upcoming motor output in the form of a button press (Fig 3A). Furthermore, the temporal window (1 to 2 s) independently chosen for the target-locked pupil analysis contained the peak of the group-level cue-locked evoked response (Fig 3A, grey box). Second, as expected, errors resulted in larger pupil dilation than correct trials following target presentation, and this accuracy effect was significant within the temporal window of interest (Fig 3B, grey box). Third, the presence of the auditory cue during the decision interval (cue-locked) was associated with larger pupil dilation than its absence (Fig 3C), likely reflecting a difference in phasic arousal state during tone trials. Importantly, this (unwanted) arousal effect related to the auditory tone was no longer present by the time the target was presented (Fig 3D). We note that all further pupil analyses used the target-locked pupil dilation, averaged within the temporal window of interest (Fig 3B and 3D, grey boxes), as the dependent variable. In sum, the pupil data passed all data quality checks.
All trials within the first and second phase of the prediction task were included in the evoked pupil response analysis. (A) Mean cue-locked pupil responses in the prediction interval. Black bar indicates main effect of cue, p < 0.05 (cluster-based permutation test). (B) Evoked pupil responses for correct and error trials in the feedback interval (target-locked). Black bar indicates correct vs. error effect, p < 0.05 (cluster-based permutation test). (C) Evoked pupil responses for tone and no tone trials in the prediction interval (cue-locked) and (D) in the feedback interval (target-locked). Black bar indicates tone vs. no tone effect, p < 0.05 (cluster-based permutation test). In all panels: variability around the mean responses is illustrated as the 68% confidence interval (bootstrapped; N = 30); the grey box indicates the temporal window of interest (1–2 s) with respect to event onset for the target-locked pupil responses. Note that the temporal window of interest was independently defined based on the pupil’s canonical impulse response function.
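The cluster-based permutation procedure behind these time-course comparisons can be sketched as follows. This is a minimal one-sample variant with sign-flipping, built only on NumPy/SciPy for illustration; the cluster-forming threshold, cluster-mass statistic, and input shape are assumptions here, not the paper's exact pipeline (which used MNE-Python):

```python
import numpy as np
from scipy import stats

def cluster_permutation_test(data, n_perm=1000, alpha=0.05, seed=0):
    """One-sample cluster-based permutation test on a (subjects x timepoints)
    array of paired differences (e.g., tone minus no-tone pupil traces)."""
    rng = np.random.default_rng(seed)
    t_crit = stats.t.ppf(1 - alpha / 2, df=data.shape[0] - 1)

    def cluster_masses(x):
        # t-statistic at every timepoint, then sum |t| within each
        # contiguous run of suprathreshold timepoints (cluster mass).
        t = stats.ttest_1samp(x, 0, axis=0).statistic
        above = np.abs(t) > t_crit
        masses, current = [], 0.0
        for supra, tv in zip(above, t):
            if supra:
                current += abs(tv)
            elif current:
                masses.append(current)
                current = 0.0
        if current:
            masses.append(current)
        return masses

    observed = cluster_masses(data)
    # Null distribution: randomly flip the sign of each subject's trace
    # and keep the maximum cluster mass per permutation.
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(data.shape[0], 1))
        null_max[i] = max(cluster_masses(signs * data), default=0.0)

    # p-value per observed cluster: fraction of permutations whose
    # maximum cluster mass meets or exceeds the observed mass.
    pvals = [(null_max >= m).mean() for m in observed]
    return observed, pvals

# Hypothetical usage: diff_traces is a (subjects x timepoints) array.
# masses, pvals = cluster_permutation_test(diff_traces)
```

Taking the maximum cluster mass per permutation is what controls the family-wise error rate across timepoints, which is why this test avoids the multiple-comparisons problem of running a t-test at every sample.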
Psychometric curve fits on accuracy data
For the accuracy data, psychometric curves were fit to each participant’s response accuracy for the tone trials only, separately per phase. Individual curve fits are shown in S4 Fig in S1 File. Descriptive statistics for all free parameters are presented in S2 Table in S1 File. The null hypothesis stated that there would be no difference in the σ parameters between phases; our alternative hypothesis was that there would be a difference. Specifically, we expected the σ parameter to be larger for the tone trials in the first phase than for the tone trials in the second phase (i.e., the cue-target mappings were flipped for tone trials only in the second phase). For the trials without an auditory cue, we had no expectations about the difference in σ between the two experimental phases. Therefore, we tested only tone trials for phase-dependent differences (first vs. second) in the mean σ parameter (Fig 4A). To examine whether the σ parameters differed, we used a Bayesian Wilcoxon Signed-Rank Test (Fig 4A, right column). The Bayes factor indicated evidence for the alternative hypothesis (BF10 = 17300), meaning the data were approximately 17300 times more likely to occur under the alternative hypothesis (a difference in the σ parameters between phases) than under the null hypothesis. However, the difference was in the direction opposite to our expectation: σ was larger in the second phase than in the first.
For each participant, sigmoid curves were fit to the tone trials (i.e., trials with an auditory cue) and compared between the first and the second phase of the experiment. (A) For accuracy data, all tone trials (i.e., the high- and low-frequency conditions) were used to fit the curves. (B) For the target-locked pupil response data, curves were fit to the high-frequency (80%) tone trials only. Note that the cue-target mappings (M1 and M2) which correspond to the high-frequency trials differ per phase depending on the presence of the auditory cue. Results of the Bayesian Wilcoxon Signed-Rank Test are shown for the accuracy and pupil data (right column). Error bars, s.e.m. (N = 30).
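The per-participant curve fitting can be sketched as follows. This is an illustrative example with simulated data, assuming a four-parameter sigmoid with baseline b, gain G, threshold μ, and width σ; the exact parameterization and fitting routine used in the study may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, b, G, mu, sigma):
    """Sigmoid with baseline b, gain G, threshold mu, and width sigma.
    A larger sigma means a more gradual transition (slower learning)."""
    return b + G / (1.0 + np.exp(-(x - mu) / sigma))

# Hypothetical accuracy-by-trial data: gradual learning from chance
# (50%) towards ~90% correct, with a transition around trial 60.
rng = np.random.default_rng(0)
trials = np.arange(1, 201)
true_curve = sigmoid(trials, 0.5, 0.4, 60.0, 15.0)
accuracy = true_curve + rng.normal(0.0, 0.03, size=trials.size)

# Fit the curve; sigma is the free parameter compared across phases.
p0 = [0.4, 0.5, 80.0, 20.0]  # initial guesses for b, G, mu, sigma
params, _ = curve_fit(sigmoid, trials, accuracy, p0=p0, maxfev=10000)
b, G, mu, sigma = params
```

A smaller fitted σ corresponds to a steeper, more step-like learning curve, which is why σ serves as the contrast between abrupt and gradual learning across the two phases.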
Psychometric curve fits on target-locked pupil data
Finally, we tested our main hypothesis concerning the target-locked pupil responses across the two experimental phases. Sigmoid curves were fit to each participant’s target-locked pupil response for the tone trials in the high-frequency condition only, separately per phase. Individual curve fits are shown in S5 Fig in S1 File. Descriptive statistics for all free parameters are presented in S2 Table in S1 File. As with the behavioural responses, our hypothesis was that σ would be larger for the high-frequency tone trials in the first phase than in the second phase (i.e., the cue-target mappings were flipped for tone trials only in the second phase). For the trials without an auditory cue, we had no expectations about the difference in σ between the phases. Therefore, we tested only tone trials for phase-dependent differences (first vs. second) in the mean σ parameter (Fig 4B). To examine whether the σ parameters differed, we used a Bayesian Wilcoxon Signed-Rank Test (Fig 4B, right column). The Bayes factor indicated evidence for the alternative hypothesis (BF10 = 930), meaning the data were approximately 930 times more likely to occur under the alternative hypothesis (a difference between the σ parameters) than under the null hypothesis. However, as with the accuracy data, the difference was in the direction opposite to our expectation: σ was larger in the second phase than in the first.
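What a Bayes factor such as BF10 = 930 quantifies can be illustrated with a toy example. This is not the Bayesian Wilcoxon test used in the study, but a simple conjugate case where the ratio of marginal likelihoods has a closed form (the binomial model and uniform prior are assumptions chosen for tractability):

```python
from math import comb

def bf10_binomial(k, n):
    """Bayes factor BF10 for k successes in n trials:
    H0: p = 0.5 versus H1: p ~ Uniform(0, 1).
    Under the uniform prior, the marginal likelihood of any k
    integrates to exactly 1 / (n + 1)."""
    m1 = 1.0 / (n + 1)          # marginal likelihood under H1
    m0 = comb(n, k) * 0.5 ** n  # likelihood under H0
    return m1 / m0

# Data far from chance favour H1; data at chance favour H0.
print(bf10_binomial(80, 100))  # large BF10: strong evidence for H1
print(bf10_binomial(50, 100))  # BF10 < 1: evidence favours H0
```

BF10 is thus the factor by which the observed data shift the prior odds towards the alternative hypothesis, exactly the reading given to the values of 17300 and 930 above.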
We also expected the target-locked pupil responses to decrease as a result of learning, reflected in a negative gain parameter (G) in our curve fits for both phases. In the first phase, G was negative on average as expected (M = -4.1, SD = 11.94); however, G was positive on average in the second phase (M = 2.4, SD = 9.3), contrary to our hypothesis that the target-locked pupil responses would decrease in both phases of the experiment. Furthermore, the sign of G was not consistent at the individual level (see S5 Fig in S1 File).
Discussion
Our research was motivated by the observation that the two kinds of learning within predictive processing, whilst recently formally differentiated, have not been empirically distinguished. Parameter learning, on the one hand, refers to updating the probability distributions over model parameters in light of new evidence using Bayes’ rule ([4, 15]). Structure learning, on the other hand, pertains to altering the structure of the generative model by changing the number of parameters or by altering their functional dependencies ([5, 15, 17]). Related proposals have been put forward by Kwisthout and colleagues [2] and Rutar and colleagues [21], who developed a formal proposal for structural changes that go beyond parameter addition and removal in a generative model. Similarly, Heald and colleagues [72] have recently presented a theory of sensorimotor learning, called contextual inference, that differentiates between the adaptation of behaviour based on updating existing motor memories and creating new ones, and adaptation due to changes in the relative weighting of these motor memories.
Building on this, we investigated whether we could empirically distinguish between parameter learning and structure learning. We expected to be able to differentiate between the two mechanisms based on their learning trajectories as measured in accuracy of participants’ predictions. We furthermore hypothesized that the different learning trajectories would be reflected in target-locked pupil dilation given its potential for signalling information gain [16]. To investigate our question, we created a within-subject computer-based experiment with two phases. In the first phase, participants had to learn and predict the probabilistic relationship between the cues and a target stimulus, and in the second phase, a conditional change occurred in the cue-target relationship learnt previously.
To test whether participants performed the task as expected, we performed some basic quality controls on the behavioural and pupil data. The behavioural data showed that participants on average correctly predicted the cue-target contingencies in both experimental phases. This was reflected in the cue-target mapping effect for tone and no tone trials in the first phase, and a reversed cue-target mapping effect in the second phase for tone trials only (when the mapping switched). As expected, a similar pattern was also observed in the pupil data.
Before turning to the main research question, the quality of the pupil measures was checked and compared with effects reported in the previous literature. Decision preparation and the preparation of the motor response were reflected in the evoked pupil response to the visual and auditory cue onsets, replicating previous work ([39, 50, 56, 57]). We also observed the peak of the group-level cue-locked evoked response around 1–2 seconds, in line with previous findings ([48–51]). Furthermore, erroneous responses following the presentation of explicit feedback resulted in larger pupil dilation than correct responses, an effect that has also been consistently reported ([39, 57, 60–65]). Finally, we found the well-known auditory effect on pupil responses, with responses being larger on trials with a tone than on trials without one ([58, 59]).
Given that participants understood and correctly performed the task, we turn to the results related to our hypotheses. We predicted that different temporal dynamics in predictive accuracy would be observed in the first and second experimental phases. We hypothesised that in the first phase, participants would gradually learn the probabilistic relationship between the cues and the target orientation, leading to parameter learning. As a result of parameter learning, participants should become better at predicting future sensory input, resulting in a gradual increase in predictive accuracy over time. In the second phase, the rules switched for the tone trials, which should initially lead to a decrease in predictive accuracy. When the change in rules is learned, a new parameter is added to the learner’s model, resulting in structure learning. The integration of a new parameter should lead to an abrupt increase in predictive accuracy (i.e., an “aha” moment).
Finally, to assess the main hypothesis, that parameter learning and structure learning are empirically differentiable, we performed curve fitting first on the behavioural and then on the pupil data. Curve fitting revealed substantially more evidence for a difference between the phases of the task, as reflected in the σ parameter, in both the accuracy and pupil data. However, the difference was in the direction opposite to our expectation: σ was significantly smaller in the first phase than in the second phase for both measures. These results suggest that participants were learning more gradually in the second phase than in the first phase of the experiment, contrary to our expectations.
One possible interpretation of these results is that our experimental manipulation induced structure learning in the first phase and parameter learning in the second phase. It might be that in the first phase participants built, from scratch, multiple internal models that they thought could capture the structure of the task. This idea is reminiscent of Pouncy and Gershman’s work [73], in which participants consider several models or competing theories at each point in time. As participants were learning our task, they may have alternated between these models and, upon realising the rule, settled on the correct model, resulting in a rapid increase in predictive accuracy and a decrease in the target-locked pupil responses, as the data in the first phase show. We assumed we had prevented participants from learning models from scratch in the first phase by providing them with detailed instructions and a pictorial representation of the stimuli presented in the task before the task started. By that, we thought, we had equipped participants with a crude model containing hypotheses about all the relevant variables of the task. However, in light of the current results, we believe that our instructions did not result in the construction of a simple model that participants could use as a baseline upon entering the task. Importantly, whilst the above interpretation of the results is in principle plausible, further empirical investigation is needed to confirm that participants were indeed building multiple models in the first phase and then, in the second phase, selected a model (already constructed in the first phase) and started updating its parameters.
At the beginning of the second phase, participants expected the rules of the task to change. All the experimental variables (e.g., target, visual cues) in the second phase were the same as in the first phase, possibly signalling to the participants that one of the models that they had already constructed in the first phase could be suitable for explaining the change in the second phase. If this was the case, then participants in the second phase merely reused a correct, existing model, and started gradually updating parameters of that model rather than adding a new parameter (based on a new experimental variable) to a model.
We could possibly have avoided participants building models from scratch in the first phase had we run a computerised familiarisation phase with all the relevant experimental variables prior to the experiment. This would ensure that participants had constructed crude models of the task before the experiment started, which they could then use in the first phase. Additionally, to make sure that participants in the second phase were not just reusing one of the models they had constructed in the first phase, but instead building on an existing model, we could have introduced a new experimental variable in the second phase that was present in the familiarisation but not in the first phase. In that case, participants would have had to add a new parameter to a model constructed in the first phase (instantiating structure learning) if they were to successfully learn the new rule in the second phase.
Our results also revealed that the gain parameter, G, which determines the direction of the fitted curve, was negative in the first phase as expected and positive in the second phase, contrary to our expectations. This suggests that pupil dilation, for the high-frequency condition, was on average decreasing in the first phase and increasing in the second phase. These results are unexpected if the target-locked pupil dilation reflects novel information gain (see Fig 1C). The gain in information following the outcome of an event should decrease as a result of increasing accuracy for predicting the contingent relationships ([1, 4, 14–16]). When trials were binned across both experimental phases, the target-locked pupil responses mirrored the participants’ accuracy, such that when participants had a larger difference between cue-target frequency conditions in accuracy, they also tended to have a smaller difference between cue-target frequency conditions in the target-locked pupil responses (see S2 Fig in S1 File). These results are generally in line with the assumption that the presentation of the target stimulus became less informative as the participants learned to predict the cue-target contingencies. However, a look at individual participants’ pupil responses at the single-trial level for the high-frequency condition only (see S5 Fig in S1 File) reveals large variability in the sign of the G parameter in both phases, potentially suggesting individual differences in the size of pupil dilation over time. This is in line with recent findings of substantial inter- and intra-individual variation in the size of pupil dilation over trials [74]. More specifically, that study showed that in a simple digit-span memory task, pupil dilation consistently increased over trials for some participants and decreased for others; for yet other participants, the trend changed throughout the task.
Another factor that could explain why pupil dilation was increasing in the second phase is fatigue: participants were more fatigued in this phase than at the beginning of the experiment. As a consequence, they had to exert more effort to maintain concentration on the task and to process the change in the task rules. Increased cognitive effort results in increased pupil dilation, as has been shown many times before ([75–77]).
All in all, our data show that there is a qualitative difference between parameter learning and structure learning, following the theoretical proposal ([2, 5, 15, 17, 21]). However, parameter learning seems to have occurred in the second phase and structure learning in the first phase of the experiment, for the reasons described above. Future studies should therefore make sure that their experimental manipulations have the intended effect. Alternatively, future studies might empirically investigate the interpretation of the current results: that participants construct multiple models at the beginning and later choose among them and update their parameters. Lastly, with some exceptions ([41, 42, 47]), few studies have examined how target-locked pupil responses change over time due to learning on a trial-to-trial basis. Little is therefore known about how pupil dynamics change over extended periods of time and whether individual differences exist in this process. More studies should examine pupil dilation in this manner in the future.
References
- 1. Clark A. (2015). Surfing uncertainty: Prediction, action, and the embodied mind. Oxford University Press.
- 2. Kwisthout J., Bekkering H., & Van Rooij I. (2017). To be precise, the details don’t matter: On predictive processing, precision, and level of detail of predictions. Brain and Cognition, 112, 84–91. pmid:27114040
- 3. Clark A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204. pmid:23663408
- 4. Friston K., FitzGerald T., Rigoli F., Schwartenbeck P., & Pezzulo G. (2016). Active inference and learning. Neuroscience & Biobehavioral Reviews, 68, 862–879.
- 5. Friston K., Lin M., Frith C. D., Pezzulo G., Hobson J. A., & Ondobaka S. (2017). Active inference, curiosity and insight. Neural Computation, 29(10), 2633–2683. pmid:28777724
- 6. Friston K., & Kiebel S. (2009). Predictive coding under the free-energy principle. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1521), 1211–1221. pmid:19528002
- 7. Hohwy J. (2013). The predictive mind. OUP Oxford.
- 8. Edwards G., Vetter P., McGruer F., Petro L. S., & Muckli L. (2017). Predictive feedback to V1 dynamically updates with sensory input. Scientific Reports, 7(1), 1–12.
- 9. Petro L. S., & Muckli L. (2016). The brain’s predictive prowess revealed in primary visual cortex. Proceedings of the National Academy of Sciences, 113(5), 1124–1125. pmid:26772315
- 10. Rao R. P. N., & Ballard D. H. (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1), 79–87. pmid:10195184
- 11. Friston K., Daunizeau J., Kilner J., & Kiebel S. J. (2010). Action and behavior: A free-energy formulation. Biological Cybernetics, 102(3), 227–260. pmid:20148260
- 12. Kilner J. M., Friston K. J., & Frith C. D. (2007). Predictive coding: An account of the mirror neuron system. Cognitive Processing, 8(3), 159–166. pmid:17429704
- 13. Koster-Hale J., & Saxe R. (2013). Theory of mind: A neural prediction problem. Neuron, 79(5), 836–848. pmid:24012000
- 14. FitzGerald T. H., Dolan R. J., & Friston K. (2015). Dopamine, reward learning, and active inference. Frontiers in Computational Neuroscience, 136. pmid:26581305
- 15. Smith R., Schwartenbeck P., Parr T., & Friston K. J. (2020). An Active Inference Approach to Modeling Structure Learning: Concept Learning as an Example Case. Frontiers in Computational Neuroscience, 14. pmid:32508611
- 16. Zénon A. (2019). Eye pupil signals information gain. Proceedings of the Royal Society B: Biological Sciences, 286(1911), 20191593. pmid:31530143
- 17. Da Costa L., Parr T., Sajid N., Veselic S., Neacsu V., & Friston K. (2020). Active inference on discrete state-spaces: A synthesis. Journal of Mathematical Psychology, 99, 102447. pmid:33343039
- 18. Christie S., & Gentner D. (2010). Where hypotheses come from: Learning new relations by structural alignment. Journal of Cognition and Development, 11(3), 356–373.
- 19. Gentner D., & Hoyos C. (2017). Analogy and abstraction. Topics in Cognitive Science, 9(3), 672–693. pmid:28621480
- 20. Schulz L. (2012). Chapter Ten—Finding New Facts; Thinking New Thoughts. In Xu F. & Kushnir T. (Eds.), Advances in Child Development and Behavior (Vol. 43, pp. 269–294). JAI. https://doi.org/10.1016/B978-0-12-397919-3.00010-1
- 21. Rutar D., de Wolff E., van Rooij I., & Kwisthout J. (2022). Structure Learning in Predictive Processing Needs Revision. Computational Brain & Behavior, 5(2), 234–243.
- 22. Friston K., Parr T., & Zeidman P. (2018). Bayesian model reduction. ArXiv Preprint ArXiv:1805.07092. https://doi.org/10.48550/arXiv.1805.07092
- 23. Gershman S. J., & Blei D. M. (2012). A tutorial on Bayesian nonparametric models. Journal of Mathematical Psychology, 56(1), 1–12.
- 24. Goldwater S. J. (2007). Nonparametric Bayesian Models of Lexical Acquisition. Brown University.
- 25. Aston-Jones G., & Cohen J. D. (2005). An integrative theory of locus coeruleus-norepinephrine function: Adaptive gain and optimal performance. Annu.Rev.Neurosci., 28, 403–450. pmid:16022602
- 26. Joshi S., & Gold J. I. (2020). Pupil Size as a Window on Neural Substrates of Cognition. Trends in Cognitive Sciences, 24(6), 466–480. pmid:32331857
- 27. Larsen R. S., & Waters J. (2018). Neuromodulatory correlates of pupil dilation. Frontiers in Neural Circuits, 12, 21. pmid:29593504
- 28. McGinley M. J., Vinck M., Reimer J., Batista-Brito R., Zagha E., Cadwell C. R., et al. (2015). Waking state: Rapid variations modulate neural and behavioral responses. Neuron, 87(6), 1143–1161. pmid:26402600
- 29. Murphy P. R., O’Connell R. G., O’Sullivan M., Robertson I. H., & Balsters J. H. (2014). Pupil diameter covaries with BOLD activity in human locus coeruleus. Human Brain Mapping, 35(8), 4140–4154. pmid:24510607
- 30. Bouret S., & Sara S. J. (2005). Network reset: A simplified overarching theory of locus coeruleus noradrenaline function. Trends in Neurosciences, 28(11), 574–582. pmid:16165227
- 31. Doya K. (2008). Modulators of decision making. Nature Neuroscience, 11(4), 410–416. pmid:18368048
- 32. Glimcher P. W. (2011). Understanding dopamine and reinforcement learning: The dopamine reward prediction error hypothesis. Proceedings of the National Academy of Sciences, 108(Supplement 3), 15647. pmid:21389268
- 33. Lak A., Nomoto K., Keramati M., Sakagami M., & Kepecs A. (2017). Midbrain Dopamine Neurons Signal Belief in Choice Accuracy during a Perceptual Decision. Current Biology, 27(6), 821–832. pmid:28285994
- 34. Montague P. R., Hyman S. E., & Cohen J. D. (2004). Computational roles for dopamine in behavioural control. Nature, 431, 760. pmid:15483596
- 35. Parikh V., Kozak R., Martinez V., & Sarter M. (2007). Prefrontal Acetylcholine Release Controls Cue Detection on Multiple Timescales. Neuron, 56(1), 141–154. pmid:17920021
- 36. Schultz W. (2005). Behavioral Theories and the Neurophysiology of Reward. Annual Review of Psychology, 57(1), 87–115. https://doi.org/10.1146/annurev.psych.56.091103.070229
- 37. Yu A. J., & Dayan P. (2005). Uncertainty, Neuromodulation, and Attention. Neuron, 46(4), 681–692. pmid:15944135
- 38. Browning M., Behrens T. E., Jocham G., O’reilly J. X., & Bishop S. J. (2015). Anxious individuals have difficulty learning the causal statistics of aversive environments. Nature Neuroscience, 18(4), 590–596. pmid:25730669
- 39. Colizoli O., de Gee J. W., Urai A. E., & Donner T. H. (2018). Task-evoked pupil responses reflect internal belief states. Scientific Reports, 8(1), 13702. pmid:30209335
- 40. de Gee J. W., Correa C. M. C., Weaver M., Donner T. H., & van Gaal S. (2021). Pupil Dilation and the Slow Wave ERP Reflect Surprise about Choice Outcome Resulting from Intrinsic Variability in Decision Confidence. Cerebral Cortex, 31(7), 3565–3578. pmid:33822917
- 41. Kayhan E., Heil L., Kwisthout J., van Rooij I., Hunnius S., & Bekkering H. (2019). Young children integrate current observations, priors and agent information to predict others’ actions. PloS One, 14(5), e0200976. pmid:31116742
- 42. Koenig S., Uengoer M., & Lachnit H. (2018). Pupil dilation indicates the coding of past prediction errors: Evidence for attentional learning theory. Psychophysiology, 55(4), e13020. pmid:29023832
- 43. Nassar M. R., Rumsey K. M., Wilson R. C., Parikh K., Heasly B., & Gold J. I. (2012). Rational regulation of learning dynamics by pupil-linked arousal systems. Nature Neuroscience, 15, 1040. pmid:22660479
- 44. O’Reilly J. X., Schüffelgen U., Cuell S. F., Behrens T. E. J., Mars R. B., & Rushworth M. F. S. (2013). Dissociable effects of surprise and model update in parietal and anterior cingulate cortex. Proceedings of the National Academy of Sciences, 110(38), E3660–E3669. pmid:23986499
- 45. Preuschoff K., ‘t Hart B., & Einhauser W. (2011). Pupil Dilation Signals Surprise: Evidence for Noradrenaline’s Role in Decision Making. Frontiers in Neuroscience, 5, 115. pmid:21994487
- 46. Satterthwaite T. D., Green L., Myerson J., Parker J., Ramaratnam M., & Buckner R. L. (2007). Dissociable but inter-related systems of cognitive control and reward during decision making: Evidence from pupillometry and event-related fMRI. NeuroImage, 37(3), 1017–1031. pmid:17632014
- 47. Van Slooten J. C., Jahfari S., Knapen T., & Theeuwes J. (2018). How pupil responses track value-based decision-making during and after reinforcement learning. PLOS Computational Biology, 14(11), e1006632. pmid:30500813
- 48. Knapen T., de Gee J. W., Brascamp J., Nuiten S., Hoppenbrouwers S., & Theeuwes J. (2016). Cognitive and Ocular Factors Jointly Determine Pupil Responses under Equiluminance. PLoS ONE, 11(5), e0155574. pmid:27191166
- 49. Burlingham C. S., Mirbagheri S., & Heeger D. J. (2022). A unified model of the task-evoked pupil response. Science Advances, 8(16), eabi9979. pmid:35442730
- 50. Hoeks B., & Levelt W. J. M. (1993). Pupillary dilation as a measure of attention: A quantitative system analysis. Behavior Research Methods, Instruments, & Computers, 25(1), 16–26. https://doi.org/10.3758/BF03204445
- 51. Mathot S. (2018). Pupillometry: Psychology, Physiology, and Function. Journal of Cognition, 1(1), 16. https://doi.org/10.5334/joc.18
- 52. Ashby F. G., & Townsend J. T. (1980). Decomposing the reaction time distribution: Pure insertion and selective influence revisited. Journal of Mathematical Psychology, 21(2), 93–123. https://doi.org/10.1016/0022-2496(80)90001-2
- 53. Berger A., & Kiefer M. (2021). Comparison of Different Response Time Outlier Exclusion Methods: A Simulation Study. Frontiers in Psychology, 12. pmid:34194371
- 54. Falmagne J.-C. (1987). Response times; their role in inferring elementary mental organization. Science, 237, 1060. Gale OneFile: Health and Medicine.
- 55. Whelan R. (2008). Effective Analysis of Reaction Time Data. The Psychological Record, 58(3), 475–482.
- 56. de Gee J. W., Knapen T., & Donner T. H. (2014). Decision-related pupil dilation reflects upcoming choice and individual bias. Proceedings of the National Academy of Sciences of the United States of America, 111(5), E618–25. pmid:24449874
- 57. Urai A. E., Braun A., & Donner T. H. (2017). Pupil-linked arousal is driven by decision uncertainty and alters serial choice bias. Nature Communications, 8, 14637. PMC. pmid:28256514
- 58. Liao H.-I., Yoneya M., Kidani S., Kashino M., & Furukawa S. (2016). Human pupillary dilation response to deviant auditory stimuli: Effects of stimulus properties and voluntary attention. Frontiers in Neuroscience, 10, 43. pmid:26924959
- 59. Zekveld A. A., Koelewijn T., & Kramer S. E. (2018). The Pupil Dilation Response to Auditory Stimuli: Current State of Knowledge. Trends in Hearing, 22, 2331216518777174. pmid:30249172
- 60. Braem S., Coenen E., Bombeke K., Van Bochove M. E., & Notebaert W. (2015). Open your eyes for prediction errors. Cognitive, Affective, & Behavioral Neuroscience, 15(2), 374–380. pmid:25588818
- 61. Critchley H. D., Tang J., Glaser D., Butterworth B., & Dolan R. J. (2005). Anterior cingulate activity during error and autonomic response. NeuroImage, 27(4), 885–895. pmid:15996878
- 62. Maier M. E., Ernst B., & Steinhauser M. (2019). Error-related pupil dilation is sensitive to the evaluation of different error types. Biological Psychology, 141, 25–34. pmid:30597189
- 63. Murphy P. R., van Moort M. L., & Nieuwenhuis S. (2016). The Pupillary Orienting Response Predicts Adaptive Behavioral Adjustment after Errors. PLOS ONE, 11(3), e0151763. pmid:27010472
- 64. Rondeel E., Van Steenbergen H., Holland R., & van Knippenberg A. (2015). A closer look at cognitive control: Differences in resource allocation during updating, inhibition and switching as revealed by pupillometry. Frontiers in Human Neuroscience, 9. https://www.frontiersin.org/articles/10.3389/fnhum.2015.00494
- 65. Wessel J. R., Danielmeier C., & Ullsperger M. (2011). Error Awareness Revisited: Accumulation of Multimodal Evidence from Central and Autonomic Nervous Systems. Journal of Cognitive Neuroscience, 23(10), 3021–3036. pmid:21268673
- 66. May K. A., & Solomon J. A. (2013). Four Theorems on the Psychometric Function. PLOS ONE, 8(10), e74815. pmid:24124456
- 67. Den Ouden H. E., Kok P., & De Lange F. P. (2012). How prediction errors shape perception, attention, and motivation. Frontiers in Psychology, 3, 548. pmid:23248610
- 68. Psychopy [Computer software] (1.81). (2018). University of Nottingham. https://psychopy.org/index.html
- 69. Python [Computer software] (3.6). (2016). Python Software Foundation. https://www.python.org/downloads/release/python-360/
- 70. Gramfort A., Luessi M., Larson E., Engemann D. A., Strohmeier D., Brodbeck C., et al. (2013). MEG and EEG data analysis with MNE-Python. Frontiers in Neuroscience, 267. pmid:24431986
- 71. JASP Team. (2020). JASP (0.13.1).
- 72. Heald J. B., Lengyel M., & Wolpert D. M. (2021). Contextual inference underlies the learning of sensorimotor repertoires. Nature, 600(7889), 489–493. pmid:34819674
- 73. Pouncy T., & Gershman S. J. (2022). Inductive biases in theory-based reinforcement learning.
- 74. Sibley C., Foroughi C. K., Brown N. L., Phillips H., Drollinger S., Eagle M., et al. (2020). More than Means: Characterizing Individual Differences in Pupillary Dilations. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 64(1), 57–61. https://doi.org/10.1177/1071181320641017
- 75. Hyönä J., Tommola J., & Alaja A.-M. (1995). Pupil dilation as a measure of processing load in simultaneous interpretation and other language tasks. The Quarterly Journal of Experimental Psychology, 48(3), 598–612. pmid:7568993
- 76. Porter G., Troscianko T., & Gilchrist I. D. (2007). Effort during visual search and counting: Insights from pupillometry. Quarterly Journal of Experimental Psychology, 60(2), 211–229. pmid:17455055
- 77. van der Wel P., & van Steenbergen H. (2018). Pupil dilation as an index of effort in cognitive control tasks: A review. Psychonomic Bulletin & Review, 25(6), 2005–2015. pmid:29435963