
Visuomotor predictors of interception

  • Inmaculada Márquez,

    Roles Data curation, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Visualization

    Affiliations Departamento de Ciencias Médicas y de la Vida, Centro Universitario de la Ciénega, Universidad de Guadalajara, Ocotlán, México, Laboratorio de Conducta Animal, Departamento de Psicología, Centro Universitario de la Ciénega, Universidad de Guadalajara, Ocotlán, México

  • Mario Treviño

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Methodology, Project administration, Resources, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    mariomtv@hotmail.com

    Affiliation Laboratorio de Plasticidad Cortical y Aprendizaje Perceptual, Instituto de Neurociencias, Universidad de Guadalajara, Guadalajara, Jalisco, México

Abstract

Intercepting moving targets is a fundamental skill in human behavior, influencing various domains such as sports, gaming, and other activities. In these contexts, precise visual processing and motor control are crucial for adapting and navigating effectively. Nevertheless, there are still some gaps in our understanding of how these elements interact while intercepting a moving target. This study explored the dynamic interplay among eye movements, pupil size, and interceptive hand movements under varying visual and motion uncertainty. We developed a simple visuomotor task in which participants used a joystick to interact with a computer-controlled dot that moved along two-dimensional trajectories. This virtual system provided the flexibility to manipulate the target’s speed and directional uncertainty during chase trials. We then conducted a geometric analysis based on optimal angles for each behavior, enabling us to distinguish between simple tracking and predictive trajectories that anticipate future positions of the moving target. Our results revealed the adoption of a strong interception strategy as participants approached the target. Notably, the onset and amount of optimal interception strategy depended on task parameters, such as the target’s speed and frequency of directional changes. Furthermore, eye-tracking data showed that participants continually adjusted their gaze speed and position, continuously adapting to the target’s movements. Finally, in successful trials, pupillary responses predicted the amount of optimal interception strategy while exhibiting an inverse relationship in trials without collisions. These findings reveal key interactions among visuomotor parameters that are crucial for solving complex interception tasks.

Introduction

Intercepting moving targets is a crucial ability in human behavior as it impacts various aspects of our daily lives, such as sports, gaming, and other common activities [1, 2]. Tasks like catching a ball, driving a car, and navigating through crowded places rely on successfully intercepting (and avoiding) moving objects. The complexity of visuomotor coordination when solving these tasks influences our ability to interact and adapt to dynamic environments [3–5]. Visuomotor coordination involves neuronal activity in visual and motor-related brain areas. Some authors suggest that such coordination triggers a sequence of events, organized in parallel and processed hierarchically [6]. This specialized coordination enables individuals to continuously adapt their motor commands based on feedback, thus refining accuracy over time. Therefore, the efficiency of target interception is influenced by a combination of strategy, kinematic ability, and environmental conditions. Some theories suggest that interception strategies are built upon cues such as the target’s position and its kinematic derivatives [7]. An intriguing possibility emerges from the potential connection between distinct cognitive and adaptive strategies that may underlie interceptive behavior.

Various operational control models can be employed for interception. In pursuit guidance, for example, the velocity vector of the approach consistently aligns with the target. This often results in a tail-chase towards the end of the target’s trajectory, which can require longer chasing trajectories than other guidance laws. Indeed, predictive elements can be incorporated into these guidance laws to enhance their effectiveness [8]. Predictive models are based on the idea of predicting the future position of the target. This approach is more complex than simple pursuit but potentially more efficient, as it anticipates the target’s movements and intercepts it using a shorter path to its destination. Numerous predictive models exist, sharing a common foundation: evaluating the target’s current position and velocity to predict its future location. Different interception modes, which use pre-existing information and real-time adjustments in varying ways to determine the interception point and trajectory, have been discussed elsewhere [9–14].

Visual processing plays a crucial role in interception by contributing to the estimation of the target’s position, velocity, and timing prediction. Additionally, the sudden occlusion of the target just before collision increases interception variability, highlighting the contribution of visual inputs for making predictions in these tasks [15]. Eye movements provide essential cues for interception and can involve anticipatory saccades reflecting the prediction of future target locations [16–18]. They provide visual feedback for hand movements and contribute to motion prediction, a process that can be modulated by altering the variability of the target’s trajectory [19]. In interception tasks, individuals adjust their hand movements based on the target’s position, responding to positional changes within approximately 100 ms [20]. Similarly, saccades, particularly those occurring around 100 ms before interception, are crucial for timing and accuracy in such dynamic tasks. Active gaze movements, as opposed to fixating on a point, reduce errors, making them a beneficial strategy for enhancing visuomotor coordination [21, 22]. Pupillary responses, often linked to changes in luminance levels, are also associated with decision-making processes [23–25]. For instance, research indicates that cognition can regulate pupil size to selectively filter visual information for specific objectives, influencing visual perception. Additionally, pupil dilation has been proposed as a psychophysiological measure for dynamic difficulty adjustment. Pupil responses correlate with game difficulty, suggesting that pupil dilation can be used to adaptively adjust task difficulty, ultimately enhancing user engagement and performance [26]. How cognition impacts visual perception through changes in pupil size is a topic of ongoing exploration [27].

Although ocular movements are known to play a critical role in guiding motor commands during visuomotor tasks, substantial gaps exist in understanding how they relate to interception strategies under diverse task conditions and what precise role they play in guiding such strategies. Moreover, the potential interplay between gaze and interceptive strategies, influenced by visual and motion uncertainty, remains relatively unexplored. For example, it is unclear how task demands, individual preferences, and eye movement strategies influence the specific relationship between pupillary responses and moving target interception. In this study, we investigated the visuomotor strategies underlying interceptive actions in a virtual task with participants using a joystick to chase a moving target along unpredictable trajectories. Our objective was to assess how alterations in target speed (and directional uncertainty) impact visuomotor responses and target interception. We hypothesized that dynamic interactions among eye movements, pupil size, and interceptive hand movements should be critical for effectively intercepting a moving target under varying conditions of visual and motion uncertainty. Alternatively, without such adjustments, we would expect participants to fail in adapting their visuomotor strategies to changes in target speed and directional uncertainty, resulting in unsuccessful interception attempts. To quantify the impact of task parameters on interceptive performance, we employed a geometric model to distinguish between manual pursuit and interception trajectories. Using eye-tracking data, we also characterized the gaze adjustments based on the target’s position and velocity. Finally, we investigated pupillary responses and their correlation with ‘collision’ trials, examining their dynamics as the user-controlled dot approached and touched the computer-controlled dot.
Our findings carry potential implications for interventions in visuomotor coordination impairments, advancements in robotics, and human-computer interfaces. They also contribute to our understanding of how humans engage with the world at the millisecond scale.

Materials and methods

Participants

We conducted experiments with 248 adults (139 women, 109 men) aged 18.9 to 26.2 (mode = 23). The testing period spanned from February 10th to August 6th, 2023. Written informed consent was obtained from all participants. We set specific inclusion criteria for the study: being born at term, and having no reported prenatal, perinatal, or postnatal complications or trauma that could potentially impact nervous system development. By applying these inclusion criteria, we aimed to create a study group of participants whose neurological development could represent a ‘typical’, neurologically healthy population. None of the participants had a diagnosed psychiatric, neurological, or neurodevelopmental disorder nor a history of substance use. All participants included in the analysis were right-handed (i.e., using their right hand for writing; 5 left-handed participants were excluded from the analysis) and had either normal vision or vision corrected to normal, as assessed by the Snellen test. Participants received written instructions for completing questionnaires, including demographic information, and for performing the task. Participation was voluntary and did not involve any monetary incentives. Participants were selected using a non-probability convenience sampling method due to their accessibility and proximity to the researchers. The sample size of our experiments (see below) allowed a robust analysis of individual differences, consistent with previous studies conducted in our laboratory [28]. All procedures were non-invasive, complied with our country’s local guidelines and regulations, and were approved by the ethics committee of the Instituto de Neurociencias, Universidad de Guadalajara, México: ET102021-330 and ET122023-382.

Visuomotor task

We described the task in detail previously [29]. Participants sat 60 cm from a 27-inch computer monitor (1920 x 1080 pixels, 60 Hz). They were engaged in a visuomotor task involving chasing a computer-controlled black dot by means of a user-controlled white dot (0.3° of visual angle each). Participants utilized a joystick, held in their dominant hand, to control the white dot with a third-person viewpoint. The task involved chasing and colliding with the computer-controlled black dot, referred to as the ’target,’ which followed piecewise linear trajectories (S1A, S1B Fig in S1 File). We presented both dots in a circular arena that covered the entire height of the monitor, set against a 50% gray background. The circular geometry of our task requires a control mechanism capable of moving in all directions with uniform effort. This factor led us to choose a joystick for tracking participants’ responses, as it ensures consistent effort exertion irrespective of the chosen direction. We calibrated the joystick daily and recorded its relative position, allowing participants to vary and control the speed of the white dot. The angular gradient of the joystick’s tilt determined the speed, which was linearly proportional to the deviation from the vertical axis. At the start of each trial, we positioned the participant’s dot at the center of the screen, and the target appeared at a random location along the circular limits of the arena. A ‘collision’ marked a successful trial in which both dots touched within 5 seconds, the designated maximum trial duration. We chose the term collision based on its physics origin, where it implies the convergence of participant and computer dots. The term reflects dot convergence regardless of intended interception outcomes, acknowledging that some collisions may occur by chance rather than deliberate interception.
Trials that did not result in a collision were categorized as unsuccessful chasing trials. We provided auditory feedback (pure tones) to indicate successful or unsuccessful trials. Following each trial, we required the participants to return the joystick to its initial centered position before initiating the subsequent trial.
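The joystick mapping described above (speed linearly proportional to the tilt's deviation from the vertical resting axis) can be sketched as follows. This is an illustrative Python reconstruction, not the original MATLAB implementation; the normalized tilt input and the 60°/s ceiling are assumptions of this sketch.

```python
import numpy as np

def joystick_to_velocity(tilt_x, tilt_y, max_speed=60.0):
    """Map a normalized joystick deflection (each axis in [-1, 1]) to a
    2-D dot velocity in deg/s. Speed scales linearly with the tilt
    magnitude (deviation from the vertical resting position); direction
    follows the deflection direction."""
    tilt = np.array([tilt_x, tilt_y], dtype=float)
    magnitude = min(np.linalg.norm(tilt), 1.0)   # clamp to full deflection
    if magnitude == 0.0:
        return np.zeros(2)                       # centered stick -> no motion
    direction = tilt / np.linalg.norm(tilt)
    return max_speed * magnitude * direction
```

A half deflection thus yields half the maximum speed, in any direction, consistent with the uniform-effort rationale for choosing a joystick.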

We manipulated task difficulty by controlling three main parameters. First, we varied the target’s direction using random numbers within an angular range (AR) of ±75° (uniform distribution). Second, we implemented three ‘fixed-direction intervals’ (FDI) of 100 ms, 200 ms, or 500 ms, determining how long the target’s movement direction remained constant before changing to a new random angle within the specified range. Hence, the AR influenced the predictability of the target’s direction changes, whereas the FDI controlled the frequency of these changes (S1A Fig in S1 File). Lastly, we used multiple target speeds (vT): 10°/s, 20°/s, 30°/s, 40°/s, 50°/s, and 60°/s. The joystick did not influence the velocity or movement of the target. Instead, the target velocity was controlled by the experimenter and remained constant during the entire trial, although it could vary between trials. If the target reached the borders of the circular arena, we produced a bounce using the law of reflection: taking the angle of incidence plus the corresponding angular variability specified for each trial (±75°). In a set of experiments, we examined how the dynamics of chasing trials were influenced when either the user’s or target’s dot temporarily vanished from view. Thus, in these occlusion experiments, some trials involved masking one of the two dots by switching their contrast to 0% (i.e., making it invisible) at different randomly permuted inter-dot distances (IDD). With this procedure, we investigated how participants engaged with the moving target, and whether they relied on predictive signals or anticipated states, even when confronted with the lack of visual information. We instructed the participants to maintain stillness and minimize head motion while allowing them to move their gaze during the experiment.
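The bounce rule (law of reflection plus the trial's angular variability) can be sketched as below. This is a Python illustration under the assumptions that the arena is centered at the origin and that `jitter_deg` stands in for the ±75° variability; the original MATLAB routine may differ in detail.

```python
import numpy as np

def bounce_direction(pos, direction, jitter_deg=75.0, rng=None):
    """Reflect a unit direction vector off the circular arena wall at
    position `pos` (a point on the boundary), then rotate the reflected
    direction by a random angle drawn uniformly from +/- jitter_deg."""
    rng = np.random.default_rng() if rng is None else rng
    n = np.asarray(pos, float)
    n = n / np.linalg.norm(n)                    # outward surface normal
    d = np.asarray(direction, float)
    reflected = d - 2.0 * np.dot(d, n) * n       # law of reflection
    theta = np.deg2rad(rng.uniform(-jitter_deg, jitter_deg))
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])            # 2-D rotation by the jitter
    return rot @ reflected
```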

To ensure engagement and optimal task performance, we used a maximum of 900 trials per session, with one or two resting periods (each lasting 5 minutes) [28, 30]. Experimental sessions had a duration of less than an hour. We ran six experiments (E1-E6) due to the constraints of each session’s duration. Furthermore, some measurements utilized an eye tracker while others did not, requiring additional experiments for the analyses presented in our Results section. E1 and E2 explored the impact of different vT values on the interception task. E1 tested trials with fixed directional intervals (FDI) of 100 or 200 ms, while E2 tested trials with an FDI = 500 ms. E3 and E6 involved visual occlusion with vT of 30°/s and 10°/s, respectively. In these experiments, either the user or the target were occluded at four different IDD values using permutations of occlusion conditions. E4 explored various permuted FDI values (FDI: 100 ms, 300 ms, 400 ms, 500 ms, 1000 ms, 1500 ms) with a vT = 10°/s, while E5 replicated this setup with a vT = 30°/s. Table 1 details the number of participants in each experiment, a general description of the conditions and repetitions for each condition, and references to the corresponding result illustrations. We programmed the visuomotor task in MATLAB R2022a using the Psychophysics Toolbox extensions (MathWorks, Natick, MA, United States; PTB-3) [31, 32].

Table 1. Participant distribution and experimental conditions.

https://doi.org/10.1371/journal.pone.0308642.t001

Eye tracking

We utilized eye tracking to investigate the oculomotor correlates of successful interception. We recorded binocular eye movements using a commercial eye tracker (Tobii Pro Fusion, Tobii Pro SDK for Windows; Stockholm, Sweden) operating at a sampling rate of 60 Hz. We fixed the tracker to the lower part of the monitor. Participants were seated comfortably, and we centered the eye tracker headbox at their eye level. We minimized participants’ head movements during eye-tracking experiments using a chin holder with a forehead rest. We used this chin rest to maintain consistent and precise eye-tracking data, minimize artifacts, and enhance overall data quality and experimental control. With two cameras capturing stereo images of both eyes, the tracker measured gaze position and pupil diameter. Throughout the study, we maintained a consistent luminance level in the room where we conducted our experiments. Before initiating calibration, the operator inspected the eye images to ensure clear visibility of relevant eye features, such as the pupil and corneal reflections (c0). The next steps were as follows: first, participants underwent a standard 4-point calibration directly from the eye tracker (Tobii Pro Eye Tracker Manager 2.6.0), with fixation points positioned on the corners of the screen (c1). Subsequently, we performed a calibration using the Psychophysics Toolbox, involving four fixation white dots (with an outer diameter of 0.6°) placed on the periphery of the circular arena of our visuomotor task to ensure proper alignment between the eye-tracking data and the visual stimuli projection on the screen (c2, S4A Fig in S1 File). A third calibration step involved verifying participants’ smooth pursuit response as they tracked a white dot moving horizontally or vertically along the circular arena at a speed of 10°/s (c3, S4B Fig in S1 File).

Pupillometry

The infrared eye tracker recorded binocular pupil diameters with an average background illuminance of ~100 lux at 60 cm from the monitor [33, 34]. We performed a fourth calibration routine of six cycles of alternating contrast conditions (alternating black and white full screens) for each participant to elicit the pupillary light reflex. We repeated this routine four times, each iteration lasting ~35 seconds, enabling us to characterize the averaged participants’ responses (c4, S4C Fig in S1 File). We calculated a binocular average of both pupil diameters and normalized the responses by subtracting the average pupil size and scaling by the range of changes in pupil diameter observed during the calibration routine [35]. We established this range by measuring the spread between the 5th and 95th percentiles of the pupillary light reflex responses. We addressed missing data and blink artifacts (identified using the eye tracker’s software) by employing linear interpolation to fill in the gaps in the dataset. These normalized pupillary diameters represent changes in pupil size during our experimental trials, which we analyzed over time in relation to key events such as the start or end of collision trials. For simplicity, we refer to these traces throughout this manuscript as ’pupillary responses’. We processed all pupillary responses using custom routines written in MATLAB. We divided the experimental sessions that employed eye-tracking into three blocks, with two five-minute breaks in between. Before each block, we performed calibration routines (c1-c4) to ensure precise tracking throughout the entire session.
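The normalization pipeline above (blink interpolation, mean subtraction, scaling by the 5th-95th percentile range of the calibration light reflex) can be sketched as follows. This is our Python reading of the described steps, not the authors' MATLAB routines.

```python
import numpy as np

def normalize_pupil(trace, calibration):
    """Normalize a pupil-diameter trace: linearly interpolate over
    missing samples (NaNs from blinks), subtract the trace mean, and
    scale by the 5th-95th percentile range of the pupillary light
    reflex measured during calibration."""
    trace = np.asarray(trace, float).copy()
    bad = np.isnan(trace)
    if bad.any():                                # fill blink gaps linearly
        idx = np.arange(trace.size)
        trace[bad] = np.interp(idx[bad], idx[~bad], trace[~bad])
    lo, hi = np.percentile(calibration, [5, 95])
    return (trace - trace.mean()) / (hi - lo)
```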

Analysis

We continuously tracked the chasing trajectories, allowing us to extract various parameters from consecutively acquired frames (S1A, S1B Fig in S1 File). These included the location of the dots in Cartesian (and polar) coordinates, the IDD, user velocity (vU), and the absolute angle between the dots. We also calculated the optimal manual pursuit angle (i.e., simple manual pursuit, involving the ideal direction of manual tracking), which represents the line connecting the participant’s joystick position on the screen to the target, and the optimal interception angle (ϕ), representing the shortest path for interception:

sin ϕ = (vT / vU) · sin λ, (1)

where vT is the target speed, vU is the instantaneous user’s speed, and λ is the difference between the optimal manual pursuit angle and the target’s direction [29, 36]. In our context, the term "optimal pursuit angle" specifically refers to the angle formed by the line connecting the user-controlled dot to the target’s position (left panel in S1C Fig in S1 File). Concerning Eq 1, if vU < vT sin λ, then this equation has no solution. In such instances, we assigned ϕ as NaN (i.e., ’not a number’). However, if vU ≥ vT sin λ, two solutions are possible, and ϕ corresponds to the one that results in a decrease in IDD [36] (middle panel in S1C Fig in S1 File). We then calculated the distributions of manual pursuit (εpursuit) and interception (εinterception) errors by subtracting the observed participants’ directional angles from optimal ones. To quantify the participants’ preference for interception behavior, we calculated the amount of optimal interception strategy, measured as the percentage of the optimal interception angle (%OIA, right panel in S1C Fig in S1 File) using the angular errors obtained from each frame, as follows:

%OIA = 100 · |εpursuit| / (|εpursuit| + |εinterception|), (2)

where an %OIA value > 50% indicates that the participants’ directional angles leaned towards interception behavior; conversely, %OIA < 50% suggests a tendency toward manual pursuit behavior [29].
This approach, which evaluates angular errors frame by frame based on immediate and perfect adherence to each strategy, does not consider the intrinsic dynamics of behavior, including the inherent variability and complexity in individuals’ approaches to our task. Furthermore, the %OIA is influenced by IDD, which implies that when IDD is high, the distinction between manual pursuit and tracking becomes less clear as angular gradients decrease, resulting in %OIA → 50%. Conversely, with smaller IDD (closer dots), angular gradients increase, allowing %OIA to effectively differentiate between manual pursuit (%OIA < 50%) and interception (%OIA > 50%) strategies. Random maneuvers in our task result in indiscriminate increases in both εpursuit and εinterception. However, as εinterception does not converge to a low value during these instances, %OIA would tend to be less than 50% for all these frames (relative to εpursuit). In other words, an %OIA > 50% cannot be attributed to random maneuvers. Therefore, to evaluate the influence of ‘interception-independent’ maneuvers during our task, we generated 100 randomized versions of %OIA (%OIArnd) per trial by assigning random user orientations, derived from a uniform distribution to each frame to establish a statistical baseline. In summary, calculating the %OIA served three main purposes: firstly, to identify deviations from %OIArnd; secondly, to quantify the extent to which participants utilized an ’interception strategy’; and thirdly, to compare the values of this index between collision and non-collision trials. An %OIA greater than 50% indicates that participants’ chasing trajectories approached the optimal interception angles described in Eq 1, particularly as collision time approached.
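The per-frame computations of Eqs 1 and 2 can be expressed compactly as below. This is a sketch based on our reading of the geometry: the arcsin branch (the solution associated with a closing IDD) and the absolute-value form of the error ratio are assumptions of this illustration.

```python
import numpy as np

def optimal_interception_angle(v_t, v_u, lam):
    """Eq 1: sin(phi) = (v_t / v_u) * sin(lam). Returns phi in radians,
    or NaN when v_u < v_t * sin(lam), i.e., no collision course exists.
    Of the two geometric solutions, arcsin is taken here as the branch
    associated with a decreasing inter-dot distance (an assumption)."""
    x = v_t * np.sin(lam) / v_u
    if abs(x) > 1.0:
        return np.nan                      # target too fast: no solution
    return np.arcsin(x)

def pct_oia(err_pursuit, err_interception):
    """Eq 2 (as reconstructed here): %OIA = 100 * |e_p| / (|e_p| + |e_i|).
    Values above 50% mean the observed direction lies closer to the
    optimal interception angle than to the optimal pursuit angle."""
    ep, ei = abs(err_pursuit), abs(err_interception)
    if ep + ei == 0.0:
        return 50.0                        # both errors zero: undecided
    return 100.0 * ep / (ep + ei)
```

The randomized baseline (%OIArnd) follows by feeding `pct_oia` with errors computed from uniformly drawn user directions rather than the observed ones.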

Our geometric model computed optimal manual pursuit and interception angles for each frame using the positions of both dots, vU, and vT, and then compared these values with the user’s direction on that frame. Therefore, manual pursuit errors (εpursuit) quantified the angular disparity between the chaser’s current direction and the optimal manual pursuit angle, while interception errors (εinterception) measured the angular deviation between the chaser’s direction and the optimal interception angle (S1C Fig in S1 File). We confirmed the sensitivity of these measures to the interception task. In the upper panels of S1D Fig in S1 File, we illustrate examples of εpursuit and εinterception group frequency distributions captured at two different temporal frames: 750 ms and 50 ms before the actual collision (with vT = 30°/s, FDI = 100 ms, AR = ± 75°, data from participants from E1). The manual pursuit error distributions exhibited directional disparities relative to the optimal interception angle well in advance of the collision, but these differences became smaller just before the collision (not illustrated, Watson-Williams test for equal mean directions; 45 frames ≈ 750 ms before collision: P < 0.001; 3 frames ≈ 50 ms before collision: P > 0.5, 2816 trials, n = 15). To explore the dynamic changes in these error distributions, we gathered the error data at different frame intervals relative to the collision time (or the end of the trial for cases without collision). Additionally, to establish a statistical reference, we generated manual pursuit and interception error distributions based on random directions at the same instances as the empirically observed changes (gray, labeled ‘Random’ in the lower panels from S1C Fig in S1 File). 
The εpursuit and εinterception distributions were different from those obtained with random directions, indicating that they could not be explained by chance (Watson-Williams test, pursuit: 45 frames ≈ 750 ms before collision: P < 0.001; 3 frames ≈ 50 ms before collision: P < 0.0001; interception: 45 frames ≈ 750 ms before collision: P < 0.009; 3 frames ≈ 50 ms before collision: P < 0.001, n = 15). There were similar differences between observed and random directions for all vT and FDI tested (not illustrated, Watson-Williams test, P < 0.05 for all cases).

IDD, %OIA, and pupillary traces are dynamic responses that evolve uniquely depending on participants’ approach towards the target, displaying particular trajectories over time. Therefore, we calculated the area under the curve (AUC, 800 ms before trial end) for IDD, %OIA (using the positive differences between the observed and random OIA traces), and pupillary response traces as proxies to distinguish conditions that reflect participants’ efficiency in intercepting the moving target. We fitted performance data with psychometric curves using the following equation:

f(x) = L / (1 + e^(−γ(x − x50))) + o, (3)

where L is the curve’s maximum value, γ is the curve’s logistic growth rate or slope, x50 is the x value of the sigmoid’s midpoint, and o is the offset of the entire curve [37]. The psychometric fits, depicted in S2C Fig in S1 File, served to identify the critical FDI value at which the %OIA saturation occurred.
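Eq 3 can be fitted with standard nonlinear least squares. A sketch using SciPy on synthetic data; the parameter values, FDI grid, and starting guesses below are illustrative only, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, L, gamma, x50, o):
    """Eq 3: four-parameter logistic with maximum L, slope gamma,
    midpoint x50, and offset o."""
    return L / (1.0 + np.exp(-gamma * (x - x50))) + o

# Fit the curve to synthetic saturation data, then read off the
# midpoint x50 as the critical value where the response saturates.
x = np.linspace(0, 1500, 30)              # e.g., FDI values in ms (illustrative)
y = logistic(x, 20.0, 0.01, 400.0, 50.0)  # noiseless synthetic %OIA-like data
popt, _ = curve_fit(logistic, x, y, p0=[10, 0.005, 500, 45])
```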

Gaze traces were aligned and averaged for each participant and across experiments, with the alignment performed to the beginning or the end of each trial (i.e., event-triggered averages). No filtering was applied to the eye-tracking data. We incorporated two additional visuomotor metrics into our analysis: gaze-to-target distance (GTD), and gaze-to-user distance (GUD). These metrics represent the Euclidean distances from the current gaze position to the target and user positions, respectively. GTD measures the precision of visual tracking by indicating how closely participants’ gaze follows the target. However, GUD is a less informative and more delicate metric to handle: while it can reflect the degree of self-monitoring during the task, it may also covary with IDD simply because the user is closer to or farther from the target.
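The two gaze metrics are plain per-frame Euclidean norms; a minimal sketch (array shapes are our assumption):

```python
import numpy as np

def gaze_distances(gaze_xy, target_xy, user_xy):
    """Per-frame gaze-to-target (GTD) and gaze-to-user (GUD) Euclidean
    distances, computed from (n, 2) arrays of screen positions."""
    gaze, target, user = map(np.asarray, (gaze_xy, target_xy, user_xy))
    gtd = np.linalg.norm(gaze - target, axis=1)   # tracking precision
    gud = np.linalg.norm(gaze - user, axis=1)     # self-monitoring proxy
    return gtd, gud
```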

Statistical analysis

We employed descriptive and inferential statistics of directional data [38]. We transformed the trajectory directions into unit vectors to calculate the average directional angles and computed the mean resultant vector to extract the mean orientation. We utilized the Rayleigh test to assess departures from circular uniformity (for data exhibiting a unimodal distribution). We used the Watson-Williams test to evaluate whether the mean directions of two or more experimental conditions were identical. This test is a circular analog of the one-factor ANOVA and assumes an underlying von Mises distribution with equal concentration parameters [38]. We used a multi-sample non-parametric test to test for equality of means or medians across conditions (circular analog to the Kruskal-Wallis test). We used multivariate analysis of variance (MANOVA) to test whether experimental conditions had the same mean. We also used linear regression models to evaluate whether predictors explained the variability in the dependent variable and employed ANOVA tests to assess the overall significance of the models. We employed the Friedman test as a non-parametric alternative to the repeated-measures ANOVA, to assess the equality of medians across several repeated-measures univariate groups. We illustrate our data as mean values ± standard error of the mean (S.E.M.) and consider statistical significance at a threshold of P ≤ 0.05. We report the test statistic (F-statistic along with the degrees of freedom for the model and residuals for ANOVA or χ2 for Friedman tests), and the P-value for the main effect for analyzed factors. We applied Bonferroni corrections to control for Type I errors following some tests. We indicate the number of participants included in the analysis for each task within the corresponding figure panels.
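Several of these directional statistics reduce to the mean resultant vector. A sketch computing the mean direction, the resultant length R, and a large-sample Rayleigh p-value approximation, following standard circular-statistics texts rather than the authors' exact routines:

```python
import numpy as np

def circular_summary(angles):
    """Mean direction, mean resultant length R, and an approximate
    Rayleigh-test p-value (large-sample formula with first-order
    correction) for a sample of angles in radians."""
    angles = np.asarray(angles, float)
    n = angles.size
    C, S = np.cos(angles).sum(), np.sin(angles).sum()
    R = np.hypot(C, S) / n                       # mean resultant length
    mean_dir = np.arctan2(S, C)                  # mean direction
    z = n * R**2                                 # Rayleigh statistic
    p = np.exp(-z) * (1 + (2 * z - z**2) / (4 * n))
    return mean_dir, R, min(max(p, 0.0), 1.0)    # clamp approximation to [0, 1]
```

Small p-values reject circular uniformity, i.e., the directions are concentrated around the mean.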

Results

Manual pursuit and interception behavior during chasing trials

We employed a simple visuomotor task in which participants used a joystick to interact with a computer-controlled dot moving along two-dimensional trajectories (S1 Fig in S1 File). This virtual setup allowed us to manipulate the speed of the target (vT) and the characteristics of its movement, including the magnitude (AR) and frequency (FDI) of directional changes (see Methods). Using data collected from participants performing this task, we first quantified the percentage of the Optimal Interception Angle (%OIA, [29]), which reflects the degree of interception strategy employed on each frame. %OIA values range from 0% to 100%, where 0% represents pure manual pursuit, 100% represents pure interception, and values above 50% indicate a tendency toward the interception strategy in the observed trajectories. Next, we analyzed the dynamics of angular errors from these distributions as participants approached the target, explicitly investigating whether there was a gradual tendency to produce smaller angular errors (i.e., an increase in precision before collision). We calculated the probability distributions of εpursuit and εinterception over time, referenced to the moment when the trials ended. We illustrate these distributions as colormaps in Fig 1A. As expected, for collision trials, the probability distributions converged towards a mean zero error as the user approached the target (a 0 error denotes optimal pursuit or interception angle, respectively). However, this trend was strongly reduced in trials without collision (right panels in Fig 1A). Therefore, the %OIA progressively increased as effective collision approached. In contrast, %OIA did not exhibit such an increase in trials without collision (lower-right panel in Fig 1A), or in trials with simulated randomized directions (%OIArnd, gray traces in the lower panels of Fig 1A).

Fig 1. Virtual task to study chasing behavior.

(A) Probability distributions of manual pursuit and interception errors for trials with (left column) and without (right column) collision. Traces below show corresponding % Optimal interception angle (%OIA) traces. Statistical tests below the panel are used to detect the onset of the %OIA trace. (B) Collision-triggered group averages of user speed (vU), inter-dot distance (IDD), and %OIA as a function of vT (depicted in the color bar at the bottom). Panels arranged from lowest to highest FDI suggest that participants initiated the interception strategy sooner with higher FDI values. Panels to the right depict the average %OIA from trials with no collision.

https://doi.org/10.1371/journal.pone.0308642.g001

We utilized the %OIA to distinguish between manual pursuit and interception behaviors, and to detect frames in which participants employed the interception strategy. A %OIA < 50% suggests a preference for a manual pursuit strategy, as participants directed the joystick toward the moving target. Conversely, a %OIA > 50% indicates a tendency toward an interception trajectory, with participants aiming the joystick at the expected future position of the moving target. As before, the group-averaged %OIA traces revealed that participants progressively increased their use of the interception strategy (i.e., %OIA > 50%) as they neared the target, especially toward the end of the chasing trial (lower-left panel in Fig 1A). We employed two-sample Kolmogorov-Smirnov (KS) tests across temporal frames to compare: i) the εpursuit vs. εinterception distributions, and ii) the %OIA vs. %OIArnd distributions. Adopting a 5% significance level, we defined the onset of the interception strategy as the first of two consecutive frames in which both tests showed significant differences. This approach allowed us to detect the moment when participants started exhibiting interception behavior (for example, the black arrow in the lower-left panel of Fig 1A is at 341.9 ms before collision). By differentiating pursuit-dominated from interception-dominated chasing frames, this method constitutes a valuable tool for studying the adoption of interception strategies.
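The onset-detection procedure can be sketched as follows. The logic (per-frame two-sample KS tests, onset at the first of two consecutive frames where both tests reject at the 5% level) follows the text; the frame duration, array layout, and names are assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def interception_onset(eps_p, eps_i, oia, oia_rnd, alpha=0.05, frame_ms=16.6):
    """Detect the onset of the interception strategy (sketch; frame_ms assumed).

    All inputs are (n_trials, n_frames) arrays, frames ordered in time and
    ending at the collision. Onset = first of two CONSECUTIVE frames where
    BOTH tests (eps_pursuit vs eps_interception, and %OIA vs %OIA_rnd)
    reject at the alpha level. Returns the onset in ms before collision,
    or None if no onset is detected.
    """
    n_frames = eps_p.shape[1]
    sig = np.zeros(n_frames, dtype=bool)
    for f in range(n_frames):
        p1 = ks_2samp(eps_p[:, f], eps_i[:, f]).pvalue
        p2 = ks_2samp(oia[:, f], oia_rnd[:, f]).pvalue
        sig[f] = (p1 < alpha) and (p2 < alpha)
    for f in range(n_frames - 1):
        if sig[f] and sig[f + 1]:
            return (n_frames - f) * frame_ms  # ms before collision
    return None
```

Requiring two consecutive significant frames guards against isolated false positives at the chosen 5% level.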

Next, searching for conditions that could improve interception, we investigated the influence of task parameters, such as vT and FDI, on the peak amplitude of the %OIA trace. We extracted data from participants solving the task at different FDI values (100 ms, 200 ms, or 500 ms), while vT varied across a range of speeds (from 10°/s to 60°/s) in a randomly permuted fashion across trials (data from E1 and E2; refer to Table 1 for additional details regarding the experiments). We focused on examining the effects (and evolution) of three dependent variables: user speed (vU), inter-dot distance (IDD), and %OIA, during the final ~800 ms leading up to the collision. The vU traces were proportional to the vT values employed and were relatively stable throughout the analyzed time window leading up to the collision (first column in Fig 1B, [29]). The IDD traces were also scaled by vT, but decreased as the collision time approached and the participant neared the target (panels in the second column). Lastly, the %OIA traces showed a ~20% increase just before the collision (comparison between the same temporal frames as in S1D Fig in S1 File; FDI = 100, 200, 500 ms, change in %OIA from 50.94% ± 0.13% to 61.23% ± 0.18%; Kruskal-Wallis test, with Mann-Whitney post hoc test, F2,2241 = 899.03, P < 0.001, panels in the third column; participants from E1 and E2), but such an increase was absent in trials where no collision occurred (change in %OIA from 50.06% ± 0.14% to 49.97% ± 0.14%, F2,2133 = 0.9759, P = 0.32; panels in the fourth column in Fig 1B). Furthermore, the variable onsets of the interception strategy (~539 ms, ~532 ms, and ~786 ms before collision, indicated by black arrows in the third-column panels of Fig 1B) suggest that participants initiated the interception strategy earlier with higher FDI values, in agreement with the target’s more predictable motion.

To examine %OIA in relation to space instead of time, we sorted the collision-triggered %OIA based on the distance to the target (IDD) within a time window of 25 frames (≈ 416 ms) before the collision. This approach ensured that parameters were arranged and averaged relative to IDD within the pre-collision time window. Once again, as participants neared the target, the sorted %OIA increased by ~10% (F2,1107 = 351.80, P < 0.001), but this increase was absent in trials without collision (F2,1056 = 0.05, P = 0.80; not illustrated). This enabled us to corroborate the observed increments in %OIA both temporally and spatially as participants approached the collision point. In our Supplementary Material, readers can access S1-S3 Tables in S1 File containing probabilities of successful interception across all examined experiments. Additionally, a dedicated section on ‘Task parameters that regulate interception behavior’ provides detailed insights into how various task factors influenced %OIA in both collision and no-collision trials.
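The spatial sorting described above can be sketched as follows, assuming a 25-frame pre-collision window and equally populated distance bins; the function name and binning details are illustrative.

```python
import numpy as np

def oia_vs_distance(idd, oia, n_bins=10, window=25):
    """Average %OIA as a function of distance to the target (sketch).

    idd, oia -- (n_trials, n_frames) arrays; only the last `window` frames
    before collision are used. Frames are pooled, sorted by IDD, and
    averaged within equally populated distance bins. Returns
    (bin_mean_idd, bin_mean_oia), ordered from far to near.
    """
    d = idd[:, -window:].ravel()
    o = oia[:, -window:].ravel()
    order = np.argsort(d)
    d, o = d[order], o[order]
    # split the distance-sorted frames into n_bins equally populated bins
    d_bins = [chunk.mean() for chunk in np.array_split(d, n_bins)]
    o_bins = [chunk.mean() for chunk in np.array_split(o, n_bins)]
    return np.array(d_bins)[::-1], np.array(o_bins)[::-1]
```

Averaging against distance rather than time removes the confound that faster targets traverse the same distance in fewer frames.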

Gaze predictors of efficient interception

Although prior studies have suggested a connection between interception strategies and smooth pursuit eye movements [19], the exact relationship between gaze behavior and these strategies remains unclear. We used an eye-tracking system to investigate the impact of vT on oculomotor responses during chasing trials (data from E2, FDI = 500 ms; calibration routines exemplified in S4 Fig in S1 File). As additional visuomotor metrics, we assessed the ‘gaze-speed’ (vG), the ‘gaze-target distance’ (GTD), and the ‘gaze-user distance’ (GUD), the latter two representing the Euclidean distances between the gaze position and the target’s or the user’s positions, respectively. These metrics allowed us to quantify how closely the gaze tracked the target and the user. Fig 2A illustrates how vT exerted control over motor and visuomotor performance, while the collision outcome also determined the observed behavioral responses. Regardless of the collision outcome, all visuomotor metrics increased with vT, indicating a consistent effect of target velocity on the participants’ visuomotor performance (Kruskal-Wallis test, with Mann-Whitney post hoc test, vGcollision: F1,4631 = 147.34, P < 0.0001; vGno-collision: F1,4631 = 46.85, P < 0.002; GTDcollision: F1,4631 = 587.08, P < 0.0001; GTDno-collision: F1,4631 = 649.81, P < 0.002; GUDcollision: F1,4631 = 1565.1, P < 0.0001; GUDno-collision: F1,4631 = 1330.50, P < 0.002, n = 33; Fig 2B).
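These three metrics are straightforward to compute from the recorded positions; the sketch below assumes screen coordinates in degrees and an illustrative frame duration.

```python
import numpy as np

def visuomotor_metrics(gaze_xy, target_xy, user_xy, frame_ms=16.6):
    """Per-frame oculomotor metrics (sketch; frame duration assumed).

    gaze_xy, target_xy, user_xy -- (n_frames, 2) screen positions in degrees.
    Returns gaze speed vG (deg/s, one value per frame transition),
    gaze-target distance GTD, and gaze-user distance GUD (Euclidean, deg).
    """
    step = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1)  # deg per frame
    v_g = step / (frame_ms / 1000.0)                         # deg/s
    gtd = np.linalg.norm(gaze_xy - target_xy, axis=1)
    gud = np.linalg.norm(gaze_xy - user_xy, axis=1)
    return v_g, gtd, gud
```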

Fig 2. Oculomotor predictors of efficient interception.

(A) Group averaged traces for user speed (vU), inter-dot distance (IDD), %OIA, gaze speed (vG), ‘gaze-target distance’ (GTD), and ‘gaze-user distance’ (GUD) as a function of vT. Note the differences between trials that lead to collisions and no collisions (labels on top of the panels). (B) Group averaged parameters extracted from the eye-tracking system. Increased vT leads to increased gaze speeds, GTD, and GUD.

https://doi.org/10.1371/journal.pone.0308642.g002

Next, we searched for changes in oculomotor responses associated with spatial occlusion experiments. We reasoned that masking should affect visual tracking and, consequently, interception. Notably, we found that masking the user had minimal impact on the GTD (collision trials; data from E3 illustrated in Fig 3A). However, masking the target impaired the participants’ ability to approach the target efficiently, and the GTD increased with the target’s masking distance (Friedman test with two factors: masking distance and masking condition [i.e., masked user or target], with Bonferroni correction, χ2(2) = 813.75, P < 0.001 for all cases, n = 20; Fig 3B). As expected, participants showed reduced accuracy in approaching the target in trials without collisions, as evidenced by the IDD traces. However, even though the IDD did not decrease over time, the GTD traces consistently displayed a negative slope, indicating an attempt to intercept the target. This finding suggests that participants continuously adjusted their gaze behavior to aim at the target whenever it was visible, with varying effectiveness.

Fig 3. Visuomotor performance during spatial masking experiments.

Panels (arranged as in previous figure) show group averaged traces for user speed (vU), inter-dot distance (IDD), %OIA, gaze speed (vG), ‘gaze-target distance’ (GTD), and ‘gaze-user distance’ (GUD) as a function of vT for experiments where the user (A) or the target (B) were masked. Note that masking the user does not influence the GTD, whereas masking the target does impact the GTD. Gaze speed is not affected by masking distance. Experiments were performed at constant vT = 30°/s, and FDI = 500 ms.

https://doi.org/10.1371/journal.pone.0308642.g003

Pupillary responses linked to efficient interception

While joystick and eye movement data offer valuable information about an individual’s actions and gaze orientation, pupil data serve as an additional window into their cognitive state. For example, pupil dilation data can reveal the subjective difficulty of cognitive tasks, their intensity, and the underlying deployment of visual attention [25, 39]. Moreover, researchers have suggested that uncertainty processing increases cognitive effort, leading to larger pupillary diameters [39–41]. We investigated the influence of vT and collision outcome on pupillary responses from participants solving our task (data from E2). We observed that the pupil contracted during the first ~50 frames (≈800 ms) of the trial and dilated after that, showing similar patterns for collision and no-collision trials (Fig 4A). However, dilation amplitudes differed between collision and no-collision trials, consistent with a different build-up of decision uncertainty. More specifically, pupil responses were smaller for trials resulting in collisions (average diameter: 63.44% ± 0.33%, 5820 trials) but larger for trials without collisions (69.82% ± 0.38%, 4632 trials). Notably, these differences in pupil response diameters between collision and no-collision trials emerged ~35 frames (≈730 ms) after the beginning of the trial (Friedman test with vT and collision outcome as factors, and with a Bonferroni correction applied, χ2 (2) = 1574.8, P < 0.001, n = 33; right panel in Fig 4B).
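A sketch of how pupil traces can be aligned to trial onset and normalized for trial averaging. The paper reports diameters as percentages; the session-wide min-max normalization used here is an assumption for illustration, as are the names.

```python
import numpy as np

def normalized_pupil(trials, n_frames=100):
    """Normalize and align pupil traces to trial start (sketch; the exact
    normalization used in the paper may differ).

    trials -- list of 1-D pupil-diameter arrays (arbitrary units).
    Each trace is clipped to `n_frames` from trial onset and expressed as a
    percentage of the session-wide diameter range. Returns the
    trial-averaged trace in %.
    """
    clipped = [t[:n_frames] for t in trials if len(t) >= n_frames]
    stack = np.vstack(clipped).astype(float)
    lo, hi = stack.min(), stack.max()
    norm = 100.0 * (stack - lo) / (hi - lo)  # session min-max, in %
    return norm.mean(axis=0)
```

Traces aligned to trial end (as in Fig 4C) follow the same recipe with `t[-n_frames:]` in place of `t[:n_frames]`.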

Fig 4. Pupillary responses predict efficient interception.

(A) Normalized pupillary responses, aligned to the start of chasing trials, are illustrated for collision (left) and no-collision (right) trials, distinguished by various vT values indicated by different colors. (B) Discernible differences between collision (black line) and no-collision (dotted line) trials became apparent around 300 ms after the start of the trial. (C) Normalized pupillary responses, aligned to the end of chasing trials, illustrated for collision (left) and no-collision (right) trials. Lower panels show the area under the curve of pupillary responses as a function of vT for collision (left) and no-collision (right) trials. Gray dots represent the averaged values from collision trials (for comparison on the same y-scale). (D) Normalized pupillary responses against AUC %OIA (top) and AUC IDD (bottom) for collision (left) and no-collision (right) trials.

https://doi.org/10.1371/journal.pone.0308642.g004

Subsequently, we investigated whether the pupillary differences linked to collision outcome were also present in the averaged traces aligned to the moment of collision (or to the end of the trial for cases with no collision). Pupillary responses increased as participants approached the moment of collision, whereas trials with no collision exhibited already elevated pupillary diameters that remained stable toward the end of the trial (Fig 4C). Interestingly, these pupillary responses were positively related to vT for trials with collision but inversely related to vT for trials without collision (slope of linear regression model, AUC PUP vs. vT; collision: ANOVA test, F1,4 = 18.94, m = 0.06, P = 0.01; no collision: F1,4 = 42.69, m = -0.14, P = 0.002; lower panels in Fig 4C). Furthermore, in trials with successful collisions, the AUC of pupillary responses (AUC PUP) increased with the AUC %OIA (slope of linear regression model, F1,8 = 11.41, m = 0.18, P = 0.009), but decreased with the AUC of IDD (slope of linear regression model, F1,8 = 15.84, m = -5.29⋅10−4, P = 0.004). For trials without collision, AUC PUP decreased with AUC %OIA (slope of linear regression model, F1,8 = 12.85, m = -0.28, P = 0.008) but increased with AUC IDD (slope of linear regression model, m = 5.05⋅10−4, F1,8 = 28.28, P = 0.001; Fig 4D). Therefore, there was a positive correlation between pupil dilation and vT/%OIA for collision trials but a negative correlation for trials without collision.
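The AUC and slope statistics used in these comparisons can be computed with two small helpers (trapezoidal area and least-squares slope); the implementation details are ours, not the paper's.

```python
import numpy as np

def auc(trace, dt=1.0):
    """Area under a trace (trapezoidal rule, implemented directly)."""
    t = np.asarray(trace, dtype=float)
    return dt * (t[:-1] + t[1:]).sum() / 2.0

def regression_slope(x, y):
    """Least-squares slope of y on x (sign tests the direction of the
    relationship, e.g., AUC PUP vs. AUC %OIA)."""
    return np.polyfit(np.asarray(x, float), np.asarray(y, float), 1)[0]
```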

Using data from spatial occlusion experiments, we next analyzed the impact of the masking condition, masking distance, and collision outcome on pupillary responses (data from E3, vT = 30°/s, FDI = 500 ms; Fig 5). Pupillary responses aligned to the beginning of the trials exhibited a pattern similar to that previously described: an initial contraction followed by subsequent dilation for collision (average diameter: 47.26% ± 0.36%, 4720 trials) and no-collision (51.65% ± 0.43%, 3765 trials) trials (Fig 5A). When the target was masked, pupil responses were smaller for trials resulting in collisions, while they were larger for trials with no collisions. However, when masking the user, the trend reversed, with greater pupillary responses for collision than for no-collision trials. When analyzing traces aligned to the moment of collision, we again observed a similar trend as described earlier: pupillary responses increased as participants approached the collision moment, while trials without collisions consistently displayed elevated pupillary responses. Notably, when masking the user or the target, pupillary responses were reduced by increasing masking distance (Kruskal-Wallis test, with Mann-Whitney post hoc test, F3,115697 = 2.392⋅104, P < 0.001, n = 20; Fig 5B). Therefore, in experiments with variable vT, pupil responses were smaller for collision trials and larger for no-collision trials (Fig 4C). In contrast, in spatial occlusion experiments, pupil responses were larger for collision trials compared to no-collision trials, and they were reduced by increasing masking distance (Fig 5B). These divergent pupil responses across collision outcomes and masking conditions could reflect the different cognitive demands and processing involved in each experimental condition.

Fig 5. Masking distance decreases pupillary responses.

(A) Normalized pupillary responses, aligned to the start of chasing trials, when masking the user (left) or the target (right). (B) Normalized pupillary responses, aligned to the end of chasing trials, when masking the user (left) or the target (right).

https://doi.org/10.1371/journal.pone.0308642.g005

To investigate the utilization of visual cues and gaze behavior during interception, we used the AUC %OIA of individual traces to group trials into ranges of interception proficiency, enabling us to detect potential patterns associated with successful interception. Using data from 13,449 collision trials (from E2), we established these ranges and categorized trials into five subgroups based on quantiles from the overall %OIA distribution (upper left panel in Fig 6). We then averaged all attributes from those traces for each participant and calculated the group-averaged traces for each attribute and each quantile category. By labeling the data from all these trials, we explored and compared the average visuomotor collision-triggered responses across categories. We employed a gradual color transition to represent each range, illustrating the different patterns of interception behavior among the various AUC %OIA categories (Fig 6). Notably, there were detectable differences across groups in %OIA, gaze speed, and GTD (Kruskal-Wallis test, with Mann-Whitney post hoc test, F1,709 ≥ 12.76, P ≤ 0.01 for all cases), but not in pupillary responses or GUD (F1,709 ≤ 0.38, P ≥ 0.06, n = 33). These results reveal important interdependencies among %OIA, gaze speed, and GTD, suggesting that various aspects of interception skills are closely linked and could contribute to the overall performance in the task.
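The quantile-based grouping can be sketched as follows, assuming five equally populated categories over the AUC %OIA distribution; the function name is illustrative.

```python
import numpy as np

def quantile_groups(auc_oia, n_groups=5):
    """Assign each trial to one of `n_groups` quantile-based proficiency
    categories from its AUC %OIA (sketch of the grouping in the text).

    Returns integer labels 0..n_groups-1 (0 = lowest interception proficiency).
    """
    x = np.asarray(auc_oia, dtype=float)
    # interior quantile edges, e.g. 20/40/60/80th percentiles for 5 groups
    edges = np.quantile(x, np.linspace(0, 1, n_groups + 1)[1:-1])
    return np.digitize(x, edges)
```

Each label can then be used to tag trials before computing the group-averaged collision-triggered traces per category.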

Fig 6. Distinct interception behavior patterns among graded interceptors.

The distribution of AUC %OIA cases is divided into five subgroups based on ascending quantile ranges, color-coded in the accompanying colorbar. The gray tick marks above the distribution denote the start and end points for each category within each group. Each subgroup exhibits a unique profile in %OIA, gaze speed (vG), normalized pupillary diameter, ‘gaze-target distance’ (GTD), and ‘gaze-user distance’ (GUD). Detectable variations were observed across groups in %OIA, vG, and GTD.

https://doi.org/10.1371/journal.pone.0308642.g006

Discussion

Chasing moving objects can involve a combination of pursuit and interception strategies. In joystick-based tasks, manual pursuit entails aiming directly at the target’s current position [11], while interception requires predicting the future location of the target and adjusting movements accordingly to reach the interception point [14]. Several elements, including target proximity, predictability of the target’s movement, speed, and external conditions, influence the choice between manual pursuit and interception. For instance, pursuing the target can be effective, especially when the target moves slowly and maintains a constant direction. However, in conditions with more unpredictability, a combination of predictive and prospective control can lead to more successful outcomes [42, 43]. Predictive control requires knowing the physical laws governing the target’s motion and the ability to make accurate predictions about future events [44, 45]. This approach involves estimating the contact point and time of the collision, enabling individuals to anticipate the trajectory of the target and execute their actions accordingly. However, a purely predictive control strategy also implies that adjustments to movement should not occur late in the execution, as the plan is well-established in advance [12, 14]. In contrast, prospective control does not explicitly predict the final collision point; instead, it relies on information about the future interception under the current conditions. Thus, prospective strategies enable continuous movement adjustments to achieve successful interception [9–11, 14, 46].

By calculating the optimal interception angle (%OIA, see Methods), we identified a time window within chasing trials when participants employed an interception strategy. To determine the onset and duration of interception, we developed a method that tracked the %OIA over time based on participants’ joystick behavior. Indeed, as participants approached the target, their %OIA increased in collision trials, especially with smaller target speed (vT) values, while this increase was absent in trials without collision. We hypothesized that task parameters like vT and the fixed-direction interval (FDI, see Methods) influenced gaze behavior and interceptive performance. Our findings revealed a U-shaped relationship between the area under the curve (AUC) of the %OIA and vT, indicating that the interception strategy was most effective at intermediate target speeds. Furthermore, increasing the FDI led to higher AUC %OIA values. Longer FDIs likely gave participants more exposure to the target’s movement direction, facilitating better adaptation to its motion dynamics. Such extended exposure probably allowed participants to adjust their chasing behavior strategically, resulting in an enhanced ability to predict the optimal interception angle and, therefore, an increased %OIA. In our spatial occlusion experiments, we masked the user or the target during the interception phase. The absence of these visual cues impeded participants’ ability to assess and adjust their chasing orientation angles, leading to a relative decrease in %OIA. These findings highlight a robust bidirectional influence of task parameters on the interception strategy, contingent on the amount of available information. We have shown the relationship between successful target interception and experimental parameters, such as vT, FDI, and masking distance, in a previous publication [29]. These findings contribute to the interpretation of comparisons between collision and non-collision trials.

We reasoned that ocular movements should precede and inform joystick control during the interception task. We developed a method to detect the onset and duration of significant changes in %OIA and combined it with eye-tracking recordings to investigate the visuomotor predictors of interception. We analyzed participants’ eye movements and explored the interrelationship between interception and visuomotor performance. As vT increased, we observed proportional effects on gaze speed (vG), gaze-to-target distance (GTD), and gaze-to-user distance (GUD, see Methods). This increase in vT forced participants to adjust their vG to cover larger distances between the target and the user, thereby enhancing their ability to effectively intercept the moving target. Additionally, masking the user mainly increased the GUD, whereas masking the target resulted in a rise in GTD. These results confirm that eye movements were closely linked to interception performance, further suggesting that interception strategies were optimized to match (and constrained by) the proficiency of the ocular pursuit system [19]. Humans exhibit anticipatory eye movements toward the future positions of moving objects, even without intending to intercept them. As eyes typically lead hand movements, it is reasonable that GTD was smaller than the inter-dot distance (IDD, see Methods) before interception [15–18, 22]. However, larger GTD values do not necessarily indicate poorer performance. On the contrary, they may reflect predictions that extend further into the future. Indeed, in our case, the larger GTD values suggest stronger predictions for collision trials compared to no-collision trials. Additional metrics, including initial saccade latency, the precision of predictive saccades, pursuit gain, and tracking errors, among others, have also served as indirect indicators of prediction processes [47–50]. Even in the simplest manual pursuit tasks, predictive processes are likely involved due to sensorimotor processing delays, necessitating prediction for successful interception [20, 51].

When analyzed by percentiles, the strong relationship between %OIA and gaze-target distances suggests a potential influence of eye movements on the interception strategy (%OIA). These findings reveal the integrated role of oculomotor behavior in visual perception and motor control during our interception task. Some critics argue that the computational demands linked to internal representations for planning and executing movements could limit flexibility across different tasks. To address this, continuous online monitoring of movement properties, such as trajectory and velocity, becomes essential for future control planning [14]. From this perspective, the interception task is transformed into a dynamic process, with the timing and location of interception gradually emerging as the movement unfolds [52]. The main benefit of this approach is that it ‘liberates’ performers from relying solely on single cues and allows them to adapt and respond to unexpected changes. Numerous studies support the notion that interception involves real-time decision-making, which can lead to increased errors under challenging conditions [9, 10, 14, 46, 53].

The average human pupil size is around 3 mm, but it can vary between 2 and 8 mm when exposed to different light intensities [54]. In fixed luminance conditions, however, pupillary changes are typically much more modest than those observed in response to light, rarely exceeding 0.5 mm. Believed to reflect attentional processes, these changes in pupil diameter occur binocularly and without conscious control [23]. Researchers have linked pupillary diameter changes to surprise processing, uncertainty [25], and decision bias-related variables [24]. A prevailing view suggests that the brain communicates uncertainty signals throughout neural circuits that span the entire brain via low-level arousal systems [55, 56]. Consequently, alterations in the global brain state caused by uncertainty may impact adjustments in decision-making. Interestingly, pupil size is coupled (at least to some degree) with locus coeruleus (LC) activity, which serves as the central hub of the norepinephrine system [24, 55]. Studies have suggested that micro-stimulation of the LC or the superior and inferior colliculi induces pupil dilations, highlighting pupil diameter’s role as an indirect measure of brain activity and arousal [57–59] (but see [24]). The experimental results from these previous studies illustrate the multifaceted nature of pupillary responses in capturing both cognitive and affective states.

Our results revealed that pupillary responses exhibited distinct patterns depending on the collision outcome. The pupil initially constricted and then dilated in all chasing trials. Notably, pupil diameters were smaller for collision trials and larger for no-collision trials, supporting the notion that, in our task, pupillary diameter may be inversely related to decision uncertainty. More specifically, the larger pupil size observed in no-collision trials may indicate that participants “gave up” or disengaged from the current trial as they became increasingly confident that they had failed to intercept the target. We also found that the extent of pupil dilation was positively correlated with vT and the AUC %OIA and negatively correlated with the AUC IDD for collision trials. Pupil dilation showed a negative correlation with vT and AUC %OIA for trials with no collision but a positive correlation with AUC IDD. Interestingly, our spatial occlusion experiments revealed smaller pupillary responses in trials with larger masking distances. These findings suggest that uncertainty about the target’s and user’s location (due to increased masking distances) was associated with reduced pupillary responses during the interception task. Therefore, pupil dilation in our task was associated with task difficulty, providing insights into the dynamic relationship among visual attention, cognitive effort, and interception. Previous works have reported that pupil diameter is related to both response accuracy and the probability of correct choices [60]. In other tasks, early pupil dilation is larger when the value of the chosen option is more uncertain, while late pupil constriction is greater when the outcome violates the expected value, indicating that pupil responses could track surprise and feedback valence [61].

Various limitations of our study should be highlighted. First, the eye tracker’s low sampling frequency limited our ability to characterize saccades, potentially impacting the interpretation of dynamic eye movements in our study. Another critical factor arises from the implicit human visuomotor delay, which typically ranges from 100–200 ms for continuous target motion and 200–300 ms for abrupt changes. Therefore, our experimental design, using FDI values of 100 and 200 ms (data from E1), may not adequately capture stimulus-induced changes in movement direction. Regarding pupillary responses, one possibility is that the observed differences in pupillary diameter could be attributed to variations in luminance. Participants who efficiently kept the target (black dot) closer to the fovea might have experienced reduced net luminance reaching the eye compared to those who foveated more on the background. However, this explanation seems unlikely. Given that pursuit gain is higher at slower temporal frequencies [62], we would expect pursuit efficiency to be negatively related to vT. Yet, we observed greater pupillary diameters with higher vT. Another concern is that the difference in trial durations between collision and no-collision trials could introduce a potentially confounding factor when comparing pupillary traces. This highlights the need for careful experimental design to ensure a balanced comparison. Therefore, it is essential to account for these and other confounding variables that may impact pupillary changes in our interception task. Future experiments should better control these variables to enable a more thorough exploration of the relationship between pupil diameter and uncertainty [35]. Finally, we acknowledge our study’s limitation in external validity and endorse the integration of more ecologically valid tasks, such as those created in virtual reality environments [63–66].

Various algorithms can be employed to implement successful interception behavior [12, 14, 66, 67]. Studying the visuomotor predictors of interception is crucial for understanding how the human brain coordinates eye and hand movements for precise interception. This work offered analytic tools to investigate interception behaviors and their underlying mechanisms. Future studies could explore the impact of perceptual biases, assess gaze training effects, and explore cognitive mechanisms involved in interception strategies, advancing our understanding of human adaptability in dynamic environments. The relationship between %OIA and gaze behavior could inform training protocols and assistive technologies, benefiting fields like sports science, neuro-motor rehabilitation, and robotics. The study of sensorimotor integration could also have broader implications for cognitive abilities and academic performance [68].

Our study explored visuomotor predictors of interception, revealing how the target’s speed and predictability influenced manual pursuit and interception strategies. Through meticulous motor and visuomotor performance analysis, we found that participants consistently adapted their gaze behavior in collision trials, demonstrating their capacity to adjust to changes in the target’s direction throughout chasing trials. Pupillary responses were also influenced by task conditions and collision outcomes, suggesting different levels of uncertainty processing. Additionally, the utilization of visual information varied, resulting in distinct patterns of interception performance. Our research contributes to understanding effective interception strategies and the intricate relationship between eye movements and interception performance.

Acknowledgments

We thank Dr. Humberto Salgado (Universidad Autónoma de Yucatán, México) for help in implementing statistical tests. Thanks to I. Sevilla, and J. Verdugo for helping to conduct some experiments. Thanks to I. Zepeda and F. Cabrera for sharing lab space during the initial phase of the experiments. We extend our gratitude to the administrative staff at our campuses (CUCBA, and CUCIÉNEGA), and the ‘Instituto de Neurociencias’ for their invaluable support. We appreciate the time the reviewers dedicated to helping us improve our manuscript.

References

  1. Fialho JVAP, Tresilian JR. Intercepting accelerated moving targets: effects of practice on movement performance. Exp Brain Res. 2017;235: 1257–1268. pmid:28197673
  2. Zago M, Bosco G, Maffei V, Iosa M, Ivanenko YP, Lacquaniti F. Internal models of target motion: expected dynamics overrides measured kinematics in timing manual interceptions. J Neurophysiol. 2004;91: 1620–1634. pmid:14627663
  3. Grafton ST, Hamilton AF de C. Evidence for a distributed hierarchy of action representation in the brain. Hum Mov Sci. 2007;26: 590–616. pmid:17706312
  4. Pruszynski JA, Scott SH. Optimal feedback control and the long-latency stretch response. Exp Brain Res. 2012;218: 341–359. pmid:22370742
  5. Wolpert DM, Landy MS. Motor control is decision-making. Current Opinion in Neurobiology. 2012;22: 996–1003. pmid:22647641
  6. Arbib MA. Visuomotor Coordination: From Neural Nets to Schema Theory. In: Selfridge OG, Rissland EL, Arbib MA, editors. Boston, MA: Springer US; 1984. pp. 207–225.
  7. Fajen BR, Warren WH. Behavioral dynamics of intercepting a moving target. Exp Brain Res. 2007;180: 303–319. pmid:17273872
  8. Liang C, Wang W, Liu Z, Lai C, Zhou B. Learning to Guide: Guidance Law Based on Deep Meta-Learning and Model Predictive Path Integral Control. IEEE Access. 2019;7: 47353–47365.
  9. Bastin J, Craig C, Montagne G. Prospective strategies underlie the control of interceptive actions. Human Movement Science. 2006;25: 718–732. pmid:16730090
  10. Ledouit S, Casanova R, Zaal FTJM, Bootsma RJ. Prospective control in catching: the persistent Angle-of-approach effect in lateral interception. PLoS One. 2013;8: e80827. pmid:24278324
  11. Varennes L, Krapp HG, Viollet S. Two pursuit strategies for a single sensorimotor control task in blowfly. Sci Rep. 2020;10: 20762. pmid:33247176
  12. Chai R, Savvaris A, Chai S. Integrated missile guidance and control using optimization-based predictive control. Nonlinear Dyn. 2019;96: 997–1015.
  13. Delle Monache S, Paolocci G, Scalici F, Conti A, Lacquaniti F, Indovina I, et al. Interception of vertically approaching objects: temporal recruitment of the internal model of gravity and contribution of optical information. Front Physiol. 2023;14: 1266332. pmid:38046950
  14. Panchuk D, Vickers JN. Using spatial occlusion to explore the control strategies used in rapid interceptive actions: Predictive or prospective control? J Sports Sci. 2009;27: 1249–1260. pmid:20213920
  15. Brenner E, Smeets JBJ. Continuous visual control of interception. Hum Mov Sci. 2011;30: 475–494. pmid:21353717
  16. Heeman J, Van der Stigchel S, Munoz DP, Theeuwes J. Discriminating between anticipatory and visually triggered saccades: measuring minimal visual saccadic response time using luminance. J Neurophysiol. 2019;121: 2101–2111. pmid:30785808
  17. Pfeuffer CU, Aufschnaiter S, Thomaschke R, Kiesel A. Only time will tell the future: Anticipatory saccades reveal the temporal dynamics of time-based location and task expectancy. J Exp Psychol Hum Percept Perform. 2020;46: 1183–1200. pmid:32614216
  18. Pfeuffer CU, Kiesel A, Huestegge L. A look into the future: Spontaneous anticipatory saccades reflect processes of anticipatory action control. J Exp Psychol Gen. 2016;145: 1530–1547. pmid:27797559
  19. Fooken J, Kreyenmeier P, Spering M. The role of eye movements in manual interception: A mini-review. Vision Res. 2021;183: 81–90. pmid:33743442
  20. 20. Brenner E, Smeets JBJ. Fast Responses of the Human Hand to Changes in Target Position. Journal of Motor Behavior. 1997;29: 297–310. pmid:12453772
  21. 21. Goettker A, Brenner E, Gegenfurtner KR, de la Malla C. Corrective saccades influence velocity judgments and interception. Sci Rep. 2019;9: 5395. pmid:30931972
  22. 22. Malla C de la, Smeets JBJ, Brenner E. Potential Systematic Interception Errors are Avoided When Tracking the Target with One’s Eyes. Sci Rep. 2017;7: 10793. pmid:28883471
  23. 23. Laeng B, Sulutvedt U. The eye pupil adjusts to imaginary light. Psychol Sci. 2014;25: 188–197. pmid:24285432
  24. 24. Megemont M, McBurney-Lin J, Yang H. Pupil diameter is not an accurate real-time readout of locus coeruleus activity. Elife. 2022;11: e70510. pmid:35107419
  25. 25. Yokoi A, Weiler J. Pupil diameter tracked during motor adaptation in humans. J Neurophysiol. 2022;128: 1224–1243. pmid:36197019
  26. 26. Strauch C, Barthelmaes M, Altgassen E, Huckauf A. Pupil Dilation Fulfills the Requirements for Dynamic Difficulty Adjustment in Gaming on the Example of Pong. ACM Symposium on Eye Tracking Research and Applications. New York, NY, USA: Association for Computing Machinery; 2020. pp. 1–9.
  27. 27. Ebitz RB, Moore T. Both a Gauge and a Filter: Cognitive Modulations of Pupil Size. Front Neurol. 2018;9: 1190. pmid:30723454
  28. 28. Treviño M, Castiello S, Arias-Carrión O, D la Torre-Valdovinos B, C y León RM. Isomorphic decisional biases across perceptual tasks. PLOS ONE. 2021;16: e0245890. pmid:33481948
  29. 29. Treviño M, Medina-Coss y León R, Támez S, Beltrán-Navarro B, Verdugo J. Directional uncertainty in chase and escape dynamics. Journal of Experimental Psychology: General. 2024;153: 418–434. pmid:37956078
  30. 30. Treviño M, Medina-Coss y León R, Haro B. Adaptive Choice Biases in Mice and Humans. Front Behav Neurosci. 2020;14. pmid:32760255
  31. 31. Brainard DH. The Psychophysics Toolbox. Spat Vis. 1997;10: 433–436. pmid:9176952
  32. 32. Pelli DG. The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat Vis. 1997;10: 437–442. pmid:9176953
  33. 33. Treviño M, Frey S, Köhr G. Alpha-1 adrenergic receptors gate rapid orientation-specific reduction in visual discrimination. Cereb Cortex. 2012;22: 2529–2541. pmid:22120418
  34. 34. Treviño M, Oviedo T, Jendritza P, Li S-B, Köhr G, De Marco RJ. Controlled variations in stimulus similarity during learning determine visual discrimination capacity in freely moving mice. Sci Rep. 2013;3: 1048. pmid:23308341
  35. 35. Fink L, Simola J, Tavano A, Lange E, Wallot S, Laeng B. From pre-processing to advanced dynamic modeling of pupil data. Behav Res. 2024;56: 1376–1412. pmid:37351785
  36. 36. Ghose K, Horiuchi TK, Krishnaprasad PS, Moss CF. Echolocating bats use a nearly time-optimal strategy to intercept prey. PLoS Biol. 2006;4: e108. pmid:16605303
  37. 37. Treviño M, Fregoso E, Sahagún C, Lezama E. An Automated Water Task to Test Visual Discrimination Performance, Adaptive Strategies and Stereotyped Choices in Freely Moving Mice. Front Behav Neurosci. 2018;12: 251. pmid:30467467
  38. 38. Berens P. CircStat: A MATLAB Toolbox for Circular Statistics. Journal of Statistical Software. 2009;31: 1–21.
  39. 39. Zénon A. Eye pupil signals information gain. Proc Biol Sci. 2019;286: 20191593. pmid:31530143
  40. 40. Vincent P, Parr T, Benrimoh D, Friston KJ. With an eye on uncertainty: Modelling pupillary responses to environmental volatility. PLoS Comput Biol. 2019;15: e1007126. pmid:31276488
  41. 41. Lavín C, San Martín R, Rosales Jubal E. Pupil dilation signals uncertainty and surprise in a learning gambling task. Front Behav Neurosci. 2013;7: 218. pmid:24427126
  42. 42. Kumar P, Sonkar S, Ghosh AK, Philip D. Nonlinear Model-Predictive Integrated Guidance and Control Scheme applied for Missile-on-Missile Interception. 2020 International Conference on Emerging Smart Computing and Informatics (ESCI). 2020. pp. 318–324.
  43. 43. Zhao H, Straub D, Rothkopf CA. People learn a two-stage control for faster locomotor interception. Psychol Res. 2024;88: 167–186. pmid:37083875
  44. 44. Delle Monache S, Lacquaniti F, Bosco G. Ocular tracking of occluded ballistic trajectories: Effects of visual context and of target law of motion. J Vis. 2019;19: 13. pmid:30952164
  45. 45. Jörges B, López-Moliner J. Earth-Gravity Congruent Motion Facilitates Ocular Control for Pursuit of Parabolic Trajectories. Sci Rep. 2019;9: 14094. pmid:31575901
  46. 46. Barany DA, Gómez-Granados A, Schrayer M, Cutts SA, Singh T. Perceptual decisions about object shape bias visuomotor coordination during rapid interception movements. J Neurophysiol. 2020;123: 2235–2248. pmid:32374224
  47. 47. Diaz G, Cooper J, Rothkopf C, Hayhoe M. Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task. Journal of Vision. 2013;13: 20. pmid:23325347
  48. 48. Goettker A, Pidaparthy H, Braun DI, Elder JH, Gegenfurtner KR. Ice hockey spectators use contextual cues to guide predictive eye movements. Current Biology. 2021;31: R991–R992. pmid:34428418
  49. 49. Land MF, McLeod P. From eye movements to actions: how batsmen hit the ball. Nat Neurosci. 2000;3: 1340–1345. pmid:11100157
  50. 50. Shalom DE, Dagnino B, Sigman M. Looking at Breakout: urgency and predictability direct eye events. Vision Res. 2011;51: 1262–1272. pmid:21458476
  51. 51. Fiehler K, Brenner E, Spering M. Prediction in goal-directed action. J Vis. 2019;19: 10. pmid:31434106
  52. 52. Dessing JC, Peper CLE, Bullock D, Beek PJ. How position, velocity, and temporal information combine in the prospective control of catching: data and model. J Cogn Neurosci. 2005;17: 668–686. pmid:15829086
  53. 53. Montagne G. Prospective control in sport. International Journal of Sport Psychology. 2005;36: 127–150.
  54. 54. Watson AB, Yellott JI. A unified formula for light-adapted pupil size. J Vis. 2012;12: 12. pmid:23012448
  55. 55. Atzori M, Cuevas-Olguin R, Esquivel-Rendon E, Garcia-Oscos F, Salgado-Delgado RC, Saderi N, et al. Locus Ceruleus Norepinephrine Release: A Central Regulator of CNS Spatio-Temporal Activation? Front Synaptic Neurosci. 2016;8: 25. pmid:27616990
  56. 56. Treviño M, Manjarrez E. Balanced Expression of G Protein-coupled Receptor Subtypes in the Mouse, Macaque, and Human Cerebral Cortex. Neuroscience. 2022;487: 107–119. pmid:35131393
  57. 57. Joshi S, Li Y, Kalwani RM, Gold JI. Relationships between Pupil Diameter and Neuronal Activity in the Locus Coeruleus, Colliculi, and Cingulate Cortex. Neuron. 2016;89: 221–234. pmid:26711118
  58. 58. Murphy PR, O’Connell RG, O’Sullivan M, Robertson IH, Balsters JH. Pupil diameter covaries with BOLD activity in human locus coeruleus. Human Brain Mapping. 2014;35: 4140–4154. pmid:24510607
  59. 59. Varazzani C, San-Galli A, Gilardeau S, Bouret S. Noradrenaline and dopamine neurons in the reward/effort trade-off: a direct electrophysiological comparison in behaving monkeys. J Neurosci. 2015;35: 7866–7877. pmid:25995472
  60. 60. Murphy PR, Vandekerckhove J, Nieuwenhuis S. Pupil-linked arousal determines variability in perceptual decision making. PLoS Comput Biol. 2014;10: e1003854. pmid:25232732
  61. 61. Van Slooten JC, Jahfari S, Knapen T, Theeuwes J. How pupil responses track value-based decision-making during and after reinforcement learning. PLoS Comput Biol. 2018;14: e1006632. pmid:30500813
  62. 62. Barnes GR. Cognitive processes involved in smooth pursuit eye movements. Brain and Cognition. 2008;68: 309–326. pmid:18848744
  63. 63. Zhao H, Warren W. Interception of a speed-varying target: On-line or model-based control? Journal of Vision. 2013;13: 951.
  64. 64. Borg JM, Mehrandezh M, Fenton RG, Benhabib B. Navigation-Guidance-Based Robotic Interception of Moving Objects in Industrial Settings. Journal of Intelligent and Robotic Systems. 2002;33: 1–23.
  65. 65. Chardenon A, Montagne G, Laurent M, Bootsma RJ. A robust solution for dealing with environmental changes in intercepting moving balls. J Mot Behav. 2005;37: 52–64. pmid:15642692
  66. 66. van Opstal AAM, Casanova R, Zaal FTJM, Bootsma RJ. Characterisation of visual guidance of steering to intercept targets following curving trajectories using Qualitative Inconsistency Detection. Sci Rep. 2022;12: 20246. pmid:36424412
  67. 67. Fabian ST, Sumner ME, Wardill TJ, Rossoni S, Gonzalez-Bellido PT. Interception by two predatory fly species is explained by a proportional navigation feedback controller. J R Soc Interface. 2018;15: 20180466. pmid:30333249
  68. 68. Rule AC, Smith LL. Fine Motor Skills, Executive Function, and Academic Achievement. In: Brewer H, Renck Jalongo M, editors. Cham: Springer International Publishing; 2018. pp. 19–40.