
From eyes’ microtremors to critical flicker fusion

  • Pedro Lencastre ,

    Roles Conceptualization, Investigation, Methodology, Writing – original draft, Writing – review & editing

    pedroreg@oslomet.no

    Affiliations Department of Computer Science, OsloMet - Oslo Metropolitan University, Oslo, Norway, OsloMet Artificial Intelligence Lab, OsloMet, Oslo, Norway

  • Rujeena Mathema,

    Roles Data curation, Methodology

    Affiliations Department of Computer Science, OsloMet - Oslo Metropolitan University, Oslo, Norway, OsloMet Artificial Intelligence Lab, OsloMet, Oslo, Norway

  • Pedro G. Lind

    Roles Supervision, Writing – review & editing

    Affiliations Department of Computer Science, OsloMet - Oslo Metropolitan University, Oslo, Norway, OsloMet Artificial Intelligence Lab, OsloMet, Oslo, Norway, Kristiania University of Applied Sciences, Oslo, Norway, Simula Research Laboratory, Numerical Analysis and Scientific Computing, Oslo, Norway

Abstract

The critical flicker fusion threshold (CFFT) is the frequency at which a flickering light source becomes indistinguishable from continuous light. The CFFT is an important biomarker of health conditions, such as Alzheimer’s disease and epilepsy, and is affected by factors as diverse as fatigue, drug consumption, and oxygen pressure, which make CFFT individual- and context-specific. Other causal factors beyond such biophysical processes remain to be uncovered. We investigate the connection between CFFT and a specific type of eye-movement, called microtremors: small oscillatory gaze movements during fixation periods. We present evidence that individual differences in CFFT can be accounted for by microtremors, and design an experiment, using a high-frequency monitor while recording the participant’s eye-movements with an eye-tracker device, which enables measuring the range of frequencies of a specific individual’s CFFT. Additionally, we introduce a classifier that can predict whether the CFFT of a specific participant lies in the range of high or low frequencies, based on the corresponding range of frequencies of the eyes’ microtremors. Our results show an accuracy of for a frequency threshold of 60 Hz and for a threshold of 120 Hz.

Introduction

The critical flicker fusion threshold (CFFT) is defined as the frequency at which a periodic light stimulus transitions from appearing flickering to appearing continuous to an observer, thereby serving as an indicator of the temporal resolution capacity of the visual system. The CFFT is assumed to be limited by the temporal resolution of the human photoreceptors in the central retina [1]. This quantity has inter- and intra-individual variation [2] and is reduced by factors such as increased age [3], brain injuries [4], fatigue [5], or alcohol and marijuana [6] consumption. In contrast, a higher CFFT indicates higher levels of concentration and cognitive performance [7] and can be affected by factors such as oxygen pressure [8] or ambient luminance [1]. The CFFT can also be diminished by various medical conditions such as multiple sclerosis, epilepsy [9], Alzheimer’s disease [10], dyslexia, autism [11], hepatic encephalopathy [12] and eye diseases such as cataract [13], optic neuritis and ischemic neuropathy or demyelinating optic neuritis [14].

While the temporal resolution of the visual system can be investigated by recording its electrophysiological response using an electroretinogram [15], it is more commonly assessed through psychophysical methods that measure the individual’s CFFT value. There are several psychophysical paradigms for measuring CFFT, and the results can vary depending on the specific paradigm used. However, despite these variations, a strong correlation has been observed between CFFT values obtained from different protocols, with Pearson coefficients ranging from 0.6 to 0.92 [1]. Given the heterogeneity of experiments and equipment used, reports of CFFT in humans show considerable variation, with values ranging from 20 to 85 Hz, depending on the specific study conducted [2,10,16–18]. This interval of frequencies is consistent with the temporal resolution of human photoreceptors in the central retina, which is usually limited to CFFTs around 50 Hz [19,20].

Many environmental and health conditions that affect CFFT also influence eye-gaze movements, which can be tracked using eye-tracking technology. Examples of such conditions include neurodevelopmental conditions such as autism [21] or ADHD [22], dyslexia, drug consumption [23], emotional states, fatigue [24], and level of alertness and concentration [25]. Despite some research touching on both topics, investigations into how eye-gaze dynamics and CFFT relate to each other are still lacking.

Generally, eye-movements can be classified into two distinct alternating subprocesses: saccades and fixations [26]. Saccades are rapid, jerk-like movements of the eyes that shift the focus of gaze from one point to another; they serve to position the eyes so that the next fixation can occur on a new point of interest. During a fixation, visual information is gathered and processed by the brain. This stationary period is essential for detailed visual perception and cognitive processes such as reading, object recognition, and scene analysis. During everyday activities, our eyes constantly alternate between saccades and fixations, enabling us to gather detailed information about specific aspects of our environment while also scanning for new information.

Our study of eye-movements focuses mainly on fixations, which can be divided into three main subtypes. The first is the so-called microsaccades, a smaller version of saccades that occur involuntarily when an individual fixates on an area for prolonged periods of time, or voluntarily when an individual aims to make a small gaze relocation (during reading, for example). It is speculated that microsaccades are important to prevent the retinal image from fading [27].

The second fixational movement is ocular drift, a type of movement that resembles a random walk as it changes direction frequently, in a seemingly erratic manner, resulting in relatively small amplitude displacements of the gaze position [28]. It has recently been shown that this apparently random motion is actually quite reactive to external stimuli [29].

Finally, and most relevant to our study, we have microtremors, an oscillatory movement between 70 Hz and 150 Hz that is smaller than the other fixational movements [30,31]. These movements are difficult to measure and consequently are not studied often [32]. It is hypothesized that such microtremors are caused by asynchronous neuron firing [33], but it is also possible that, since vision is not uniform within the foveola, these targeted microscopic eye-movements compensate for this lack of homogeneity [34].

This paper is organized as follows: in the next section, we describe our experimental setup for measuring CFFT and eye-tracking simultaneously. We also detail the methods used to characterize the periodic behavior of eye-movements. In the results section, we analyze the distribution of CFFT among different participants and address the relationship between microtremor frequency and CFFT variations. As we will see, our findings suggest that the periodicity of eye-gaze shifts correlates significantly with CFFTs. Finally, in the “discussion and conclusions” section, we close the paper with an overview of the main findings, addressing the limitations of our study and stating possible targets for future research.

Materials and methods

Experimental design and setup

We collected data for this study in the eye-tracker lab of Oslo Metropolitan University, using a high-frequency monitor (ASUS ROG Swift 360 Hz) [35] and an eye-tracker (EyeLink Portable Duo [36]). The monitor has a resolution of 1920x1080 pixels; participants used a chin stabilizer and were seated at a 1-meter distance from the screen. Fig 1 (left plot) illustrates the experimental setup, with a screenshot of the task performed in each session. The right plot illustrates three consecutive eye-gaze positions, highlighting the different variables used in the data analysis (see below).

Fig 1. Sketch of the experimental setup: on the high-frequency screen (ASUS ROG Swift 360 Hz PG259QNR) one sees the screenshot shown to each participant, namely two targets, one of them flickering with frequency f.

In the inset, we illustrate some of the variables recorded during each session, namely gaze positions (X and Y) and the angle between consecutive gaze shifts, as indicated by points P1, P2, and P3. (cf. Eqs (1) and (2)).

https://doi.org/10.1371/journal.pone.0325391.g001

Binocular eye-tracker data were collected from 85 adult participants, primarily recruited on campus. Among these, 4 participants were excluded due to a high number of blinks, leaving a total of 81 participants with a mean age of 23.4 years, 49 of whom are male. While the participant pool includes a significant number of individuals with high education levels, no other threats to validity were identified. The dataset contains records of gaze positions for each eye separately; however, we chose to analyze only the right eye, consistent with common practice in the literature. Data collection took place from 01.Jan.2023 to 01.Dec.2024 and written consent was provided, according to the application Ref. 129768 approved by the Norwegian Ethics Commission (SIKT). The anonymized data, as well as the code for the analysis, have been made publicly available [37].

In the task presented to participants, two stimulus crosses, one left and one right, were shown on the monitor. The crosses were white and were placed on a uniform gray background, obtained by mixing equal parts of black and white. The crosses were 2.5 cm in width and height, corresponding to of visual angle, and were 35 cm apart from each other (corresponding to of visual angle). The luminance level of the room was 88 lux and the monitor’s brightness was measured at 250 nits. Before the experiment began, participants were informed that one of the two crosses would flicker and that their task was to identify the flickering cross at the end of each trial. No gaze-direction instruction was given and participants were free to look at each cross individually. In the task, one of the crosses flickered at frequencies and 120 Hz, while the other was continuously shown on the screen with opacity , and 0.5, increasing with flickering frequency.

The experiment consisted of 9 trials in total, 3 for each of the different values of f, and the position of the flickering cross was randomized across trials. Participants were then asked to identify which of the displayed crosses was flickering (left or right). They were also given the choice of answering “Not sure”. If participants correctly identified the flickering cross 2 out of 3 times, we assumed that they could identify the flicker. Considering a binomial distribution with probability 1/3 of a successful random choice and requiring at least 2 successful choices, the total probability of passing by chance is 7/27 ≈ 0.26. So, the number of incorrect answers was negligible if less than is observed.
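The binomial argument above can be checked directly; a minimal calculation using Python’s standard library, under the stated assumptions of three trials and a 1/3 probability of a successful random choice:

```python
from math import comb

def prob_at_least(k_min, n, p):
    """Probability of at least k_min successes in n Bernoulli trials."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(k_min, n + 1))

# Probability that a participant answering at random (success probability 1/3,
# with "left", "right" and "Not sure" as options) passes the 2-out-of-3 criterion.
p_random_pass = prob_at_least(2, 3, 1 / 3)  # 7/27, about 0.26
```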

Eye-gaze classifiers and data processing

Eye-gaze velocity and angle between consecutive relocations.

While there are many algorithms to distinguish fixations from saccades, here we use the native eye-tracker algorithm from SR Research Ltd. [38]. This algorithm is a widely accepted standard in the field and was employed due to its demonstrated reliability and accuracy in determining fixations and saccades from eye-tracking data [39]. In this study we focus mainly on gaze velocities. Given a series of positions on a plane with coordinates, we define:

(1a)(1b)(1c)

where the sampling time of the recording equipment is 1 ms.

To study the period of micro-tremors, we introduce the angle:

(2)

for two distinct instants, t and . Fig 1 shows the eye-gaze positions for three illustrative measurements, together with the corresponding angle .
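A minimal sketch of these quantities, assuming a simple finite-difference discretization at the 1 ms sampling interval; variable and function names are illustrative, not from the published code:

```python
import numpy as np

DT = 1e-3  # eye-tracker sampling interval (1 ms)

def velocities(x, y, dt=DT):
    """Finite-difference gaze-velocity components from position samples."""
    return np.diff(x) / dt, np.diff(y) / dt

def shift_angle(p1, p2, p3):
    """Angle (radians) between consecutive gaze shifts P1->P2 and P2->P3."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(p3, float) - np.asarray(p2, float)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

vx, vy = velocities(np.array([0.0, 1.0, 3.0]), np.array([0.0, 0.0, 0.0]))
theta_reversal = shift_angle((0, 0), (1, 0), (0, 0))  # a full reversal gives pi
```

An angle close to pi thus flags a reversal of the gaze direction, which is the signature of tremor-like motion used below.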

Four other variables related to gaze velocity will be important to study the periodic nature of eye-movements. These are:

(3a)(3b)(3c)(3d)

Here, and are the horizontal and vertical components of the normalized gaze velocity. The components and are the horizontal and vertical fixational gaze velocities, which are calculated in the same way as the gaze velocity but only considering the data points where both locations and are labeled as fixations (‘fix’). Fixational gaze velocities are relevant to this study since microtremors are subtle movements that could easily be obfuscated by faster dynamics such as the ones responsible for larger gaze shifts.

Power spectrum and fast Fourier transform.

When studying periodic movements, one of the most important tools is the Fourier transform, a mathematical operation that translates a signal from the time domain into the frequency domain and is foundational for analyzing the frequency components of a signal. Here, we compute it using the Fast Fourier Transform (FFT), an algorithm which enables the efficient computation of the Discrete Fourier Transform (DFT). The DFT deals with discrete signals, representing them as a sum of sinusoidal functions, each with a specific frequency, amplitude, and phase. However, directly computing the DFT is computationally intensive, requiring O(N²) operations for N data points. This complexity makes the DFT impractical for large datasets, necessitating a more efficient approach.

The FFT addresses this challenge by dramatically reducing the computational burden to O(N log N). This efficiency is achieved through a divide-and-conquer strategy, which recursively breaks down the DFT of a sequence of values into smaller DFTs. By exploiting symmetries in the computation, specifically the periodicity and parity properties of the sine and cosine functions, the FFT minimizes redundant calculations. We follow the Cooley-Tukey implementation, which recursively splits a DFT of any composite size into many smaller DFTs, combining their results to produce the final transform.

In this study, we use the Fourier transform to calculate the power spectrum, which shows how the power, or variance, of the signal is distributed across different frequencies. The power spectrum is calculated by squaring the magnitude of the Fourier transform, i.e., Ps(k) = |F(k)|². We compute the power spectrum for the horizontal and vertical components of the fixation velocity and the unitary velocity .
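As a concrete sketch, the power spectrum Ps(k) = |F(k)|² recovers the dominant frequency of a synthetic oscillation; the 80 Hz tone below is illustrative, standing in for a tremor-like signal sampled at 1 kHz:

```python
import numpy as np

fs = 1000.0                           # sampling frequency in Hz (1 ms steps)
t = np.arange(0, 1.0, 1 / fs)         # one second of samples
signal = np.sin(2 * np.pi * 80.0 * t) # synthetic 80 Hz "tremor"

F = np.fft.rfft(signal)               # one-sided spectrum for a real signal
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
power = np.abs(F) ** 2                # Ps(k) = |F(k)|^2

peak_freq = freqs[np.argmax(power)]   # dominant frequency of the signal
```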

Empirical mode decomposition.

Another method which we use to study periodic movements is empirical mode decomposition (EMD), an adaptive and data-driven method for decomposing a complex signal into a set of simpler intrinsic mode functions (IMFs). The EMD process begins by identifying the local extrema (both maxima and minima) of the signal. These extrema are then used to create upper and lower envelopes through spline interpolation. The mean of these envelopes is subtracted from the original signal to produce a first approximation of an IMF. This process, known as sifting, is repeated iteratively until the resulting function meets the criteria for an IMF: its numbers of zero crossings and extrema must be equal or differ by at most one, and its envelopes, as defined by the local maxima and minima, must be symmetric around zero. Once an IMF is extracted, the sifting process continues on the residual (the original signal minus the extracted IMF) until all significant IMFs are obtained, leaving a final residual that represents the trend of the original signal. The resulting IMFs represent the signal’s intrinsic oscillatory modes, ordered from high to low frequency.
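The sifting loop can be sketched as follows. This is a deliberately simplified version (linear rather than spline envelopes, a fixed number of sifting iterations, no formal stopping criteria) meant only to illustrate how a first IMF is extracted:

```python
import numpy as np

def local_extrema(x):
    """Indices of interior local maxima and minima."""
    d = np.diff(x)
    maxima = np.where((d[:-1] > 0) & (d[1:] < 0))[0] + 1
    minima = np.where((d[:-1] < 0) & (d[1:] > 0))[0] + 1
    return maxima, minima

def sift_first_imf(x, n_iter=10):
    """Sifting sketch: repeatedly subtract the mean of the upper and lower
    envelopes.  Standard EMD uses cubic-spline envelopes and IMF stopping
    criteria; linear interpolation is used here for simplicity."""
    h = x.astype(float).copy()
    idx = np.arange(len(x))
    for _ in range(n_iter):
        maxima, minima = local_extrema(h)
        if len(maxima) < 2 or len(minima) < 2:
            break
        upper = np.interp(idx, maxima, h[maxima])
        lower = np.interp(idx, minima, h[minima])
        h = h - 0.5 * (upper + lower)
    return h

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 40 * t) + 0.5 * t   # fast oscillation + slow trend
imf1 = sift_first_imf(x)
residual = x - imf1   # carries the slow trend; sifting would continue on it
```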

In this context, three important quantities can be computed to analyze IMFs. The first two are the instantaneous phase (IP) and the instantaneous amplitude (IA), which are obtained by applying the Hilbert transform [40] to the IMFs. Mathematically, if S(t) represents an IMF of the original signal, the Hilbert transform yields an analytic signal Z(t) decomposable into the IA a(t) and the IP, namely

Z(t) = S(t) + j H[S(t)] = a(t) e^(jφ(t))        (4)

where j is the imaginary unit (j2 = −1). The IP can, therefore, be expressed as

φ(t) = arctan( H[S(t)] / S(t) )        (5)

while the IA is the magnitude of the analytic signal, namely a(t) = |Z(t)|.

The third quantity is the instantaneous frequency (IF), which is derived from the IP by taking its temporal derivative, offering a dynamic representation of the frequency content of the signal as it evolves over time:

f(t) = (1/(2π)) dφ(t)/dt        (6)

These three quantities allow for a comprehensive analysis of non-stationary and nonlinear signals, capturing the time-varying amplitude and frequency characteristics that traditional Fourier-based methods might miss.
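Assuming the analytic signal is obtained with `scipy.signal.hilbert`, the three quantities can be sketched for a synthetic single-tone "IMF" (the 90 Hz tone and variable names are illustrative):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
s = np.cos(2 * np.pi * 90.0 * t)            # stand-in for a single IMF

z = hilbert(s)                              # analytic signal Z(t)
ia = np.abs(z)                              # instantaneous amplitude a(t)
ip = np.unwrap(np.angle(z))                 # instantaneous phase phi(t)
inst_freq = np.diff(ip) * fs / (2 * np.pi)  # IF: temporal derivative of IP

# For a pure 90 Hz tone, the IF should sit at ~90 Hz away from the edges.
median_if = float(np.median(inst_freq))
```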

Decision trees as classifiers of CFFT ranges

Decision trees are a powerful and intuitive machine learning technique that employs a flowchart-like structure for both classification and regression tasks. At the core of this technique is a tree-like model where the root node represents the initial decision point based on a feature of the data. From this root, subsequent branches emanate, each corresponding to a possible outcome of the initial decision. These branches continue to split based on additional features, forming a hierarchical structure that ultimately leads to leaf nodes. Each leaf node represents a final classification or prediction, making the decision tree a comprehensive model that encapsulates a series of decisions leading to a particular outcome.

One of the key strengths of decision trees is their clarity and interpretability. The hierarchical structure can be easily visualized, providing a transparent view of the decision-making process. This transparency is particularly valuable for understanding how different features contribute to the final outcome, which is beneficial for both model evaluation and stakeholder communication. By mapping out the relationships between features and their corresponding outcomes, decision trees enable a straightforward analysis of the underlying patterns in the data. This makes them useful not only for predictive analytics but also for gaining insights into the data itself. Despite their simplicity, decision trees can be highly effective, especially when combined with techniques like pruning to avoid overfitting, or when integrated into ensemble methods such as random forests to enhance predictive performance. In this study, this classification method is used to determine whether a particular individual has a CFFT above 60 Hz or 120 Hz, based on the characteristics of the periodic behaviour of gaze fixations. The ground truth used to assess the performance of the tree-based classifiers is each participant’s direct answer, reporting whether or not they saw the flicker.
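The splitting rule at the heart of a decision tree can be illustrated with a depth-one tree (a decision stump) that picks the threshold minimizing Gini impurity; the feature values below are synthetic, not our measured data:

```python
import numpy as np

def gini(labels):
    """Gini impurity of an integer label array."""
    if len(labels) == 0:
        return 0.0
    p = np.bincount(labels, minlength=2) / len(labels)
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    """Threshold on a single feature minimizing weighted Gini impurity,
    i.e. the rule a depth-1 decision tree (a 'stump') would learn."""
    best_t, best_score = None, np.inf
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Hypothetical spectral feature, with the "sees the flicker" group (label 1)
# shifted towards higher values.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(5.0, 1.0, 50)])
y = np.concatenate([np.zeros(50, dtype=int), np.ones(50, dtype=int)])
threshold, impurity = best_split(x, y)
accuracy = np.mean((x > threshold).astype(int) == y)
```

A full tree repeats this search recursively on each side of the split, over all available features.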

Results

Measuring CFFT and the periodicity of eye-gaze microtremors

We observed that 80 out of 81 participants could see the cross flickering at 30 Hz. In particular, we verified that only 3 out of 792 trials had an incorrect answer about whether the target was flickering or not. As shown in Fig 2(a), we counted the number of participants who could distinguish flickering at each of the three frequencies, 30, 60, and 120 Hz. The left plot shows the fraction of those participants for each frequency f. In each case, .

Fig 2. (a) Fraction of participants who were able to detect the flicker at a frequency of and 120 Hz (total 81 participants).

, indicates the participants who did not detect flicker at these frequencies. (b) Probability of the angle being within the range in a time interval , i.e. the probability for the gaze to invert its direction within that time interval (cf. Eq (2) and Fig 1, right plot). We observe an increasing probability for small with a maximum at ms (dashed vertical line), indicating that gaze movements revert direction every 2.5 ms to 3.5 ms approximately. Here, the parameter k determines how narrow the angle range must be for a movement to be classified as a reversal in direction. The reversion corresponds to a period between 5 and 7 ms, i.e. a frequency between 142 Hz and 200 Hz. These values are consistent with the larger microtremor frequency values reported in the literature [30] (see text).

https://doi.org/10.1371/journal.pone.0325391.g002

At a frequency of 60 Hz, around half the participants were able to identify the flickering, while at 120 Hz, the fraction reduced to . A flicker frequency of 120 Hz is higher than the thresholds reported in the literature; nevertheless, we show evidence that a significant fraction of participants still identifies flickering at such a high frequency. Moreover, the plot in Fig 2(a) enables us to estimate a characteristic width of the frequency interval, corresponding to a decay by a factor of 1/e in the number of individuals able to identify flickering within the full range of frequencies. We discuss this further in the discussion section below.

To study the periodicity of eye “tremors”, we considered the probability for the angle to take values around . Values of close to represent a reversal in gaze direction, which is qualitatively what tremors are. This probability is estimated as the fraction of time-steps when the angle lies in the range  +  for some (small) . Since represents an angle between vectors, i.e. it takes values in , we computed the fraction of time-steps in intervals , with . Specifically, we computed the probability for to take values in the range , for different values of k. Note that the interval Ak has a length of 1/k, which decreases with k. In Fig 2(b) we plot the result of for and 8 as a function of (see Eq (1)). We observe that, for (in ms), the gaze movements typically fall outside the interval of -values, indicating that the gaze tends to keep its moving direction within the interval . As increases, the probability for to lie within the interval Ak increases as well, with a maximum at ms. This indicates that the periodicity associated with eye-movements inverting their direction lies between 6 and 8 ms, corresponding to a frequency between 125 Hz and 166 Hz.

Identifying individuals with high CFFT

After collecting data from 81 participants and assessing their ability to identify flickering at 30, 60, and 120 Hz, we found that only one participant failed to identify the flickering at 30 Hz. Therefore, we focus our analysis solely on the frequencies of 60 and 120 Hz.

A function that can be used to differentiate between participants who can identify the flickering and those who cannot is the cumulative power spectrum of a certain variable :

(7)

where is the corresponding normalized power-spectrum. Henceforth, the variable will be substituted by one of the variables defined in Eq (3). Here, we use to distinguish the frequency of the power spectrum from the flickering frequency f tuned in the experimental task (see above). Note that, since we compute the power spectra using FFT, the normalization is performed by computing the cumulative power over the full spectrum, with a maximum frequency of 500 Hz, i.e. half of the sampling frequency.
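A sketch of the cumulative spectrum of Eq (7), normalized so that the cumulative power reaches 1 at the Nyquist frequency (500 Hz for the 1 kHz sampling used here); the two-tone test signal and variable names are illustrative:

```python
import numpy as np

def cumulative_power_spectrum(v, fs=1000.0):
    """Normalized cumulative power spectrum: the fraction of total power
    contained at frequencies up to each value of nu (nu runs to fs/2)."""
    F = np.fft.rfft(v - np.mean(v))
    power = np.abs(F) ** 2
    cum = np.cumsum(power)
    freqs = np.fft.rfftfreq(len(v), d=1 / fs)
    return freqs, cum / cum[-1]       # normalized so C(Nyquist) = 1

t = np.arange(0, 1.0, 1 / 1000.0)
v = np.sin(2 * np.pi * 76.0 * t) + 0.3 * np.sin(2 * np.pi * 200.0 * t)
freqs, C = cumulative_power_spectrum(v)
```

Because the 76 Hz component carries most of the variance in this example, the curve rises steeply below 100 Hz and is nearly flat afterwards.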

Fig 3(a) and 3(c) show the cumulative power spectrum, , computed as described above for f = 60 Hz and f = 120 Hz respectively. As expected, the difference in between the and groups varies with . Therefore, it is useful to identify the frequency at which these groups exhibit the most distinct profiles. We henceforth denote by f* the value of that produces the largest mean difference between groups for a particular variable. In the case of and the groups and we find an optimal frequency f* = 76 Hz. Fig 3(b) and 3(d) display the distribution of at this frequency, where we observe that the group has a higher mean value. Table 1 lists the frequency f* at which the largest mean differences occur for each variable, as well as the corresponding p-value from the MWU-test.

Fig 3. (a) The cumulative power-spectrum is shown for groups and , together with (b) the probability density function of the values , across all 81 participants.

In (c) and (d), we have the same plots but for and . We used the Mann-Whitney U-test to determine that, for the variable , Hz is the value that best separates the groups and as well as and . See also Table 1.

https://doi.org/10.1371/journal.pone.0325391.g003

Table 1. Summary of the variables used to distinguish between the target groups, i.e. between and and and (see text). Using the Fourier spectra of the different velocity components (cf. Eq (7)), the component shows the highest capability to identify participants distinguishing CFFT of 60 Hz, while is best to identify participants distinguishing CFFT of 120 Hz.

https://doi.org/10.1371/journal.pone.0325391.t001

Table 1 (first four rows) shows the values of f* in the cumulative power spectrum for each velocity component in Eq (3), with the corresponding p-value of the MWU-test. This test evaluates the null hypothesis that the samples originate from the same distribution; when the distributions are similarly shaped, this is equivalent to testing for equal medians. For f = 60 Hz the component shows the highest capability to distinguish between participants who can resolve that CFFT and those who cannot. For a CFFT of f = 120 Hz the component is the best variable. While there are no significant differences in the p-values, it is surprising that the y-components of the velocities are the ones with the highest confidence, since the y-component of eye-gaze velocity is typically the noisiest.
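As an illustration of the group comparison, the MWU-test can be run with `scipy.stats.mannwhitneyu` on two hypothetical samples of cumulative-spectrum values; the group means and sizes below are made up for the example, not our measured data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# Hypothetical C(f*) values for the two groups, with the group resolving
# the flicker shifted towards slightly higher cumulative power.
group_sees = rng.normal(0.60, 0.05, 40)
group_not = rng.normal(0.50, 0.05, 41)

stat, p_value = mannwhitneyu(group_sees, group_not, alternative="two-sided")
```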

When it comes to the decomposition into different IMFs using EMD, the statistical analysis of the IFs yields better results than the power-spectrum analysis above. Similar to the cumulative power spectrum, here we consider the cumulative density function computed from the IF’s probability density function of each IMF, namely

(8)

where is the probability density function of the IF, refers to a specific IMF, and, as before, refers to a fixation velocity variable defined in Eq (3). Given the already large number of variables in our analysis, we do not consider both and .

Fig 4(a) shows an example of the EMD of , including four IMFs, for one randomly selected participant. Fig 4(b) depicts the corresponding probability density function of the IF with . In Fig 4(c) and 4(e), is represented for and and and respectively; we observe that the values of are larger for the participants who can detect fast flickering stimuli (). As before, we use the MWU-test to determine the values of f*, i.e. the frequency which maximizes the difference in the means between the and groups for a particular quantity. The exact values of f* and their corresponding p-values are shown in Table 1.

Fig 4. On (a), we illustrate, for a randomly selected participant, a set of four IMFs, S1 to S4, generated from the EMD of , and, on (b), the probability density function of the IF of each IMF for the same participant.

Representation of (cf. Eq (8)) for ( and ) (c) and ( and ) (d) respectively. Probability density function of the values of for ( and ) (e) and of for ( and ) (f) respectively. See also Table 1 for the optimal frequencies f* and the corresponding p-values of the MWU-test.

https://doi.org/10.1371/journal.pone.0325391.g004

In Table 1 we computed Eq (8) for the horizontal and vertical components of the fixation and unitary velocities. As for the cumulative power spectrum, each of these quantities is evaluated at the frequency f*, the one that maximizes the difference between the mean of each quantity in Table 1 for the and groups.

Training a classifier and evaluation

In order to check how much the variables in Table 1 determine the CFFT, we create a set of decision tree classifiers, using all possible combinations of the four velocity components. Using a validation set, we employ five-fold cross-validation for variable selection. We also use cross-validation to tune the hyper-parameters of our classifier, including the maximum depth and maximum number of features, as well as the minimum number of samples required to split an internal node and the minimum number of samples required at a leaf node. Given that the number of participants is not evenly distributed among the groups, we use the synthetic minority oversampling technique (SMOTE) [41] to train the classifier. To evaluate the accuracy of our procedure, we use previously unseen test data, again with five-fold cross-validation. The classification results are summarized in Table 2, where a detailed breakdown of the true and false positive and negative classifications is presented. We see that the group differences are larger at the 120 Hz threshold ( accuracy) when compared to the 60 Hz threshold ( accuracy). Also, for both f = 60 Hz and f = 120 Hz, we observe higher values of specificity than of sensitivity, which indicates that the classifier is better at correctly identifying negatives than positives. The fact that precision is higher than sensitivity suggests that, while the classifier reliably labels positive cases as such, its conservative threshold for positive classification likely leads to some true positive cases being overlooked.
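The oversampling step can be illustrated with a minimal SMOTE-style sketch that interpolates between randomly paired minority samples; the real SMOTE algorithm interpolates towards k-nearest neighbours, and the feature matrices here are synthetic:

```python
import numpy as np

def smote_like(X_min, n_new, rng):
    """Generate synthetic minority samples by interpolating between randomly
    chosen pairs of existing minority samples (the core idea behind SMOTE)."""
    i = rng.integers(0, len(X_min), n_new)
    j = rng.integers(0, len(X_min), n_new)
    u = rng.random((n_new, 1))
    return X_min[i] + u * (X_min[j] - X_min[i])

rng = np.random.default_rng(2)
X_majority = rng.normal(0.0, 1.0, (60, 4))   # e.g. four velocity-based features
X_minority = rng.normal(2.0, 1.0, (20, 4))

X_synth = smote_like(X_minority, 40, rng)    # balance the classes: 20 + 40 = 60
X_balanced_minority = np.vstack([X_minority, X_synth])
```

Training on the balanced set prevents the tree from trivially favouring the majority class.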

Table 2. Results relative to a set of tree classifiers, exploring all combinations of the four velocity components.

https://doi.org/10.1371/journal.pone.0325391.t002

Discussion and conclusions

Managerial implications

As expected from previous studies, we found individual variations in CFFT. Based on our experiment, we thus create 4 non-mutually-exclusive groups, namely the group of participants who could distinguish flickering at 120 Hz (resp. 60 Hz), which we label as (resp. ), and those who could not identify the flickering at 120 Hz (resp. 60 Hz), which we designate as (resp. ). These CFFT values are higher than those reported in the literature. This discrepancy may be attributed to the use of a different protocol for measuring CFFT, which can significantly influence the obtained values [1].

From Fig 2, although one has only three points, we can derive a rough estimate of the fraction of participants as an exponential function of the frequency, , where is a measure of the frequency tolerance within which the eyes start losing the ability to resolve flickering images. Such an exponential fit yields approximately Hz.
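The fit itself reduces to linear regression on the log of the detection fractions; a sketch with placeholder fractions generated from a known decay constant (not our measured values):

```python
import numpy as np

freqs = np.array([30.0, 60.0, 120.0])     # flicker frequencies tested
# Hypothetical detection fractions decaying as exp(-f / delta_f):
delta_f_true = 40.0
fractions = np.exp(-freqs / delta_f_true)

# Fit p(f) = A * exp(-f / delta_f) by linear regression on log p.
slope, intercept = np.polyfit(freqs, np.log(fractions), 1)
delta_f_est = -1.0 / slope
```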

On the other hand, the microtremor range of frequencies is consistent with the higher values reported in previous literature [30], falling within the interval of Hz. The uniqueness of our study lies in two major outcomes. First, we provide quantitative evidence of a link between eye-movements (microtremors) and CFFT. Second, we show that the frequency of periodic movements of the eyes has a direct impact on the CFFT. Moreover, the importance of microtremors for visual perception is supported by previous studies. For example, it has been argued that these eye-movements help prevent retinal adaptation by continuously refreshing the retinal image, thereby maintaining photoreceptor responsiveness [42]. Microtremors are also proposed to enhance visual acuity by enabling a dynamic sampling of the visual scene, which is especially important given that the distribution of visual cones in the retina is not continuous but rather exhibits regional variability [43,44].

Theoretical contributions

In addition to underscoring the relationship between ocular microtremors and visual perception, our findings may also point towards a reevaluation of the assumptions regarding the neural mechanisms underlying flicker fusion thresholds: besides preventing retinal adaptation, our results also indicate that microtremors modulate the temporal resolution of visual processing. This suggests that flicker perception is influenced by both the limitations of photoreceptors and the dynamics of eye movements.

We have shown, with statistical significance according to the MWU test, that the eyes' periodic movements differ between individuals with different CFFTs. Furthermore, an analysis of eye movements can determine whether an individual's CFFT is above 60 Hz (and, separately, above 120 Hz) with good accuracy. Moreover, using EMD, we observe that the significant differences in the eyes' periodic motion between the two groups occur at frequencies close to the range of microtremor frequencies, namely around 200 Hz.
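The group comparison named above can be sketched with SciPy's Mann-Whitney U test. This is a minimal illustrative example, not the study's analysis pipeline: the two samples are synthetic placeholders standing in for a per-participant microtremor feature in the low- and high-CFFT groups, with hypothetical group means and sample sizes.

```python
# Sketch of a Mann-Whitney U comparison between two groups of participants.
# The samples are synthetic placeholders, not the study's data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Hypothetical per-participant microtremor feature values (Hz) per group
low_cfft_group = rng.normal(loc=180.0, scale=15.0, size=20)
high_cfft_group = rng.normal(loc=205.0, scale=15.0, size=20)

stat, p_value = mannwhitneyu(low_cfft_group, high_cfft_group,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```

The MWU test is a natural choice here because it makes no normality assumption about the distribution of the eye-movement features across participants.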

Future directions

Several potential improvements and future directions can be considered. Future studies may overcome two of the main limitations of our approach, namely measuring CFFT over a continuous range of frequencies, rather than only relative to the 60 Hz and 120 Hz thresholds, and using specialized equipment to measure microtremors, such as a video-oculography system [45]. Standardizing these protocols can improve the reliability and comparability of results across studies. Conducting longitudinal studies can help understand how CFFT and microtremors change over time and under different conditions, providing insights into their potential as biomarkers for disease progression or treatment efficacy. Ensuring more stringent control over external variables such as lighting, fatigue levels, and cognitive load during experiments can help isolate the specific impact of microtremors on CFFT.

Combining eye-tracking data with neuroimaging techniques (e.g., fMRI, EEG) can lead to the identification of the neural mechanisms underlying the relationship between CFFT and eye movements, providing insights into the specific brain regions involved and their roles in health and disease. Furthermore, our classification results could be improved by considering a larger number of variables simultaneously in the decision tree, or by using a more complex method such as random forests or gradient boosting [46].

Given the improvements in camera hardware and video-based eye-tracking [47], these results may provide a quick screening for the diseases mentioned above, or a quick assessment of an individual's fitness to, for example, drive or engage in other concentration-heavy duties. Since we are dealing mainly with the frequency of periodic movements, there may be reduced calibration requirements for measuring this movement with video-based eye-tracking.

Finally, the subtle nature of microtremors, and the consequent difficulty in measuring them, has so far prevented microtremors from playing a significant role in the medical applications of eye-tracking when compared to the usual analysis of fixations and saccades. Given that CFFT and microtremors are interrelated, it is natural to assume that conditions that affect the former will also impact the latter. This points towards a transformative role for this type of eye-movement as an early biomarker of neurodegenerative and neurodevelopmental conditions. Using a methodology similar to ours, the hypothesis that microtremors can be used to detect neurodevelopmental conditions can be tested.

References

  1. Eisen-Enosh A, Farah N, Burgansky-Eliash Z, Polat U, Mandel Y. Evaluation of critical flicker-fusion frequency measurement methods for the investigation of visual temporal resolution. Sci Rep. 2017;7(1):15621. pmid:29142231
  2. Haarlem CS, O’Connell RG, Mitchell KJ, Jackson AL. The speed of sight: Individual variation in critical flicker fusion thresholds. PLoS ONE. 2024;19(4):e0298007.
  3. Luczak A, Sobolewski A. Longitudinal changes in critical flicker fusion frequency: an indicator of human workload. Ergonomics. 2005;48(15):1770–92. pmid:16373316
  4. Benassi M, Frattini D, Garofalo S, Bolzani R, Pansell T. Visuo-motor integration, vision perception and attention in mTBI patients. Preliminary findings. PLoS One. 2021;16(4):e0250598. pmid:33905440
  5. Maeda E, Yoshikawa T, Hayashi N, Akai H, Hanaoka S, Sasaki H, et al. Radiology reading-caused fatigue and measurement of eye strain with critical flicker fusion frequency. Jpn J Radiol. 2011;29(7):483–7. pmid:21882090
  6. Kaufmann RM, Kraft B, Frey R, Winkler D, Weiszenbichler S, Bäcker C, et al. Acute psychotropic effects of oral cannabis extract with a defined content of Δ9-tetrahydrocannabinol (THC) in healthy volunteers. Pharmacopsychiatry. 2010;43(1):24–32. pmid:20178093
  7. Piispanen WW, Lundell RV, Tuominen LJ, Räisänen-Sokolowski AK. Assessment of alertness and cognitive performance of closed circuit rebreather divers with the critical flicker fusion frequency test in arctic diving conditions. Front Physiol. 2021;12:722915. pmid:34447319
  8. Kot J, Winklewski PJ, Sicko Z, Tkachenko Y. Effect of oxygen on neuronal excitability measured by critical flicker fusion frequency is dose dependent. J Clin Exp Neuropsychol. 2015;37(3):276–84. pmid:25715640
  9. Steinhoff BJ, Schuler M, Mighali M, Intravooth T. Critical flicker fusion in patients with epilepsy under antiseizure medication. Epileptic Disorders. 2024.
  10. Abiyev A, Yakaryılmaz FD, Öztürk ZA. A new diagnostic approach in Alzheimer’s disease: the critical flicker fusion threshold. Dement Neuropsychol. 2022;16(1):89–96. pmid:35719254
  11. Thompson JIR, Peck CE, Karvelas G, Hartwell CA, Guarnaccia C, Brown A, et al. Temporal processing as a source of altered visual perception in high autistic tendency. Neuropsychologia. 2015;69:148–53.
  12. Sharma P. Critical flicker frequency: a stethoscope for minimal hepatic encephalopathy evaluation. Turk J Gastroenterol. 2017;28(3):155–6.
  13. Shankar H, Pesudovs K. Critical flicker fusion test of potential vision. J Cataract Refract Surg. 2007;33(2):232–9.
  14. Young MT, Braich PS, Haines SR. Critical flicker fusion frequency in demyelinating and ischemic optic neuropathies. Int Ophthalmol. 2017;38(3):1069–77.
  15. Cornish EE, Vaze A, Jamieson RV, Grigg JR. The electroretinogram in the genomics era: outer retinal disorders. Eye (Lond). 2021;35(9):2406–18. pmid:34234290
  16. Mankowska ND, Marcinkowska AB, Waskow M, Sharma RI, Kot J, Winklewski PJ. Critical flicker fusion frequency: a narrative review. Medicina (Kaunas). 2021;57(10):1096. pmid:34684133
  17. Lafitte A, Sordello R, Legrand M, Nicolas V, Obein G, Reyjol Y. A flashing light may not be that flashy: a systematic review on critical fusion frequencies. PLoS One. 2022;17(12):e0279718. pmid:36584184
  18. Muth T, Schipke JD, Brebeck A-K, Dreyer S. Assessing critical flicker fusion frequency: which confounders? A narrative review. Medicina (Kaunas). 2023;59(4):800. pmid:37109758
  19. Kircheis G, Hilger N, Häussinger D. Correct determination of critical flicker frequency is mandatory when comparisons to other tests are made. Gut. 2014;63(4):701–2. pmid:23846484
  20. Nardella A, Rocchi L, Conte A, Bologna M, Suppa A, Berardelli A. Inferior parietal lobule encodes visual temporal resolution processes contributing to the critical flicker frequency threshold in humans. PLoS One. 2014;9(6):e98948. pmid:24905987
  21. Lencastre P, Lotfigolian M, Lind PG. Identifying autism gaze patterns in five-second data records. Diagnostics (Basel). 2024;14(10):1047. pmid:38786345
  22. Papanikolaou C, Sharma A, Lind PG, Lencastre P. Lévy flight model of gaze trajectories to assist in ADHD diagnoses. Entropy (Basel). 2024;26(5):392. pmid:38785640
  23. Maurage P, Masson N, Bollen Z, D’Hondt F. Eye tracking correlates of acute alcohol consumption: a systematic and critical review. Neurosci Biobehav Rev. 2020;108:400–22. pmid:31614153
  24. Nguyen HT, Isaacowitz DM, Rubin PAD. Age- and fatigue-related markers of human faces: an eye-tracking study. Ophthalmology. 2009;116(2):355–60. pmid:19084276
  25. Pei X, Xu G, Zhou Y, Tao L, Cui X, Wang Z, et al. A simultaneous electroencephalography and eye-tracking dataset in elite athletes during alertness and concentration tasks. Sci Data. 2022;9(1):465. pmid:35918334
  26. Liversedge SP, Gilchrist I, Everling S. The Oxford handbook of eye movements. Oxford University Press. 2011. https://doi.org/10.1093/oxfordhb/9780199539789.001.0001
  27. Poletti M, Rucci M. Eye movements under various conditions of image fading. J Vis. 2010;10(3):6.1-18. pmid:20377283
  28. Poletti M, Aytekin M, Rucci M. Head-eye coordination at a microscopic scale. Curr Biol. 2015;25(24):3253–9. pmid:26687623
  29. Malevich T, Buonocore A, Hafed ZM. Rapid stimulus-driven modulation of slow ocular position drifts. Elife. 2020;9:e57595. pmid:32758358
  30. Graham L, Das J, Vitorio R, McDonald C, Walker R, Godfrey A, et al. Ocular microtremor: a structured review. Exp Brain Res. 2023;241(9):2191–203. pmid:37632535
  31. Klein C, Ettinger U. Eye Movement research. Springer. 2019. https://doi.org/10.1007/978-3-030-20085-5
  32. Krauzlis RJ, Goffart L, Hafed ZM. Neuronal control of fixation and fixational eye movements. Philos Trans R Soc Lond B Biol Sci. 2017;372(1718):20160205. pmid:28242738
  33. Schor CM. Neural control of eye movements. In: Kaufman PL, Alm A, Levin LA, Nilsson SFE, Ver Hoeve J, Wu S, editors. Adler’s physiology of the eye. Edinburgh: Saunders Elsevier. 2011. p. 220–42. https://doi.org/10.1001/archopht.121.11.1667
  34. Poletti M, Listorti C, Rucci M. Microscopic eye movements compensate for nonhomogeneous vision within the Fovea. Curr Biol. 2013;23(17):1691–5.
  35. Republic of Gamers. ROG Swift 360Hz PG259QNR. 2023. https://dlcdnwebimgs.asus.com/gain/C8567253-8AEC-4CBB-A471-131C0A018F52/w717/h525
  36. S R Research. Eyelink Portable Duo. 2023. https://www.sr-research.com/wp-content/uploads/2021/07/eyelink-duo-eye-tracker-video-cover.jpg
  37. Lencastre P, Mathema R, Lind PG. Eye-tracking Flicker Frequency Experiment Repository; 2025. https://osf.io/74awu
  38. Raju MH, Friedman L, Bouman TM, Komogortsev OV. Filtering eye-tracking data from an EyeLink 1000: comparing heuristic, Savitzky-Golay, IIR and FIR digital filters. arXiv preprint. 2023. https://arxiv.org/abs/2303.02134
  39. Friedman L, Rigas I, Abdulin E, Komogortsev OV. A novel evaluation of two related and two independent algorithms for eye movement classification during reading. Behav Res Methods. 2018;50(4):1374–97. pmid:29766396
  40. Huang NE, Shen SSP. Hilbert-Huang transform and its applications. Interdisciplinary mathematical sciences. World Scientific. 2005. https://doi.org/10.1142/5862
  41. Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP. SMOTE: synthetic minority over-sampling technique. JAIR. 2002;16:321–57.
  42. Martinez-Conde S, Macknik SL, Troncoso XG, Hubel DH. Microsaccades: a neurophysiological analysis. Trends Neurosci. 2009;32(9):463–75. pmid:19716186
  43. Rucci M, Poletti M. Control and functions of fixational eye movements. Annu Rev Vis Sci. 2015;1:499–518. pmid:27795997
  44. Cooper RF, Brainard DH, Morgan JIW. Optoretinography of individual human cone photoreceptors. Opt Express. 2020;28(26):39326–39. pmid:33379485
  45. Pei X, Xu G, Zhou Y, Tao L, Cui X, Wang Z, et al. A simultaneous electroencephalography and eye-tracking dataset in elite athletes during alertness and concentration tasks. Sci Data. 2022;9(1):465. pmid:35918334
  46. James G, Witten D, Hastie T, Tibshirani R, Taylor J. An introduction to statistical learning. Springer. 2023. https://doi.org/10.1007/978-3-031-38747-0
  47. Brien DC, Riek HC, Yep R, Huang J, Coe B, Areshenkoff C, et al. Classification and staging of Parkinson’s disease using video-based eye tracking. Parkinsonism Relat Disord. 2023;110:105316. pmid:36822878