
Introducing Point-of-Interest as an alternative to Area-of-Interest for fixation duration analysis

  • Nak Won Rim,

    Roles Conceptualization, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Masters in Computational Social Science, The University of Chicago, Chicago, Illinois, United States of America

  • Kyoung Whan Choe,

    Roles Conceptualization, Data curation, Methodology, Writing – review & editing

    Affiliations Department of Psychology, The University of Chicago, Chicago, Illinois, United States of America, Mansueto Institute for Urban Innovation, The University of Chicago, Chicago, Illinois, United States of America

  • Coltan Scrivner,

    Roles Conceptualization, Data curation, Methodology, Writing – review & editing

    Affiliations Department of Comparative Human Development, The University of Chicago, Chicago, Illinois, United States of America, Institute for Mind and Biology, The University of Chicago, Chicago, Illinois, United States of America

  • Marc G. Berman

    Roles Conceptualization, Methodology, Writing – review & editing

    bermanm@uchicago.edu

    Affiliations Department of Psychology, The University of Chicago, Chicago, Illinois, United States of America, Grossman Institute for Neuroscience, Quantitative Biology and Human Behavior, The University of Chicago, Chicago, Illinois, United States of America

Abstract

Many eye-tracking data analyses rely on the Area-of-Interest (AOI) methodology, which utilizes AOIs to analyze metrics such as fixations. However, AOI-based methods have some inherent limitations, including variability and subjectivity in the shape, size, and location of AOIs. In this article, we propose an alternative approach to the traditional AOI dwell time analysis: Weighted Sum Durations (WSD). This approach decreases the subjectivity of AOI definitions by using Points-of-Interest (POI) while maintaining interpretability. In WSD, the duration of each fixation is weighted by its distance from the POI, and the weighted durations are summed to generate a metric comparable to AOI dwell time. To validate WSD, we reanalyzed data from a previously published eye-tracking study (n = 90). The reanalysis replicated the original findings that people gaze less toward faces and more toward points of contact when viewing violent social interactions.

Introduction

Since the pioneering works of Buswell [1] and Yarbus [2], eye-tracking has increasingly become an important method for answering a variety of questions in diverse disciplines such as psychology, neuroscience, marketing, and computer science [3–6]. The eye-movement data from eye-tracking provide a rich source of complex data that can be analyzed through a variety of methods. Currently, the majority of these methods are based on Area-of-Interest (AOI; also called Region-of-Interest or ROI) analyses. AOIs are defined as areas in the stimulus space relevant to the research question and can be used to analyze a variety of eye-movement metrics such as fixations, saccades, or scan paths [7–9].

The popularity of AOI-based methods comes from their interpretability and their capability to investigate phenomena in stimulus space. For example, AOI dwell time [7] is calculated by summing the durations of fixations that landed within the AOI. The resulting metric can be interpreted as the amount of time the participant gazed at the area in which researchers are interested. Statistical tests such as analysis of variance can then be applied to examine whether there are statistical differences between conditions or between AOIs.

Various methods for defining AOIs have been suggested [10]. One approach is to draw shapes (e.g., ellipse, rectangle, circle) around the objects of interest. Shapes used for AOI definitions in this approach vary between and within studies. For example, Scrivner et al. [11] defined AOIs for faces by drawing ellipses around the face, Lazarov et al. [12] used rectangles for defining face AOIs, and Võ et al. [13] used rectangular AOIs for mouths while using ellipses for faces, eyes, and noses. Another approach for defining AOIs is to draw custom shapes that follow the contour of the object of interest. For example, Tatler et al. [14] drew custom boundaries to define AOIs for various body parts and objects. A third approach is to segment the stimulus into grids and treat each grid cell as a separate AOI that could be associated with an object of interest (e.g., [15]).

This variability in AOI definitions has led to valid criticism of AOI-based methods. Although some researchers have suggested guidelines for defining AOIs [7, 16–18], there is no gold standard for defining AOIs. In addition, although methods that automatically generate AOIs have been put forward [10, 19–23], the dominant approach in eye-tracking studies is to manually define AOIs. Therefore, researchers often make subjective decisions in defining AOIs, causing the locations, shapes, and sizes of AOIs to vary even between studies that utilize similar stimuli [10, 24]. Combined with the fact that researchers rarely make their AOI definitions public [25], this variability and subjectivity could make inter-study comparison difficult and decrease the reproducibility of studies.

Another inherent problem with the AOI approach is that it can exacerbate the effect of video-based eye-tracking errors [26–28]. AOI-based methods classify fixations into dichotomous classes: either a fixation resides within an AOI or it does not. This is problematic because there will be fixations that reside very close to the boundary of an AOI (see Fig 1c for an example), and the inclusion or exclusion of these fixations becomes almost arbitrary considering the measurement errors. In other words, small measurement errors that make a fixation cross an AOI boundary will have a large effect on the overall dwell time since the inclusion and exclusion of fixations is decided by a hard decision boundary. Moreover, this dichotomous classification treats all fixations equally as long as the gaze resided within the AOI. In other words, this method does not take into account that a fixation located closer to the center of the AOI likely has a higher probability of being related to the object of interest than a fixation located very close to the AOI boundary.

Fig 1. Example of AOI dwell time and WSD calculation.

a) An example of a participant’s fixations (bluish-green dots, with bluish-green numbers denoting each fixation’s duration), an AOI definition (orange ellipse), and a POI definition (sky blue X), shown on a subset of an exemplar image. Although we present a partially blurred image here to protect privacy, participants saw real, unblurred images in the experiment. b) The uniform kernel weights used for the AOI dwell time calculation. c) An example of AOI dwell time calculation. d) The Gaussian kernel weights used for the WSD calculation. e) An example of WSD calculation. Note that the original fixation durations have been reweighted based on their proximity to the POI.

https://doi.org/10.1371/journal.pone.0250170.g001
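To make the hard-boundary problem concrete, the following sketch (hypothetical coordinates and durations, not taken from the dataset) shows how two pixels of measurement error flip a fixation's entire duration in or out of an elliptical AOI:

```python
import numpy as np

def aoi_dwell_time(fixations, durations, center, radii):
    """Sum the durations of fixations that land inside an elliptical AOI.

    The boundary is dichotomous: a fixation just outside the ellipse
    contributes nothing, one just inside contributes its full duration."""
    fx = np.asarray(fixations, dtype=float)
    d = (((fx[:, 0] - center[0]) / radii[0]) ** 2
         + ((fx[:, 1] - center[1]) / radii[1]) ** 2)
    return float(np.asarray(durations)[d <= 1.0].sum())

center, radii = (100.0, 100.0), (50.0, 30.0)
# 1 px outside the boundary: the 400 ms fixation is dropped entirely
print(aoi_dwell_time([(151.0, 100.0)], [400], center, radii))  # 0.0
# 2 px of measurement error moves it inside: full duration counted
print(aoi_dwell_time([(149.0, 100.0)], [400], center, radii))  # 400.0
```

The same 400 ms fixation thus contributes either nothing or everything depending on a sub-degree localization error, which is exactly the sensitivity the WSD kernel is designed to avoid.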

In an attempt to address methodological issues with the AOI method, various alternative methods have been suggested. One of the most common alternative methods is fixation map analysis and its variations [29, 30], where the location of each fixation and a metric related to each fixation (e.g., fixation duration) are mapped onto a three-dimensional space. Fixation maps provide an intuitive visualization of fixation dispersions and have been used for illustrative purposes in various studies. For example, with this method one can create heatmaps that locate fixations and color-code them by their duration, using hotter colors to signify longer fixation durations.

However, an important drawback of these methods is that it is difficult to apply statistical tests to assess significant differences within the stimulus space (i.e., it is hard to quantify whether participants are differentially looking at different parts of the image). Because of this, many studies use methods such as fixation maps for visualization but still rely on AOI-based methods to run their statistical tests and to draw their conclusions (e.g., [14]). It is worth noting that a toolbox (iMap) that does not use AOIs has been proposed to address this issue [24, 31]. However, this toolbox requires a normalized space shared across all stimuli to work properly (analogous to MRI images being normalized into standard atlas spaces, such as the Montreal Neurological Institute template, so that all individual participants' brains can be compared in the same space), which limits the types of eye-tracking studies with which it can be used. For example, this method could be applied to eye-tracking data where participants looked at a variety of human portrait images, since most portraits share common components, such as eyes, that appear in similar positions in the stimulus space. However, it would be difficult to apply this method to stimuli that are challenging to place in a normalized space, such as a set of abstract art pieces, where there may not be common features appearing in similar positions across works.

In this study, we propose a new method, which we call Weighted Sum Durations (WSD) analysis, that allows for fixation duration analyses while decreasing the variability of AOI definitions and retaining the interpretability of AOI dwell time analysis. This method utilizes Points-of-Interest (POIs), defined as single-pixel points in the stimulus space, as an alternative to AOIs. This substitution reduces the variability of AOI definitions by a large margin since POIs are defined only in terms of location, while AOIs are defined in terms of shape, size, and location. Furthermore, we demonstrate that the POIs can be defined in a data-driven fashion so that the locations of POIs do not rely fully on subjective decisions. Although the dimensionality of the AOI definition is reduced, the semantic meanings (e.g., faces) are still maintained by the POIs upon definition, allowing them to retain much of the interpretability of AOIs.

To calculate a dwell time-like metric, our method weights the duration of each fixation by the distance between the fixation and the POI and sums the weighted durations to produce a single metric per POI (i.e., the WSD). Specifically, an isotropic Gaussian kernel centered at each POI is used to weight the fixation durations. Importantly, the Gaussian kernel shares the same shape and size across all POIs, reducing the subjectivity and variability issues that plague the AOI method. Additionally, this approach naturally circumvents the problem of AOIs as dichotomous classification since there are no hard boundaries in this method and all fixations are weighted differently by their distance to the POIs. As an illustration of WSD analysis, we applied the POI-based method to a study that used AOI dwell time analysis [11]. We show that the results of this study can be replicated using WSD.

Materials and methods

Overview

This study used a previously collected eye-tracking dataset with 72 images and 90 participants [11]. This dataset is publicly available and can be downloaded from the Center for Open Science (https://osf.io/sfyj2/). Scrivner et al. [11] used AOI dwell time analysis for statistical testing. Here, we examined whether their results can be replicated using WSD analysis instead of AOI dwell time analysis. Although we used data from a study previously published in a peer-reviewed journal, this work does not constitute dual publication, since we are applying a novel analysis method to replicate the results of the previous study. The results using the AOI dwell time method are presented here only to allow for easy comparison between the conventional method and our newly proposed method. Although we provide a brief description of the dataset below, please see the original study for a more detailed explanation of the experimental design and the data collection process.

Participants

Ninety participants participated in the study (86 completed the demographics survey; median age = 20; 56 self-identified as female and 30 as male). All participants had normal or corrected-to-normal vision (with contacts) and spoke fluent English. Written informed consent was obtained from all participants. The experiment was approved by the Social Sciences Institutional Review Board at the University of Chicago, and all procedures were executed in accordance with the relevant regulations and guidelines.

Materials

Stimuli.

Seventy-two colored images depicting interactions between two adult males were shown to the participants in random order. All images were 1600 x 900 pixels and were collected from various media sources. One-third of the images (24 images) displayed violent interactions, one-third displayed friendly interactions, and one-third displayed ambiguous interactions (neither clearly violent nor friendly).

Apparatus.

Participants sat 95 cm away from a 24-inch LCD monitor. The resolution of the monitor was 1920 x 1080 pixels, and the images were displayed at the center of the screen in their native resolution. Sixty pixels corresponded to a visual angle (VA) of 1°. MATLAB with the Psychophysics Toolbox extension [3234] was used to present the stimuli. Eye movements were recorded from both eyes via an SR Research (Ottawa, Ontario, Canada) Eyelink 1000 eye tracker with a sampling rate of 500 Hz using head free-to-move remote mode. The eye tracker was calibrated using a nine-point calibration routine and validated for all participants individually before the experiment.

Procedure

Each participant went through a practice block and four main blocks. The practice block used six images that were not included in the main study. The 72 main-block images were randomly split into four blocks of 18 images for each participant. In each trial, an image was presented for 6 seconds, and participants were asked to look at the image naturally. After the image presentation, participants rated the degree of violence in the depicted interaction on a 7-point Likert scale, with ‘1’ indicating not violent and ‘7’ indicating extremely violent.

At the start of each trial, participants had to click a small dot with a diameter of 0.3° (18 pixels) that appeared at the center of the screen. The central dot served as an implicit required fixation location [35] where the participants had to fixate their gaze to aim and click the mouse cursor [36]. Since gazing at the central dot in the pre-stimulus period carried over to the first fixations, this allowed Scrivner et al. [11] to check the quality of the eye movement data at the trial level and to drift-correct the eye movement data based on the first fixations of each trial.

Eye-tracking data processing

Preprocessing.

The data were preprocessed using the Eyelink Data Viewer (SR Research) to acquire discrete fixation locations and the duration of each fixation. All first fixations were excluded from analysis since these fixations were carried over from clicking the central dot prior to the image being displayed.

Offset-correction and drift-correction.

The monitor used in this study had a resolution of 1920 x 1080 pixels, while the images presented had a size of 1600 x 900 pixels. Since images were presented in their native resolution, the coordinate of each fixation from the preprocessed data was corrected to account for this offset.

In addition to the offset-correction, Scrivner et al. [11] also accounted for the video-based eye-tracker's measurement error by drift-correcting the fixation locations based on the location of the first fixation in each trial. Specifically, the coordinate of the first fixation of each trial was taken to be the coordinate of the central dot, and the difference between the two was corrected. The direction of drift-correction mostly fell along the 90°/270° and 45°/225° axes (S1b Fig in S1 File). The mean magnitude of drift-corrections across all trials was 1.32° (79.39 pixels; SD = 1.39°; S1c Fig in S1 File).
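The first-fixation-based drift correction described above can be sketched as follows (a minimal illustration; the function name, example coordinates, and dot position are ours, not the released analysis code):

```python
import numpy as np

def drift_correct(fixations, dot_xy):
    """Shift all fixations in a trial by the offset between the trial's
    first fixation and the central dot, so that the first fixation lands
    on the dot and every other fixation moves by the same vector."""
    fx = np.asarray(fixations, dtype=float)
    offset = fx[0] - np.asarray(dot_xy, dtype=float)
    return fx - offset

# Hypothetical trial: the first fixation drifted 5 px right and 8 px down
corrected = drift_correct([(965.0, 548.0), (1200.0, 300.0)], (960.0, 540.0))
print(corrected[0])  # [960. 540.]
```

Because the correction is a single translation per trial, it removes constant drift but not noise that varies fixation-to-fixation, which motivates the robustness analysis below.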

Discarded trials.

All trials that had total fixation time (excluding the first fixation) less than 3,000 ms (half of the display time) were discarded to rule out trials with potential measurement errors or trials where participants were inattentive to the image. Furthermore, trials that had drift-corrections greater than 3 standard deviations from the mean were discarded from the analysis. In total, 292 trials across all participants (4.5%) were discarded and 52 participants had no discarded trials. On average, a participant had 3.24 discarded trials (SD = 5.77, median = 0, max = 23).
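The two exclusion criteria can be expressed as a simple predicate (a sketch; the drift-correction mean and SD defaults are plugged in from the values reported above):

```python
import numpy as np

def keep_trial(fix_durations_ms, drift_mag_deg,
               drift_mean=1.32, drift_sd=1.39, min_total_ms=3000.0):
    """Trial-exclusion rule: keep a trial only if its total fixation time
    (first fixation already excluded) is at least half the 6 s display
    time AND its drift-correction magnitude lies within 3 SDs of the
    mean magnitude across trials."""
    total_ok = float(np.sum(fix_durations_ms)) >= min_total_ms
    drift_ok = abs(drift_mag_deg - drift_mean) <= 3.0 * drift_sd
    return total_ok and drift_ok

print(keep_trial([1200, 1500, 800], 1.0))  # True
print(keep_trial([1000, 1000], 1.0))       # False (too little fixation time)
```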

AOI dwell time and WSD analysis

Calculation of AOI dwell time and WSD.

AOI dwell time was calculated by summing the durations of all fixations located within the AOI. This is equivalent to applying uniform kernel weights to fixation durations based on their corresponding fixation locations (Fig 1b) and summing them. WSD was also calculated by applying kernel weights to fixation durations and summing the results, but isotropic Gaussian kernels centered at the POI coordinates were used instead (Fig 1d). In other words, larger weights were applied to the durations of fixations located closer to the POI. The isotropic Gaussian kernel for each POI was constructed using a bivariate Gaussian probability density function with the POI location as the mean and an isotropic covariance matrix of the form Σ = σ²I (refer to later sections for how the POIs and σ were chosen for the WSD analysis). The kernel was divided by its maximum value so that the weights were normalized to [0, 1], and the weights were rounded to three decimal places.
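With the kernel normalized to a peak of 1, the weight for a fixation at distance d from the POI reduces to exp(−d²/(2σ²)), and WSD is the weighted sum of durations. A minimal sketch (the function name is ours; σ = 45 px anticipates the value derived later in the text):

```python
import numpy as np

def wsd(fixations, durations, poi, sigma=45.0):
    """Weighted Sum Durations for a single POI.

    Each fixation's duration is weighted by an isotropic Gaussian kernel
    centered on the POI, normalized to a maximum of 1 and rounded to
    three decimal places (sigma = 45 px, i.e. 0.75 deg at 60 px/deg)."""
    fx = np.asarray(fixations, dtype=float)
    d2 = ((fx - np.asarray(poi, dtype=float)) ** 2).sum(axis=1)
    weights = np.round(np.exp(-d2 / (2.0 * sigma ** 2)), 3)
    return float(np.dot(weights, durations))

# A fixation on the POI keeps its full 500 ms; one 60 px (1 deg) away
# is down-weighted to 0.411 of its duration.
print(wsd([(100.0, 100.0), (160.0, 100.0)], [500, 500], (100.0, 100.0)))
```

Here the two 500 ms fixations yield 500 + 0.411 × 500 = 705.5 ms, rather than the all-or-nothing contribution a hard AOI boundary would produce.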

AOI definitions.

Scrivner et al. [11] defined three types of AOIs for their analysis: faces, points of contact, and objects. They defined the AOIs by drawing ellipses around the objects of interest (see Fig 2a for an example). Since dwell time on the object AOIs was not significantly related to any results in the original study, we only conducted analyses using the face AOIs and point of contact AOIs, without any modification from the original study. The face AOIs were defined for all 72 images, while point of contact AOIs were defined for the 37 images that contained contact points.

Fig 2. Example of AOI/POI definitions and selected mixture components.

a) AOI definitions from the original study (ellipses) are illustrated on an exemplar image. b) The average BIC for GMMs fitted with different numbers of components (k). The error bar denotes the standard deviation of BIC across 50 fitted GMMs. The reddish-purple dotted line denotes the number of components with the lowest average BIC. c) A visualization of the Gaussian components of the fitted GMM, using the selected number of components, that had the lowest BIC. The yellow dots represent the offset- and drift-corrected locations of all fixations across all participants for the exemplar image. The components are visualized as semi-transparent colored circles centered at the component means with radii of 2σ. The components selected to match the AOIs have black boundary lines. d) An example of selected mixture components that correspond to the AOIs defined in the original study. The means of the selected mixture components (Xs) were used as the POIs.

https://doi.org/10.1371/journal.pone.0250170.g002

POI definitions and determining the σ of the WSD Gaussian kernel.

Since each POI serves as the center of the weighting kernel, the optimal position of a POI is the center of a fixation cluster. Building on this, we defined the POIs for our analysis using the bivariate Gaussian Mixture Model (GMM) [37, 38], a model used for a variety of tasks such as clustering and density estimation. Although the k-means clustering algorithm is more widely used in clustering analyses of fixation data [39–41], we chose the GMM because it also allows us to estimate fixation densities, not just cluster membership. GMMs assume that the data are generated from a mixture of normally distributed components, each with a unique mean and variance. The data are bivariate in our case (i.e., each fixation location has an x-coordinate and a y-coordinate), so each Gaussian component has a 2 x 1 mean vector and a 2 x 2 covariance matrix. The mean vector and covariance matrix of each Gaussian component can be estimated using algorithms such as the Expectation-Maximization (EM) algorithm [42], which we used for fitting GMMs in this study. This setting closely parallels WSD, which weights fixations by a Gaussian kernel centered at the POIs, making the GMM a natural tool to guide POI definitions and to estimate the covariance of the Gaussian kernel for WSD analysis.

Because the Gaussian kernel for WSD analysis is designed to be isotropic, we used the spherical GMM, a special form of GMM that restricts the covariance matrix to be isotropic [43]. In other words, all Gaussian components in a spherical GMM have a covariance matrix of the form σ²I (i.e., no covariance between the x- and y-coordinates). For each image, we aggregated all offset- and drift-corrected fixation locations across all participants. As we lacked a strong theoretical justification for the optimal number of components (k) to initialize (i.e., we knew that fixations would likely cluster around the faces of the two interacting adults, but we could not justify that those faces were the only parts of an image that would draw fixations), we ran a grid search over k ∈ {1, …, 50} using the Bayesian information criterion (BIC) [44] as the evaluation metric (Fig 2b) for each image. Specifically, 50 spherical GMMs were fitted using the EM algorithm for each number of components, and the number of components with the lowest average BIC was used. The mean number of components used for GMM fitting was 12.63 (SD = 2.52, min = 6, max = 19; S2 Fig in S1 File).
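The spherical-GMM grid search can be sketched with scikit-learn's GaussianMixture (a simplified illustration on synthetic data; the grid and repeat counts are reduced here, and the original fitting options may differ):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def best_k_spherical_gmm(fixations, k_range=range(1, 51), n_fits=50, seed=0):
    """Fit spherical GMMs repeatedly for each candidate number of
    components k and return the k with the lowest mean BIC."""
    X = np.asarray(fixations, dtype=float)
    rng = np.random.RandomState(seed)
    mean_bic = {}
    for k in k_range:
        bics = [GaussianMixture(n_components=k, covariance_type="spherical",
                                random_state=rng.randint(2 ** 31)).fit(X).bic(X)
                for _ in range(n_fits)]
        mean_bic[k] = float(np.mean(bics))
    return min(mean_bic, key=mean_bic.get)

# Two well-separated synthetic fixation clusters: BIC should select k = 2
rng = np.random.RandomState(1)
pts = np.vstack([rng.normal((0, 0), 5, (200, 2)),
                 rng.normal((100, 100), 5, (200, 2))])
print(best_k_spherical_gmm(pts, k_range=range(1, 5), n_fits=5))
```

`covariance_type="spherical"` enforces the σ²I covariance described above, so each fitted component directly supplies a candidate POI (its mean) and a candidate kernel σ.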

After determining the number of mixture components, we visualized the Gaussian components of the fitted GMM with the chosen number of components that had the lowest BIC (Fig 2c). We then identified components that semantically matched each AOI from Scrivner et al. [11] and used the mean of the identified component to define the POI for that AOI (Fig 2d).

When we were unable to find a Gaussian component that matched a previously defined AOI from Scrivner et al. [11], implying that fixations were not clustered near that AOI, we defined the POI as the center of the AOI from the original study. No face POI was defined in this manner, while 10 point of contact POIs were. Finally, the σ's of all Gaussian components corresponding to AOIs were averaged to obtain the σ value for the WSD analysis. The averaged σ value was 45.01 pixels (SD = 13.04 pixels), equivalent to 0.75° of visual angle (S3 Fig in S1 File). Building on this, we used 0.75° (45 pixels) as the σ for the WSD analysis. To illustrate the weighting, a fixation located 1° away from the POI received a weight of approximately 0.41, and a fixation located 2° away received a weight of approximately 0.03.

Calculation of AOI/POI saliency.

The physical saliency of each pixel was calculated using the Graph-Based Visual Saliency algorithm [45]. Then, the same kernel weights used in the AOI dwell time calculation (Fig 1b) and the WSD calculation (Fig 1d) were applied to the physical saliency of each pixel and summed to calculate the physical saliency of AOIs and POIs. The physical saliency of AOIs and POIs was normalized by the total saliency of each image and was included in all statistical analyses to control for the physical saliency of AOIs and POIs.
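The kernel-weighted saliency computation can be sketched as follows (a simplified version; the GBVS map itself comes from the MATLAB script referenced below, so a uniform placeholder array stands in for it here):

```python
import numpy as np

def poi_saliency(sal_map, poi, sigma=45.0):
    """Kernel-weighted physical saliency of a POI, normalized by the
    image's total saliency. The same Gaussian kernel used for WSD is
    applied to the per-pixel saliency map and summed."""
    h, w = sal_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    d2 = (xx - poi[0]) ** 2 + (yy - poi[1]) ** 2
    kernel = np.exp(-d2 / (2.0 * sigma ** 2))
    return float((kernel * sal_map).sum() / sal_map.sum())

# Placeholder saliency map (uniform); a real analysis would use GBVS output
sal = np.ones((900, 1600))
s = poi_saliency(sal, (800, 450))
```

For a uniform map, the result is simply the kernel mass divided by the image area (roughly 2πσ² / (900 × 1600) ≈ 0.009 here); a real GBVS map concentrates the value on physically salient POIs.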

Linear Mixed-Effects Models

Trial-level Linear Mixed-Effects Models (LMMs) were fitted to both the AOI dwell time data and the WSD data. LMMs can isolate the effect of interest while controlling for differences between participants and stimuli [46]. To control for these differences, random intercepts were included for stimulus and participant (i.e., they were random effects) in all models. When the outcome variable was AOI dwell time, the LMM included the saliency and the size of the AOIs as fixed effects to control for both. When the outcome variable was WSD, the LMM included the saliency of the POIs as a fixed effect. We did not include the size of POIs in the model since the POIs and Gaussian kernels used in the WSD calculations all had the same size. All model statistics (b estimates, confidence intervals, t-values, p-values, marginal R2, and conditional R2) are reported in S1 Table in S1 File.

Robustness to noise

To investigate how AOI dwell time and WSD differ as the level of noise increases, we repeated the analysis after systematically adding noise to the drift-corrected fixation locations. Specifically, we generated Gaussian noise separately for the horizontal and vertical coordinates and added that noise to each trial's fixation locations (similar to [47]). This method was chosen because eye-tracking devices generally produce white noise even when artificial eyes are used for recording [48, 49]. Then, AOI dwell times and WSDs were calculated from the altered dataset using the same AOI and POI definitions as in the main analysis. Finally, LMMs investigating the relationship between violence rating and fixation durations on faces, which was the weakest relationship in the original study, were fitted to the data. This process was repeated 1,000 times for four different levels of noise (standard deviations of 0.25°, 0.5°, 0.75°, and 1°). We then counted how many times the tested relationship reached significance at three significance levels (α = 0.01, 0.05, 0.1). The Gaussian noise was generated using NumPy's random module [50], and models that failed to converge were excluded from the analysis.
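The noise-injection step can be sketched as follows (a minimal illustration; the function name is ours, and 60 px per degree of visual angle follows the apparatus section):

```python
import numpy as np

def add_fixation_noise(fixations, sd_deg, px_per_deg=60.0, rng=None):
    """Add isotropic Gaussian (white) noise to fixation locations.

    The noise SD is specified in degrees of visual angle and applied
    independently to the horizontal and vertical coordinates."""
    rng = np.random.default_rng(rng)
    fx = np.asarray(fixations, dtype=float)
    return fx + rng.normal(0.0, sd_deg * px_per_deg, size=fx.shape)

fx = np.zeros((10000, 2))
noisy = add_fixation_noise(fx, sd_deg=0.5, rng=0)
# empirical SD should be close to 0.5 deg * 60 px/deg = 30 px
```

Recomputing AOI dwell time and WSD on such perturbed fixations, over many repetitions and noise levels, gives the significance counts reported in the Results.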

Code availability and software acknowledgment

All code used for the data analysis, including Python functions that can be generalized to other eye-movement datasets, can be downloaded from https://osf.io/wgma5/. The SciPy [51], pandas [52], and NumPy [50] packages in Python 3 were used for general data processing and analysis, including the calculation of WSD. The GMM fitting and BIC calculation were performed using the GaussianMixture class from the scikit-learn [53] package in Python 3. The LMMs were fitted using the lmerTest [54] package, built on top of the lme4 [55] package, in R [56]. The tidyverse [57] package was also used for general data manipulation in R. For visualization, the matplotlib [58] package in Python 3 and the ggplot2 [57] package in R were used. The colorblind-friendly color templates from Wong [59] and Brewer [60] were used for color selection. Finally, MATLAB (The MathWorks, Natick, MA) was used for extracting the AOIs from the original dataset and calculating the physical saliency of images with the GBVS algorithm [45]. The GBVS script was downloaded from http://www.vision.caltech.edu/~harel/share/gbvs.php.

Supplementary analysis

We also repeated the analysis setting the full width at half maximum (FWHM) of the Gaussian kernel to 2° (i.e., σ ≈ 0.85°; S2 Table in S1 File). This value corresponds to the size of foveal vision, which is about 2° in diameter [61–64], and was used for fixation map analysis in previous studies [65, 66]. In addition, we conducted an additional analysis setting all POI definitions to the center of the corresponding AOI ellipse, rather than using GMMs, with σ = 0.75° (S3 Table in S1 File) and σ = 0.85° (S4 Table in S1 File). The results were not substantially different from those reported in the main article.

Results

Correlation between AOI dwell time and WSD

Dwell time on AOIs and WSD of POIs were highly positively correlated for both face AOIs/POIs (r(6180) = 0.80, p < .001; Fig 3a) and point of contact AOIs/POIs (r(3183) = 0.81, p < .001; Fig 3b). Interestingly, 18.68% of the trials using images with point of contact AOIs (595 of 3185 trials) had zero point of contact AOI dwell time but non-zero point of contact WSD. In contrast, only 3.15% of trials (195 of 6182 trials) had zero face AOI dwell time but non-zero face WSD. This suggests that, compared to face AOIs, point of contact AOIs may have neglected a large number of fixations that were close enough to the POI to receive weight in the WSD analysis.

Fig 3. Correlation between AOI dwell time and WSD.

a) Correlation between dwell time on face AOIs and face WSDs. b) Correlation between dwell time on point of contact AOIs and point of contact WSDs. The sky blue line denotes the fitted regression line. The shaded region denotes the 95% confidence interval (barely visible due to the large number of data points). Each point denotes a trial. The orange rectangle highlights the trials with zero AOI dwell time. The number of trials in b) is smaller than in a) because fewer stimuli had point of contact AOIs/POIs, while all stimuli had face AOIs/POIs.

https://doi.org/10.1371/journal.pone.0250170.g003

Replication of the original study

We tested whether the WSD analysis is robust enough to replicate the three main findings of Scrivner et al. [11]. Analogous to the original study, trial-level LMMs were used to investigate the relationship between the outcome variable and the predictor variable in all analyses. Note that we excluded some outlier trials based on the magnitude of drift-correction, which was not done in the original study, so the reported statistics may deviate slightly from those reported in the original study.

Interaction type and fixation durations on faces.

The first major finding of Scrivner et al. [11] was that participants showed less dwell time on face AOIs when looking at images showing violent interactions compared to when they were looking at images showing friendly interactions or ambiguous interactions. As a baseline, we first fitted LMMs using dwell time on face AOIs as the outcome variable and the depicted interaction type in images as the predictor variable. Analogous to the results from the previous study, we found that participants fixated significantly less inside face AOIs when the interaction shown in the image was violent than when it was friendly (b = -360.49, 95% CI [-615.62, -105.38], t = −2.807, p = .006) or ambiguous (b = −327.58, 95% CI [-580.33, -74.85], t = −2.575, p = .012; Fig 4a). Next, we used WSD for face POIs as the outcome variable and depicted interaction type in the images as the predictor variable for the LMM. We found that participants showed significantly less WSD for face POIs when looking at images showing violent interaction compared to looking at images showing friendly interaction (b = −167.37, 95% CI [-327.41, -7.33], t = −2.077, p = .041) or looking at images showing ambiguous interaction (b = −170.62, 95% CI [-328.08, -13.17], t = −2.153, p = .035; Fig 4b).

Fig 4. Fixation durations on faces by interaction type.

a) AOI dwell time on face AOIs by predefined interaction type. b) WSD on face POIs by predefined interaction type. Error bars represent ±1 SD. Although the values look similar, WSD and AOI dwell time use different scales and cannot be compared directly.

https://doi.org/10.1371/journal.pone.0250170.g004

Violence rating and fixation durations on faces.

The second main finding of Scrivner et al. [11] was that participants showed less dwell time on face AOIs for images they rated as more violent. We first attempted to replicate this finding by fitting an LMM using the violence ratings given by participants as the predictor variable and AOI dwell time on faces as the outcome variable. To account for individual differences in standards for violence judgment, we z-scored the violence ratings within participants. In line with the results using interaction type as the predictor variable, participants spent less time fixating inside the face AOIs when they rated the depicted interaction as more violent (b = −54.66, 95% CI [−102.96, −5.95], t = −2.253, p = .024; Fig 5a). Furthermore, the z-scored violence rating was a significant predictor in the LMM using WSD on face POIs as the outcome variable (b = −37.80, 95% CI [−72.75, −2.35], t = −2.157, p = .031; Fig 5b).
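The within-participant z-scoring used as the predictor here can be sketched as follows (an illustrative helper, not the analysis code from the study; the function and variable names are our own):

```python
import numpy as np

def zscore_within(values, groups):
    """Z-score each participant's ratings against that participant's own
    mean and SD, so the predictor reflects relative rather than absolute
    violence judgments. `groups` holds one participant ID per rating.
    (Illustrative sketch; not the study's analysis code.)"""
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    out = np.empty_like(values)
    for g in np.unique(groups):
        mask = groups == g
        out[mask] = (values[mask] - values[mask].mean()) / values[mask].std(ddof=1)
    return out
```

After this transformation every participant's ratings have mean 0 and unit SD, so a one-unit change in the predictor means "one SD more violent than that participant's average rating."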

Fig 5. Fixation durations on faces by violence rating.

a) AOI dwell time on Face AOIs by violence rating determined by individual participants. b) WSD on Face POIs by violence rating determined by individual participants. Error bars represent ±1 SD. Note that for the LMM analysis violence ratings were z-scored within participants; the original ratings are shown here for illustrative purposes. Although the values look similar, WSD and AOI dwell time use different metrics and cannot be compared directly.

https://doi.org/10.1371/journal.pone.0250170.g005

Interaction type and fixation duration on points of contact.

The third finding of Scrivner et al. [11] concerned participants’ dwell time on point of contact AOIs when viewing images that contained all three AOIs (face, point of contact, and object held by a person). When viewing these images, participants showed increased dwell time on point of contact AOIs for violent interactions compared to friendly interactions. We tested this effect by fitting an LMM using interaction type as the predictor variable and dwell time on point of contact AOIs as the outcome variable. In line with the results from the original study, we found that participants fixated significantly longer on the point of contact AOIs when viewing violent images that contained all three AOIs than when viewing friendly images that contained all three AOIs (b = 285.68, 95% CI [110.85, 460.54], t = 3.477, p = .005; Fig 6a). Furthermore, we tested whether this effect replicated when we used the WSD on point of contact POIs as the outcome variable. In the 12 images with all three AOIs defined, participants’ WSD on the point of contact POI was significantly higher when viewing violent images than when viewing friendly images (b = 177.28, 95% CI [41.68, 312.90], t = 2.782, p = .017; Fig 6b).

Fig 6. AOI dwell time and WSD on point of contact by interaction type.

a) AOI dwell time on points of contact by predefined interaction type. b) WSD on points of contact by predefined interaction type. Error bars represent ±1 SD. Although the values look similar, WSD and AOI dwell time use different metrics and cannot be compared directly.

https://doi.org/10.1371/journal.pone.0250170.g006

The Effect of noise on subsequent linear models.

We calculated AOI dwell times and WSDs on the altered data with added Gaussian noise and fitted LMMs using AOI dwell time or WSD on faces as the outcome variable and z-scored violence rating as the predictor variable. Out of 8,000 LMMs fitted to the data from 4,000 generated datasets (1,000 datasets for each of 4 Gaussian noise distributions in which sigma was manipulated across 4 levels), 187 LMMs (2.34%) failed to converge and were excluded from analysis (101 used AOI dwell time as the outcome variable; 86 used WSD). The mean p-value for the LMMs fitted using AOI dwell time was higher than that of LMMs fitted using WSD across all noise levels (S4a Fig in S1 File). Additionally, the proportion of LMMs that showed statistically significant relationships was higher for LMMs fitted using WSD than for LMMs fitted using AOI dwell time across all noise levels and all significance levels (S4b Fig in S1 File). Similar results were obtained when we added noise to each fixation or each participant instead of each trial. These results suggest that WSD was less affected by added noise than AOI dwell time.
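The trial-level noise manipulation can be sketched as follows (an illustrative sketch only; the function name and data layout are our assumptions, and the actual simulation varied sigma across four levels and 1,000 datasets per level):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def add_trial_noise(fix_xy, sigma):
    """Shift every fixation in a trial by one shared Gaussian offset,
    simulating a per-trial error such as drift. `fix_xy` is an (n, 2)
    array of fixation coordinates; `sigma` controls the noise level.
    (Illustrative; not the study's simulation code.)"""
    offset = rng.normal(0.0, sigma, size=2)
    return np.asarray(fix_xy, dtype=float) + offset
```

Per-fixation or per-participant noise, as mentioned above, would correspond to drawing a fresh offset for each fixation or a single offset reused across all of a participant's trials.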

Discussion

We developed and validated a point-of-interest-based method for fixation duration analysis, Weighted Sum Durations (WSD), by replicating three main results from previous research [11] which used AOI dwell time analysis [7]. Given that WSD is robust enough to replicate the results from AOI dwell time analysis, we suggest that WSD could be a valuable alternative approach since it has some advantages over both AOI-based approaches and approaches that do not use AOIs. WSD analyses decrease the subjectivity and variability of AOIs [10, 16, 24, 25] by utilizing POIs instead of AOIs. Moreover, the POI approach still follows the basic framework of AOI-based approaches to provide a metric that can be directly substituted for AOI dwell time for statistical testing in the stimulus space.

Furthermore, the WSD approach does not use a hard boundary that classifies fixations dichotomously as inside or outside the AOI; instead, it uses a soft boundary that down-weights fixations that are far from POIs. This is advantageous over the AOI dwell time approach because it can mitigate the adverse effects of video-based eye-tracking errors [26–28] for fixations located where it is difficult to judge whether the fixation is related to the object of interest. In other words, small measurement errors that could push a fixation across an AOI boundary have less of an effect in the WSD approach, since there is no hard boundary. Additionally, by up-weighting fixations that are closer to POIs and down-weighting fixations that are further away, researchers can more directly take into account the probability that a fixation is related to the object of interest.
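The soft-boundary weighting can be sketched in a few lines of Python (an illustrative sketch, not the published implementation at https://osf.io/wgma5/; the function and variable names are our own, and we assume an unnormalized isotropic Gaussian kernel with the σ = 0.75° used in this study):

```python
import numpy as np

def wsd(fix_x, fix_y, fix_dur, poi_x, poi_y, sigma=0.75):
    """Weighted Sum Durations for one POI: each fixation's duration is
    weighted by an isotropic Gaussian kernel centered on the POI, and the
    weighted durations are summed. Coordinates and sigma are assumed to
    be in degrees of visual angle. (Illustrative sketch.)"""
    d2 = (np.asarray(fix_x) - poi_x) ** 2 + (np.asarray(fix_y) - poi_y) ** 2
    # Soft boundary: the weight decays smoothly with distance instead of
    # flipping at a hard AOI border.
    weights = np.exp(-d2 / (2 * sigma ** 2))
    return float(np.sum(weights * np.asarray(fix_dur)))
```

Under this weighting, a fixation exactly on the POI contributes its full duration and a fixation several degrees away contributes almost nothing, so a small measurement error changes a weight only slightly rather than flipping a dichotomous in/out decision.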

POIs could also be useful when objects in the stimulus space make up a small portion of the image or do not have natural boundaries. For example, Scrivner et al. [11] defined AOIs for points of contact. As the word “point” implies, this object of interest is inherently centered on a single point, and it becomes very difficult to decide where to draw the boundary of the AOI. As a result, a large number of trials (942 trials, or 29.58%) had zero AOI dwell time for the point of contact. However, a large proportion of these trials (595 trials, or 63.16%) had non-zero WSDs for the points of contact. Our results suggest that the POI approach can be a strong alternative when we can be sure of the central point of an object but uncertain of where its discrete boundary lies.

Another advantage of WSDs over AOI dwell times is that POIs are easier to store and share than AOIs. There is no standard programming language or data structure for defining and storing AOIs. Combined with the relatively large amount of information needed to define AOIs, this means that researchers often need to learn new programs and data structures, and convert these idiosyncratic structures into a familiar format, in order to access AOI information defined by other researchers. In contrast, POIs are just coordinates attached to images, and only two floating point numbers are required to recreate each POI definition. This enables the storage and sharing of POI definitions without idiosyncratic data formats with multiple layers of information. For example, our implementation, which can be openly downloaded at https://osf.io/wgma5/, requires only three columns (image name, x-coordinate, y-coordinate) in a CSV file to store each POI. Researchers can access and examine the POI definitions without having to wrestle with various data formats. With the recent emphasis on open science and replicable research across multiple domains [67–71], this simplicity in sharing definitions could be an important advantage of POI-based WSDs over AOI dwell time.
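For instance, the three-column layout described above can be written and read with nothing beyond the standard library (the column names and file contents here are illustrative, not taken from the published definitions):

```python
import csv
import io

# Each POI is fully described by an image name and two coordinates,
# matching the three-column CSV layout described in the text.
pois = [
    {"image": "scene_01.jpg", "x": 412.5, "y": 301.0},
    {"image": "scene_01.jpg", "x": 155.0, "y": 298.5},
]

# Write the definitions to CSV (an in-memory buffer stands in for a file).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["image", "x", "y"])
writer.writeheader()
writer.writerows(pois)

# Reading them back requires no special tooling or data structures.
buf.seek(0)
restored = [
    {"image": row["image"], "x": float(row["x"]), "y": float(row["y"])}
    for row in csv.DictReader(buf)
]
```

The round trip recovers the definitions exactly, which is what makes POI files easy to inspect, version, and share compared with tool-specific AOI formats.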

In this article, we defined most of the POIs used for analysis in a data-driven way based on GMM. This data-driven approach could be seen as another advantage of POIs over AOIs, since POIs can be guided by the data in this way, while AOIs often have to rely solely on subjective decisions about where to draw the boundaries. However, the data-driven approach has the drawback that the POIs are not known prior to data collection, making it difficult to tailor the design for specific hypothesis testing (e.g., no GMM component mean may be located near an object of interest). For example, we were not able to find GMM components that matched some of the point of contact AOIs defined before the experiment, because there was no fixation cluster near those AOIs. One way of circumventing this issue would be to conduct a small pilot study to ensure that fixations cluster near the objects of interest, though this increases the cost of the research. Another is to pick semantically meaningful locations and use them to supplement the data-driven POI definitions. However, this reintroduces much of the subjectivity in POI definitions that the data-driven approach addresses. Additional research will be required to develop POI definition methods that address this issue further, such as using computer vision algorithms to define POIs based on semantically relevant objects.
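As a sketch of this data-driven definition, one could fit Gaussian mixture models to an image's pooled fixation coordinates and keep the component means of the BIC-minimizing model as candidate POIs (illustrative only: the covariance type, component range, and function names are our assumptions, not the study's exact settings; scikit-learn provides the GMM implementation):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def data_driven_pois(fixations, max_components=10, random_state=0):
    """Fit GMMs with 1..max_components components to an image's pooled
    fixation coordinates and return the component means of the model
    with the lowest BIC as candidate POIs. `fixations` is an (n, 2)
    array. (Illustrative sketch; settings are assumptions.)"""
    X = np.asarray(fixations, dtype=float)
    models = [
        GaussianMixture(k, covariance_type="spherical",
                        random_state=random_state).fit(X)
        for k in range(1, max_components + 1)
    ]
    # BIC trades fit against model complexity, penalizing extra components.
    best = min(models, key=lambda m: m.bic(X))
    return best.means_
```

Candidate means that land near no meaningful object, or objects near which no fixation cluster forms, illustrate exactly the limitation discussed above.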

While we showed that our POI-based approach is quite robust, this does not mean that the capabilities of the WSD approach cannot be enhanced. Some hyperparameters could be fine-tuned through additional research. One important hyperparameter in the WSD analysis is the σ of the Gaussian kernel. In this study, we set the σ of the Gaussian kernels to 0.75°, but it is uncertain whether this is the ideal value when examining fixation durations. The approach has to be applied to more datasets to uncover the optimal σ. In addition, it is uncertain whether there is a shared σ that works well across diverse images with different sizes and objects. More research is needed to evaluate whether there is a generalizable value that works for most studies, or whether researchers need to calibrate σ for their purposes. Another important hyperparameter is the covariance matrix. In this study, we used an isotropic covariance matrix for the Gaussian kernel to keep our assumptions minimal. However, some studies have shown that other forms of the covariance matrix could be more suitable for modeling human fixation patterns [72]. Future research could examine the effect of using other forms of the Gaussian kernel in applying WSD.

Due to the novelty of using POIs in fixation data analysis, there are some limitations to our findings. One important limitation is that we only used one dataset to test the newly proposed method. Though the general concept should apply to other datasets, it is difficult to fully know how generalizable the method will be. We also only tested the Gaussian kernel for weighting in the WSD analysis. Although Gaussian kernels are among the most widely used kernels in various methods such as smoothing, it would be interesting to vary the weighting method and see how this affects the results of the WSD analysis. Furthermore, the WSD method has only been tested on static images; it is not certain that it can be readily applied to experiments using non-static stimuli. Moreover, we only compared WSD with AOI dwell time calculated using AOIs defined manually by researchers. We did not investigate how WSD compares to dwell time using automatically generated AOIs. It is possible that WSD works better for certain types of scenes or AOIs than for others.

Another limitation worth mentioning is that while this method retains the interpretability of AOI dwell time and reduces the subjectivity of AOI definitions, it may introduce some interpretability issues of its own. Since WSDs are sums of transformed fixation durations rather than raw sums of fixation durations, they do not provide intuitive explanations such as “participants fixated on the faces for 300 ms.” If an intuitive explanation is favored for the research purpose, WSDs may be less useful than traditional AOI dwell time analysis. In addition, although the POI method reduces the subjectivity of placement to a single dimension (location), some subjectivity remains with regard to where the POI should be placed. However, completely data-driven solutions can be implemented when fixations cluster around an object of interest. Additional research using the proposed method is required to address the above-mentioned limitations and to further validate the POI-based method.

Conclusion

Weighted Sum Durations analysis based on POIs was proposed as an alternative to AOI dwell time analysis. Using POIs instead of AOIs decreases the subjectivity and variability of AOI definitions and addresses the dichotomous classification problem of AOIs (i.e., whether a fixation right on the border of an AOI, or very near it, falls within the AOI or not). We checked the robustness of the WSD approach by replicating results from research that used AOI dwell time analysis. The findings of this study provide researchers with a new tool for assessing fixation durations that can be easily replicated and shared across researchers.

Acknowledgments

The authors thank Dr. Philip D. Waggoner for his insightful comments, especially regarding the use of Gaussian Mixture Models. The authors thank Mr. Mark Horvath for giving us permission to use his photo (used in Figs 1 and 2; retrieved from https://www.prweb.com/releases/2015/03/prweb12600961.htm) for this paper.

References

1. Buswell GT. How people look at pictures: a study of the psychology and perception in art. The University of Chicago Press; 1935.
2. Yarbus AL. Role of eye movements in the visual process. Nauka; 1965.
3. Duchowski AT. A breadth-first survey of eye-tracking applications. Behavior Research Methods, Instruments, & Computers. 2002;34(4):455–470.
4. Rayner K. The 35th Sir Frederick Bartlett Lecture: Eye movements and attention in reading, scene perception, and visual search. Quarterly Journal of Experimental Psychology. 2009;62(8):1457–1506.
5. Winkler S, Subramanian R. Overview of Eye tracking Datasets. In: 2013 Fifth International Workshop on Quality of Multimedia Experience (QoMEX); 2013. p. 212–217. Available from: https://doi.org/10.1109/QoMEX.2013.6603239.
6. Wedel M, Pieters R. A review of eye-tracking research in marketing. In: Review of marketing research. Routledge; 2017. p. 123–147.
7. Holmqvist K, Nyström M, Andersson R, Dewhurst R, Jarodzka H, Weijer Jvd. Eye Tracking: A comprehensive guide to methods and measures. OUP Oxford; 2011.
8. Dewhurst R, Nyström M, Jarodzka H, Foulsham T, Johansson R, Holmqvist K. It depends on how you look at it: Scanpath comparison in multiple dimensions with MultiMatch, a vector-based approach. Behavior Research Methods. 2012;44(4):1079–1100.
9. Borys M, Plechawska-Wójcik M. Eye-tracking metrics in perception and visual attention research. European Journal of Medical Technologies. 2017;3:11–23.
10. Hessels RS, Kemner C, van den Boomen C, Hooge ITC. The area-of-interest problem in eyetracking research: A noise-robust solution for face and sparse stimuli. Behavior Research Methods. 2016;48(4):1694–1712.
11. Scrivner C, Choe KW, Henry J, Lyu M, Maestripieri D, Berman MG. Violence reduces attention to faces and draws attention to points of contact. Scientific Reports. 2019;9(1):17779.
12. Lazarov A, Abend R, Bar-Haim Y. Social anxiety is related to increased dwell time on socially threatening faces. Journal of Affective Disorders. 2016;193:282–288.
13. Võ MLH, Smith TJ, Mital PK, Henderson JM. Do the eyes really have it? Dynamic allocation of attention when viewing moving faces. Journal of Vision. 2012;12(13):3–3.
14. Tatler BW, Wade NJ, Kwan H, Findlay JM, Velichkovsky BM. Yarbus, Eye Movements, and Vision. i-Perception. 2010;1(1):7–27.
15. Hunnius S, Geuze RH. Developmental Changes in Visual Scanning of Dynamic Faces and Abstract Stimuli in Infants: A Longitudinal Study. Infancy. 2004;6(2):231–255.
16. Goldberg JH, Helfman JI. Comparing Information Graphics: A Critical Look at Eye Tracking. In: Proceedings of the 3rd BELIV’10 Workshop: BEyond Time and Errors: Novel EvaLuation Methods for Information Visualization. BELIV’10. New York, NY, USA: Association for Computing Machinery; 2010. p. 71–78. Available from: https://doi.org/10.1145/2110192.2110203.
17. Hooge I, Camps G. Scan path entropy and arrow plots: capturing scanning behavior of multiple observers. Frontiers in Psychology. 2013;4:996.
18. Orquin JL, Ashby NJS, Clarke ADF. Areas of Interest as a Signal Detection Problem in Behavioral Eye-Tracking Research. Journal of Behavioral Decision Making. 2016;29(2-3):103–115.
19. Duchowski AT, Gehrer NA, Schönenberg M, Krejtz K. Art Facing Science: Artistic Heuristics for Face Detection: Tracking Gaze When Looking at Faces. In: Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications. ETRA’19. New York, NY, USA: Association for Computing Machinery; 2019. p. 1–5. Available from: https://doi.org/10.1145/3317958.3319809.
20. Fuhl W, Kuebler T, Santini T, Kasneci E. Automatic Generation of Saliency-Based Areas of Interest for the Visualization and Analysis of Eye-Tracking Data. In: Proceedings of the Conference on Vision, Modeling, and Visualization. EG VMV’18. Goslar, DEU: Eurographics Association; 2018. p. 47–54. Available from: https://doi.org/10.2312/vmv.20181252.
21. Fuhl W, Kübler T, Sippel K, Rosenstiel W, Kasneci E. Arbitrarily shaped areas of interest based on gaze density gradient. In: European Conference on Eye Movements 2015; 2015. p. 5.
22. Fuhl W, Kuebler T, Brinkmann H, Rosenberg R, Rosenstiel W, Kasneci E. Region of interest generation algorithms for eye tracking data. In: Proceedings of the 3rd Workshop on Eye Tracking and Visualization. ETVIS’18. New York, NY, USA: Association for Computing Machinery; 2018. p. 1–9. Available from: http://doi.org/10.1145/3205929.3205937.
23. Wolf J, Hess S, Bachmann D, Lohmeyer Q, Meboldt M. Automating areas of interest analysis in mobile eye tracking experiments based on machine learning. Journal of Eye Movement Research. 2018;11(6). pmid:33828716
24. Caldara R, Miellet S. iMap: a novel method for statistical fixation mapping of eye movement data. Behavior Research Methods. 2011;43(3):864–878.
25. Purucker C, Landwehr JR, Sprott DE, Herrmann A. Clustered insights: Improving Eye Tracking Data Analysis using Scan Statistics. International Journal of Market Research. 2013;55(1):105–130.
26. Drewes J, Zhu W, Hu Y, Hu X. Smaller Is Better: Drift in Gaze Measurements due to Pupil Dynamics. PLOS ONE. 2014;9(10):1–6.
27. Choe KW, Blake R, Lee SH. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation. Vision Research. 2016;118:48–59.
28. Nyström M, Hooge I, Andersson R. Pupil size influences the eye-tracker signal during saccades. Vision Research. 2016;121:95–103.
29. Pomplun M, Ritter H, Velichkovsky B. Disambiguating Complex Visual Information: Towards Communication of Personal Views of a Scene. Perception. 1996;25(8):931–948.
30. Wooding DS. Eye movements of large populations: II. Deriving regions of interest, coverage, and similarity using fixation maps. Behavior Research Methods, Instruments, & Computers. 2002;34(4):518–528.
31. Lao J, Miellet S, Pernet C, Sokhn N, Caldara R. iMap4: An open source toolbox for the statistical fixation mapping of eye movement data with linear mixed modeling. Behavior Research Methods. 2017;49(2):559–575.
32. Brainard DH. The Psychophysics Toolbox. Spatial Vision. 1997;10(4):433–436.
33. Pelli DG. The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spatial Vision. 1997;10(4):437–442.
34. Kleiner M, Brainard D, Pelli D. What’s new in Psychtoolbox-3? Perception. 2007;36:1.
35. Hornof AJ, Halverson T. Cleaning up systematic error in eye-tracking data by using required fixation locations. Behavior Research Methods, Instruments, & Computers. 2002;34(4):592–604.
36. Helsen WF, Elliott D, Starkes JL, Ricker KL. Temporal and Spatial Coupling of Point of Gaze and Hand Movements in Aiming. Journal of Motor Behavior. 1998;30(3):249–259.
37. Pearson K. Contributions to the Mathematical Theory of Evolution. Philosophical Transactions of the Royal Society of London A. 1894;185:71–110.
38. Titterington DM, Smith AF, Makov UE. Statistical analysis of finite mixture distributions. Wiley; 1985.
39. Isokoski P, Kangas J, Majaranta P. Useful Approaches to Exploratory Analysis of Gaze Data: Enhanced Heatmaps, Cluster Maps, and Transition Maps. In: Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications. ETRA’18. New York, NY, USA: Association for Computing Machinery; 2018. p. 1–9. Available from: https://doi.org/10.1145/3204493.3204591.
40. Latimer CR. Eye-movement data: Cumulative fixation time and cluster analysis. Behavior Research Methods, Instruments, & Computers. 1988;20(5):437–470.
41. Naqshbandi K, Gedeon T, Abdulla UA. Automatic clustering of eye gaze data for machine learning. In: 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC); 2016. p. 001239–001244.
42. Dempster AP, Laird NM, Rubin DB. Maximum Likelihood from Incomplete Data Via the EM Algorithm. Journal of the Royal Statistical Society: Series B (Methodological). 1977;39(1):1–22.
43. Hsu D, Kakade SM. Learning Mixtures of Spherical Gaussians: Moment Methods and Spectral Decompositions. In: Proceedings of the 4th Conference on Innovations in Theoretical Computer Science. ITCS’13. New York, NY, USA: Association for Computing Machinery; 2013. p. 11–20. Available from: https://doi.org/10.1145/2422436.2422439.
44. Schwarz G. Estimating the dimension of a model. The Annals of Statistics. 1978;6(2):461–464.
45. Harel J, Koch C, Perona P. Graph-Based Visual Saliency. In: Schölkopf B, Platt JC, Hoffman T, editors. Advances in Neural Information Processing Systems 19. MIT Press; 2007. p. 545–552. Available from: http://papers.nips.cc/paper/3095-graph-based-visual-saliency.pdf.
46. Baayen RH, Davidson DJ, Bates DM. Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language. 2008;59(4):390–412.
47. Zemblys R, Niehorster DC, Komogortsev O, Holmqvist K. Using machine learning to detect events in eye-tracking data. Behavior Research Methods. 2018;50(1):160–181.
48. Coey CA, Wallot S, Richardson MJ, Orden GV. On the Structure of Measurement Noise in Eye-Tracking. Journal of Eye Movement Research. 2012;5(4).
49. Wang D, Mulvey FB, Pelz JB, Holmqvist K. A study of artificial eyes for the measurement of precision in eye-trackers. Behavior Research Methods. 2017;49(3):947–959.
50. van der Walt S, Colbert SC, Varoquaux G. The NumPy Array: A Structure for Efficient Numerical Computation. Computing in Science Engineering. 2011;13(2):22–30.
51. Virtanen P, Gommers R, Oliphant TE, Haberland M, Reddy T, Cournapeau D, et al. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods. 2020;17:261–272. pmid:32015543
52. McKinney W. Data Structures for Statistical Computing in Python. In: van der Walt S, Millman J, editors. Proceedings of the 9th Python in Science Conference; 2010. p. 56–61.
53. Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, et al. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research. 2011;12(85):2825–2830.
54. Kuznetsova A, Brockhoff P, Christensen R. lmerTest Package: Tests in Linear Mixed Effects Models. Journal of Statistical Software, Articles. 2017;82(13):1–26.
55. Bates D, Mächler M, Bolker B, Walker S. Fitting Linear Mixed-Effects Models Using lme4. Journal of Statistical Software, Articles. 2015;67(1):1–48.
56. R Core Team. R: A Language and Environment for Statistical Computing; 2019. Available from: https://www.R-project.org/.
57. Wickham H, Averick M, Bryan J, Chang W, McGowan LD, François R, et al. Welcome to the tidyverse. Journal of Open Source Software. 2019;4(43):1686.
58. Hunter JD. Matplotlib: A 2D Graphics Environment. Computing in Science Engineering. 2007;9(3):90–95.
59. Wong B. Points of view: Color blindness. Nature Methods. 2011;8(6):441–441.
60. Brewer CA. ColorBrewer. Available from: https://colorbrewer2.org/.
61. Drasdo N, Fowler CW. Non-linear projection of the retinal image in a wide-angle schematic eye. The British Journal of Ophthalmology. 1974;58(8):709.
62. Hendrickson AE, Yuodelis C. The Morphological Development of the Human Fovea. Ophthalmology. 1984;91(6):603–612.
63. Polyak SL. The retina. University of Chicago Press; 1941.
64. Yamada E. Some Structural Features of the Fovea Centralis in the Human Retina. Archives of Ophthalmology. 1969;82(2):151–159.
65. Choe KW, Kardan O, Kotabe HP, Henderson JM, Berman MG. To search or to like: Mapping fixations to differentiate two forms of incidental scene memory. Journal of Vision. 2017;17(12):8–8.
66. Lyu M, Choe KW, Kardan O, Kotabe HP, Henderson JM, Berman MG. Overt attentional correlates of memorability of scene images and their relationships to scene semantics. Journal of Vision. 2020;20(9):2–2.
67. King G. Replication, Replication. PS: Political Science and Politics. 1995;28(3):444–452.
68. Peng RD. Reproducible Research in Computational Science. Science. 2011;334(6060):1226–1227.
69. Asendorpf JB, Conner M, De Fruyt F, De Houwer J, Denissen JJ, Fiedler K, et al. Recommendations for increasing replicability in psychology. European Journal of Personality. 2013;27(2):108–119.
70. McKiernan EC, Bourne PE, Brown CT, Buck S, Kenall A, Lin J, et al. Point of View: How open science helps researchers succeed. eLife. 2016;5:e16800.
71. Allen C, Mehler DMA. Open science challenges, benefits and tips in early career and beyond. PLOS Biology. 2019;17(5):1–14.
72. Clarke ADF, Tatler BW. Deriving an appropriate baseline for describing fixation behaviour. Vision Research. 2014;102:41–51.