Fig 1.
The operations for finding new models.
Two models (left) are exploited to generate new models using the reproduction (blue), mutation (two elements are modified, green), and crossover (two branches are combined, red) operations.
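These three operators are the standard ones in tree-based genetic programming. A minimal sketch of how they act on expression trees follows; the tuple-based tree representation, probabilities, and terminal set are illustrative assumptions, not taken from the paper:

```python
import random

# An expression tree is either a terminal (a variable name or a constant)
# or a tuple (operator, left_subtree, right_subtree).

def reproduce(tree):
    """Reproduction: copy a model unchanged into the next generation."""
    return tree

def mutate(tree, terminals=("x", 1.0, 2.0)):
    """Mutation: replace a randomly chosen node with a random terminal."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random.choice(terminals)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, terminals), right)
    return (op, left, mutate(right, terminals))

def random_branch(tree):
    """Walk down the tree and return a randomly selected branch."""
    while isinstance(tree, tuple) and random.random() < 0.5:
        tree = random.choice(tree[1:])
    return tree

def crossover(a, b):
    """Crossover: graft a random branch of b onto a random site of a."""
    if not isinstance(a, tuple):
        return random_branch(b)
    op, left, right = a
    if random.random() < 0.5:
        return (op, crossover(left, b), right)
    return (op, left, crossover(right, b))
```

Reproduction preserves good models, while mutation and crossover introduce variation so the search can escape local optima.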
Table 1.
Retrieved models by sample size.
The algorithm is able to find the mapping model for the three different data samples. The number of coefficients remains the same; however, the mean absolute error (MAE) and mean squared error (MSE) values change. The differences are small, less than the size of a pixel (e.g., Δ(200−150) = 0.046, Δ(150−75) = 0.0432). This means that these differences are not noticeable when calculating the gaze position in the camera frame.
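For reference, the MAE and MSE compared in the table follow their usual definitions; a minimal sketch with illustrative data:

```python
def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the residuals."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean squared error: penalizes large residuals more heavily."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative ground-truth and predicted gaze coordinates (in pixels).
y_true = [0.0, 1.0, 2.0]
y_pred = [0.1, 0.9, 2.2]
print(mae(y_true, y_pred), mse(y_true, y_pred))
```

A sub-pixel difference in MAE between two models means their predicted gaze points land, on average, within the same pixel.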
Table 2.
Symbolic regression extracted models.
Fig 2.
Behavior of the extracted models on the validation points.
(a) model at t0, (b) at t100, (c) at t500, (d) at t800. The fitness quality is represented by the area between the red and violet curves.
Fig 3.
(a) A scatter plot indicating the strong positive linear relationship between the x-values of pupil data and the x-values of the target (r = 0.9958, p < .0001). (b) The y-values of pupil data show only a moderate negative linear relationship with the x-values of the target (r = −0.5557, p < .0001).
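The coefficients reported here are Pearson's r, which can be computed directly from its definition; a minimal self-contained sketch (the sample data are illustrative):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear data yields r = 1.0.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```

An r close to ±1 indicates that a single pupil axis is nearly a linear predictor of the corresponding target axis, which is why simple polynomial mappings work well for the x-axis here.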
Fig 4.
Fitted models obtained using symbolic regression on 6 different calibration patterns.
The blue circles represent the training points and the red circles represent the estimated points. Symbolic regression was able to find reliable models that map the pupil positions to the target positions for the 6 calibration patterns.
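A fixed-form baseline for such a pupil-to-target mapping is an ordinary least-squares fit over polynomial features of the pupil coordinates. The sketch below is illustrative only: the feature set and sample data are assumptions, and the paper's symbolic regression searches the model structure itself rather than fixing it in advance.

```python
import numpy as np

# Illustrative calibration samples: pupil coordinates (normalized) and
# the corresponding target coordinates (in pixels).
pupil = np.array([[0.1, 0.2], [0.4, 0.3], [0.7, 0.8], [0.2, 0.9], [0.5, 0.5]])
target = np.array([[10.0, 20.0], [40.0, 30.0], [70.0, 80.0], [20.0, 90.0], [50.0, 50.0]])

# Design matrix with a bias term and an interaction term (a simple,
# fixed polynomial feature set).
X = np.column_stack([np.ones(len(pupil)), pupil[:, 0], pupil[:, 1],
                     pupil[:, 0] * pupil[:, 1]])

# Least-squares fit of one linear model per target axis (x and y).
coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
pred = X @ coeffs
```

Plotting `pred` against `target` would reproduce the blue/red circle comparison described in the caption for this fixed-form model.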
Fig 5.
Reprojection of the validation points (9×6 lattice points) using the models obtained from the symbolic regression algorithm on the 6 calibration patterns.
The circular and spiral calibration patterns (a. and c.) resulted in lower accuracy compared to the 9-point and rectangular calibration patterns (b. and d.).
Fig 6.
Illustration of the marker used in Study 1.
A white cross is drawn at the center of the marker and corresponds to the point the user is requested to fixate on during the calibration. The curve drawn with a gray stroke corresponds to the path followed by the marker. It is shown here for illustration purposes and is not visible on the monitor.
Fig 7.
Illustration of (A) the experimental setup and (B) the Pupil Labs eye tracking system. One eye camera was used during the study.
Table 3.
Mean Absolute Error and Standard Deviation of SP and VOR calibrations based on marker, on x and y axes, using standard regression.
Table 4.
Mean Absolute Error and Standard Deviation of SP and VOR calibrations based on marker, on x and y axes, using symbolic regression after 1s, 10s and 30s.
Fig 8.
(a) A user performing a calibration using her thumb as a target. A red marker is attached to the tip of her thumb to allow easy detection of her thumb using computer vision techniques. (b) Illustration of a frame obtained in the world camera video stream.
Table 5.
Mean Absolute Error and Standard Deviation of SP and VOR calibrations based on finger, on x and y axes, using standard regression.
Table 6.
Mean Absolute Error and Standard Deviation of SP and VOR calibrations based on finger, on x and y axes, using symbolic regression after 1s, 10s and 30s.
Fig 9.
Reduction of the mean absolute error (MAE) over time for all participants on the x-axis.
A similar effect is observed for the y-axis. The MAE obtained using the standard regression (orange circles) decreased significantly when using symbolic regression. It is clearly visible that the model obtained after 1 second (green circles) can already reduce the gaze estimation error in real-time scenarios. For some participants, the difference between symbolic and standard regression errors (represented as a bold black line) exceeded 50% (participants 10, 2 and 8). Also, note that for participant 7, the algorithm did not immediately find a better model after 1 second, but did so a few moments later.
Fig 10.
Reduction of the MAE over time using symbolic regression for one participant.
The algorithm stabilizes after a few milliseconds and no better model is extracted.