Fig 1.
Example of a played game scene.
(A) An example of a drag-and-drop action while playing the game. Users must drag the image placed in the center of the screen onto the target, avoiding distractors. The dotted line represents the trajectory drawn by the user. (B) A correct execution in a game scene, where the image of an "octopus" shown in the center of the screen has been dragged to its target while avoiding the 7 distractors.
Fig 2.
Layout of the customization settings section.
The page displays, from left to right, the options available to the experimenter: (i) the 3 semantic category options (animals, fruits, and geometric shapes), labeled "Category"; (ii) the number of distractors, which affects the game's complexity, "N. distractors"; (iii) the number of times the same arrangement of image category and number of distractors can appear, "N. replay"; (iv) the field where the child's age is entered, "Age"; (v) the user identification code, "ID".
Fig 3.
Graphical representation of the trajectories.
Graphical representation of the original trajectories and their respective clockwise rotated trajectories.
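The clockwise rotation used to generate the augmented trajectories can be sketched as follows (a minimal illustration; the rotation angle and the toy trajectory below are hypothetical, not values from the study):

```python
import numpy as np

def rotate_clockwise(trajectory, theta_deg):
    """Rotate an (n, 2) array of (x, y) points clockwise by theta_deg degrees."""
    theta = np.radians(theta_deg)
    # Clockwise rotation matrix (the standard CCW matrix with a negated angle)
    rot = np.array([[np.cos(theta), np.sin(theta)],
                    [-np.sin(theta), np.cos(theta)]])
    return trajectory @ rot.T

traj = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])  # toy trajectory
rotated = rotate_clockwise(traj, 90)  # (1, 0) maps to (0, -1)
```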
Fig 4.
Graphic representation of the two models.
(A) Architecture of the two-features model: N represents the number of samples (i.e., the input trajectories), t denotes the timestep, and the features f_1 are the spatial coordinates, namely the x and y positions. (B) Architecture of the four-features model: N again represents the number of samples and t the timestep, but the features f_2 are the spatial coordinates x and y, the velocities, and the accelerations. The network architecture is the same for both models: a masking layer that ignores the "0" values introduced by padding, an LSTM layer (a type of recurrent network) with 64 units, and a final Dense output layer with a softmax activation function and two neurons, one for each class (TD and ASD).
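The architecture described in the caption can be sketched in Keras as follows. This is a hypothetical reconstruction: the sequence length T is an assumption, and only the masking value, the 64-unit LSTM, and the two-neuron softmax output come from the caption.

```python
import numpy as np
from tensorflow.keras import layers, models

T = 100          # assumed maximum trajectory length after padding
n_features = 2   # 2 for the (x, y) model; 4 when velocity and acceleration are added

model = models.Sequential([
    layers.Input(shape=(T, n_features)),
    # Ignore the 0-valued timesteps introduced by padding
    layers.Masking(mask_value=0.0),
    # Recurrent layer with 64 units, as described in the caption
    layers.LSTM(64),
    # Two output neurons (TD vs. ASD) with softmax activation
    layers.Dense(2, activation="softmax"),
])
```

Switching between the two-features and four-features configurations only changes `n_features`; the rest of the network is identical, as the caption states.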
Fig 5.
Graphical representation of trajectories simulated from a pre-existing trajectory.
The figure illustrates how a trajectory changes as different percentages of its points are retained: 90%, 80%, 70%, 60%, 50%, 40%, and 35%.
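One simple way to retain a given fraction of a trajectory's points is uniform subsampling along the sequence. This is a hypothetical sketch for illustration; the study's exact simulation procedure may differ.

```python
import numpy as np

def keep_fraction(trajectory, fraction):
    """Return a copy of the trajectory keeping roughly `fraction` of its points,
    sampled uniformly along the sequence (endpoints always included)."""
    n = len(trajectory)
    k = max(2, int(round(n * fraction)))  # keep at least the two endpoints
    idx = np.linspace(0, n - 1, k).round().astype(int)
    return trajectory[idx]

traj = np.arange(20).reshape(10, 2).astype(float)  # toy 10-point trajectory
reduced = keep_fraction(traj, 0.5)                 # keeps 5 of the 10 points
```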
Fig 6.
Box plot of the accuracy of the two-features model and the four-features model across the 4 folds.
Each box displays the distribution of model accuracy, with the horizontal line inside it representing the median (the central accuracy value). The upper and lower quartiles of accuracy are depicted as the upper and lower boundaries of the boxes. The mean, maximum, and minimum accuracy values for each model are also indicated.
Fig 7.
Comparison of the receiver operating characteristic (ROC) curves of the two-features model and the four-features model.
The curves are derived from the sensitivity and specificity indices, i.e., the rates of samples correctly classified in the positive and negative classes, respectively.
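ROC curves like those in the figure can be computed from the models' predicted class probabilities, for example with scikit-learn. The labels and scores below are illustrative placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Illustrative ground-truth labels (1 = positive class) and predicted scores
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.7, 0.3])

# fpr = 1 - specificity, tpr = sensitivity; one point per decision threshold
fpr, tpr, thresholds = roc_curve(y_true, y_score)
roc_auc = auc(fpr, tpr)  # area under the ROC curve
```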
Fig 8.
Average loss during the training process for each model configuration (two features and four features) based on the cross-validation results.
Fig 9.
Relationship between average acceleration, expressed in cm/s^2, and model predictions.
This graph displays the relationship between the average acceleration of the trajectories (x-axis) and the ML model predictions (y-axis) for the two approaches: the two-features model (dark blue) and the four-features model (blue). The graph also reports the p-values and Pearson's correlation coefficients (r) associated with each model, providing a statistical measure of the strength and direction of the linear relationship between average acceleration and model predictions.
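Pearson's r and the associated p-value, as reported in the figure, can be obtained with `scipy.stats.pearsonr`. The arrays below are illustrative placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative data: average acceleration per trajectory and model predictions
avg_acceleration = np.array([0.5, 1.2, 2.0, 2.8, 3.5, 4.1])
predictions = np.array([0.10, 0.25, 0.40, 0.55, 0.72, 0.90])

r, p_value = pearsonr(avg_acceleration, predictions)
# r measures the strength and direction of the linear relationship;
# p_value measures its statistical significance
```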