
Single point motion kinematics convey emotional signals in children and adults

  • Elisa Roberti ,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Writing – original draft

    elisa.roberti@unimib.it

    Affiliations Psychology Department, University of Milano–Bicocca, Milan, Italy, Neuromi, Milan Center for Neuroscience, Milan, Italy

  • Chiara Turati,

    Roles Conceptualization, Data curation, Methodology, Resources, Supervision, Visualization, Writing – review & editing

    Affiliations Psychology Department, University of Milano–Bicocca, Milan, Italy, Neuromi, Milan Center for Neuroscience, Milan, Italy

  • Rossana Actis-Grosso

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Supervision, Visualization, Writing – original draft

    Affiliations Psychology Department, University of Milano–Bicocca, Milan, Italy, Neuromi, Milan Center for Neuroscience, Milan, Italy

Abstract

This study investigates whether humans recognize different emotions conveyed only by the kinematics of a single moving geometrical shape and how this competence unfolds during development, from childhood to adulthood. To this aim, animations in which a shape moved according to happy, fearful, or neutral motion patterns were shown, in a forced-choice paradigm, to 7- and 10-year-old children and adults. Accuracy and response times were recorded, and the movement of the mouse while participants selected a response was tracked. Results showed that 10-year-old children and adults recognize happiness and fear when conveyed solely by different kinematics, with an advantage for fearful stimuli. Fearful stimuli were also accurately identified by 7-year-olds, together with neutral stimuli, while, at this age, accuracy for happiness was not significantly different from chance. Overall, the results demonstrate that emotions can be identified from the motion of a single point alone during both childhood and adulthood. Moreover, motion contributes to varying degrees to the comprehension of emotions, with fear recognized earlier in development and more readily even later on, when all emotions are accurately labeled.

Introduction

The contribution of motion to the recognition of emotional expressions has been debated for decades. Many studies have demonstrated that dynamic cues can facilitate the recognition of facial emotional expressions in adults [1, 2], children [3], and even infants [4]. Furthermore, the kinematic component is crucial when one considers emotions as expressed through "body language": hands, posture, and gait contribute widely to emotion expression, recognition, and comprehension [5].

A traditional way to investigate the contribution of kinematics to perception and cognition takes advantage of so-called point-light displays (PLDs) [6]. The PLD technique relies on illuminated markers placed at crucial points (i.e., joints) of a human body, while all other body features are masked. When presented as static, a PLD is not recognized as a meaningful configuration, but as soon as the markers are moving, from birth we immediately recognize the movement of a human body [7–9]. PLDs have been used to separate information concerning motion from any other type of visual information [8, 10]. As humans’ and animals’ locomotion [11] is readily recognized from PLDs, they are often referred to as "biological motion." Research on bodily emotions [12] shows that children and adults can categorize emotions when expressed with static body postures, but with far lower accuracy than with dynamic stimuli [3, 13–15]. Thus, while recognizing facial emotional expressions may be possible with both static and moving images [16, 17], this is not true for other body parts, which rely heavily on kinematic cues to express emotions. Research on bodily expressions of emotions has thus primarily focused on moving bodies [13, 18, 19], with plenty of studies using the PLD technique with adults [20–23], children [15], and infants [24], showing that PLDs are sufficient for the perception of emotions.

However, several authors [18, 25] have pointed out that the PLD technique does not separate the kinematic component from configural information: some information related to shape can be extracted even from PLDs, because the motion coherence of the visible points determines the vivid impression of a moving figure. This is a well-known effect in psychology, reported as common fate [26], and also as structure-from-motion [27] or shape-from-motion perception [28]. Thus, the perceptual system could extract from the moving pattern the same structural components available in static pictures. Following this line of reasoning, it is unsurprising that dynamic emotional stimuli are better recognized than static ones, given that more information is provided to the perceptual system.

To sum up, the challenge of understanding the relevance of the kinematic component in emotion recognition remains, given that the kinematic, configural, and featural information are all present in dynamic emotional stimuli, PLDs included.

We reasoned that a possible way to solve this puzzle would be to study the recognition of emotions conveyed by a single point. This idea stems from studies investigating biological motion from the perspective of the motor theory of perception [29–31], which demonstrated that a "biological kinematics" exists independently of any configural information. Rather, the process of perceptual selection is constrained by the implicit knowledge that the central nervous system has of the movements it can produce. In other words, our perceptual system is very well attuned to a peculiarity of human movement, namely a specific relation between velocity and curvature (i.e., the two-thirds power law) [32, 33], which is thus a low-level description of biological motion. Sensitivity to the biological motion of a single point of light has been investigated in adults [31, 34–36]. Even 4-day-old newborns look longer at non-biological motion, suggesting that movement in which the two-thirds power law is not respected violates their expectations [37].
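To make the velocity-curvature relation concrete: the two-thirds power law states that tangential velocity v scales with curvature k as v = K·k^(-1/3) (equivalently, angular velocity scales with curvature to the power 2/3). The sketch below is a minimal illustration, not part of the study's methods: it traces an ellipse at a constant rate of its parameter (a case for which the law holds exactly) and recovers the predicted exponent numerically.

```python
# A minimal sketch (not the study's code) of the two-thirds power law:
# for an ellipse traced at a constant rate of its parameter, tangential
# velocity v and curvature k satisfy v = K * k**(-1/3) exactly.
import numpy as np

t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)   # "time"
x, y = 2.0 * np.cos(t), 1.0 * np.sin(t)               # elliptical trajectory

dx, dy = np.gradient(x, t), np.gradient(y, t)         # velocity components
ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)     # acceleration components

speed = np.hypot(dx, dy)                              # tangential velocity
curvature = np.abs(dx * ddy - dy * ddx) / speed**3

# Fit log(v) = log(K) + slope * log(curvature); the law predicts slope = -1/3.
slope, _ = np.polyfit(np.log(curvature), np.log(speed), 1)
print(f"estimated exponent: {slope:.3f} (two-thirds power law predicts -0.333)")
```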

Similar reasoning also applies to classical studies on animacy [38, 39]. In those studies, by showing specific dynamic displays, the authors demonstrated that social perception on the one hand [38] and causality on the other [38, 39] are stimulus-driven and strictly related to the kinematics of the displayed animation. Later studies on animacy [40, 41] typically showed geometrical forms moving according to specific kinematics and dynamics, providing important insights into the foundations of the perceptual processes involved in our understanding of interpersonal dynamics and social signals.

In the present study, in analogy with both the two-thirds power law and animacy studies, we wanted to verify whether it is possible to recognize a specific emotion (namely fear or joy) as conveyed only by kinematics (i.e., absolute velocity, accelerations/decelerations, stops, etc.) and dynamics (i.e., the "form" of the trajectory: wavelike, parabolic, rectilinear, etc.). In other words, we sought to determine whether it is possible to identify a specific law of motion related to particular emotions, through which children and adults could perceive a single point of light (or a meaningless geometrical form) as, for example, happy or fearful. In this way, the link between emotions and motion perception throughout development could be clarified. The comparison between adults’ and children’s performance could help us better understand the origins and development of emotion recognition from motion. Indeed, while a large amount of literature is dedicated to the development of infants’ [42] and children’s [43] ability to recognize emotional expressions, and some studies have addressed this issue using moving faces [13] or body movements [15], none has investigated whether the kinematics of a single point/geometrical form can convey an emotional signal.

We chose to create emotional motion patterns by replicating the kinematics and dynamics (detailed in the Stimuli section) of cartoons in which the character displayed a fearful or a happy emotion. It should be noted that in cartoons emotions tend to be magnified, much as human actors magnify them when conveying different emotions with the body alone [13]. Following this approach, it was possible to convey happy and fearful emotional signals by manipulating the motion of a single, meaningless point/geometrical form, thus avoiding any possible confound with other configural or featural elements of the display. We created three motion patterns conveying happiness, fear, and a neutral state. Interestingly, these patterns nicely matched those of Chafi and colleagues [44], in which three specific motion patterns (i.e., Translational, Parabolic, and Wavelike) were associated with different emotions displayed by faces. The present study conveyed fear through parabolic and happiness through wavelike trajectories. Translational trajectories were used for neutral animations (i.e., not conveying any emotional expression). In a forced-choice paradigm, these animations were shown to a group of adults and two groups of children. Specifically, we included children aged 7 and 10 years, ages documented in the literature as significant turning points at which accuracy in identifying emotional displays portrayed by full-light and point-light video clips increases significantly [15, 45]. We expected the group of adults to associate each pattern of motion correctly and quickly (as indexed by accuracy and reaction times) with the corresponding emotion. The observed performance at different ages could help us better understand whether movement’s contribution to emotion recognition changes throughout development.

Lower reaction times and higher accuracy for a specific emotional category would indicate preferential processing of that category. We also measured two indices of response competition: the maximum deviation (MD) and the area under the curve (AUC) of the motor response given with the mouse. Mouse-tracking paradigms, employed both with adults [46] and with children [47], are based on the idea that hand motions may reflect a continuous motor trace of the tentative commitment to a specific behavioral choice [48–50]; these measures thus reflect real-time decision-making and how it changes motor programs. The actual trajectory is compared with the ideal trajectory (i.e., the straight line from the initiation point to the completion point) by means of the largest perpendicular deviation between the mouse movement and the straight trajectory (MD) and the geometric area between the mouse movement and the straight trajectory (AUC). As such, these measures can detect whether some motion patterns, and thus some real-time choices, are more easily identified than others.
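As a concrete illustration of the two indices, the sketch below is a minimal re-implementation under our own assumptions (it is not the MouseTracker software's code, which also remaps and time-normalizes trajectories): it computes MD and AUC for one trajectory of raw mouse samples.

```python
# Minimal sketch of the MD and AUC indices described above (not the
# MouseTracker implementation): both compare the recorded trajectory with
# the straight line joining its start and end points.
import numpy as np

def md_and_auc(xs: np.ndarray, ys: np.ndarray) -> tuple[float, float]:
    start = np.array([xs[0], ys[0]])
    line = np.array([xs[-1] - xs[0], ys[-1] - ys[0]])   # ideal straight path
    line_len = np.hypot(*line)
    # Perpendicular distance of every sample from the ideal straight line.
    dev = ((xs - start[0]) * line[1] - (ys - start[1]) * line[0]) / line_len
    md = np.max(np.abs(dev))                            # maximum deviation
    # Geometric area between actual and ideal path: integrate |deviation|
    # along each sample's projection onto the ideal line.
    along = ((xs - start[0]) * line[0] + (ys - start[1]) * line[1]) / line_len
    auc = np.trapz(np.abs(dev), along)
    return md, auc

# Toy trajectory that bows away from the straight start-to-end line.
print(md_and_auc(np.array([0.0, 0.2, 0.5, 1.0]), np.array([0.0, 0.6, 0.9, 1.0])))
```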

To date, no study has specifically investigated the possibility that emotions can be recognized when conveyed only by the motion of a single point/geometrical form. The present study is therefore exploratory and seeks to take the first steps in investigating how emotions can be transposed purely into kinematics.

Materials and methods

Participants

Data were collected between September 28 and October 30, 2018. The sample included 30 adults (mean age = 25.78 years, SD = 4.35 years, 9 males), 30 10-year-olds (mean age = 9.69 years, SD = 4.15 months, 16 males), and 30 7-year-olds (mean age = 7.31 years, SD = 3.78 months, 14 males). An additional 7-year-old child was tested but excluded from the sample because he did not complete the procedure. A power analysis was performed before conducting the study. We chose a moderate effect as the smallest detectable effect, setting a partial eta squared of 0.06 as the effect size estimate [51]. Using the MorePower software [58], we calculated the sample size for the interaction between the three emotional categories, the three age groups, and the two genders [52] in a repeated-measures ANOVA. Setting alpha at 0.05 (two-sided), partial eta squared (ηp²) at 0.06, and power (1 − β) at 0.80, the software yielded an optimal sample size of 90 participants in total. Adult participants were students recruited from the University of Milano–Bicocca. Child participants were recruited in the suburban areas of Milano and Lecco in northern Italy, and their teachers reported that they had no history of neurological or significant medical conditions. All participants had normal or corrected-to-normal vision.
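As a rough cross-check of this target (a sketch under our own assumptions, not MorePower's repeated-measures algorithm), the smallest effect of interest can be converted to Cohen's f and fed to a conventional between-groups ANOVA power routine:

```python
# Rough power cross-check (not MorePower's algorithm): convert the smallest
# effect of interest, partial eta squared = 0.06, to Cohen's f and solve for
# the total N of a fully between-groups ANOVA with the same factor levels.
import numpy as np
from statsmodels.stats.power import FTestAnovaPower

eta_p2 = 0.06
f = np.sqrt(eta_p2 / (1 - eta_p2))      # Cohen's f ~= 0.253

n_total = FTestAnovaPower().solve_power(
    effect_size=f, alpha=0.05, power=0.80,
    k_groups=6,                         # 3 age groups x 2 genders (assumed grouping)
)
# This between-groups bound is much larger than 90: repeated-measures designs
# such as the one analyzed with MorePower need fewer participants because the
# emotion factor is measured within subjects.
print(f"Cohen's f = {f:.3f}; between-groups total N ~= {int(np.ceil(n_total))}")
```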

Before the testing sessions, all adult participants and parents gave their written informed consent, while verbal assent was obtained from the 7- and 10-year-olds, in accordance with the ethical standards of the Declaration of Helsinki (BMJ 1991; 302:1194). The ethics committee of the University of Milano–Bicocca approved the study on September 21, 2018 (protocol n. 395). Data were anonymized so that no individual participant can be identified.

Stimuli

The stimuli consisted of videos in which a single animated geometrical shape moved on a black background, conveying a happy, fearful, or neutral emotion. The neutral animation followed a Translational trajectory, representing a movement not associated with any emotional expression. The emotional animations were created from a selection of cartoons in which the character (e.g., Tom & Jerry) displayed a fearful or a happy emotion, through a Wavelike body movement for the positive emotion and a Parabolic trajectory for the negative one. Individual frames were extracted using VirtualDub 1.9.11 (http://www.virtualdub.org/) and imported into a Microsoft PowerPoint 97–2003 presentation. A geometrical form was added to each frame and aligned to the top-left point of the character’s body, which was then removed from the scene. This preserved only the cues about movement in space, while all other pictorial emotional information (i.e., facial expressions and posture) was removed. To make the task more interesting and varied for children, the moving geometrical form expressing each emotion (happy, fearful, and neutral) was presented in 3 different shapes (circle, square, triangle) and 2 different colors (white, yellow), and could start its movement from either side of the screen (left, right), for a total of 36 videos. The square’s sides were 128 pixels (4.52 cm); the triangle’s sides were 126 (4.45 cm), 140 (4.94 cm), and 140 (4.94 cm) pixels (height: 124 pixels, 4.37 cm); and the circle had a diameter of 126.81 pixels (4.48 cm). The luminance of the videos was checked with a Minolta CS-100 photometer for the two colors presented: the yellow animation had a luminance of 90.4 cd/m2, and the white animation a luminance of 108 cd/m2.

All three movements started with an initial entry motion that followed a linear path and ended with a final backward motion in which, after reaching three-quarters of the screen, the geometrical shape turned around and returned along the same path. The total length of each video was 3 seconds.

(a) Fearful motion (Fig 1, panel a). After an entry motion of 861 pixels, the shape "jumped" (with a parabolic trajectory) and started "shaking." The jump started from a velocity of 0.14 m/s (the same as the entry motion) and had a linear acceleration until it reached the top of the vertical trajectory (184.6 pixels, 6.53 cm in height). At the top of the jump, the shape moved back and forth along a small horizontal trajectory of 139.4 pixels (4.9 cm) five times (i.e., "shaking" behavior). Then it moved down and went back along a horizontal trajectory of 861 pixels (30.4 cm), with a higher velocity (i.e., 0.24 m/s) and a constant acceleration.

(b) Happy motion (Fig 1, panel b). After an entry motion of 664 pixels (23.42 cm), the shape started "jumping" in five jumps, following a wavelike trajectory. Each jump started from a velocity of 0.17 m/s (the same as the entry motion) and had a positive linear acceleration until it reached the top of the vertical trajectory (82 pixels, 2.89 cm, in height for the first jump, then 158.2 pixels (5.57 cm), 113 pixels (3.99 cm), 219.7 pixels (7.76 cm), and 141.8 pixels (5.01 cm), respectively) and a negative linear acceleration on the way down. After the jumps, the shape moved backward, following a linear path of 764.7 pixels (27 cm) at a constant velocity of 0.19 m/s.

(c) Neutral motion (Fig 1, panel c). After an entry motion of 748 pixels (26.39 cm), the shape went upward along a 65-degree tilted trajectory. It then went down and moved for 403.9 pixels (14.25 cm) to start an "inverse" jump (i.e., downward first), after which it followed a small horizontal path of 231 pixels (8.15 cm) and turned backward along a flat path of 527.3 pixels (18.6 cm). The whole motion had a constant velocity of 0.16 m/s. All videos are available at the following link: https://www.doi.org/10.17605/OSF.IO/8DGMW.

Fig 1.

Trajectories for the fearful (a), happy (b), and neutral (c) stimuli. (a) After a Translational entrance, the shape jumps, shakes, and goes back down following a Parabolic course; (b) the shape jumps following a Wavelike motion; (c) a neutral Translational series of movements.

https://doi.org/10.1371/journal.pone.0301896.g001
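To make the description above concrete, the sketch below reconstructs a schematic version of the fearful profile. It is an approximation for illustration only: the pixel distances come from the stimulus description, while the frame counts per segment (and the implied frame rate) are our own assumptions. The happy and neutral profiles could be generated analogously.

```python
# Schematic reconstruction of the fearful motion profile (a simplified
# approximation: pixel distances come from the stimulus description above,
# while frame counts per segment and the implied frame rate are assumptions).
import numpy as np

def fearful_path() -> np.ndarray:
    """Entry -> vertical rise to the apex -> 'shaking' -> faster linear return."""
    entry = np.column_stack([np.linspace(0, 861, 30), np.zeros(30)])
    rise = np.column_stack([np.full(12, 861.0),
                            184.6 * np.sin(np.linspace(0, np.pi / 2, 12))])
    # Five back-and-forth horizontal oscillations of 139.4 px at the apex.
    shake_x = 861 + (139.4 / 2) * np.sin(np.linspace(0, 10 * np.pi, 50))
    shake = np.column_stack([shake_x, np.full(50, 184.6)])
    # Return along 861 px in fewer frames, i.e., at a higher average velocity.
    back = np.column_stack([np.linspace(861, 0, 18), np.zeros(18)])
    return np.vstack([entry, rise, shake, back])

frames = fearful_path()   # one (x, y) pixel position per frame
print(frames.shape)       # (110, 2)
```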

Mann-Whitney tests also showed that the distributions of acceleration values in the three videos did not differ (all p > .72) (Fig 2). Therefore, on average, no video contained more accelerating or decelerating movements than the others (mean acceleration: neutral: -0.24 m/s2; fear: -0.03 m/s2; happiness: 0.11 m/s2). The choice to present only happiness and fear as emotional stimuli is based on the fact that these two emotions are the earliest that children are able to recognize accurately [53, 54]. Furthermore, happiness is typically the easiest emotion to recognize from moving faces [61] and also from moving PLDs, whereas adults rely more on moving PLD bodies than on static faces when recognizing fear [55]. By choosing happiness and fear, we thus maximized the possibility of observing differences between emotions as conveyed by pure motion.
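A minimal sketch of this distribution check with SciPy is shown below; the arrays are placeholder samples drawn around the reported means, not the frame-by-frame values extracted from the videos.

```python
# Minimal sketch of the pairwise Mann-Whitney checks on the acceleration
# distributions; the samples below are placeholders, not the study's data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
acc = {
    "neutral":   rng.normal(-0.24, 0.5, 90),   # mean accelerations from the text (m/s^2)
    "fear":      rng.normal(-0.03, 0.5, 90),
    "happiness": rng.normal(0.11, 0.5, 90),
}
for a, b in [("neutral", "fear"), ("neutral", "happiness"), ("fear", "happiness")]:
    u, p = mannwhitneyu(acc[a], acc[b], alternative="two-sided")
    print(f"{a} vs {b}: U = {u:.0f}, p = {p:.3f}")
```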

Fig 2. Boxplot for speed and acceleration patterns.

The speed distributions are shown on the left, and the acceleration distributions on the right for neutral (1), fearful (2), and happy (3) videos.

https://doi.org/10.1371/journal.pone.0301896.g002

Design and procedure

Participants sat on a chair in front of a desk, at a distance of 75 cm from the computer screen. They were told that they would see some short videos of geometrical shapes. The task required categorizing each video as quickly and accurately as possible by clicking with the mouse on one of two emoticons (Fig 3, panel a) displayed at the upper left and right corners of the computer screen. The emoticon pairs could be happy and neutral, happy and fearful, or fearful and neutral, with their left/right positions counterbalanced within participants. Six practice trials were administered before the experimental session. To begin each trial, participants pressed a "Start" button at the bottom-center of the screen, after which the video was presented at the center of the screen (Fig 3, panel b). The order of video presentation was randomized, and the stimuli were organized into three blocks of 12 videos each to allow children to take breaks if needed. All participants completed all three blocks, for a total of 36 trials, in 7 minutes on average (SD = 1.24 min). A Dell computer with a 15.6-inch screen connected via USB to a mouse was used for data collection, running the MouseTracker software [48]. The mouse speed was set to the middle setting of Windows 7 [56].

Fig 3.

(a) Displayed key responses for the fearful (1), happy (2), and neutral (3) conditions. All the response buttons had the same dimensions (1.43 x 1.29 cm) and were aligned to the top left and right corners of the screen. (b) Visual display of the experimental procedure, presented with MouseTracker. Participants pressed the "Start" button, and the videos appeared at the center of the screen. They were instructed to press the response button that best represented the emotional category of the stimulus by clicking a mouse button. After the response was given, another "Start" button appeared to begin the subsequent trial. No feedback on accuracy was given to the participants.

https://doi.org/10.1371/journal.pone.0301896.g003

Data analysis

Accuracy was calculated for all age groups by dividing the number of correct answers by the total number of trials presented for each condition (i.e., happiness, fear, neutral). Reaction times (RTs) in milliseconds, from the onset of the video to the moment a response was given, were recorded by MouseTracker and analyzed. AUC and MD measures were also extracted from the MouseTracker software, measuring the attraction towards the two alternative emoticons displayed in the top corners of the computer screen [48]. Data were first checked for outliers in reaction times, defined as values exceeding two standard deviations from the mean. Trials in which an outlier was found were excluded from further analyses (trials per subject in the adult sample: M = 0.88, SD = 1.00; in the 10-year-old sample: M = 0.66, SD = 0.97; in the 7-year-old sample: M = 0.44, SD = 0.78). Accuracy scores were then calculated for each participant and each category. One-sample t-tests were performed to ensure that accuracy scores differed significantly from the 50% chance level.
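The sketch below illustrates both steps, the ±2 SD trial filter and the accuracy-vs-chance test; the arrays are placeholder values, not the study's data.

```python
# Minimal sketch of the trial-exclusion rule (+/-2 SD on reaction times) and
# of the accuracy-vs-chance one-sample t-test; placeholder data throughout.
import numpy as np
from scipy.stats import ttest_1samp

def filter_rt_outliers(rts: np.ndarray) -> np.ndarray:
    """Keep trials whose RT lies within 2 SD of the participant's mean."""
    m, sd = rts.mean(), rts.std(ddof=1)
    return rts[np.abs(rts - m) <= 2 * sd]

# Per-participant accuracy in one condition, tested against the 0.5 chance level.
accuracy = np.array([0.75, 0.58, 0.83, 0.67, 0.92, 0.75])   # placeholders
t, p = ttest_1samp(accuracy, popmean=0.5)
d = (accuracy.mean() - 0.5) / accuracy.std(ddof=1)          # one-sample Cohen's d
print(f"t({accuracy.size - 1}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```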

After excluding trials in which an incorrect response was given or the reaction time exceeded the two-standard-deviation threshold, an average of 27.8 trials (SD = 6.4) per participant was included in the final analyses for the adult sample, a mean of 24.7 trials (SD = 5.2) for the 10-year-olds, and a mean of 22.4 trials (SD = 5.9) for the 7-year-olds. Considering incorrect responses and outliers together, a mean of 2.73 trials (SD = 3.09) was eliminated in the adult sample, a mean of 3.78 trials (SD = 2.78) in the 10-year-old sample, and a mean of 4.53 trials (SD = 2.95) in the 7-year-old sample.

A 2 x 3 x 2 repeated-measures analysis of variance (ANOVA) was performed for each age group with color (white, yellow), shape (circle, square, triangle), and direction of movement (left, right) as within-subject factors, to verify that these factors did not affect responses to the task. As expected, for all age groups the only significant effect was that of emotion (p = 0.02), while no effects of color, shape, or direction and no interactions were found (all p > 0.09). These factors were therefore collapsed for further analyses.

For each participant, RT, MD, and AUC scores were then calculated for the three emotions (happiness, fear, neutral). For each variable, the normality of the distributions was checked with the Shapiro–Wilk test (RT: p < .001; MD: p < .001; AUC: p < .001). Separate ANOVAs were computed for the three dependent variables, with emotion as within-subject factor and age group (adults, 10-year-olds, 7-year-olds) as between-subject variable. Possible gender effects were also investigated by adding gender as a between-subject variable.
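One way to reproduce this design is sketched below with the pingouin package (the paper does not name its analysis software, and the file and column names are assumptions). Note that pingouin's mixed_anova accepts a single between factor, so gender effects would require a separate model.

```python
# A sketch of the mixed design described above using pingouin (one possible
# implementation; the study's actual software is not named). Assumes a
# hypothetical long-format file with one row per participant x emotion.
import pandas as pd
import pingouin as pg

df = pd.read_csv("trial_means.csv")   # assumed columns: participant, age_group, emotion, rt

print(pg.normality(df, dv="rt", group="emotion"))   # Shapiro-Wilk per condition
aov = pg.mixed_anova(data=df, dv="rt", within="emotion",
                     subject="participant", between="age_group",
                     correction="auto")             # sphericity correction when needed
print(aov.round(3))
```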

Pairwise t-test comparisons, where necessary, were conducted using a Bonferroni correction. The significance threshold was set at 0.05, and a Greenhouse-Geisser correction was applied whenever the assumption of sphericity was violated (indicated by ε). Effect sizes are reported as Cohen’s d for t-tests and partial eta squared (ηp²) for ANOVAs [57, 58].
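The Bonferroni-corrected follow-ups can be sketched in the same framework (again under the assumed column names; pairwise_tests is called pairwise_ttests in older pingouin releases):

```python
# Bonferroni-corrected pairwise follow-up comparisons with Cohen's d,
# continuing the pingouin sketch above (column names are assumptions).
import pandas as pd
import pingouin as pg

df = pd.read_csv("trial_means.csv")
posthoc = pg.pairwise_tests(data=df, dv="rt", within="emotion",
                            subject="participant", padjust="bonf",
                            effsize="cohen")
print(posthoc[["A", "B", "T", "p-corr", "cohen"]].round(3))
```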

Results

Accuracy

In the adult sample, participants’ accuracy was significantly different from the 0.5 chance level in the happy (M = 0.83, SD = 0.20), t(29) = 7.57, p < .001, d = 1.20, fearful (M = 0.85, SD = 0.29), t(29) = 6.56, p < .001, d = 1.38, and neutral conditions (M = 0.76, SD = 0.24), t(29) = 6.13, p < .001, d = 1.12. The same was found in the 10-year-old sample, with high accuracy in the happy (M = 0.70, SD = 0.20), t(29) = 5.43, p < .001, d = 1.21, fearful (M = 0.83, SD = 0.27), t(29) = 6.63, p < .001, d = 0.99, and neutral conditions (M = 0.62, SD = 0.18), t(29) = 3.71, p < .001, d = 0.67. For the 7-year-olds, this was true for the fearful (M = 0.75, SD = 0.29), t(29) = 4.73, p < .001, d = 0.86, and neutral conditions (M = 0.61, SD = 0.21), t(29) = 2.93, p = .007, d = 0.53, while in the happy condition accuracy was not significantly different from chance (M = 0.58, SD = 0.25), t(29) = 1.76, p = .089, d = 0.32.

Therefore, except for the happy stimuli in the 7-year-old group, the animations presented were accurately recognized at all ages, supporting the validity of the stimulus set in communicating the expected emotional valence or absence thereof. A 3 x 3 x 2 repeated-measures analysis of variance (rmANOVA) was performed with the three emotion categories as within-subject factor and the three age groups and gender (male, female) as between-subject factors, to check whether accuracy differed between ages and genders. As expected, there was a significant effect of emotion, F(2, 206) = 9.19, p < 0.001, ηp² = 0.08. Post-hoc tests showed that accuracy for fear (M = 0.86, SD = 0.28) was higher than accuracy for both the happy (M = 0.83, SD = 0.22) and the neutral stimuli (M = 0.66, SD = 0.21). An effect of age group was also found, F(2, 103) = 11.7, p < 0.001, ηp² = 0.19. No main effect of gender or interactions with gender were found (all p > 0.31). Post-hoc tests showed that adults’ accuracy (M = 0.86, SD = 0.24) was higher than that of both the 10-year-olds (M = 0.72, SD = 0.22) and the 7-year-olds (M = 0.65, SD = 0.25), and that the 10-year-olds’ accuracy was higher than that of the 7-year-olds (all p < .04). No emotion by age interaction was observed in accuracy values (p = 0.29).

Reaction time (RT)

First, we conducted an ANOVA with emotion (fear, happiness, neutral) as within-subject factor and age group (adults, 10-year-olds, 7-year-olds) and gender (male, female) as between-subject variables. Main effects of emotion, F(1.44, 125.61) = 17.25, p < 0.001, ηp² = 0.17, ε = 0.72, and age group, F(2, 87) = 58.5, p < 0.001, ηp² = 0.57, were observed. No main effect of gender or interactions with gender were found (all p > 0.09). T-test comparisons were therefore carried out for the emotion and age group factors separately. Averaging across the three age groups, we observed that RTs in the fearful condition (M = 5276 ms, SD = 1151 ms) were lower than in both the happy (M = 5840 ms, SD = 1660 ms; t(29) = -4.26, p < .001, d = -0.78) and the neutral condition (M = 5794 ms, SD = 1304 ms; t(29) = -6.51, p < .001, d = -1.19). The happy and neutral conditions, in contrast, did not differ from each other (p > 0.9) (Fig 4).

Fig 4. RTs observed for the different emotions across age groups.

In the fearful condition, RTs were lower than in both the happy and the neutral conditions, while the happy and neutral conditions did not differ from each other (***p < 0.001). Error bars represent the standard errors of the means (dark grey: fear; light grey: happiness; diagonal lines: neutral).

https://doi.org/10.1371/journal.pone.0301896.g004

Averaging across the three emotions, the t-tests indicated that adults (M = 4561 ms, SD = 185 ms) were faster than 10-year-olds (M = 5482 ms, SD = 363 ms; t(29) = -5.27, p < .001, d = -0.96) and 7-year-olds (M = 6866 ms, SD = 420 ms; t(29) = -9.47, p < .001, d = -1.73), and that 10-year-old children were in turn significantly faster than 7-year-old children (t(29) = -6.56, p < .001, d = -1.2).

To further investigate the differences in responses to emotions within each age group, three separate ANOVAs were carried out (Fig 5). In the adult group, the main effect of emotion found in the general ANOVA was confirmed, F(2, 58) = 8.66, p < 0.001, ηp² = 0.23, with lower RTs in the fearful condition (M = 4351 ms, SD = 662) than in the happy (M = 4702 ms, SD = 599; t(29) = -3.93, p < 0.001, d = -0.71) and the neutral condition (M = 4632 ms, SD = 662; t(29) = -3.15, p = 0.004, d = -0.58). The same pattern was found for the 10-year-olds, F(2, 58) = 25.5, p < 0.001, ηp² = 0.47, with lower RTs in the fearful condition (M = 5243 ms, SD = 768) than in the happy (M = 5684 ms, SD = 787; t(29) = -5.14, p < 0.001, d = -1.12) and the neutral condition (M = 5868 ms, SD = 931; t(29) = -6.72, p < 0.001, d = -1.23). For the 7-year-olds, although the general effect of emotion was still present, it had a small effect size, F(2, 58) = 4.20, p = 0.02, ηp² = 0.12, and the paired-sample t-tests revealed that fear (M = 6588 ms, SD = 1206) differed only from happiness (M = 7040 ms, SD = 2043; t(29) = -2.82, p = 0.008, d = -0.4). Again, no main effects of gender or interactions with gender were found for any age group (adults: all p > 0.2; 10-year-olds: all p > 0.4; 7-year-olds: all p > 0.1).

Fig 5. Results of the ANOVAs performed separately for the three age groups.

In the adult and 10-year-old groups, RTs were lower in the fearful condition than in the happy and neutral conditions. In the 7-year-olds, lower RTs were observed only for fear compared with happiness (***p < 0.001; **p < 0.01; *p < 0.05). Error bars represent the standard errors of the means (dark grey: fear; light grey: happiness; diagonal lines: neutral).

https://doi.org/10.1371/journal.pone.0301896.g005

Area-under-the-curve (AUC) and maximum deviation (MD)

The ANOVA with AUC as dependent variable revealed a significant interaction between emotion and age group, F(4, 174) = 2.97, p = 0.021, ηp² = 0.06. No main effect of gender or interactions with gender were found (all p > 0.2). Post-hoc comparisons investigating this interaction revealed that, in the 10-year-old sample, the AUC was smaller for fearful (M = 0.37, SD = 0.5) than for neutral responses (M = 0.79, SD = 0.7; t(29) = -2.9, p = 0.007, d = -0.52).

A similar pattern was also evident in the MD measure. In this case, the main effect of emotion, F(2, 174) = 3.86, p = 0.023, ηp² = 0.04, was further qualified by an interaction between emotion and age group, F(4, 174) = 4.11, p = 0.003, ηp² = 0.09. No main effect of gender or interactions with gender were found (all p > 0.4). Again, for the 10-year-olds, the largest perpendicular deviation of the mouse was smaller for fearful (M = 0.17, SD = 0.2) than for neutral responses (M = 0.33, SD = 0.2; t(29) = -3.30, p = 0.003, d = -0.60) (Fig 6). The similarity of the AUC and MD findings confirms that both measures reflect the same continuous unfolding of cognitive processes during the execution of behavioral responses [49].

Fig 6. AUC and MD, MouseTracker indices of the strength of attraction towards the alternative response for the 10-year-old children (the upper line represents responses to the neutral condition; the lower line represents responses to the fear condition).

https://doi.org/10.1371/journal.pone.0301896.g006

Discussion

This study tested the possibility that humans recognize different emotions as conveyed only by a single moving geometrical figure. Moreover, we sought to investigate whether the developmental onset and trajectory of this competence are consistent across emotions or differ according to the specific emotion considered. Three age groups (7-year-old children, 10-year-old children, and adults) were presented with a geometrical shape that could move according to a happy, a fearful, or a neutral motion, and were asked to categorize each animation in a forced-choice paradigm. The movement of the mouse while participants selected a response was also tracked.

Our main result shows that adults and 10-year-old children recognized happiness and fear when conveyed by the animations presented, while 7-year-olds could only recognize fear. This finding has important implications for future studies on emotion recognition, given that it highlights the perceptual system’s ability to extract information about emotion from single-point displays. The contribution of motion (from both face and body) to the recognition of emotions has been debated for decades. Our results suggest that motion alone (without any figural confound) can convey the emotions of fear and happiness.

We found that kinematics contributes to varying degrees to the comprehension of the different emotions presented. This consideration is supported both by the differences found in accuracy and RTs for adults and 10-year-old children and by the performance of 7-year-olds. In particular, for adults and 10-year-olds, accuracy for fear was higher than for both happy and neutral stimuli, and fear was recognized faster than both happiness and neutral movement. At the same time, 7-year-olds could recognize only fear (and the neutral state), with accuracy in the happy condition not significantly different from chance. Moreover, the mouse-tracking variables showed that 10-year-old children produced straighter trajectories when identifying fear. Given the assumption that hand motions reflect a continuous motor trace of a specific behavioral choice, this measure indexes easier identification of fearful stimuli. Taken together, these results suggest a kinematic fear advantage. Fear was well recognized from motion alone, while this was only partially true for happiness: the distinction between happiness and a neutral movement was more difficult at all ages (i.e., no difference between RTs in the happy and neutral conditions), and 7-year-old children did not recognize happiness in our stimuli. As far as happiness is concerned, we might surmise that humans rely more on the static/figural information present in happy faces (specifically, the smile) than on dynamic cues, which do not appear to be particularly relevant for discriminating happiness from other affective states. This consideration is also based on the well-known happy face advantage [59], an effect often described in the scientific literature [60–62], whereby happy faces are easier to discriminate than other emotional expressions even early in life [63].

The difference found in the present study between the perception of fearful and happy stimuli supports the idea that the static and dynamic components of emotional expressions (conveyed by either faces or bodies) might be differently involved in the recognition of different emotions [55]. This idea fits nicely with an evolutionary perspective. Happiness (and its expression) is essentially aimed at increasing empathy [64] towards other individuals (i.e., conspecifics). Consequently, no action is required where happiness is concerned, other than sharing some pleasant stimulus (such as food). Fear, on the other hand, like anger, communicates the presence of potential danger in the environment; for this reason, being particularly relevant for survival, it is more likely to be conveyed by dynamic cues, which are not only more related to action but also more easily detectable from a distance (or when the other individual is not completely visible). From this perspective, fear and anger should be recognized more quickly. This idea is also supported by the fact that they are associated with increased vigilance and attention [65–67] and with modulation of the motor system [68, 69]. Moreover, an advantage for fearful stimuli has been reported in several studies on the recognition of emotional point-light displays, both in typically developed individuals [13, 55, 70] and in individuals with Autism Spectrum Disorder (ASD) [71–73]. Our results support the idea that this advantage is largely due to kinematics alone and not to the possible figural information extracted from PLDs’ motion coherence.

Perspectives and limitations

Accumulating evidence suggests different processing for different emotions, particularly where fear and anger are concerned [71, 74]. This supports the idea that natural selection resulted in a propensity to react more strongly to threatening stimuli [64].

Notably, as this was an exploratory study, only one sample of motion for each emotion was presented, adapted from cartoons. This approach might be criticized for not fully reflecting the natural kinematics of human bodies and for using an overly limited sampling of artificial kinematics.

Nonetheless, it is worth noting that we did vary the color, shape, and direction of our animations. Specifically, the moving geometrical form expressing each emotion (happy, fearful, and neutral) was presented in three different shapes (circle, square, triangle) and two different colors (white, yellow), and could start its movement from either side of the screen (left, right), for a total of 36 stimuli. Even more importantly, our choice was motivated by the aim of the study, which was to isolate the contribution of motion to emotion comprehension by relying on magnified expressions of emotional signals conveyed through motion alone.

From this perspective, our study can be compared to the first studies on animacy [38, 39]. As in those studies, it was not crucial to sample a range of movements (just as Heider and Simmel did not need to present various displays). In other words, our aim was to verify whether humans can recognize this relation without any recognizable form (human or human-like). Thus, our crucial point was to show that human beings can recognize a specific emotion conveyed only by a single point’s motion. Future directions include investigating how this finding generalizes to the range of movements representing different emotional contents.

Moreover, further studies could assess emotion recognition performance with gold-standard measures, in order to correlate emotion comprehension from kinematics with comprehension from other cues.

In our study, no gender differences were found. Sex differences, on the other hand, are often described in the literature on the processing of emotional expressions through faces [75] or human biological motion [76]. While no specific hypotheses on gender effects were made for the present study, future studies might further investigate how gender differences impact the processing of the kinematics of emotions throughout the lifespan.

Lastly, it should be noted that even though the mouse-tracking measures are strongly related to motor skills, those measures complemented the results observed with accuracy and reaction times. Indeed, while accuracy and speed improve with age, the varying degree to which different stimuli are identified within the same age sample suggests that cognitive processes still play a fundamental role. Moreover, the mouse remains the most accessible tool for children aged 6 and older [77].

Conclusion

To sum up, the present study constitutes the first evidence that different emotions can be extracted from the kinematics of movement alone across development. Furthermore, this competence appears to be modulated by the specific emotion, both in adults and in the developmental population, where different developmental onsets were observed for happy and fearful kinematics. Indeed, in our sample fearful motion provided particularly rich information, quickly interpreted by all age groups, whereas happy motion was less readily categorized by adults and older children and poorly identified by our youngest participants. This developmental trajectory suggests that this ability could improve with experience.

Acknowledgments

The authors thank the families who participated in this study. We are also grateful to Mauro Pizzera (Camp Multisport), Raffaella Maria Crimella along with the teachers at the school ’Scuola Primaria Silvio Pellico’ (Lecco), and Michela Milani for the assistance in data collection.

References

  1. Richoz AR, Lao J, Pascalis O, Caldara R. Tracking the recognition of static and dynamic facial expressions of emotion across the life span. J Vis. 2018;18(9):5. pmid:30208425
  2. Krumhuber EG, Skora LI, Hill HCH, Lander K. The role of facial movements in emotion recognition. Nat Rev Psychol. 2023 Mar 27;2(5):283–96.
  3. Nelson NL, Russell JA. Preschoolers’ use of dynamic facial, bodily, and vocal cues to emotion. J Exp Child Psychol. 2011 Sep;110(1):52–61. pmid:21524423
  4. Quadrelli E, Conte S, Macchi Cassia V, Turati C. Emotion in motion: Facial dynamics affect infants’ neural processing of emotions. Dev Psychobiol. 2019 Sep;61(6):843–58. pmid:31032893
  5. Atkinson AP. Bodily Expressions of Emotion. In: Armony J, Vuilleumier P, editors. The Cambridge Handbook of Human Affective Neuroscience [Internet]. Cambridge: Cambridge University Press; 2013 [cited 2021 Oct 4]. p. 198–222. Available from: http://ebooks.cambridge.org/ref/id/CBO9780511843716A018
  6. Johansson G. Visual perception of biological motion and a model for its analysis. Percept Psychophys. 1973 Jun;14(2):201–11.
  7. Roberti E, Addabbo M, Colombo L, Porro M, Turati C. Newborns’ perception of approach and withdrawal from biological movement: A closeness story. Infancy. 2023 Oct 23;infa.12565. pmid:37870090
  8. Simion F, Regolin L, Bulf H. A predisposition for biological motion in the newborn baby. Proc Natl Acad Sci. 2008 Jan 15;105(2):809–13. pmid:18174333
  9. Bidet-Ildei C, Kitromilides E, Orliaguet JP, Pavlova M, Gentaz E. Preference for point-light human biological motion in newborns: Contribution of translational displacement. Dev Psychol. 2014 Jan;50(1):113–20. pmid:23668800
  10. Quadrelli E, Roberti E, Turati C, Craighero L. Observation of the point-light animation of a grasping hand activates sensorimotor cortex in nine-month-old infants. Cortex. 2019 Oct;119:373–85. pmid:31401422
  11. Mitkin AA, Pavlova MA. Changing a natural orientation: Recognition of biological motion pattern by children and adults. Psychol Beitrage. 1990.
  12. de Gelder B, Van den Stock J. The Bodily Expressive Action Stimulus Test (BEAST). Construction and Validation of a Stimulus Basis for Measuring Perception of Whole Body Expression of Emotions. Front Psychol [Internet]. 2011 [cited 2021 Sep 28];2. Available from: http://journal.frontiersin.org/article/10.3389/fpsyg.2011.00181/abstract
  13. Atkinson AP, Dittrich WH, Gemmell AJ, Young AW. Emotion Perception from Dynamic and Static Body Expressions in Point-Light and Full-Light Displays. Perception. 2004 Jun;33(6):717–46. pmid:15330366
  14. de Gelder B, de Borst AW, Watson R. The perception of emotion in body expressions: Emotional body perception. Wiley Interdiscip Rev Cogn Sci. 2015 Mar;6(2):149–58.
  15. Ross PD, Polson L, Grosbras MH. Developmental Changes in Emotion Recognition from Full-Light and Point-Light Displays of Body Movement. Lappe M, editor. PLoS ONE. 2012 Sep 10;7(9):e44815. pmid:22970310
  16. Quadrelli E, Roberti E, Polver S, Bulf H, Turati C. Sensorimotor Activity and Network Connectivity to Dynamic and Static Emotional Faces in 7-Month-Old Infants. Brain Sci. 2021 Oct 24;11(11):1396. pmid:34827394
  17. Bidet-Ildei C, Decatoire A, Gil S. Recognition of Emotions From Facial Point-Light Displays. Front Psychol. 2020 Jun 4;11:1062. pmid:32581934
  18. Bachmann J, Zabicki A, Munzert J, Krüger B. Emotional expressivity of the observer mediates recognition of affective states from human body movements. Cogn Emot. 2020 Oct 2;34(7):1370–81. pmid:32249663
  19. Clarke TJ, Bradshaw MF, Field DT, Hampson SE, Rose D. The Perception of Emotion from Body Movement in Point-Light Displays of Interpersonal Dialogue. Perception. 2005 Oct;34(10):1171–80. pmid:16309112
  20. Alaerts K, Nackaerts E, Meyns P, Swinnen SP, Wenderoth N. Action and Emotion Recognition from Point Light Displays: An Investigation of Gender Differences. Valdes-Sosa M, editor. PLoS ONE. 2011 Jun 9;6(6):e20989. pmid:21695266
  21. Krüger S, Sokolov AN, Enck P, Krägeloh-Mann I, Pavlova MA. Emotion through Locomotion: Gender Impact. Lappe M, editor. PLoS ONE. 2013 Nov 22;8(11):e81716. pmid:24278456
  22. Sokolov AA, Zeidman P, Erb M, Pollick FE, Fallgatter AJ, Ryvlin P, et al. Brain circuits signaling the absence of emotion in body language. Proc Natl Acad Sci. 2020 Aug 25;117(34):20868–73. pmid:32764147
  23. Sokolov AA, Erb M, Gharabaghi A, Grodd W, Tatagiba MS, Pavlova MA. Biological motion processing: The left cerebellum communicates with the right superior temporal sulcus. NeuroImage. 2012 Feb;59(3):2824–30. pmid:22019860
  24. Missana M, Atkinson AP, Grossmann T. Tuning the developing brain to emotional body expressions. Dev Sci. 2015 Mar;18(2):243–53. pmid:25041388
  25. Kurz J, Helm F, Troje NF, Munzert J. Prediction of action outcome: Effects of available information about body structure. Atten Percept Psychophys. 2020 May;82(4):2076–84. pmid:31797178
  26. Wertheimer M. Laws of organization in perceptual forms. Psychologische Forsch. 1923;4:301–50.
  27. Andersen RA, Bradley DC. Perception of three-dimensional structure from motion. Trends Cogn Sci. 1998 Jun;2(6):222–8. pmid:21227176
  28. Tang MF, Dickinson JE, Visser TAW, Edwards M, Badcock DR. The shape of motion perception: Global pooling of transformational apparent motion. J Vis. 2013 Nov 20;13(13):20. pmid:24259672
  29. Scheerer E. Motor Theories of Cognitive Structure: A Historical Review. In: Prinz W, Sanders AF, editors. Cognition and Motor Processes [Internet]. Berlin, Heidelberg: Springer Berlin Heidelberg; 1984 [cited 2021 Sep 28]. p. 77–98. Available from: http://link.springer.com/10.1007/978-3-642-69382-3_6
  30. Viviani P, Mounoud P. Perceptuomotor Compatibility in Pursuit Tracking of Two-Dimensional Movements. J Mot Behav. 1990 Sep;22(3):407–43. pmid:15117667
  31. Viviani P, Stucchi N. Biological movements look uniform: Evidence of motor-perceptual interactions. J Exp Psychol Hum Percept Perform. 1992;18(3):603–23. pmid:1500865
  32. Lacquaniti F, Terzuolo C, Viviani P. The law relating the kinematic and figural aspects of drawing movements. Acta Psychol (Amst). 1983 Oct;54(1–3):115–30. pmid:6666647
  33. Bidet-Ildei C, Orliaguet JP, Sokolov AN, Pavlova M. Perception of Elliptic Biological Motion. Perception. 2006 Aug;35(8):1137–47. pmid:17076071
  34. Actis-Grosso R, de’Sperati C, Stucchi N, Viviani P. Visual extrapolation of biological motion. Citeseer; 2001.
  35. Carlini A, Actis-Grosso R, Stucchi N, Pozzo T. Forward to the past. Front Hum Neurosci [Internet]. 2012 [cited 2022 Apr 13];6. Available from: http://journal.frontiersin.org/article/10.3389/fnhum.2012.00174/abstract pmid:22712012
  36. de’Sperati C, Viviani P. The Relationship between Curvature and Velocity in Two-Dimensional Smooth Pursuit Eye Movements. J Neurosci. 1997 May 15;17(10):3932–45. pmid:9133411
  37. Méary D, Kitromilides E, Mazens K, Graff C, Gentaz E. Four-Day-Old Human Neonates Look Longer at Non-Biological Motions of a Single Point-of-Light. Baune B, editor. PLoS ONE. 2007 Jan 31;2(1):e186. pmid:17264887
  38. Heider F, Simmel M. An Experimental Study of Apparent Behavior. Am J Psychol. 1944 Apr;57(2):243.
  39. Michotte A. La perception de la causalité. Louvain Stud Psychol Sch. 1954.
  40. Scholl BJ, Tremoulet PD. Perceptual causality and animacy. Trends Cogn Sci. 2000 Aug;4(8):299–309. pmid:10904254
  41. Wagemans J, van Lier R, Scholl BJ. Introduction to Michotte’s heritage in perception and cognition research. Acta Psychol (Amst). 2006 Sep;123(1–2):1–19. pmid:16860283
  42. Grossmann T. The development of emotion perception in face and voice during infancy. Restor Neurol Neurosci. 2010;28(2):219–36. pmid:20404410
  43. Brechet C, Baldy R, Picard D. How does Sam feel?: Children’s labelling and drawing of basic emotions. Br J Dev Psychol. 2009 Sep;27(3):587–606.
  44. Chafi A, Schiaratura L, Rusinek S. Three Patterns of Motion Which Change the Perception of Emotional Faces. Psychology. 2012;03(01):82–9.
  45. Gao X, Maurer D. Influence of intensity on children’s sensitivity to happy, sad, and fearful facial expressions. J Exp Child Psychol. 2009 Apr;102(4):503–21. pmid:19124135
  46. Capellini R, Sacchi S, Ricciardelli P, Actis-Grosso R. Social Threat and Motor Resonance: When a Menacing Outgroup Delays Motor Response. Front Psychol [Internet]. 2016 Nov 1 [cited 2024 Jan 16];7. Available from: http://journal.frontiersin.org/article/10.3389/fpsyg.2016.01697/full
  47. Pearce AL, Adise S, Roberts NJ, White C, Geier CF, Keller KL. Individual differences in the influence of taste and health impact successful dietary self-control: A mouse tracking food choice study in children. Physiol Behav. 2020 Sep;223:112990. pmid:32505786
  48. Freeman JB, Ambady N. MouseTracker: Software for studying real-time mental processing using a computer mouse-tracking method. Behav Res Methods. 2010 Feb;42(1):226–41. pmid:20160302
  49. Freeman JB, Dale R, Farmer TA. Hand in Motion Reveals Mind in Motion. Front Psychol [Internet]. 2011 [cited 2021 Sep 28];2. Available from: http://journal.frontiersin.org/article/10.3389/fpsyg.2011.00059/abstract pmid:21687437
  50. Stillman PE, Shen X, Ferguson MJ. How Mouse-tracking Can Advance Social Cognitive Theory. Trends Cogn Sci. 2018 Jun;22(6):531–43. pmid:29731415
  51. Correll J, Mellinger C, McClelland GH, Judd CM. Avoid Cohen’s ‘Small’, ‘Medium’, and ‘Large’ for Power Analysis. Trends Cogn Sci. 2020 Mar;24(3):200–7. pmid:31954629
  52. Isernia S, Sokolov AN, Fallgatter AJ, Pavlova MA. Untangling the Ties Between Social Cognition and Body Motion: Gender Impact. Front Psychol. 2020 Feb 6;11:128. pmid:32116932
  53. Camras LA, Allison K. Children’s understanding of emotional facial expressions and verbal labels. J Nonverbal Behav. 1985;9(2):84–94.
  54. Thomas LA, De Bellis MD, Graham R, LaBar KS. Development of emotional facial recognition in late childhood and adolescence. Dev Sci. 2007 Sep;10(5):547–58. pmid:17683341
  55. Actis-Grosso R, Bossi F, Ricciardelli P. Emotion recognition through static faces and moving bodies: a comparison between typically developed adults and individuals with high level of autistic traits. Front Psychol [Internet]. 2015 Oct 23 [cited 2021 Sep 28];6. Available from: http://journal.frontiersin.org/Article/10.3389/fpsyg.2015.01570/abstract pmid:26557101
  56. Hermens F. When do arrows start to compete? A developmental mouse-tracking study. Acta Psychol (Amst). 2018 Jan;182:177–88. pmid:29195148
  57. Iacobucci D, Popovich DL, Moon S, Román S. How to calculate, use, and report variance explained effect size indices and not die trying. J Consum Psychol. 2023 Jan;33(1):45–61.
  58. Fritz CO, Morris PE, Richler JJ. Effect size estimates: Current use, calculations, and interpretation. J Exp Psychol Gen. 2012;141(1):2–18. pmid:21823805
  59. Shimamura AP, Ross JG, Bennett HD. Memory for facial expressions: The power of a smile. Psychon Bull Rev. 2006 Apr;13(2):217–22. pmid:16892984
  60. Iarrobino I, Bongiardina A, Dal Monte O, Sarasso P, Ronga I, Neppi-Modona M, et al. Right and left inferior frontal opercula are involved in discriminating angry and sad facial expressions. Brain Stimulat. 2021 May;14(3):607–15. pmid:33785407
  61. Leppänen JM, Hietanen JK. Affect and Face Perception: Odors Modulate the Recognition Advantage of Happy Faces. Emotion. 2003;3(4):315–26. pmid:14674826
  62. Rhodes G, Haxby J. Oxford handbook of face perception. Oxford University Press; 2011.
  63. Addabbo M, Longhi E, Marchis IC, Tagliabue P, Turati C. Dynamic facial expressions of emotions are discriminated at birth. Urgesi C, editor. PLOS ONE. 2018 Mar 15;13(3):e0193868. pmid:29543841
  64. Steenbergen L, Maraver MJ, Actis-Grosso R, Ricciardelli P, Colzato LS. Recognizing emotions in bodies: Vagus nerve stimulation enhances recognition of anger while impairing sadness. Cogn Affect Behav Neurosci [Internet]. 2021 Jul 15 [cited 2021 Oct 4]. Available from: https://link.springer.com/10.3758/s13415-021-00928-3 pmid:34268714
  65. Kret ME, Stekelenburg JJ, Roelofs K, de Gelder B. Perception of Face and Body Expressions Using Electromyography, Pupillometry and Gaze Measures. Front Psychol [Internet]. 2013 [cited 2021 Dec 28];4. Available from: http://journal.frontiersin.org/article/10.3389/fpsyg.2013.00028/abstract pmid:23403886
  66. Phelps EA. Emotion and Cognition: Insights from Studies of the Human Amygdala. Annu Rev Psychol. 2006 Jan 1;57(1):27–53. pmid:16318588
  67. Tamietto M, Geminiani G, Genero R, de Gelder B. Seeing Fearful Body Language Overcomes Attentional Deficits in Patients with Neglect. J Cogn Neurosci. 2007 Mar 1;19(3):445–54. pmid:17335393
  68. Borgomaneri S, Gazzola V, Avenanti A. Motor mapping of implied actions during perception of emotional body language. Brain Stimulat. 2012 Apr;5(2):70–6. pmid:22503473
  69. Borgomaneri S, Vitale F, Gazzola V, Avenanti A. Seeing fearful body language rapidly freezes the observer’s motor cortex. Cortex. 2015 Apr;65:232–45. pmid:25835523
  70. Bannerman RL, Milders M, de Gelder B, Sahraie A. Orienting to threat: faster localization of fearful facial expressions and body postures revealed by saccadic eye movements. Proc R Soc B Biol Sci. 2009 May 7;276(1662):1635–41. pmid:19203922
  71. Mazzoni N, Landi I, Ricciardelli P, Actis-Grosso R, Venuti P. Motion or Emotion? Recognition of Emotional Bodily Expressions in Children With Autism Spectrum Disorder With and Without Intellectual Disability. Front Psychol. 2020 Mar 25;11:478.
  72. Mazzoni N, Ricciardelli P, Actis-Grosso R, Venuti P. Difficulties in Recognising Dynamic but not Static Emotional Body Movements in Autism Spectrum Disorder. J Autism Dev Disord [Internet]. 2021 Apr 17 [cited 2021 Dec 28]. Available from: https://link.springer.com/10.1007/s10803-021-05015-7
  73. Philip RCM, Whalley HC, Stanfield AC, Sprengelmeyer R, Santos IM, Young AW, et al. Deficits in facial, body movement and vocal emotional processing in autism spectrum disorders. Psychol Med. 2010 Nov;40(11):1919–29. pmid:20102666
  74. Ikeda H, Watanabe K. Anger and Happiness are Linked Differently to the Explicit Detection of Biological Motion. Perception. 2009 Jul;38(7):1002–11. pmid:19764302
  75. Olderbak S, Wilhelm O, Hildebrandt A, Quoidbach J. Sex differences in facial emotion perception ability across the lifespan. Cogn Emot. 2019 Apr 3;33(3):579–88. pmid:29564958
  76. Tsang T, Ogren M, Peng Y, Nguyen B, Johnson KL, Johnson SP. Infant perception of sex differences in biological motion displays. J Exp Child Psychol. 2018 Sep;173:338–50. pmid:29807312
  77. Donker A, Reitsma P. Young children’s ability to use a computer mouse. Comput Educ. 2007 May;48(4):602–17.