Table 1.
Tags used for the form annotation.
Fig 1.
Signer A informs signer B that she is unable to meet with her at a proposed time. Signer B nods her head to demonstrate understanding. No manual signs accompany or follow the nod, showing that signer B is not willing to take over the conversational turn. This nod has been categorised as a feedback nod. Transcript & time-stamps: nue_06_calendar_task; 00:01:57:461—00:01:59:740.
Fig 2.
Signer B asks signer A a direct question, to which signer A produces a positive response. The affirmative head movement precedes the manual signs forming the response and then spreads over the entire articulated clause. Transcript & time-stamps: ber_12_regional_specialities; 00:08:50:160—00:08:52:560.
Table 2.
The MY DGS—annotated transcripts analysed in this study.
Fig 3.
Visualisation of head nod analysis based on pose estimation keypoints.
The source video (left) is overlaid with the OpenPose keypoints used for the calculations. On the line graph (upper right), the line represents the vertical motion of the nose relative to the body position, with crosses indicating peaks and troughs and light blue boxes indicating durations manually labelled as head nods. The frequency spectrogram of nod movements (lower right) was included as an experimental visualisation but excluded from the study, as it proved reliable only for very regular, larger nods. Source video origin: MY DGS—annotated [76].
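The peak-and-trough analysis described above can be sketched as follows. This is a minimal illustration with synthetic data, not the study's actual pipeline: the function name, the neck-based normalisation, and the prominence threshold are assumptions made for the example.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_nod_peaks(nose_y, neck_y, min_prominence=0.01):
    """Find peaks and troughs in vertical nose motion relative to the body.

    nose_y, neck_y: per-frame vertical keypoint coordinates (e.g. from
    OpenPose). Subtracting the neck position factors out whole-body
    movement, leaving only head motion.
    """
    rel = np.asarray(nose_y) - np.asarray(neck_y)
    rel = rel - rel.mean()  # centre the signal around zero
    peaks, _ = find_peaks(rel, prominence=min_prominence)
    troughs, _ = find_peaks(-rel, prominence=min_prominence)
    return peaks, troughs

# Synthetic example: three nod cycles at 2 Hz, 75 frames over 1.5 s
t = np.linspace(0, 1.5, 75)
nose = 0.05 * np.sin(2 * np.pi * 2 * t)  # oscillating head
neck = np.zeros_like(t)                   # stationary body
peaks, troughs = detect_nod_peaks(nose, neck)
```

Each detected peak–trough pair would correspond to one downward–upward nod excursion in the line graph.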
Fig 4.
Body and face keypoint sets determined by OpenPose.
The keypoints used in this study are circled in red. Image source (highlights ours): https://github.com/CMU-Perceptual-Computing-Lab/openpose.
Fig 5.
Distribution of head nod forms in the analysed dataset (n = 648).
hnn = many small head nods; ln = large single nod; lnn = many large nods; mn = mixed nod (e.g. one large and many small nods); sn = small single nod.
Fig 6.
Distribution of head nod functions in the analysed dataset (n = 648).
Fig 7.
Percentages of the different form and function types in the analysed sample.
Table 3.
Average duration of head nods in DGS.
Fig 8.
The distributions of the three variables, showing their density, their outliers, and box plots.
The plots include outliers that we later removed from the analysis (points farther than 5 * IQR from the boxes).
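The 5 * IQR exclusion rule can be expressed as a short filter. This is a sketch of the criterion as stated in the caption; the function name and the quartile-based implementation are assumptions for illustration.

```python
import numpy as np

def remove_outliers_iqr(values, k=5.0):
    """Drop points farther than k * IQR beyond the quartiles."""
    values = np.asarray(values)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return values[(values >= lo) & (values <= hi)]

# Example: the extreme point 100 falls outside [q1 - 5*IQR, q3 + 5*IQR]
cleaned = remove_outliers_iqr([1, 2, 3, 4, 100])
```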
Table 4.
T-tests and Wilcoxon tests comparing three phonetic properties between feedback and affirmation nods.
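A comparison of this kind, with a parametric and a rank-based test run side by side on two independent groups, can be sketched as below. The data are synthetic and the group means are invented for illustration; they are not the study's measurements.

```python
import numpy as np
from scipy.stats import ttest_ind, ranksums

# Hypothetical nod durations (seconds) for two independent groups
rng = np.random.default_rng(0)
feedback = rng.normal(0.40, 0.10, 200)
affirmation = rng.normal(0.55, 0.10, 200)

# Independent-samples t-test (parametric)
t_stat, t_p = ttest_ind(feedback, affirmation)
# Wilcoxon rank-sum test (non-parametric counterpart)
w_stat, w_p = ranksums(feedback, affirmation)
```

Running both tests is a common robustness check: agreement between them suggests the result does not hinge on the normality assumption of the t-test.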
Fig 9.
Coefficients and standard errors based on a logistic regression model with all three predictors.
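A logistic regression of nod function on three phonetic predictors can be sketched as follows. The predictors, their distributions, and the simulated effect are assumptions made for the example; this is not the study's model or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 300
# Hypothetical phonetic properties per nod
duration = rng.normal(0.5, 0.15, n)    # seconds
amplitude = rng.normal(0.03, 0.01, n)  # normalised displacement
velocity = rng.normal(0.2, 0.05, n)    # displacement per frame
X = np.column_stack([duration, amplitude, velocity])

# Simulated outcome: longer, larger nods more likely to be affirmation (1)
logit = 8 * (duration - 0.5) + 100 * (amplitude - 0.03)
y = (logit + rng.normal(0, 1, n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
# model.coef_ holds one coefficient per predictor, as plotted in such figures
```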
Fig 10.
Predicted values for each signer.
Fig 11.
Correlation plot for velocity and amplitude, used to inspect their collinearity.
Fig 12.
Overlap (or lack thereof) of feedback nods with manual items and mouth movements.
Fig 13.
Overlap (or lack thereof) of affirmation nods with manual items and mouth movements.
Fig 14.
Turn-taking behaviour and function of the nods.
Table 5.
T-tests and Wilcoxon tests comparing three phonetic properties between feedback nods that do and do not claim a turn.