Table 1.

Tags used for the form annotation.

Fig 1.

Feedback nod.

Signer A informs signer B that she is unable to meet with her at a proposed time. Signer B nods her head to demonstrate understanding. No manual signs accompany or follow the nod, showing that signer B is not willing to take over the conversational turn. This nod has been categorized as a feedback nod. Transcript & time-stamps: nue_06_calendar_task; 00:01:57:461—00:01:59:740.

Fig 2.

Affirmation nod.

Signer B asks signer A a direct question, to which signer A produces a positive response. The affirmative head movement precedes the manual signs forming the response and then spreads over the entire articulated clause. Transcript & time-stamps: ber_12_regional_specialities; 00:08:50:160—00:08:52:560.

Table 2.

The MY DGS—annotated transcripts annotated in this study.

Fig 3.

Visualisation of head nod analysis based on pose estimation keypoints.

The source video (left) is overlaid with the OpenPose keypoints used for the calculations. In the line graph (upper right), the line represents the vertical motion of the nose relative to body position, with crosses indicating peaks and troughs and light blue boxes indicating durations manually labeled as head nods. The frequency spectrogram of nod movements (lower right) was included as an experimental visualization but excluded from the study, as it was found to be reliable only for very regular, larger nods. Source video origin: MY DGS—annotated [76].
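The peak-and-trough analysis described in this caption can be sketched as follows. This is a minimal illustration, not the study's code: the keypoint naming, the prominence threshold, and the synthetic signal are all assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def nod_signal(nose_y, neck_y):
    """Vertical motion of the nose relative to body position, approximated
    here as nose height minus neck height (keypoint choice is illustrative).
    Note that in image coordinates y typically increases downward."""
    return np.asarray(nose_y, dtype=float) - np.asarray(neck_y, dtype=float)

def peaks_and_troughs(signal, prominence=2.0):
    """Locate peaks and troughs of the vertical motion, as marked by the
    crosses in the line graph (prominence value is an assumption)."""
    peaks, _ = find_peaks(signal, prominence=prominence)
    troughs, _ = find_peaks(-signal, prominence=prominence)
    return peaks, troughs

# Synthetic example: a 2 s clip at 50 fps with four regular nod cycles.
t = np.linspace(0, 2, 100)
nose_y = 100 + 5 * np.sin(2 * np.pi * 2 * t)  # oscillating nose height (px)
neck_y = np.full_like(nose_y, 300)            # roughly static body anchor
sig = nod_signal(nose_y, neck_y)
peaks, troughs = peaks_and_troughs(sig)
```

In practice the per-frame keypoints would come from OpenPose output rather than a synthetic sine, and the detected extrema would be compared against the manually labeled nod spans (the light blue boxes).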

Fig 4.

Body and face keypoint sets determined by OpenPose.

The keypoints used in this study are circled in red. Image source (highlights ours): https://github.com/CMU-Perceptual-Computing-Lab/openpose.

Fig 5.

Distribution of head nod forms in the analysed dataset (n = 648).

hnn = many small head nods; ln = large single nod; lnn = many large nods; mn = mixed nod (e.g. one large and many small nods); sn = small single nod.

Fig 6.

Distribution of head nod functions in the analysed dataset (n = 648).

Fig 7.

Percentages of different form and function types in the analyzed sample.

Table 3.

Average duration of head nods in DGS.

Fig 8.

The distributions of the three variables, showing their density, their outliers, and box plots.

The plots include outliers that we later removed from the analysis (points further than 5 * IQR from the box plots).
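The 5 * IQR exclusion rule mentioned here can be sketched as below. This is a minimal illustration under the assumption that "distance from the box plot" means falling outside the interval [Q1 − 5 * IQR, Q3 + 5 * IQR]; the data are synthetic.

```python
import numpy as np

def remove_iqr_outliers(values, k=5.0):
    """Drop points further than k * IQR outside the box (Q1..Q3),
    mirroring the 5 * IQR criterion described in the caption."""
    v = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(v, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return v[(v >= lo) & (v <= hi)]

# Hypothetical durations in seconds; 50.0 is an extreme outlier.
data = [1.0, 1.2, 1.1, 0.9, 1.3, 50.0]
clean = remove_iqr_outliers(data)
```

The same filter would be applied per variable (duration, amplitude, velocity) before the statistical comparisons.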

Table 4.

T-tests and Wilcoxon tests comparing three phonetic properties between feedback and affirmation nods.
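Comparisons of this kind can be sketched with SciPy. The data below are synthetic, and treating the two nod categories as independent groups (hence a Wilcoxon rank-sum rather than a paired signed-rank test) is an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical durations (s) for the two nod functions.
feedback = rng.normal(0.5, 0.15, 80)
affirmation = rng.normal(0.7, 0.15, 80)

# Parametric comparison of group means.
t_stat, t_p = stats.ttest_ind(feedback, affirmation)
# Rank-based alternative (Wilcoxon rank-sum) for non-normal data.
w_stat, w_p = stats.ranksums(feedback, affirmation)
```

Running both tests, as in Table 4, guards against conclusions that depend on the normality assumption of the t-test.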

Fig 9.

Coefficients and standard errors based on a logistic regression model with all three predictors.

Fig 10.

Predicted value for every signer.

Fig 11.

Correlation plot of velocity and amplitude, used to inspect their collinearity.

Fig 12.

Overlap (or lack thereof) of feedback nods with manual items and mouth movements.

Fig 13.

Overlap (or lack thereof) of affirmation nods with manual items and mouth movements.

Fig 14.

Turn-taking behavior and function of the nods.

Table 5.

T-tests and Wilcoxon tests comparing three phonetic properties between feedback nods that do and do not claim a turn.
