
Fig 1.

Architecture.

Connections between the Kinect sensors, the Zotac mini computers running the client software, and the server computer in a setup with six Kinect sensors.
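
As a rough illustration of this client-server layout, the sketch below shows how one Kinect client might stream length-prefixed frames to the server over TCP. The host address, port, and frame format are assumptions for illustration; the paper's actual client software is not reproduced here.

```python
import socket
import struct

SERVER_HOST = "192.168.0.10"  # hypothetical server address
SERVER_PORT = 9000            # hypothetical port

def send_frame(sock, sensor_id, payload):
    """Send one length-prefixed frame: sensor id + raw depth/color bytes."""
    header = struct.pack("!II", sensor_id, len(payload))
    sock.sendall(header + payload)

# Each Zotac mini computer would run one such client per attached Kinect.
if __name__ == "__main__":
    with socket.create_connection((SERVER_HOST, SERVER_PORT)) as sock:
        depth = b"\x00" * (512 * 424 * 2)  # placeholder 16-bit depth frame
        send_frame(sock, sensor_id=1, payload=depth)
```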

Fig 2.

Screenshots of the server program.

(A-B) Three-dimensional point clouds of six Kinect sensors after spatial calibration. Each point is represented as a tiny cube with the color of the corresponding pixel in the color image. The large colored cubes indicate the Kinect sensors; the red, green, and blue lines attached to them indicate the axes of the local coordinate systems. Notice that the edge of the red coating on the floor is very straight, illustrating the precision of the spatial calibration. (A) The marker at the very back is the marker with id 1; it represents the origin of the global coordinate system, which is indicated by the grid. (B) A subject standing in the tracking volume to visualize its dimensions. (C) Screenshot of the live depth image view. The tracked body is highlighted. By walking through the tracking volume, one can easily identify blind spots.
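
A colored point cloud of this kind can be obtained by back-projecting each depth pixel through a pinhole camera model and attaching the registered color pixel. A minimal sketch, assuming a registered depth/color pair; the intrinsic parameters below are placeholder values, not the calibrated ones:

```python
import numpy as np

# Placeholder Kinect v2 depth-camera intrinsics (fx, fy, cx, cy in pixels).
FX, FY, CX, CY = 365.0, 365.0, 256.0, 212.0

def depth_to_points(depth_mm, color):
    """Back-project a depth image (mm) to 3D points, colored per pixel.

    depth_mm: (H, W) uint16 depth image; color: (H, W, 3) registered RGB.
    Returns (N, 3) points in meters and (N, 3) colors for valid pixels.
    """
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64) / 1000.0
    valid = z > 0
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x[valid], y[valid], z[valid]], axis=1)
    return pts, color[valid]
```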

Fig 3.

Markers and bipartite graph used for the spatial calibration.

(A) The shape of the markers is easily detectable in the RGB image; the white squares in the center encode the marker id [20]. The red circles indicate the salient points used to define the position and orientation of the marker. (B) Graph illustrating the “sees / is seen” relation (edges) between sensors (blue vertices) and markers (red vertices). For example, S3 cannot see M1 directly but can reach it indirectly via M2 and S2; a second possibility is via M2 and S5.
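
Chaining the “sees” relation into a single global frame amounts to a path search over this bipartite graph, composing rigid transforms along the way. A minimal sketch, under the assumption that each observation is stored as a 4x4 marker pose in the observing sensor's frame (function and variable names are illustrative, not the paper's):

```python
import numpy as np
from collections import deque

def chain_to_global(observations, start, origin="M1"):
    """Compose rigid transforms from `start`'s frame to the global frame.

    observations: dict mapping (sensor, marker) -> 4x4 pose of the marker
    in that sensor's local frame. Edges can be traversed in both
    directions by inverting the pose. Returns the 4x4 transform taking
    coordinates in `start`'s frame to the global (origin-marker) frame,
    or None if no chain exists.
    """
    adj = {}
    for (s, m), T in observations.items():
        adj.setdefault(s, []).append((m, T))                 # sensor -> marker
        adj.setdefault(m, []).append((s, np.linalg.inv(T)))  # marker -> sensor
    frame = {origin: np.eye(4)}   # breadth-first search from the origin
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for nbr, T in adj.get(node, []):
            if nbr not in frame:
                frame[nbr] = frame[node] @ T
                queue.append(nbr)
    return frame.get(start)

# E.g., with observations for (S2, M1), (S2, M2), (S3, M2), the call
# chain_to_global(observations, "S3") routes S3 -> M2 -> S2 -> M1.
```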

Fig 4.

Arrangement of the Kinect sensors illustrating the overlapping tracking volumes.

Sensors were arranged using this pattern and then spatially calibrated to achieve high precision. With this setup, the theoretical length of the tracking volume in the walking direction is about 9 meters. The Kinect sensors were arranged so that the VICON tracking volume was completely covered.

Fig 5.

Reconstruction of the body surface and averaged skeleton.

(A, B) Three-dimensional reconstruction of the body surface and the corresponding skeleton reconstruction using two sensors. The surface was estimated from the 3D point clouds using the marching cubes algorithm in MeshLab [32]. Surface areas tracked by only one of the two sensors are highlighted in red (right) and blue (left). The corresponding Kinect skeleton joint position estimates of the two sensors are shown as red and blue dots. The spatially averaged skeleton is indicated as a black stick figure. (B) Magnification of the left lower leg. Notice that the joint position estimates of the left sensor (blue) are closer to the surface tracked only by the left sensor; the same holds correspondingly for the joint position estimates of the right sensor. (C) Averaged skeleton and joint position trajectories during walking, obtained using six sensors.
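
The spatial averaging of per-sensor skeletons can be sketched as a per-joint mean over all sensors that currently track the body. This is a simplification; the paper's fusion may weight or filter sensors differently:

```python
import numpy as np

def average_skeleton(skeletons):
    """Spatially average per-sensor skeletons (a simplified sketch).

    skeletons: list of (J, 3) arrays of joint positions in the global
    frame, with NaN rows for joints a sensor did not track. Joints are
    averaged over whichever sensors provided an estimate.
    """
    stacked = np.stack(skeletons)       # (S, J, 3): sensors x joints x xyz
    return np.nanmean(stacked, axis=0)  # per-joint mean over sensors
```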

Fig 6.

Illustration of the analyzed spatial gait parameters.

Subjects were asked to always start walking with their left foot. The first left step length and the first right stride length (gray) were excluded from the analysis, since these are generally shorter than the steps during steady walking (acceleration phase). Subjects did not stop at the end of the tracking volume (finish), so there was no slowing down. Depending on a subject’s individual step length, there may be an unequal number of left and right steps. Parameters are assigned to the left or right foot depending on which foot was last placed on the floor; e.g., the first step length and step width are assigned to the left foot.
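
Under the conventions in this figure, the spatial parameters follow from the sequence of foot placements. A sketch with illustrative names and simple planar geometry, assigning each value to the foot last placed on the floor:

```python
import numpy as np

def spatial_gait_parameters(contacts):
    """Derive step/stride lengths and step widths from foot contacts.

    contacts: list of (foot, (x, y)) tuples in temporal order, where foot
    is "L" or "R" and (x, y) is a floor position with x along the walking
    direction. Each parameter is assigned to the foot of contact i.
    """
    params = []
    for i in range(1, len(contacts)):
        foot, pos = contacts[i]
        _, prev_pos = contacts[i - 1]
        entry = {
            "foot": foot,
            "step_length": abs(pos[0] - prev_pos[0]),  # along walking direction
            "step_width": abs(pos[1] - prev_pos[1]),   # lateral offset
        }
        if i >= 2 and contacts[i - 2][0] == foot:      # same foot, one cycle ago
            entry["stride_length"] = abs(pos[0] - contacts[i - 2][1][0])
        params.append(entry)
    return params
```

Following the exclusion rule described above, the first left step length and first right stride length would then be dropped before computing summary statistics.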

Fig 7.

Identification of the foot events.

Foot events (black crosses) identified from the ankle trajectories, exemplified here using the Kinect skeleton averaged across all sensors. Foot events were extracted from the VICON data using the same procedure. Top: snapshots of the body posture during walking. Bottom: ankle positions of the left (red) and right (blue) foot in the walking direction.
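
One common way to extract such foot events from an ankle trajectory is to pick local extrema of the ankle position relative to the pelvis along the walking direction. A heuristic sketch, not necessarily the paper's exact detection rule:

```python
import numpy as np
from scipy.signal import find_peaks

def foot_events(ankle_x, pelvis_x, fps=30, min_step_time=0.3):
    """Estimate heel-strike events from an ankle trajectory (a sketch).

    Assumed heuristic: a heel strike occurs when the ankle is maximally
    forward relative to the pelvis in the walking direction.
    ankle_x, pelvis_x: 1D position arrays along the walking direction.
    Returns the frame indices of the detected events.
    """
    rel = np.asarray(ankle_x) - np.asarray(pelvis_x)
    peaks, _ = find_peaks(rel, distance=int(min_step_time * fps))
    return peaks
```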

Table 1.

Summary and agreement statistics for the view-angle-independent gait parameters walking speed and step time.
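
Agreement between two measurement systems for such parameters is commonly summarized with the Bland-Altman bias and 95% limits of agreement. A sketch of that computation; the tables' exact statistics may include further measures:

```python
import numpy as np

def bland_altman(kinect, vicon):
    """Bland-Altman bias and 95% limits of agreement (a common choice
    for method-comparison studies). kinect, vicon: paired 1D arrays of
    a gait parameter measured by the two systems.
    """
    diff = np.asarray(kinect) - np.asarray(vicon)
    bias = diff.mean()
    sd = diff.std(ddof=1)                   # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```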

Table 2.

Summary and agreement statistics for spatiotemporal gait parameters using Kinect tracking from the left side (one-sided).

Table 3.

Summary and agreement statistics for spatiotemporal gait parameters using Kinect tracking from both sides (two-sided).
