
Locally weighted PCA regression to recover missing markers in human motion data


The “missing markers problem”, i.e., markers going missing during a motion capture session, has been a long-standing issue in the motion capture field. We propose a locally weighted principal component analysis (PCA) regression method to address this challenge. Its main merit is to introduce the sparsity of observation datasets, via the multivariate tapering approach, into traditional least square methods, developing them into a new kind of least square method with sparsity constraints. To the best of our knowledge, it is the first least square method with sparsity constraints. Our experiments show that the proposed regression method reaches high estimation accuracy and has good numerical stability.


Motion Capture (MoCap) technology is widely applied in daily life, ranging from clinical purposes and sport coaching to movie visual effects production, computer animation [1–4] and VR/AR devices such as the iPad’s LiDAR sensor. We focus on one kind of MoCap data, 3D skeletal motion data, in which common problems (e.g., missing markers due to occlusion, or short-duration high-frequency noise, i.e., jitter) result in gaps in the datasets; this is known as the “Missing Marker Problem” [5]. Although commercial software provides powerful tools for aiding the cleanup of MoCap data [6], cleanup can still take several hours per capture and is almost always the most expensive and time-consuming part of the pipeline. A rising challenge is to improve both the accuracy of recovering gaps and the computational efficiency for ever-growing data [7]. A number of measures have been presented to address this problem. Traditional approaches [8–10], utilizing linear interpolation, spline interpolation, monotone piecewise cubic interpolation and the Kalman filter, can successfully recover gaps. Resorting to the available temporal information, they can work in real time. However, these methods usually rely on the continuity of motion sequences. Manual intervention is still required when markers are missing for a long period of time, or missing from the very beginning [11, 12]. Thus this kind of method may be unsuitable for long durations of missing joints [13, 14].

Besides, several methods, by Li et al. [15, 16] and Tan et al. [17], employed Linear Dynamic System (LDS) technology and were successfully applied in real-time settings, but they fail when a large ratio of markers is occluded [18]. Moreover, Singular Value Thresholding (SVT) [19] and Non-negative Matrix Factorization (NMF) [11] approaches were employed in [12, 20, 21]. Their distinct advantage is to provide the sparsest approximation to redundant motion datasets. Further research [22] aimed to clean up motion data through low-rank matrix decomposition. However, such low-rank approximation methods usually require prior knowledge of skeleton constraints, or the availability of a prerecorded dataset to recalculate skeleton constraints. Unfortunately they still lead to unrealistic recovery [23], particularly in multiple-missing-marker scenarios [5, 24]. [18] further shows that when joints move up and down sharply and rapidly in motion sequences, all methods suffer too many “outliers” in the context of interpolation; the numerical stability of algorithms should therefore be given priority. Apart from that, Federolf [5, 24] applied the principal component analysis (PCA) approach to multiple-missing-marker scenarios. The improvement is limited since they did not utilize training datasets. To take advantage of training datasets, Liu et al. [25] proposed a method combining PCA and K-means clustering. Our previous work [26] also made an attempt to tackle this issue. However, the numerical stability of algorithms remains a major challenge.

The main contribution of this paper is to introduce the multivariate tapering approach [27] into traditional least square methods and develop the result into a locally weighted PCA regression method for the “missing marker problem”. To the best of our knowledge, it is the first least square method with sparsity constraints. Essentially, thanks to a sparse approximate covariance, it effectively suppresses the errors from redundant observation data and drastically improves the accuracy of estimation; traditional least square methods simply cannot handle the redundancy of input data well. Our experiments validate that the proposed locally weighted PCA regression method has good numerical stability.


Our basic idea is to apply weighted least squares (WLS) to principal component analysis regression. Unlike traditional WLS, we introduce a locally weighted strategy into WLS and derive the locally weighted PCA algorithm. For clarity, we briefly describe the weighted PCA method in this section, then propose the locally weighted PCA, and finally address extreme cases.

A sequence of 3D skeletal motion data is usually represented in matrix form. Let a sample of motion data be Ai ∈ ℝ^(m×3n), i = 1‥K, where m is the number of frames, n is the number of markers, K is the number of training samples and m ≫ n. All the training samples may be stacked in a matrix A = (A1, …, AK). Let the testing sample be M ∈ ℝ^(m×3n), which contains G gaps. Each gap refers not only to some missing marker but also indicates the beginning and ending time of that marker’s absence in a sample. We can apply these gaps to every training sample Ai so that the resulting training sample has the same gaps as the testing sample M. To emphasize every gap, we apply only one gap to all the training samples at a time, which yields, for each full motion matrix Ai, a set of G gapped versions; stacking these by gap index g = 1‥G gives the individual gap-groups Ãg = (Ã1, …, ÃK).
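The gap-group construction described above can be sketched in NumPy. The sample shapes, gap positions, and the convention of zeroing out missing entries are illustrative assumptions, not the paper’s exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, K = 400, 41, 5                # frames, markers, training samples (toy sizes)
A = [rng.standard_normal((m, 3 * n)) for _ in range(K)]  # full training samples A_i
M = rng.standard_normal((m, 3 * n))                      # testing sample with gaps

# A gap = (marker index, start frame, end frame): zero out that marker's x,y,z columns.
def apply_gap(sample, marker, t0, t1):
    out = sample.copy()
    out[t0:t1, 3 * marker:3 * marker + 3] = 0.0
    return out

gaps = [(2, 10, 390), (7, 50, 200)]  # G = 2 hypothetical gaps detected in M
# For each gap g, apply only that gap to every training sample: one gap-group per gap.
gap_groups = [[apply_gap(Ai, *g) for Ai in A] for g in gaps]
```

Each `gap_groups[g]` then plays the role of one gap-group Ãg.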

Weighted PCA

Applying Singular Value Decomposition (SVD) to the training sample sets yields,

A = UΣVᵀ,  Ãg = ŨgΣ̃gṼgᵀ,  (1)

where U and Ũg span the individual eigenspaces. In general, the principal component space is regarded as the sub-eigenspace spanned by the first k eigenvectors. (We still use U and Ũg to denote the principal component spaces in the following.) There exists a linear mapping Tg between U and Ũg. Thus we may assume that,

Ũg ≈ U Tg,  (2)

where Tg is a mapping of size k × k with regard to the g-th gap. The residual error is expressed as,

B = U Tg − Ũg.  (3)

Applying SVD to the residual BᵀB yields the eigenvalues {δi}.
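Eqs 1–3 can be illustrated with a small NumPy sketch. The toy data, the zeroed-gap stand-in for Ãg, and the least-squares fit for the mapping Tg are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, k = 200, 30, 5                       # frames, features, kept components
A  = rng.standard_normal((m, d))           # full training data
Ag = A.copy(); Ag[:, 9:12] = 0.0           # same data with one gap zeroed out

# Principal component spaces: first k left singular vectors of each set (Eq 1).
U,  _, _ = np.linalg.svd(A,  full_matrices=False)
Ug, _, _ = np.linalg.svd(Ag, full_matrices=False)
U, Ug = U[:, :k], Ug[:, :k]

# Linear mapping T_g (k x k) between the two subspaces, fitted by least squares
# (Eq 2), and its residual B = U T_g - Ug (Eq 3).
Tg, *_ = np.linalg.lstsq(U, Ug, rcond=None)
B = U @ Tg - Ug
deltas = np.linalg.eigvalsh(B.T @ B)       # eigenvalues {delta_i} of B^T B
```

Since the columns of U are orthonormal, the residual norm is bounded by ‖Ũg‖, so the fit can only shrink the discrepancy between the two subspaces.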

To weight the residual error, we construct the weight matrix W as a diagonal matrix with diagonal entries {1/δi} obtained from the training sample pairs. In practice we set a threshold: when δi < threshold, we let δi = threshold. In weighted least squares, W overcomes the issue of non-constant variance in the samples.
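Constructing W with the threshold safeguard might look like this; the eigenvalues and threshold value below are made up for illustration.

```python
import numpy as np

# Hypothetical residual eigenvalues {delta_i}; tiny ones are clamped to a
# threshold so the weights 1/delta_i stay bounded.
deltas = np.array([4.0, 1.0, 0.25, 1e-12])
threshold = 1e-6
deltas_clamped = np.maximum(deltas, threshold)
W = np.diag(1.0 / deltas_clamped)
```

Without the clamp, a near-zero δi would produce an enormous weight and destabilize the solve.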

We rewrite Eq 2 in a linear combination form as below,

Ũ ≈ Σg αg U Tg,  g = 1‥G,  (4)

where Ũ now spans the eigenspace of samples containing all G gaps rather than one gap, and the regression coefficients αg correspond to the gaps separately. To solve the unknown α, we employ the weighted least square method as follows,

α = argminα ‖(Ũ − Σg αg U Tg) W‖²,  (5)

This yields the weighted PCA interpolation equation (6) for the testing sample M, where M* denotes the reconstructed full motion matrix.
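The weighted least square solve has the standard normal-equation form. The sketch below assumes a generic design matrix X standing in for the per-gap terms and a constant diagonal weight; both are placeholders, not the paper’s actual quantities.

```python
import numpy as np

rng = np.random.default_rng(2)
# Weighted least squares: solve min_alpha || W^(1/2) (X alpha - y) ||^2.
n_obs, G = 50, 3
X = rng.standard_normal((n_obs, G))        # columns ~ per-gap regressors
alpha_true = np.array([1.0, -2.0, 0.5])
y = X @ alpha_true                         # noiseless target for the demo
w = np.full(n_obs, 2.0)                    # diagonal of W

Xw = X * w[:, None]                                # W X
alpha = np.linalg.solve(X.T @ Xw, Xw.T @ y)        # (X^T W X)^-1 X^T W y
```

With noiseless data the normal equations recover the coefficients exactly, which makes the sketch easy to verify.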

Locally weighted PCA

Although the weighted least square method has good numerical stability and high computational efficiency, and in particular can deal with “outliers”, it still suffers from underfitting. When gaps lie in areas where joints have big fluctuations, it is hard for Eq 6 to improve the interpolation accuracy. The locally weighted strategy introduces a weighting mask over the motion matrix M, in which entries far from the gaps are given lower weights and entries near the gaps are given higher weights. This tapers the distance function to zero beyond a certain range. Mathematically, the mask brings some sparsity to the covariance and results in an asymptotically optimal mean squared error. The mask is defined as,

Q(p) = exp(−dist(p, g)² / (2σ²)),  (7)

where p denotes an entry within a sample matrix, and σ denotes the window size of the Gaussian function. dist denotes the distance from an entry p to the g-th gap within a sample matrix, that is, the square root of the sum of the squared spatial and temporal distances of the markers in a motion matrix. The index i indicates the time dimension and is used to compute the temporal distance, while j stands for the markers. However, we compute the spatial distance between markers using the shortest path on a human skeleton model instead of the real distance between two markers.
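A minimal version of the Gaussian taper mask can be written as follows, assuming a frames × markers grid and stubbing the skeleton shortest-path distance with a simple marker-index difference (a placeholder, not the paper’s skeleton metric).

```python
import numpy as np

# Gaussian taper mask over an m x n_markers grid: entries near the gap get
# weight close to 1, entries far away decay towards 0.
m, n_markers = 100, 10
gap_frame, gap_marker = 50, 4      # hypothetical gap location
sigma = 8.0                        # window size of the Gaussian

i = np.arange(m)[:, None]                  # temporal index
j = np.arange(n_markers)[None, :]          # marker index (stub spatial distance)
dist2 = (i - gap_frame) ** 2 + (j - gap_marker) ** 2
Q = np.exp(-dist2 / (2.0 * sigma ** 2))    # tapers to ~0 beyond a few sigma
```

Entries more than a few σ from the gap receive weights that are numerically negligible, which is the source of the sparsity discussed below.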

Moreover, for each gap, we construct an individual mask Qg and apply it to the two sample sets A and Ãg respectively, which updates Eq 1 to yield the eigenspaces Ug and Ũg accordingly. We rewrite Eq 3 as (8), where ′.*′ denotes elementwise multiplication. The weight matrix W is constructed from the eigenvalues of the covariance BᵀB. The regression coefficient α is solved by minimizing the residuals of Eq 8 as (9), where the entries of α are quantified in [0‥1]. This yields the interpolation equation (10) for the testing sample M.

Compared to the weighted PCA, the key difference is the locally weighted mask Q, which is applied to the sample sets A and Ãg separately and results in tapered covariance matrices. Essentially, the proposed locally weighted PCA is still a variant of weighted least square methods. Additionally, we only care about the interpolated entries in M* of Eq 10; the others may be neglected.


The strategy of the locally weighted mask Q comes from the multivariate tapering approach, in which tapering, i.e., creating sparse approximate linear systems, has been shown to be an efficient tool in both estimation and prediction settings [27]. In the missing marker interpolation scenario, we essentially construct the tapered covariances through a direct product of the presumed covariance function and a positive definite but compactly supported correlation function, i.e., the mask Q. Theoretically, multivariate tapering has shown asymptotic optimality for prediction, as well as consistency and asymptotic efficiency for estimation. In the application of motion data interpolation, our basic idea is to use traditional weighted least square methods to suppress “outliers” and the covariance tapering technique to refine estimations. The “outliers” usually result in non-constant variances, which are remedied by the traditional WLS; the sparsity of the tapered covariances can both remedy the underfitting issues and amend “outliers”.

The covariance tapering method we use differs slightly from the original version [27, 28], which applies the mask Q to the covariance itself to make it sparse but requires that the tapered covariance maintain positive definiteness. In our implementation, the mask Q is applied directly to the sample data, i.e., Eq 8. Although positive definiteness is guaranteed by the weight W in Eq 9, a question arises: is the resulting covariance in Eq 9 sparse? In fact, it is, which may be explained simply as follows. Consider a sparse matrix A with mean zero, and let the covariance be C = AAᵀ. The diagonal entries of C are the squared norms of the rows of A. The off-diagonal entries indicate how closely related the rows of A are: when two rows of A are similar, their off-diagonal entry in C is large; otherwise it is close to zero. If we threshold C, the covariance is limited to a local neighborhood in C; thus C is sparse. Moreover, in terms of the multivariate tapering approach, the asymptotic mean squared error of the tapered covariance, i.e., Eq 9, converges to the optimal error.
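The sparsity argument can be checked numerically: rows with localized, overlapping support yield a banded covariance, and thresholding leaves it sparse. The construction below is a toy illustration (the de-meaning step is omitted so the rows keep their localized support).

```python
import numpy as np

rng = np.random.default_rng(3)
# Rows that overlap in their support are correlated; distant rows are exactly
# uncorrelated, so C = A A^T is banded, and thresholding keeps it sparse.
n, width = 40, 6
base = rng.standard_normal(n + width)
A = np.zeros((n, n + width))
for r in range(n):
    A[r, r:r + width] = base[r:r + width]    # localized, overlapping support
C = A @ A.T

# Threshold small entries; what survives sits in a band around the diagonal.
C_thresh = np.where(np.abs(C) > 0.1 * np.abs(C).max(), C, 0.0)
sparsity = np.mean(C_thresh == 0.0)
```

Rows whose supports do not overlap (here, |r − s| ≥ 6) give exactly zero off-diagonal entries, so well over two thirds of C vanishes even before thresholding.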

Extreme case

Missing whole frames.

There are two extreme scenarios: missing whole frames, and markers missing throughout. Due to sudden high-frequency noise (jitter) in the output, some frames are contaminated and have to be removed, which causes the issue of missing whole frames. The detachment of markers during the MoCap session results in the other issue, markers missing throughout. Because the samples are de-meaned (i.e., the mean is removed from the samples) in the pre-processing stage, an accurate estimation of the mean is necessary to recover samples. It remains challenging to estimate a proper mean when markers are missing throughout the MoCap session; in contrast, there is no such difficulty in the missing-whole-frames scenario. Our previous work [26] also shows that the recovered samples are very sensitive to the estimated mean. We thus focus on the missing-whole-frames scenario in this paper.

Given an eigenspace V and the projection a of some sample P onto V, the sample can be expressed as P = Va. If we divide V into two parts, V = (V1; V2), then P may be reconstructed in two parts, P1 = V1a and P2 = V2a, in which the same projection a is shared by the two parts of V. Thus it is possible to reconstruct one part of P from the other, e.g., P2 = V2(V1⁺P1), where V1⁺ denotes the pseudo-inverse. Moreover, if P2 contains one row vector while P1 contains all the other vectors, this effectively improves the estimation accuracy of P2. This is essentially a computation in terms of the correlation between frames. To this end, we introduce the Gram matrix into Eq 1, which represents the inner product space, so that we can exploit the correlation between frames.
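The shared-projection reconstruction of P2 from P1 can be verified with a small sketch, using a least-squares solve in place of an explicit pseudo-inverse (toy sizes assumed).

```python
import numpy as np

rng = np.random.default_rng(4)
# P = V a: split V row-wise into a known part V1 and a missing part V2; the
# shared projection a, recovered from P1, lets us rebuild P2.
d, k = 20, 4
V = np.linalg.qr(rng.standard_normal((d, k)))[0]   # orthonormal eigenvector block
a = rng.standard_normal(k)
P = V @ a

V1, V2 = V[:-1], V[-1:]        # all rows but the last vs. the one "missing" row
P1 = P[:-1]
a_hat, *_ = np.linalg.lstsq(V1, P1, rcond=None)    # recover the shared projection
P2_hat = V2 @ a_hat                                # reconstruct the missing part
```

Since V1 keeps full column rank after dropping one row, the shared projection, and hence P2, is recovered exactly in this noiseless setting.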

Let A = (A1, …, AK) and Ã = (Ã1, …, ÃK), where each pair Ai and Ãi share the same shape but Ãi contains the missing frames. Let Ãi = (Ãi1; Ãi2), where Ãi1 and Ãi2 correspond to the non-missing frame part and the missing frame part respectively. Ai has the same partition, Ai = (Ai1; Ai2). The eigenspaces can be constructed by SVD, AAᵀ = VΣVᵀ and ÃÃᵀ = ṼΣ̃Ṽᵀ. There is a linear mapping T between V and Ṽ. We can divide V and Ṽ according to the non-missing frame part and the missing frame part, (11) and obtain the residual error as, (12). Unlike Eq 3, the weight matrix W is constructed from the eigenvalues of BBᵀ, which is unlikely to have zero eigenvalues due to the small number of missing frames in practice.
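The Gram matrix route can be sanity-checked in NumPy: the nonzero eigenvalues of AAᵀ equal the squared singular values of A, and its eigenvectors span the same frame space as the left singular vectors (toy sizes assumed).

```python
import numpy as np

rng = np.random.default_rng(5)
# Frame-space eigenvectors from the Gram matrix A A^T (m x m), which exposes
# correlations between frames rather than between marker coordinates.
m, d = 60, 12
A = rng.standard_normal((m, d))
A -= A.mean(axis=0, keepdims=True)         # de-mean, as in the pre-processing

Gram = A @ A.T
evals, V = np.linalg.eigh(Gram)            # eigendecomposition of the Gram matrix
V = V[:, ::-1]; evals = evals[::-1]        # reorder to descending eigenvalues

# Same frame-space basis via SVD of A, for a sanity check on the spectra.
U, S, _ = np.linalg.svd(A, full_matrices=False)
```

Only the first d = rank(A) eigenvalues are nonzero, so the frame-space basis is low-dimensional even though the Gram matrix is m × m.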

As the missing part of a sample is a set of whole frames, we may view each missing frame as a gap and set the number of missing frames as G here. Without consideration of the markers’ spatial distances, we can remove the index j from the locally weighted mask of Eq 7 and apply it to the two sample sets A and Ã to generate the individual eigenvectors Vg and Ṽg, and the mapping Tg between them, g = 1‥G, respectively. After that, we solve the regression coefficients α by minimizing (13), where α is a vector of G unknown regression parameters. We then conclude the locally weighted PCA regressor to recover the missing whole frames, (14), where M1 denotes the non-missing frame part of the testing sample M while M2* is the recovered missing frame part.


Dataset and experiment settings

We conduct experiments on two well-known datasets: the Motion Capture Database HDM05 [29] and the CMU Motion Capture Database [30]. These datasets contain up to 4000 motion frames with 41 markers. For convenience, we denote them as HDM and CMU respectively. In our experiments, two motion sequences are used for samples; every sample consists of 400 successive frames from the sequences.

We consider two scenarios in our experiments: one missing marker and multiple missing markers in motion sequences. For the single-missing-marker case, we test each marker (or joint) with one random gap, where the gap refers to one missing marker as in [24]. Each sample is a sequence of 400 consecutive frames, and each gap lasts 380 consecutive frames within it. For the multiple-missing-markers case, we produce three types of samples, with 3, 6 and 9 gaps randomly placed in a sample; each gap occupies one marker. Fig 1 illustrates missing markers in a sequence sample. To reduce the influence of the randomness in generating samples, every setting is executed 50 times, and the final result is the average of the 50 recovery errors. For the extreme case, we also produce four types of samples, with 3, 6, 9 and 12 consecutive whole frames as gaps, randomly placed in a sample.

Fig 1. Examples of missing a single marker and multiple markers in a sample.

YELLOW indicates gaps. (a) a single gap in HDM dataset. (b) multiple gaps in HDM dataset.

To compare with state-of-the-art methods, we focus on Probabilistic Model Averaging (PMA) [18] and two PCA-based reconstruction methods, from Gløersen et al. [24] and our previous work [26]. Tits et al. [18] show a good performance of their PMA against the other existing methods. Gløersen et al. [24] presented two methods, PCA_R1 and PCA_R2; their experiments show that PCA_R2 outperforms PCA_R1. Our previous work [26] is closely related to [24], the difference being the use of training datasets. Other methods, such as the Kalman-filter-based gap-filling algorithm [14], do not share their source code for comparison. Hence, we compare our proposed algorithms, Weighted PCA (denoted WPCA) and Locally Weighted PCA (denoted LWPCA), with PMA, PCA_R2 (denoted PCA) and our previous method [26] in our tests.

The recovery error in our experiments is measured by the Mean Square Error (MSE) as follows,

MSE = ‖Mask .* (M* − Mgrd)‖²F / m,  (15)

where Mask is a 0/1 matrix of the same shape as the sample M*, Mgrd is the ground truth matrix, ′.*′ denotes elementwise multiplication, ‖·‖F is the Frobenius norm, and m is the number of missing entries in a testing sample matrix. The mean recovery error averages the MSEs of the 50 trials.
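Eq 15 amounts to averaging squared errors over the masked (missing) entries only; a minimal sketch with made-up 2×2 matrices:

```python
import numpy as np

# MSE over the recovered entries only: Mask is 1 on missing entries, 0 elsewhere,
# and m_missing is the number of missing entries.
M_star = np.array([[1.0, 2.0], [3.0, 4.0]])   # reconstruction
M_grd  = np.array([[1.0, 2.5], [3.0, 5.0]])   # ground truth
Mask   = np.array([[0.0, 1.0], [0.0, 1.0]])   # the two missing entries

m_missing = int(Mask.sum())
mse = np.sum((Mask * (M_star - M_grd)) ** 2) / m_missing
```

Non-missing entries contribute nothing, so a method is scored only on what it actually had to recover.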


Missing single marker.

Table 1 shows a comparison of our proposed algorithms (WPCA and LWPCA) and the other methods (PCA, PMA and [26]) on 5 markers (or joints). All the methods are run on the single-missing-marker setting with 50 trials per marker; that is, for each marker, a gap is randomly placed on its trajectory in each trial, and the average values of the five methods are shown in Table 1. Thanks to the locally weighted mask strategy, our LWPCA outperforms the others. LWPCA’s results are noticeably better than those of the other methods, though it can be noted that our algorithms’ performance (i.e., [26], WPCA and LWPCA) is very close on the CMU dataset compared to the HDM dataset. This is because the algorithm in [26] and WPCA are essentially least square methods that can overcome noise but fail on “outliers”, whereas LWPCA can overcome both. Our algorithms employ the training dataset (see Eq 1). Compared to the HDM training data, there are fewer outliers in the CMU training data. As a result, when running our algorithms on the two datasets, the differences are more prominent on the HDM data than on the CMU data.

Table 1. Mean recovery error for missing a single marker (a smaller value indicates a smaller error according to Eq 15; results for all joints are given in the S1 Appendix).

Moreover, it can be noted in Table 1 that the performance of our LWPCA on joints 9 and 14 in the HDM group and joints 21, 30 and 33 in the CMU group is not the best (for all joints’ results, refer to the S1 Appendix). This is not surprising. From a statistical perspective, the per-joint statistics in Table 1 are averages that conceal the numerical variance. Reconstructing a joint’s trajectory discloses the numerical variance of the fitted curves and gives insight into the methods. Fig 2 shows the trajectory reconstruction of these five joints with a single missing marker. The ground-truth trajectories of these five joints have big fluctuations, which tend to cause overfitting when interpolating gaps on them; thus the stability of the algorithms takes priority here. We show the whole reconstructed trajectories instead of only the interpolated gaps to highlight the numerical performance of our method, LWPCA, i.e., that it reaches the desired reconstruction accuracy. By contrast, the statistical averages in Table 1 cannot always accurately reveal the methods’ numerical performance. The important point is that our LWPCA demonstrates good numerical stability.

Fig 2. Comparison of the reconstruction for a single missing gap at joints 9 and 14 on HDM data and joints 21, 30 and 33 on CMU data respectively.

Dotted lines represent the ground truth trajectories of joints. As WPCA is very close to LWPCA, it is not shown here.

Missing multiple markers.

Table 2 shows the average recovery errors as the number of missing markers in a testing sample increases. For a quantitative comparison, we report the percentage of gaps over the whole sample in Table 2. Our LWPCA outperforms the others. Moreover, Fig 3 shows a boxplot of the recovery errors for 3, 6 and 9 missing markers in a testing sample. It can be noted that our algorithms ([26], WPCA and LWPCA) achieve a low variance of errors while the PCA and PMA methods suffer a large variance; this means that our algorithms have good numerical stability. This is also confirmed by Fig 4, which shows joint 33’s trajectories when there are 3, 6 and 9 gaps in a sample respectively, with the trajectory of joint 33 containing a gap throughout. Even as the number of gaps increases, our LWPCA shows no evident degradation in performance. This is not surprising, since our algorithms employ training datasets.

Fig 3. Boxplot of recovery errors for missing 3, 6, 9 markers in a testing sample using CMU dataset.

Fig 4. Reconstruction of the trajectory of joint-33 in the scenarios of missing 3,6 and 9 markers.

Dotted lines represent the ground truth trajectories of the joint-33.

Table 2. Mean recovery errors for missing multiple markers (a smaller value indicates a smaller error according to Eq 15).

Additionally, PMA [18] employs a “Spacing Constraint” as a post-process against outliers, which effectively enhances the algorithm’s robustness; we therefore use PMA with constraints in our tests. For an intuitive comparison, we further run our LWPCA with the same settings as [18] and compare the results with Table 2 of [18] in Table 3. The low p-values indicate that PMA suffers many “outliers” and depends heavily on the “Spacing Constraint” to suppress them. Furthermore, compared to Fig 3, it can be noted that our LWPCA evidently decreases the risk of “outliers”. Thus our LWPCA demonstrates good robustness.

Table 3. Mean recovery errors comparison with Table 2 of [18].

Extreme case–missing whole frames.

We compare the LWPCA method with our previous method [26] in estimating missing whole frames in a sample, since the other methods do not take such extreme cases into account. Table 4 shows that the proposed LWPCA is evidently better than our previous method [26]. Moreover, comparing Table 4 with Tables 1 and 2, it can be noted that the LWPCA performance remains comparable. This confirms again that the proposed LWPCA has good numerical stability.


In this paper, we introduce the sparsity of observation data, via the multivariate tapering approach [27], into traditional least square methods and develop them into a locally weighted least square scheme. It is the first least square method with a sparsity constraint and has wide applications in prediction, estimation, regression analysis, etc. To validate its numerical performance, we apply the proposed locally weighted PCA regressor (i.e., LWPCA, Eq 10) to the “missing markers problem”. The experimental results show that the proposed LWPCA reaches high estimation accuracy and has good numerical stability.

For motion data interpolation, our LWPCA demonstrates good numerical performance, although the extreme case of markers missing throughout is not taken into account. In fact, the distinct advantage of our methods (WPCA, LWPCA, [26]) is the use of training datasets. Using training datasets does not add computational burden, since the complexity of matrix decomposition depends on the sample’s size rather than the number of samples, and our LWPCA can work nearly in real time. However, selecting training data remains challenging; this will be our future work.


1. Moeslund TB, Hilton A, Krüger V. A survey of advances in vision-based human motion capture and analysis. Computer Vision and Image Understanding. 2006;104(2-3):90–126.
2. Hilton A, Fua P, Ronfard R. Modeling people: Vision-based understanding of a person’s shape, appearance, movement, and behaviour. Computer Vision and Image Understanding. 2006;104(2):87–89.
3. Rego P, Moreira PM, Reis LP. Serious games for rehabilitation: A survey and a classification towards a taxonomy. In: 5th Iberian Conference on Information Systems and Technologies. IEEE; 2010. p. 1–6.
4. Zhou H, Hu H. Human motion tracking for rehabilitation—A survey. Biomedical Signal Processing and Control. 2008;3(1):1–18.
5. Federolf PA. A novel approach to solve the “missing marker problem” in marker-based motion analysis that exploits the segment coordination patterns in multi-limb motion data. PLoS ONE. 2013;8(10). pmid:24205295
6. Vicon Software. https://www.vicon.com/products/software/. 2020.
7. Holden D. Robust solving of optical motion capture data by denoising. ACM Transactions on Graphics. 2018;37(4).
8. Fritsch FN, Carlson RE. Monotone piecewise cubic interpolation. SIAM Journal on Numerical Analysis. 1980;17(2):238–246.
9. Rose C, Cohen MF, Bodenheimer B. Verbs and adverbs: Multidimensional motion interpolation. IEEE Computer Graphics and Applications. 1998;18(5):32–40.
10. Aristidou A, Cameron J, Lasenby J. Real-time estimation of missing markers in human motion capture. In: 2008 2nd International Conference on Bioinformatics and Biomedical Engineering. IEEE; 2008. p. 1343–1346.
11. Peng SJ, He GF, Liu X, Wang HZ. Hierarchical block-based incomplete human mocap data recovery using adaptive nonnegative matrix factorization. Computers & Graphics. 2015;49:10–23.
12. Feng Y, Xiao J, Zhuang Y, Yang X, Zhang JJ, Song R. Exploiting temporal stability and low-rank structure for motion capture data refinement. Information Sciences. 2014;277:777–793.
13. Howarth SJ, Callaghan JP. Quantitative assessment of the accuracy for three interpolation techniques in kinematic analysis of human movement. Computer Methods in Biomechanics and Biomedical Engineering. 2010;13(6):847–855. pmid:21153975
14. Gomes D, Guimarães V, Silva J. A fully-automatic gap filling approach for motion capture trajectories. Applied Sciences. 2021;11(21).
15. Li L, McCann J, Pollard NS, Faloutsos C. DynaMMo: Mining and summarization of coevolving sequences with missing values. In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2009. p. 507–516.
16. Li L, McCann J, Pollard N, Faloutsos C. BoLeRO: A principled technique for including bone length constraints in motion capture occlusion filling. In: Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation. SCA’10. Goslar, DEU: Eurographics Association; 2010. p. 179–188.
17. Tan CH, Hou J, Chau LP. Motion capture data recovery using skeleton constrained singular value thresholding. The Visual Computer. 2015;31(11):1521–1532.
18. Tits M, Tilmanne J, Dutoit T. Robust and automatic motion-capture data recovery using soft skeleton constraints and model averaging. PLoS ONE. 2018;13(7). pmid:29990367
19. Lai RY, Yuen PC, Lee KK. Motion capture data completion and denoising by singular value thresholding. In: Eurographics (Short Papers); 2011. p. 45–48.
20. Tan CH, Hou J, Chau LP. Human motion capture data recovery using trajectory-based matrix completion. Electronics Letters. 2013;49(12):752–754.
21. Hu W, Wang Z, Liu S, Yang X, Yu G, Zhang JJ. Motion capture data completion via truncated nuclear norm regularization. IEEE Signal Processing Letters. 2017;25(2):258–262.
22. Liu X, Cheung YM, Peng SJ, Cui Z, Zhong B, Du JX. Automatic motion capture data denoising via filtered subspace clustering and low rank matrix approximation. Signal Processing. 2014;105:350–362.
23. Cao F, Chen J, Ye H, Zhao J, Zhou Z. Recovering low-rank and sparse matrix based on the truncated nuclear norm. Neural Networks. 2017;85:10–20. pmid:27814461
24. Gløersen Ø, Federolf P. Predicting missing marker trajectories in human motion data using marker intercorrelations. PLoS ONE. 2016;11(3). pmid:27031243
25. Liu G, McMillan L. Estimation of missing markers in human motion capture. The Visual Computer. 2006;22(9-11):721–728.
26. Li Z, Yu H, Kieu HD, Vuong TL, Zhang JJ. PCA-based robust motion data recovery. IEEE Access. 2020;8:76980–76990.
27. Furrer R, Bachoc F, Du J. Asymptotic properties of multivariate tapering for estimation and prediction. Journal of Multivariate Analysis. 2016;149:177–191.
28. Du J, Zhang H, Mandrekar VS. Fixed-domain asymptotic properties of tapered maximum likelihood estimators. The Annals of Statistics. 2009;37(6A):3330–3361.
29. Müller M, Röder T, Clausen M, Eberhardt B, Krüger B, Weber A. Documentation Mocap Database HDM05. Universität Bonn; 2007. CG-2007-2.
30. CMU Graphics Lab Motion Capture Database. http://mocap.cs.cmu.edu/. 2020.