## Abstract

In recent years, non-rigid structure from motion (NRSFM) has become one of the most active topics in computer vision due to its wide range of applications. In practice, however, the number of available high-quality images is often limited. Under such a condition, the performance of existing NRSFM algorithms may be unsatisfactory when they are applied directly to estimate the 3D coordinates of a small-size image sequence. In this paper, a sub-sequence-based integrated algorithm is proposed to deal with the NRSFM problem for small sequence sizes. In the proposed method, sub-sequences are first extracted from the original sequence. In order to obtain diversified estimations, multiple weaker estimators are constructed by applying the extracted sub-sequences to a recent NRSFM algorithm with a rotation-invariant kernel (RIK). Because the estimations of some weaker estimators may have large errors, the trimmed mean, a first-order statistic that is relatively robust to outliers, is computed over the outputs of all the weaker estimators to determine the final estimated 3D shapes. Compared to some existing methods, the proposed algorithm achieves higher estimation accuracy and better robustness. Experimental results on several widely used image sequences demonstrate the effectiveness and feasibility of the proposed algorithm.

**Citation:** Wang Y-P, Sun Z-L, Lam K-M (2015) An Effective Approach for NRSFM of Small-Size Image Sequences. PLoS ONE 10(7): e0132370. https://doi.org/10.1371/journal.pone.0132370

**Editor:** Sergio Gómez, Universitat Rovira i Virgili, SPAIN

**Received: **July 29, 2014; **Accepted: **June 14, 2015; **Published: ** July 10, 2015

**Copyright:** © 2015 Wang et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

**Data Availability: **The authors cannot make data publicly available because they were obtained from third parties. Data are freely available from the NRSFM study whose authors may be contacted at Dr. Ijaz Akhter, email: ijaz.akhter@tue.mpg.de, http://ps.is.tuebingen.mpg.de/person/akhter; or from Dr. Paulo Fabiano Urnau Gotardo, email: gotardop@ece.osu.edu, http://www2.ece.ohio-state.edu/~gotardop/.

**Funding: **The work was supported by a grant from National Natural Science Foundation of China (No. 61370109), a grant from Natural Science Foundation of Anhui Province (No. 1308085MF85), and 2013 Zhan-Li Sun's Technology Foundation for Selected Overseas Chinese Scholars from department of human resources and social security of Anhui Province (Project name: Research on structure from motion and its application on 3D face reconstruction). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

**Competing interests: ** The authors have declared that no competing interests exist.

## Introduction

Non-rigid structure from motion (NRSFM) is the process of recovering the relative camera motion, and the time-varying 3D coordinates of feature points on a deforming object, by means of the corresponding 2D points in a sequence of images. In many cases, the recovered 3D shapes can effectively enhance the performances of existing systems in object recognition, face perception, etc. [1–3]. Nevertheless, in the NRSFM model, the objects generally undergo a series of shape deformations and pose variations. Thus, in the absence of necessary prior knowledge on shape deformation, recovering the 3D shape and motion of nonrigid objects from 2D point tracks remains a difficult and ill-posed problem.

As a pioneering work, a non-rigid model was proposed in [4] by formulating the 3D shape in each frame of a sequence as a linear combination of a set of basis shapes. Nevertheless, due to a lack of sufficient constraints on the shape deformation, the recovered 3D shapes are not unique under this model. In order to alleviate the ambiguities, recent research works have attempted to define additional constraints to make NRSFM more tractable [5]. A more determinate solution is given in [6] by utilizing the fact that the shape bases degenerate in some special cases. In [7, 8], the 3D shape at each time instant is assumed to be drawn from a Gaussian distribution. Assuming that the 3D shape deformation is smooth over time, the time-varying structure of a nonrigid object is represented as a linear combination of a set of basis trajectories [9–11], e.g. the Discrete Cosine Transform (DCT) basis. Since the basis trajectories are known *a priori*, this method can significantly reduce the number of unknown parameters and improve the estimation stability. Instead of the time-varying structure, the camera’s trajectory is modeled as a linear combination of DCT basis vectors, which provides better results on complex articulated deformations [12, 13]. In [14], complex deformable 3D shapes are represented as the outputs of a non-linear mapping via the kernel trick [15]. Recently, a novel NRSFM method with a rotation-invariant kernel (RIK) was proposed in [16], which utilizes the spatial-variation constraint. A prominent advantage of this method is that it is able to deal with data lacking temporal ordering or with abrupt deformations.

In practice, the number of available high-quality images may be limited in many cases, such as the face images in a surveillance system. If existing NRSFM algorithms are directly used to estimate the 3D coordinates of a small-size image sequence, the estimation accuracy may be relatively low. In this paper, a sub-sequence-based integrated algorithm is proposed to deal with the small-sequence problem. In the proposed method, the 3D coordinates of each frame are estimated one by one. For a given test frame, a few other frames are first randomly extracted from the original sequence. Then, the extracted frames, together with the test frame, form a sub-sequence to be applied to RIK. Similar to classifier committee learning [17], the sub-sequence and the estimation process of RIK constitute a weaker estimator. Finally, the *z*-coordinates obtained by multiple weaker estimators are integrated and used as the final estimation for the test frame. Experimental results on several widely used image sequences demonstrate the effectiveness and feasibility of the proposed algorithm.

## Methodology

Fig 1 shows the flowchart of the sub-sequence-based integrated RIK algorithm. There are three main steps in our algorithm: extract the sub-sequences from the original sequences, construct the weaker estimators based on the RIK algorithm, and integrate the outputs of the weaker estimators. A detailed description of these three steps is presented in the following subsections.

### Sub-Sequence Extraction

The first step of our proposed method is to extract sub-sequences from a small-size sequence, as shown in Fig 2. For a sequence with *F* frames and *n* feature points in each of the frames, denote [*x*_{t, j}, *y*_{t, j}]^{T} (*t* = 1, 2, ⋯, *F*, *j* = 1, 2, ⋯, *n*) as the 2D projection of the *j*th 3D point observed on the *t*th image. The *n* 2D point tracks of the *F* images can be represented as a 2*F* × *n* observation matrix **W**, i.e.
$$\mathbf{W}=\begin{bmatrix} x_{1,1} & \cdots & x_{1,n}\\ y_{1,1} & \cdots & y_{1,n}\\ \vdots & \ddots & \vdots\\ x_{F,1} & \cdots & x_{F,n}\\ y_{F,1} & \cdots & y_{F,n} \end{bmatrix} \tag{1}$$
For the *t*th frame, the observation **w**_{t} is a 2 × *n* matrix, as follows:
$$\mathbf{w}_{t}=\begin{bmatrix} x_{t,1} & x_{t,2} & \cdots & x_{t,n}\\ y_{t,1} & y_{t,2} & \cdots & y_{t,n} \end{bmatrix} \tag{2}$$
The observations of an original sequence with *F* images are derived. When the 3D coordinates of the *t*th image are to be estimated, the matrix **W**_{r} shown in Fig 2 can be given as follows:
$$\mathbf{W}_{r}=\left[\mathbf{w}_{1}^{T},\ \cdots,\ \mathbf{w}_{t-1}^{T},\ \mathbf{w}_{t+1}^{T},\ \cdots,\ \mathbf{w}_{F}^{T}\right]^{T} \tag{3}$$
Assuming that the number of frames in a sub-sequence is *F*_{s}, the observation matrix is constructed by randomly selecting *F*_{s}−1 observations from **W**_{r} and merging them with **w**_{t}. Thus, *N* sub-sequences are obtained when the sub-sequence extraction process is repeated *N* times.
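As an illustration, the extraction step above can be sketched in Python. The function name and the 0-based frame-indexing convention are our own; the paper does not prescribe an implementation:

```python
import numpy as np

def extract_subsequences(W, t, F_s, N, rng=None):
    """Build N sub-sequence observation matrices for test frame t.

    W   : (2F, n) observation matrix; rows 2t and 2t+1 hold the x- and
          y-rows of frame t (0-based indexing).
    F_s : number of frames per sub-sequence.
    N   : number of sub-sequences (weaker estimators).
    """
    rng = np.random.default_rng(rng)
    F = W.shape[0] // 2
    w_t = W[2 * t:2 * t + 2]                    # 2 x n observation of the test frame
    others = [f for f in range(F) if f != t]    # frames that form W_r
    subsequences = []
    for _ in range(N):
        # randomly pick F_s - 1 distinct frames from W_r and merge them with w_t
        picked = rng.choice(others, size=F_s - 1, replace=False)
        blocks = [w_t] + [W[2 * f:2 * f + 2] for f in picked]
        subsequences.append(np.vstack(blocks))  # (2*F_s) x n matrix
    return subsequences
```

Sampling without replacement keeps the frames within one sub-sequence distinct, while the N repetitions produce the diversified inputs needed by the weaker estimators.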

### RIK-based Weaker Estimator

For each test frame **w**_{t}, we construct *N* sub-sequence observation matrices $\mathbf{W}_s^{(1)}, \cdots, \mathbf{W}_s^{(N)}$. In order to estimate the 3D coordinates of the *t*th frame, each sub-sequence is applied in turn to the RIK algorithm. Assume that the number of basis shapes is *K*. In terms of the linear-subspace model [8], a sub-sequence observation matrix $\mathbf{W}_s$ is factorized as a product of two matrices via singular value decomposition, i.e.
$$\mathbf{W}_{s}=\mathbf{M}\mathbf{S} \tag{4}$$
where **M** is a 2*F*_{s} × 3*K* camera matrix, and **S** includes *K* basis shapes, i.e.
$$\mathbf{S}=\left[\mathbf{S}_{1}^{T},\ \mathbf{S}_{2}^{T},\ \cdots,\ \mathbf{S}_{K}^{T}\right]^{T} \tag{5}$$
Further, **M** is decomposed as follows:
$$\mathbf{M}=\mathbf{D}\left(\mathbf{C}\otimes\mathbf{I}_{3}\right) \tag{6}$$
where the block-diagonal rotation matrix **D** is obtained via an Euclidean upgrade step [10], and **C** and **I**_{3} represent a shape coefficient matrix and a 3 × 3 identity matrix, respectively. The operator ⊗ denotes the Kronecker product. Further, **C** is represented as a product of the coefficient matrix **X** and a new basis matrix **B** [13], i.e.
$$\mathbf{C}=\mathbf{X}\mathbf{B} \tag{7}$$
In the optimization procedure, **X** can be initialized as a low-rank identity matrix, and **B** is computed via the kernel mapping [15]. Let $\mathbf{c}_t^{T}$ be the *t*th row of **C**. The 3D shape of the *t*th image can be given as follows:
$$\hat{\mathbf{S}}_{t}=\left(\mathbf{c}_{t}^{T}\otimes\mathbf{I}_{3}\right)\mathbf{M}^{\dagger}\mathbf{W}_{s} \tag{8}$$
where **M**^{†} denotes the Moore-Penrose pseudo-inverse of **M** [16].
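A minimal numerical sketch of the rank-3*K* factorization in Eq (4) via truncated SVD is given below. The Euclidean upgrade, kernel mapping, and optimization steps of RIK are omitted, and the function name and the even split of the singular values are our own choices:

```python
import numpy as np

def factorize(W_s, K):
    """Rank-3K factorization W_s ~= M S via truncated SVD (cf. Eq (4))."""
    U, s, Vt = np.linalg.svd(W_s, full_matrices=False)
    r = 3 * K
    # split the singular values evenly between the two factors
    M = U[:, :r] * np.sqrt(s[:r])             # (2*F_s) x 3K camera matrix
    S = np.sqrt(s[:r])[:, None] * Vt[:r]      # 3K x n stacked basis shapes
    return M, S
```

When W_s has rank at most 3*K*, the product M @ S reproduces it exactly; otherwise the factorization is the best rank-3*K* approximation in the least-squares sense.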

### Integration of Weaker Estimators

For the *t*th test frame, we can see from the previous subsections that one set of estimated **z**_{tj} can be obtained for the *j*th sub-sequence $\mathbf{W}_s^{(j)}$. When each sub-sequence is applied in turn to RIK, we obtain *N* sets of estimated **z**_{tj} (*j* = 1, ⋯, *N*). Similar to the notion of classifier-committee learning [17] in pattern recognition, here each input and the corresponding reconstruction model can be considered as a weaker estimator. In order to integrate the results obtained by the *N* weaker estimators, the arithmetic average of **z**_{t1}, ⋯, **z**_{tN} is a relatively simple implementation, i.e.
$$\bar{\mathbf{z}}_{t}=\frac{1}{N}\sum_{j=1}^{N}\mathbf{z}_{tj} \tag{9}$$
which can be used as the final estimated *z*-coordinates of the *t*th test image. Compared to the arithmetic average, the trimmed mean provides a more robust integration. Assuming that a fraction *P* of the observations is trimmed, the number (*N*_{d}) of the smallest or largest observations to be discarded is
$$N_{d}=\left[\frac{N\cdot P}{2}\right] \tag{10}$$
where [⋅] denotes a rounding operation. Further, assuming that the entries of **z**_{tj} are ordered such that **z**_{t1} < **z**_{t2} < ⋯ < **z**_{tN}, the trimmed mean can be computed as follows:
$$\bar{\mathbf{z}}_{t}=\frac{1}{N-2N_{d}}\sum_{j=N_{d}+1}^{N-N_{d}}\mathbf{z}_{tj} \tag{11}$$
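The trimmed-mean integration of Eqs (10) and (11) can be sketched as follows. We assume, as one plausible reading, that *P* is given as a fraction and that the ordering is applied per feature point:

```python
import numpy as np

def trimmed_mean(Z, P):
    """Integrate N weaker-estimator outputs by a trimmed mean (Eqs (10)-(11)).

    Z : (N, n) array; row j holds the z-coordinates z_tj from estimator j.
    P : total fraction trimmed (e.g. 0.2 discards 10% at each end).
    """
    N = Z.shape[0]
    N_d = int(round(N * P / 2))       # observations dropped at each end (Eq (10))
    Zs = np.sort(Z, axis=0)           # order the estimates for each feature point
    kept = Zs[N_d:N - N_d]            # discard the N_d smallest and N_d largest
    return kept.mean(axis=0)          # Eq (11): average the remaining estimates
```

Discarding the extremes before averaging is what makes the committee robust to the occasional weaker estimator whose reconstruction fails badly.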

## Experimental results

### Experimental data

We evaluate the performance of our proposed method on three synthetic-image sequences (*stretch*, *face1*, *face2*) and three real-image sequences (*cubes, dance, matrix*), which are widely used sequences and are publicly available [11, 16]. For these 6 sequences, the corresponding number of frames (*T*) and the number of point tracks (*n*) are shown in Table 1.

Besides these data, some real face-image sequences from the Bosphorus database are also used in the experiments. Bosphorus is a relatively new 3D face database that includes face images with a rich set of expressions and a systematic variation in poses [18].

To evaluate the estimation accuracy, two performance indices are adopted here to compare the true 3D shapes and the estimated results. One performance index is the Pearson’s linear correlation coefficient between the true z-coordinates **z** and the estimated z-coordinates $\hat{\mathbf{z}}$, i.e.
$$\rho_{z\hat{z}}=\frac{E\left[\left(\mathbf{z}-\mu_{z}\right)\left(\hat{\mathbf{z}}-\mu_{\hat{z}}\right)\right]}{\sigma_{z}\sigma_{\hat{z}}} \tag{12}$$
where *μ*_{z} and *σ*_{z} are the respective mean and standard deviation of **z**, and $\mu_{\hat{z}}$ and $\sigma_{\hat{z}}$ are the respective mean and standard deviation of $\hat{\mathbf{z}}$. A higher absolute value of $\rho_{z\hat{z}}$ means that $\hat{\mathbf{z}}$ is closer to **z**. The other performance index is the mean error between the true z-coordinates **z** and the estimated z-coordinates $\hat{\mathbf{z}}$, i.e.
$$e=\frac{1}{n}\sum_{j=1}^{n}\left|z_{j}-\hat{z}_{j}\right| \tag{13}$$
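The two performance indices can be sketched as below. The exact form of the mean error is not reproduced in this text, so the mean absolute error is used here as one common choice, an assumption rather than the paper's definitive formula:

```python
import numpy as np

def correlation_and_error(z_true, z_est):
    """Pearson correlation (Eq (12)) and mean absolute z-error (cf. Eq (13))."""
    rho = np.corrcoef(z_true, z_est)[0, 1]    # Pearson's linear correlation
    err = np.mean(np.abs(z_true - z_est))     # assumed mean-error definition
    return rho, err
```

Note that the correlation is invariant to scale and offset of the estimate, while the mean error is not, which is why the two indices are reported together.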

### Experiments

In order to verify the performance of our proposed sub-sequence-based integrated RIK algorithm (denoted as SSI-RIK), we compare it to the original RIK method [16], EM-SFM [7], and CSF [14], which have relatively good performances among existing algorithms.

As the challenge addressed in this paper is the NRSFM problem with small-size image sequences, we first extract a small sequence from an original sequence, to be used as the experimental data. Take the sequence *stretch*, for example: the first 15 frames are used to form a small sequence, i.e. *F* = 15. The length of the sub-sequences (*F*_{s}) and the number of weaker estimators (*N*) are set at 6 and 10, respectively. For the four algorithms, Table 2 shows the correlation coefficients of the 15 frames, and the corresponding mean (*μ*) and standard deviation (*σ*). Table 3 shows the percentage increases (%) in the correlation coefficients of SSI-RIK compared to EM-SFM, CSF and RIK. Additionally, Tables 4 and 5 show similar performance comparisons for the *z*-coordinate errors. In these tables, the numbers 1 to 15 denote the 1st to 15th frames in the small sequence.

From Tables 2 and 3, we can see that the correlation coefficients of SSI-RIK are obviously higher than those of EM-SFM, CSF and RIK. Moreover, it can be seen from Tables 4 and 5 that the *z*-coordinate errors of SSI-RIK are significantly lower than those of EM-SFM, CSF and RIK. Thus, SSI-RIK has a higher estimation accuracy than the other methods. In addition, we can see from Tables 2 and 4 that the standard deviations of SSI-RIK are lower than those of the other three methods. This indicates that SSI-RIK is a more robust approach.

Taking the first frame of *stretch* as an example, Figs 3 and 4 show the comparisons of the true values and the estimated values for the *z*-coordinate values and the 3D feature points, respectively. We can see that the *z*-coordinate values and the 3D feature points estimated by SSI-RIK are closer to the true values than those estimated by the other three methods, which coincides with the performance indices of the correlation coefficients and the *z*-coordinate errors.

In order to investigate the effect of sequence size (*F*) on the performances of the various algorithms, Tables 6 and 7 tabulate the mean and standard deviation (*μ* ± *σ*) of the correlation coefficients and the *z*-coordinate errors, respectively, when the sequence sizes vary from 15 to 50 with an equal interval of 5. Moreover, for the mean values of the correlation coefficients and the *z*-coordinate errors, Tables 8 and 9 show the corresponding increasing percentages and decreasing percentages of SSI-RIK compared to EM-SFM, CSF and RIK, respectively.

Further, Figs 5 and 6 show the overall mean and standard deviation (*μ* ± *σ*) of the correlation coefficients and the *z*-coordinate errors for different sequence sizes, respectively. In these two figures, the *x* axis denotes image sequences in terms of the numbers shown in Table 1. From Tables 6–9 and Figs 5 and 6, we can see that SSI-RIK has a better performance than EM-SFM, CSF and RIK for different sequence sizes.

We also present the experimental results on the real Bosphorus database. In experiments, the *z*-coordinates of the frontal-view images are estimated. As an example, Tables 10 and 11 show the correlation coefficients and the *z*-coordinate errors, respectively, when the sequence sizes vary from 7 to 14 for one individual. Moreover, Tables 12 and 13 show the corresponding increasing and decreasing percentages of SSI-RIK compared to EM-SFM, CSF and RIK, respectively. It can be seen that, for different sequence sizes, SSI-RIK generally achieves a better performance than EM-SFM, CSF and RIK.

Further, Figs 7 and 8 show the overall mean and standard deviation (*μ* ± *σ*) of correlation coefficients and *z*-coordinate errors for 10 individuals, respectively. In these two figures, the *x* axis denotes the individuals in terms of their corresponding number in the database. We can see that, again, SSI-RIK has a better performance than EM-SFM, CSF and RIK for different individuals.

### Discussions

There are two possible methods to integrate the outputs of the weaker estimators, i.e. the arithmetic average (denoted as AA-SSI-RIK) and the trimmed mean (denoted as TM-SSI-RIK). For the experimental setting used in Tables 10 and 11, Table 14 tabulates the correlation coefficients, the *z*-coordinate errors, and the corresponding mean (*μ*) and standard deviation (*σ*) when the sequence sizes vary from 7 to 14, using the different integration methods. Moreover, Table 15 shows the corresponding increasing and decreasing percentages of TM-SSI-RIK compared to AA-SSI-RIK. We can see that TM-SSI-RIK generally has a higher estimation accuracy than AA-SSI-RIK. Therefore, the trimmed mean is adopted in our proposed method to integrate the outputs of the weaker estimators.

As RIK was originally developed for long sequences, we also present here an experimental comparison of RIK and SSI-RIK when the entire sequence is used to estimate the 3D shapes. Tables 16 and 17 show the mean and standard deviation (*μ* ± *σ*) of the correlation coefficients and the *z*-coordinate errors, respectively. We can see that the performance of SSI-RIK is better than that of RIK for most sequences.

As in many pattern-recognition tasks, we attempted to search for the optimal values of the parameters *F*_{s}, *N* and *P* with cross-validation, a widely used parameter-selection approach. After the small-size sequences are extracted from the original sequences, the remaining frames are divided into 5 folds and used as the validation sets. Furthermore, a grid division is carried out over the three parameters. The *z*-coordinates of the validation sets are estimated via the proposed method with each possible set of parameters *F*_{s}, *N* and *P*. Taking the sequence *stretch* as an example, Fig 9 shows the mean *z*-coordinate errors of the 5-fold validation sets for different *F*_{s}, *N* and *P*. Correspondingly, Fig 10 shows the *z*-coordinate errors of the testing sequences. We can see that the testing error may not be small for a set of parameters with a small validation error. Thus, searching for the optimal parameters with cross-validation is not effective here. On the other hand, it can be seen from Fig 10 that the *z*-coordinate errors vary with different parameter values, but the variations are not significant. Besides cross-validation, there are many other parameter-selection methods; devising a more effective method to accurately determine the optimal parameter values remains meaningful future work.
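The exhaustive search over the three parameters can be sketched as follows. The callable that evaluates the 5-fold validation error for one parameter setting is left abstract, since it wraps the full SSI-RIK pipeline:

```python
import itertools
import numpy as np

def grid_search(validation_error, Fs_grid, N_grid, P_grid):
    """Exhaustive grid search over (F_s, N, P), as in the 5-fold study.

    validation_error : callable (F_s, N, P) -> mean z-coordinate error
                       on the validation folds for that setting.
    """
    best, best_err = None, np.inf
    for Fs, N, P in itertools.product(Fs_grid, N_grid, P_grid):
        err = validation_error(Fs, N, P)
        if err < best_err:              # keep the setting with the lowest error
            best, best_err = (Fs, N, P), err
    return best, best_err
```

As discussed above, the setting minimizing the validation error need not minimize the testing error, which is why this search alone proved insufficient for parameter selection.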

## Conclusions

In this paper, a sub-sequence-based RIK algorithm is proposed for NRSFM with small-size sequences. Compared to some existing algorithms, the proposed method has a higher estimation accuracy. Moreover, the robustness of the proposed method is better than that of the existing algorithms. The experimental results on both artificial and real data have verified the effectiveness and feasibility of the proposed method.

## Author Contributions

Conceived and designed the experiments: YPW ZLS. Performed the experiments: YPW ZLS. Analyzed the data: YPW ZLS. Contributed reagents/materials/analysis tools: YPW ZLS. Wrote the paper: YPW ZLS KML.

## References

- 1. Sun ZL, Lam KM, Dong ZY, Wang H, Gao QW. Face recognition with multi-resolution spectral feature images. PLOS ONE. 2013; 8(2): e55700. PubMed Central PMCID: PMC3572116 pmid:23418451
- 2. Stonkute S, Braun J, Pastukhov A. The role of attention in ambiguous reversals of structure-from-motion. PLOS ONE. 2012; 7(5): e37734. PubMed Central PMCID: PMC3358281. pmid:22629450
- 3. Scocchia L, Valsecchi M, Gegenfurtner KR, Triesch J. Visual working memory contents bias ambiguous structure from motion perception. PLOS ONE. 2013; 8(3): e59217. PubMed Central PMCID: PMC3602104. pmid:23527141
- 4. Bregler C, Hertzmann A, Biermann H. Recovering non-rigid 3D shape from image streams. IEEE Conference on Computer Vision and Pattern Recognition. 2000; 2: 690–696.
- 5. Paladini M, Del Bue A, Stosic M, Dodig M, Xavier J, Agapito L. Factorization for non-rigid and articulated structure using metric projections. IEEE Conference on Computer Vision and Pattern Recognition. 2009; 2898–2905.
- 6. Xiao J, Kanade T. Non-rigid shape and motion recovery: degenerate deformations. IEEE Conference on Computer Vision and Pattern Recognition. 2004; 1: 668–675.
- 7. Torresani L, Hertzmann A, Bregler C. Learning non-rigid 3D shape from 2D motion. Proceedings of Neural Information Processing Systems. 2003; 1555–1562.
- 8. Torresani L, Hertzmann A, Bregler C. Nonrigid structure-from-motion: Estimating shape and motion with hierarchical priors. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2008; 30(5): 878–892. pmid:18369256
- 9. Akhter I, Sheikh YA, Khan S, Kanade T. Nonrigid structure from motion in trajectory space. Neural Information Processing Systems. 2009; 42–48.
- 10. Akhter I, Sheikh Y, Khan S. In defense of orthonormality constraints for nonrigid structure from motion. IEEE Conference on Computer Vision and Pattern Recognition. 2009; 1534–1541.
- 11. Akhter I, Sheikh Y, Khan S, Kanade T. Trajectory space: a dual representation for nonrigid structure from motion. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2011; 33(7): 1442–1456.
- 12. Gotardo PFU, Martinez AM. Non-rigid structure from motion with complementary rank-3 spaces. IEEE Conference on Computer Vision and Pattern Recognition. 2011; 3065–3072.
- 13. Gotardo PFU, Martinez AM. Computing smooth time-trajectories for camera and deformable shape in structure from motion with occlusion. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2011; 33(10): 2051–2065. PubMed Central PMCID: PMC3825848.
- 14. Gotardo PFU, Martinez AM. Kernel non-rigid structure from motion. IEEE International Conference on Computer Vision. 2011; 802–809. PubMed Central PMCID: PMC3758879.
- 15. Schölkopf B, Smola AJ. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press; 2002.
- 16. Hamsici OC, Gotardo PFU, Martinez AM. Learning spatially-smooth mappings in non-rigid structure from motion. European Conference on Computer Vision. 2012; 260–273. PubMed Central PMCID: PMC3740973.
- 17. Kittler J, Hatef M, Duin RPW, Matas J. On combining classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1998; 20(3): 226–239.
- 18. The Bosphorus Database. Available: http://bosphorus.ee.boun.edu.tr/