Algebraic Error Based Triangulation and Metric of Lines

Line triangulation, a classical geometric problem in computer vision, is to determine the 3D coordinates of a line from its 2D image projections in two or more views taken by cameras with known projection matrices. Compared to point features, line segments are more robust to matching errors, occlusions, and image uncertainties. Beyond triangulation itself, a suitable metric is needed to evaluate the 3D errors of line triangulation. In this paper, the line triangulation problem is investigated using the Lagrange multipliers theory. The main contributions include: (i) based on the Lagrange multipliers theory, a formula to compute the Plücker correction is provided, and from this formula a new linear algorithm, LINa, is proposed for line triangulation; (ii) two optimal algorithms, OPTa-I and OPTa-II, are proposed by minimizing the algebraic error; and (iii) two metrics on 3D line space, the orthogonal metric and the quasi-Riemannian metric, are introduced for the evaluation of line triangulations. Extensive experiments on synthetic data and real images are carried out to validate and demonstrate the effectiveness of the proposed algorithms.


Introduction
Line triangulation [1], [2] refers to the process of determining a 3D line given its projections in two or more images and the corresponding camera matrices. As one of the fundamental problems in computer vision, this problem is trivial in theory, since the corresponding 3D line is the intersection of the back-projection planes of the image lines. However, when the number of views is larger than two, the back-projection planes usually do not intersect in a single line in 3D space due to measurement errors and image noise. This leads to the problem of finding a 3D line that fits the measured data optimally, i.e., optimal line triangulation.
Minimizing the algebraic error of line triangulation is a linear least squares problem with a quadratic constraint (called the Klein constraint), as defined in Section 2 of this paper. Bartoli and Sturm [3], [4] proposed a linear algorithm for the algebraic error minimization. This algorithm first finds a solution of the corresponding linear least squares problem (i.e., by ignoring the Klein constraint); the solution is then corrected by a singular value decomposition (SVD) method that enforces the Klein constraint. This algorithm yields only an approximation of the optimal solution to the algebraic error minimization. The paper [5] proposed a suboptimal solution to algebraic-error line triangulation, which relaxes the quadratic unit norm constraint to six linear constraints. However, this still cannot yield an optimal solution to the algebraic error minimization. To the best of our knowledge, how to find the optimal solution of the algebraic error minimization remains an open problem.
In studies on line triangulation, a natural question is which of the above three optimality criteria is the "best". In order to answer this question, we need a criterion, independent of the three optimality criteria, that describes this "bestness". One intuitive criterion is the 3D error, i.e., the distance between a reconstructed line and its ground truth. The Euclidean distance does not give a reasonable measure, since it is not an intrinsic distance on 3D line space. So far, no study on the metrics of 3D lines is available in the literature, and thus the evaluation of line triangulations remains an open problem.
This paper focuses on the triangulation and metrics of lines. The main contributions are summarized as follows:
• Based on the Lagrange multipliers theory, a formula to compute the Plücker correction is given; this formula is also used to establish a quasi-Riemannian metric on 3D line space. From the formula, a new linear algorithm, LINa, is proposed for line triangulation. The computational cost of our new linear algorithm is much lower than that of the SVD-based method in the literature.
• For the algebraic error minimization, two new algorithms, OPTa-I and OPTa-II, are proposed to find the optimal solution. OPTa-I is based on finding the roots of a system of 2-degree polynomial equations in five variables; OPTa-II is based on solving a system of polynomial equations in two variables (one polynomial of degree 6 and the other of degree 10). The continuous homotopy algorithm [6], [7] is used to solve these systems of polynomial equations.
• Two new metrics on 3D line space, named the orthogonal metric and the quasi-Riemannian metric, are proposed for the evaluation of line triangulations. The orthogonal metric is based on the angular distance on rotation groups [8] and the orthogonal representation of 3D lines [4]; the quasi-Riemannian metric is based on the Riemannian metric on the 5-dimensional unit sphere and our proposed Plücker correction formula.
The rest of the paper is organized as follows. Section 2 presents some preliminaries used in the paper. The Plücker correction formula and a new linear algorithm are presented in Section 3. Section 4 elaborates the two optimal algorithms for the algebraic error minimization. Section 5 gives two new metrics on 3D line space. Some experimental results with synthetic and real data are presented in Section 6 and Section 7, respectively. Finally, the paper is concluded in Section 8.

Plücker Coordinates
In 3D projective space, the Plücker coordinates of a line are defined by a nonzero 6-vector

L = (u^T, v^T)^T, with u = x × y and v = x4*y − y4*x, (1)

where X = (x^T, x4)^T and Y = (y^T, y4)^T are two non-coincident points on the line. The Plücker coordinates are homogeneous, since the two 6-vectors computed from two different pairs of points on the line are equal up to a nonzero factor. From Eq (1), it is easy to see that u^T v = (x × y)^T (x4*y − y4*x) = 0, i.e., the Plücker coordinates satisfy u^T v = 0, or, written in matrix form,

L^T K L = 0, with K = [0 I3; I3 0]. (2)

In 5D projective space, the quadric defined by Eq (2) is called the Klein quadric [9]; thus, the Plücker coordinates satisfy the Klein quadric constraint. Conversely, if a nonzero 6-vector satisfies the Klein constraint, it must be the Plücker coordinates of a line in 3D projective space.
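As a concrete illustration, Eqs (1) and (2) can be sketched in a few lines of Python (a minimal sketch with our own helper names, not code from the paper):

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def plucker(X, Y):
    """Plucker coordinates L = (u^T, v^T)^T of the line through two
    homogeneous points X = (x^T, x4)^T and Y = (y^T, y4)^T, as in Eq (1)."""
    x, x4 = X[:3], X[3]
    y, y4 = Y[:3], Y[3]
    u = cross(x, y)                                 # u = x cross y
    v = [x4*yi - y4*xi for xi, yi in zip(x, y)]     # v = x4*y - y4*x
    return u + v                                    # 6-vector (u, v)

def klein(L):
    """The Klein quadric form u^T v of Eq (2); zero iff L represents a line."""
    return sum(ui*vi for ui, vi in zip(L[:3], L[3:]))
```

For instance, plucker([0, 0, 0, 1], [1, 2, 3, 1]) gives a line through the origin (u = 0), klein of any output is zero up to rounding, and scaling an input point rescales L, reflecting homogeneity.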

Point-Line Distance
In the image plane, the algebraic distance from a point x = (x, y, 1)^T to a line l (normalized so that ||l|| = 1) is defined as [10]

d_a(x, l) = x^T l. (3)

Given a measured point set of a line l, ℓ = {x_j = (x_j, y_j, 1)^T : 1 ≤ j ≤ M}, let

l_a = argmin_{||l|| = 1} Σ_{j=1}^{M} (x_j^T l)^2. (4)

Then l_a is called the linear least squares fitting of the measured point set ℓ; it has a linear solution, namely the eigenvector of Σ_j x_j x_j^T associated with the smallest eigenvalue [10].
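The fitting of Eq (4) thus reduces to a smallest-eigenvector computation. The sketch below (our own code; it uses shifted power iteration instead of a library eigensolver, which is one simple way to find the smallest eigenvector of a positive semidefinite 3×3 matrix):

```python
import math

def lsq_line_fit(points, iters=1000):
    """Linear least-squares line fit of Eq (4): the unit 3-vector l that
    minimizes sum_j (x_j^T l)^2 over points x_j = (x, y, 1), i.e. the
    eigenvector of A = sum_j x_j x_j^T for its smallest eigenvalue."""
    A = [[sum(p[i]*p[j] for p in points) for j in range(3)] for i in range(3)]
    # A is positive semidefinite, so c = trace(A) bounds every eigenvalue;
    # the dominant eigenvector of c*I - A (found by power iteration) is the
    # eigenvector of A for its smallest eigenvalue.
    c = A[0][0] + A[1][1] + A[2][2]
    B = [[(c if i == j else 0.0) - A[i][j] for j in range(3)] for i in range(3)]
    l = [1.0, 0.5, 0.25]  # deliberately asymmetric start vector
    for _ in range(iters):
        w = [sum(B[i][j]*l[j] for j in range(3)) for i in range(3)]
        n = math.sqrt(sum(x*x for x in w)) or 1.0
        l = [x/n for x in w]
    return l
```

For noise-free points on a line, the residual Σ_j (x_j^T l_a)^2 of the fitted l_a is zero up to rounding.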

Optimality Criteria
Given N line-projection matrices P_i (1 ≤ i ≤ N), let ℓ_i = {x_ij = (x_ij, y_ij, 1)^T : 1 ≤ j ≤ M_i} be a measured point set from the imaged line P_i L of a 3D line L. Line triangulation estimates the 3D line L from the measured point sets ℓ_i (1 ≤ i ≤ N). The algebraic point-line distance in the image plane leads to the following optimality criterion for this problem [4], [10]:

L*_a = argmin Σ_{i=1}^{N} Σ_{j=1}^{M_i} (x_ij^T P_i L)^2, subject to L being the Plücker coordinates of a line, (5)

where L*_a is called the optimal solution minimizing the algebraic error. L*_a makes the sum of squared algebraic distances from the measured points x_ij to the re-projected lines P_i L*_a reach a minimum; thus, {P_1 L*_a, P_2 L*_a, ..., P_N L*_a} are the linear least squares fittings of the measured point sets {ℓ_1, ℓ_2, ..., ℓ_N}.
The minimization term in Eq (5) can be expressed as

Σ_{i,j} (x_ij^T P_i L)^2 = L^T A L, with A = Σ_{i=1}^{N} Σ_{j=1}^{M_i} (P_i^T x_ij)(P_i^T x_ij)^T. (6)

Thus, the cost function Eq (5) can be rewritten as

L*_a = argmin_{||L|| = 1, L^T K L = 0} L^T A L, (7)

which means that the minimization of the algebraic error is a linear least squares problem with the Klein constraint.
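For illustration, the matrix A of Eq (6) and the cost of Eq (7) can be assembled as follows (a sketch; the list-of-lists layout of the 3×6 line-projection matrices P_i and of the measured points are our assumptions for the example):

```python
def algebraic_cost_matrix(Ps, point_sets):
    """Assemble the 6x6 matrix A of Eq (6): A = sum_{i,j} a_ij a_ij^T with
    a_ij = P_i^T x_ij, where P_i is the 3x6 line-projection matrix of view i
    and x_ij is the j-th measured point on the image line in view i."""
    A = [[0.0]*6 for _ in range(6)]
    for P, pts in zip(Ps, point_sets):
        for x in pts:
            a = [sum(P[r][c]*x[r] for r in range(3)) for c in range(6)]  # P^T x
            for r in range(6):
                for c in range(6):
                    A[r][c] += a[r]*a[c]
    return A

def algebraic_error(A, L):
    """The cost L^T A L minimized in Eq (7)."""
    return sum(L[r]*A[r][c]*L[c] for r in range(6) for c in range(6))
```

Each measured point contributes one rank-one term, so A is symmetric positive semidefinite and the cost is a quadratic form in the candidate line L.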

Linear Solution to Minimize Algebraic Error
Bartoli and Sturm [4] first proposed a linear algorithm to estimate L*_a, which consists of the following two steps:
(a) Solve the linear least squares problem

L̄ = argmin_{||L|| = 1} L^T A L; (8)

the solution L̄ is the eigenvector corresponding to the smallest eigenvalue of the matrix A.
(b) Compute the nearest point L*_k from L̄ to the Klein quadric as the final estimate:

L*_k = argmin_{L^T K L = 0} ||L − L̄||. (9)

Bartoli and Sturm [4] gave an SVD method to compute the nearest point L*_k. Step (b) of this algorithm is called the Plücker correction. When there are errors in the measurement data, L̄ does not strictly satisfy the Klein constraint, and hence cannot be the Plücker coordinates of a line in 3D projective space. Thus, the Plücker correction is an important step of the algorithm. This section presents a formula to compute the Plücker correction and a new linear algorithm, LINa.

Linear Algorithm LINa
We consider the following minimization:

L*_s = argmin_{||L|| = 1, L^T K L = 0} ||L − L̄||. (10)

Although this minimization contains a unit norm constraint, it is in fact equivalent to Eq (9), according to the following lemma.
Lemma 1: (a) If L*_s is the optimal solution of Eq (10), then L_k ≜ (L*_s^T L̄) L*_s must be the optimal solution of Eq (9).
(b) Conversely, if L*_k is the optimal solution of Eq (9), then L_s ≜ (L*_k^T L*_k)^(−1/2) L*_k must be the optimal solution of Eq (10).
Proof: For an arbitrary unit 6-vector L and scalar t, there must be

||tL − L̄||^2 = t^2 − 2t (L^T L̄) + ||L̄||^2, (11)

which attains its minimum ||L̄||^2 − (L^T L̄)^2 at t = L^T L̄. (12)

Since L*_s is the optimal solution of Eq (10), for every L on the unit Klein quadric (with the sign of L chosen so that L^T L̄ ≥ 0),

0 ≤ L^T L̄ ≤ L*_s^T L̄. (13)

Since L*_k is the optimal solution of Eq (9), writing L*_k = t_k L_s with ||L_s|| = 1, the optimal scale is t_k = L_s^T L̄ by Eq (12), so

||L*_k − L̄||^2 = ||L̄||^2 − (L_s^T L̄)^2. (14)

(a): If L_k = (L*_s^T L̄) L*_s were not the optimal solution of Eq (9), then ||L_k − L̄|| > ||L*_k − L̄||. From Eqs (12) and (14), we have

||L̄||^2 − (L*_s^T L̄)^2 > ||L̄||^2 − (L_s^T L̄)^2, (15)

and thus L*_s^T L̄ < L_s^T L̄. Since L_s lies on the unit Klein quadric, this contradicts Eq (13), i.e., the fact that L*_s is the optimal solution of Eq (10). Therefore, L_k must be the optimal solution of Eq (9).
Similarly, (b) can be proved. According to Eq (11), the minimization problem Eq (10) simplifies to

L*_s = argmax_{||L|| = 1, u^T v = 0} L^T L̄. (17)

Proposition 1 below gives an analytical expression for L*_s. Compared with the SVD method used to compute L*_k, the computation of L*_s is much simpler.
Proposition 1: Write L̄ = (ū^T, v̄^T)^T. (a) The minimization Eq (10) has a unique solution if ū ≠ ±v̄:

L*_s = (u*^T, v*^T)^T, with u* = (1/2)((ū + v̄)/||ū + v̄|| + (ū − v̄)/||ū − v̄||) and v* = (1/2)((ū + v̄)/||ū + v̄|| − (ū − v̄)/||ū − v̄||). (18a)

(b) The minimization Eq (10) has infinitely many solutions if ū = ±v̄; for example, when ū = v̄ = s d (d a unit 3-vector, s ≠ 0),

L*_s = (1/2)((sgn(s) d + e)^T, (sgn(s) d − e)^T)^T, where e is an arbitrary unit 3-vector. (18b)

The proof of the proposition is given in the next subsection. The geometric interpretations of Eqs (18a) and (18b) are shown in Fig 1. When L̄ already satisfies the Klein constraint, i.e., ū^T v̄ = 0, Eq (18a) reduces to L*_s = L̄/||L̄||. Based on the above discussion, our linear algorithm LINa can be summarized in Table 1. Remark 1: In practice, case (b) of Proposition 1 happens rarely. This is because the Klein constraint makes {ū, v̄} orthogonal to each other, so they must be linearly independent. When there are errors in the measurement data, the solution of Eq (8) cannot guarantee the orthogonality of {ū, v̄}, but they remain linearly independent in general. Hence, case (b) rarely happens in practice.
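Proposition 1(a) yields a very short implementation of the Plücker correction. The sketch below follows the closed form of Eq (18a) (our own code; the degenerate case ū = ±v̄ of Proposition 1(b) is deliberately not handled):

```python
import math

def plucker_correction(L):
    """Closed-form Plucker correction of Eq (18a): the unit-norm point on
    the Klein quadric nearest to L = (u, v). Assumes the generic case
    u != +-v (Proposition 1(a))."""
    u, v = L[:3], L[3:]
    s = [ui + vi for ui, vi in zip(u, v)]   # u + v
    d = [ui - vi for ui, vi in zip(u, v)]   # u - v
    ns = math.sqrt(sum(x*x for x in s))
    nd = math.sqrt(sum(x*x for x in d))
    p = [0.5*(si/ns + di/nd) for si, di in zip(s, d)]   # corrected u
    q = [0.5*(si/ns - di/nd) for si, di in zip(s, d)]   # corrected v
    return p + q
```

The output satisfies p^T q = 0 and has unit norm by construction; by Lemma 1, multiplying it by its inner product with the input gives the nearest point on the Klein quadric itself (Eq (9)). An input that already lies on the unit Klein quadric is returned unchanged.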
By Lemma 1 and Proposition 1, the optimal solution of Eq (9) can be obtained as L*_k = (L*_s^T L̄) L*_s.

Proof of Proposition 1
Construct the Lagrange function of Eq (17) as follows:

F(L, α, β) = L^T L̄ − (α/2)(L^T L − 1) − β u^T v.

According to the optimization theory [11], the solution of Eq (17) must be a stationary point of the Lagrange function, i.e., there exist multipliers (α*, β*) such that (L*_s, α*, β*) is a solution of the following Lagrange equations:

ū = α u + β v,  v̄ = β u + α v,  L^T L = 1,  u^T v = 0.

Thus, by solving the Lagrange equations we can obtain the optimal solution L*_s.

(i) Suppose ū ≠ ±v̄. Adding and subtracting the first two Lagrange equations gives

ū + v̄ = (α + β)(u + v),  ū − v̄ = (α − β)(u − v).

Let α′ = (α + β)^(−1) and β′ = (α − β)^(−1); then

u + v = α′(ū + v̄),  u − v = β′(ū − v̄).

The constraints L^T L = 1 and u^T v = 0 imply ||u + v||^2 = ||u − v||^2 = 1. Therefore, the following linear equations in (α′^2, β′^2) hold:

α′^2 ||ū + v̄||^2 = 1,  β′^2 ||ū − v̄||^2 = 1,

so that α′ = ±1/||ū + v̄|| and β′ = ±1/||ū − v̄||. Substituting back gives the following four solutions for L:

L_{±,±} = (1/2)(±(ū + v̄)/||ū + v̄|| ± (ū − v̄)/||ū − v̄||,  ±(ū + v̄)/||ū + v̄|| ∓ (ū − v̄)/||ū − v̄||).

The geometric interpretations of the four solutions are shown in Fig 2. It can easily be verified that L_{+,+} = argmin{1 − L_{±,±}^T L̄}, since L_{+,+}^T L̄ = (||ū + v̄|| + ||ū − v̄||)/2 > 0. Therefore, L*_s = L_{+,+}, which is Eq (18a).

(ii) Suppose ū = v̄ = s d, where d is a unit 3-vector and s ≠ 0. Subtracting the first two Lagrange equations gives (α − β)(u − v) = 0; since u = v is impossible (u^T v = 0 and ||L|| = 1 would force L = 0), we have β = α. Substituting into the first Lagrange equation gives 2s d = α(u + v); with ||u + v|| = 1 this yields u + v = sgn(s) d, while u − v can be an arbitrary unit 3-vector e. Hence L*_s has infinitely many solutions:

L*_s = (1/2)(sgn(s) d + e,  sgn(s) d − e),  ||e|| = 1,

all attaining the same objective value L*_s^T L̄ = |s|, which gives Eq (18b).

(iii) Similarly, when ū = −v̄, L*_s also has infinitely many solutions.

Optimal Solution by Minimizing Algebraic Error

The algorithm LINa provides only an approximate solution of the algebraic error minimization. This section presents two algorithms, OPTa-I and OPTa-II, that compute the optimal solution. The algorithm OPTa-I converts the optimization problem into finding the real solutions of two systems of 2-degree polynomial equations in five variables, and the algorithm OPTa-II into finding the real solutions of a system of polynomial equations in two variables (one of degree 6 and the other of degree 10).

Algorithm OPTa-I
The optimal algorithm OPTa-I is summarized in Table 2. Eq (47) is a system of 2-degree polynomial equations in six variables, and by the theory of algebraic equations it has at most 64 solutions. Proposition 2 below shows that this system can be simplified into two systems of 2-degree polynomial equations in five variables. We first prove that Eq (48) gives the optimal solution to the algebraic error minimization.
Proof: Consider the Lagrange function of Eq (7),

F(L, α, β) = L^T A L − α(L^T L − 1) − β u^T v, (49)

whose Lagrange equations are

A L = α L + β K L,  L^T L = 1,  u^T v = 0, (50)

where K is the matrix of Eq (2). Writing A = [A11 A12; A21 A22] in 3×3 blocks and setting a = A11 u + A12 v and b = A21 u + A22 v, the first equation in Eq (50) can be rewritten as

a = α u + β v,  b = β u + α v. (51)

It is obvious that this equation is equivalent to

a + b = (α + β)(u + v),  a − b = (α − β)(u − v). (52)

By eliminating the multipliers (α, β) in the above equations, we obtain the following 2-degree polynomial equations in (u, v):

(a + b) × (u + v) = 0,  (a − b) × (u − v) = 0. (53)

The last two equations in Eq (50) can be rewritten as

u^T u + v^T v = 1,  u^T v = 0. (54)

Combining Eqs (53) and (54) gives Eq (47). Hence, the L-component of every stationary point (L, α, β) of the Lagrange function Eq (49) must be a real solution of Eq (47), so the optimal solution of Eq (7) must belong to the real solution set S_I of Eq (47), i.e., L*_a ∈ S_I. Therefore,

L*_a = argmin_{L ∈ S_I} L^T A L, (48)

which is exactly what Table 2 computes: (a) construct the system of polynomial equations Eq (47) from the matrix A (see Eq (6)) and compute its real solution set S_I; (b) determine the optimal solution by Eq (48).

Proposition 2: The solution set of Eq (47) is the union of the solution sets of two systems of 2-degree polynomial equations in five variables.
Proof: Let S be the solution set of Eq (47); then it is the union of the two sets

S0 = {L ∈ S : v3 = 0} and S1 = {L ∈ S : v3 ≠ 0}.

Clearly, S0 is the solution set of the system of 2-degree polynomial equations in five variables obtained by setting v3 = 0 in Eq (47). For the set S1, consider the equation system obtained by removing the unit norm constraint from Eq (47) (Eq (57)). It is homogeneous of degree two in L = (u^T, v^T)^T, and the set formed by the normalizations of all of its nonzero solutions is exactly the solution set S of Eq (47). Thus, letting S'1 be the solution set of the system of 2-degree polynomial equations in five variables obtained by setting v3 = 1 in Eq (57), the normalizations of S'1 give S1. Hence, Proposition 2 holds.
Let A_d(k)(α, β) = (a1(α, β), a2(α, β), ..., a6(α, β)) be the sub-matrix formed by deleting the k-th row of A(α, β); then A*_k(α, β) = 0 can be expressed equivalently in the two forms (a) and (b). This is because: from (b), both a1 and a2 can be linearly represented by A ≜ {a3, a4, a5, a6}; thus, for arbitrary ai, aj, ak ∈ A, the set {a1, a2, ai, aj, ak} must be linearly dependent, i.e., det(a1, a2, ai, aj, ak) = 0. Hence, every solution of (b) is a solution of (a). Obviously, every solution of (a) is also a solution of (b). Therefore, (a) has the same solutions as (b).
Since non-real solutions of a system of real polynomial equations occur in complex conjugate pairs, at least one of the 25 solutions of (b) must be real. Thus, A*_k(α, β) = 0 has at least one real solution.
The algorithm OPTa-II is summarized in Table 3.
For the two polynomial equations in step (b) of OPTa-II, one is of degree 6 and the other of degree 10; thus, by the theory of algebraic equations, the system has at most 60 solutions. Next, we prove that Eq (65) gives the optimal solution to the algebraic error minimization.
Remark 2: In the experiments of this paper, we use the continuous homotopy method [6], [7] to solve the systems of polynomial equations. The method was first proposed in [12]. Through thirty years of effort by many researchers, it has achieved great success in computing the zero points of non-linear mappings, and it can find all zero points of a polynomial mapping [6], [7], [13]. In the field of computer vision, the method has been used for camera self-calibration, e.g., to solve the Kruppa equations [14], the modulus constraint equations, and the absolute quadric constraint equations [15]. For the 2-degree polynomials in five variables in OPTa-I and the high-degree polynomials in two variables in OPTa-II, the continuous homotopy method is computationally efficient.

Metrics on 3D Line Space
In order to evaluate the 3D errors of line triangulations, we need a metric on 3D line space. The Euclidean distance d_E(L, L′) ≜ min{||L − L′||, ||L + L′||} (where L, L′ are the normalized Plücker coordinates of lines L, L′) is not appropriate for the evaluation of line triangulations, since it is not an intrinsic distance on 3D line space. The aim of this section is to introduce two new metrics on 3D line space, called the orthogonal metric and the quasi-Riemannian metric. Compared with the Euclidean metric and the orthogonal metric, the quasi-Riemannian metric appears more appropriate.
In this section, the 5-dimensional unit sphere centered at the origin in R^6 is denoted by S5(1), and the intersection of the Klein quadric K and S5(1) is denoted by K(1) ≜ S5(1) ∩ K, which is a 4-dimensional smooth sub-manifold of S5(1), called the unit Klein quadric.

Orthogonal Metric in 3D Line Space
The proposed orthogonal metric derives mainly from the angular distance on rotation matrices [8] and the orthogonal representation of 3D lines [4]. The angular metric on the rotation group is given in Appendix I. Let K_a(1) = {L ∈ K(1) : u ≠ 0, v ≠ 0}, K_b(1) = {L ∈ K(1) : u = 0, ||v|| = 1}, and K_c(1) = {L ∈ K(1) : ||u|| = 1, v = 0}. For L = (u^T, v^T)^T ∈ K_a(1), u is orthogonal to v and ||u||^2 + ||v||^2 = 1. Thus, from the mappings

U(L) = (u/||u||, v/||v||, (u × v)/||u × v||) ∈ SO(3),  W(L) = [||u|| −||v||; ||v|| ||u||] ∈ SO(2), (74)

we obtain the mapping φ: K_a(1) → SO(3) × SO(2) by [4]: φ(L) = (U(L), W(L)), which is called the orthogonal representation of L ∈ K_a(1).
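The mapping of Eq (74) can be sketched as follows (our own code for the case K_a(1); column-stacking of U is an assumed convention):

```python
import math

def orthogonal_representation(L):
    """Orthogonal representation (U, W) of a unit-norm line L = (u, v) in
    K_a(1) (u != 0, v != 0): U has columns u/|u|, v/|v|, (u x v)/|u x v|,
    and W = [[|u|, -|v|], [|v|, |u|]]. Since u is orthogonal to v and
    |u|^2 + |v|^2 = 1, U lies in SO(3) and W lies in SO(2)."""
    u, v = L[:3], L[3:]
    nu = math.sqrt(sum(x*x for x in u))
    nv = math.sqrt(sum(x*x for x in v))
    w = [u[1]*v[2] - u[2]*v[1],
         u[2]*v[0] - u[0]*v[2],
         u[0]*v[1] - u[1]*v[0]]          # u cross v
    nw = math.sqrt(sum(x*x for x in w))
    U = [[u[i]/nu, v[i]/nv, w[i]/nw] for i in range(3)]  # columns stacked
    W = [[nu, -nv], [nv, nu]]
    return U, W
```

The Klein constraint guarantees the three columns of U are mutually orthogonal unit vectors with determinant 1, and the unit-norm constraint guarantees det W = ||u||^2 + ||v||^2 = 1.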
The above mapping fails on K_b(1) and K_c(1). In order to obtain a complete mapping from K(1) into SO(3) × SO(2), we extend the definition to K_b(1) and K_c(1) as in Eq (75), where W_{π/2} is the 2D rotation by the angle π/2. An explanation of this definition is given below. Using the angular distance on SO(3) × SO(2), the following distance on K(1) is obtained:

d_O(L, L′) ≜ d_ang(φ(L), φ(L′)), (76)

called the orthogonal distance of 3D lines. Now we can explain the definition Eq (75). If L, L′ ∈ K_b(1), then d_O(L, L′) = 2·θ(v, v′). Thus, the first mapping in the definition Eq (75) means that the orthogonal distance of two lines passing through the origin is exactly twice their included angle.
Similarly, if L, L′ ∈ K_c(1), then d_O(L, L′) = 2·θ(u, u′). Since u and u′ are the normalized Plücker coordinates of the infinite lines L and L′, respectively, they are the normal vectors of the planes passing through L and through L′. Hence, the second mapping in the definition Eq (75) means that the orthogonal distance of two infinite lines L and L′ is exactly twice the included angle of the two planes.

Quasi-Riemannian Metric on 3D Line Space
Based on the Riemannian metric [16] and the analysis in Appendix II, the quasi-Riemannian distance on K(1) leads directly to the quasi-Riemannian distance on the 3D line space. It is not difficult to verify that two lines L and L′ are coplanar if and only if their Plücker coordinates satisfy L^T K L′ = 0 (with K as in Eq (2)). Thus, the quasi-Riemannian distance of coplanar lines is given by the following formula:

d_QR(L, L′) = arccos|L^T L′|,

where L, L′ are the unit-norm Plücker coordinates.
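A minimal sketch of these formulas (our own code; the arccos|·| form makes the distance invariant to the sign of the homogeneous coordinates):

```python
import math

def coplanar(L1, L2, tol=1e-9):
    """Lines are coplanar iff L1^T K L2 = u1.v2 + v1.u2 = 0 (K as in Eq (2))."""
    s = sum(L1[i]*L2[i+3] + L1[i+3]*L2[i] for i in range(3))
    return abs(s) < tol

def d_qr_coplanar(L1, L2):
    """Quasi-Riemannian distance of two coplanar lines with unit-norm
    Plucker coordinates: arccos|L1^T L2|."""
    c = abs(sum(a*b for a, b in zip(L1, L2)))
    return math.acos(min(1.0, c))

def d_euclidean(L1, L2):
    """Euclidean distance min(||L1 - L2||, ||L1 + L2||)."""
    dm = math.sqrt(sum((a - b)**2 for a, b in zip(L1, L2)))
    dp = math.sqrt(sum((a + b)**2 for a, b in zip(L1, L2)))
    return min(dm, dp)
```

For the two coordinate axes (coplanar, perpendicular, intersecting at the origin) this gives d_QR = π/2 and d_E = √2 = 2 sin(π/4), matching the spherical relation between chord length and arc length.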

Comparison of the Three Metrics
In order to compare the performance of the different metrics, we generated a 3D unit cube centered at the origin. Based on their relative positions, the edge pairs belong to either of two parallel relationships (P-I and P-II) or two orthogonal relationships (O-I and O-II), as listed below. Each of the three metrics gives a unique distance for each relationship, as shown in Table 4. From Table 4 it can be seen that the Euclidean metric cannot distinguish between O-I and O-II; the orthogonal metric cannot distinguish between P-I and O-I; while the quasi-Riemannian metric gives different distances for all four relationships, and these distances are consistent with the intuition that the distances for P-I, P-II, O-I, and O-II should increase gradually. This observation implies that the quasi-Riemannian metric is more reasonable than the Euclidean metric or the orthogonal metric.
In the experiments of this paper, the quasi-Riemannian metric is used to evaluate the 3D errors of line triangulations. In real experiments, the true line and the estimated line are close to each other, so they can be regarded as approximately coplanar. Therefore, the quasi-Riemannian metric reduces to d_QR(L, L′) = arccos|L^T L′|. Let θ = arccos|L^T L′|. Since the angle between the two lines is small, the Euclidean metric ||L − L′|| = sqrt(2 − 2L^T L′) = sqrt(2 − 2cos θ) = 2 sin(θ/2) ≈ θ. Therefore, the quasi-Riemannian metric is approximately equal to the Euclidean metric. The same holds for the orthogonal metric. As a result, in this regime the three metrics are equal to each other, or equal up to a scale factor.
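The near-equality claimed above can be checked numerically (a standalone sketch, using the Taylor expansion 2 sin(θ/2) = θ − θ^3/24 + …):

```python
import math

# For unit Plucker coordinates at small angle theta, the Euclidean distance
# 2*sin(theta/2) is close to the quasi-Riemannian distance theta; the
# relative discrepancy shrinks like theta^2/24.
for theta in [0.1, 0.01, 0.001]:
    d_e = 2*math.sin(theta/2)
    rel_err = abs(d_e - theta)/theta
    assert rel_err < theta**2/20
```

At θ = 0.1 rad (about 5.7 degrees) the two distances already agree to within 0.05%, which is why the choice of metric matters mainly for the coarse comparisons of Table 4 rather than for small reconstruction errors.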

Experiments with Simulated Data
In the experiments of this section, we simulated eight 3D lines lying on two orthogonal planes, as shown in Fig 5. Using the synthetic data, we generated six images by adjusting the camera locations and parameters. The image size is 1024×1024 pixels. In order to simulate the effect of image noise, we evenly sample 20 points on each image line segment and add Gaussian noise with zero mean and standard deviation σ to these sampled image points; the projected image line is then fitted from the noise-corrupted point set by orthogonal least squares fitting.
We evaluated and compared the performance of the linear algorithm LIN [4], the proposed linear algorithm LINa, and the optimal algorithms based on the algebraic optimality criterion (AOC): OPTa-I and OPTa-II. The evaluation criteria are the RMS (root mean square) of the 3D errors (i.e., the quasi-Riemannian distance from a reconstructed line to its ground truth), the algebraic errors, and the orthogonal errors.

Stability to Noise
This experiment is to test the numerical stability of the algorithms with respect to different noise levels in the same geometric configuration. During the experiment, Gaussian noise with zero mean and σ standard deviation is added to each image point, and the noise level σ varies from 0.0 to 3.0 pixels in steps of 0.5, and 150 independent trials are carried out under each noise level. Fig 6 shows the experimental results on 6 views.
According to Lemma 1, the LIN and LINa algorithms should yield the same result. On the other hand, since the OPTa-I and OPTa-II algorithms both solve the algebraic-error minimization problem with the same cost function, the two optimization algorithms should yield comparable estimates, with any differences caused only by computational errors. From this experiment, we can see that the RMS errors of all the algorithms increase with the noise level. The two optimal algorithms based on the AOC yield lower 3D errors, algebraic errors, and orthogonal errors than the two linear algorithms. Note that, since the three criteria have different meanings and units, they are not comparable to each other.
In the experiments, the OPTa-I and OPTa-II algorithms rarely encounter the situation of no real solution. As the noise level and the number of images increase, the probability of no real solution increases slowly.
We also compared the computational cost of these algorithms. The computation times of the LIN, LINa, OPTa-I, and OPTa-II algorithms are 0.002, 0.002, 11.681, and 36.688 seconds, respectively. The two linear algorithms have comparable running times, while the two optimal algorithms are much more computationally intensive. Of the two optimal algorithms, OPTa-I is faster than OPTa-II, since the former only needs to solve a system of 2-degree polynomial equations, while the latter needs to solve a system with one 6-degree and one 10-degree polynomial. Thus, OPTa-I is the better choice in practice.

Stability to Configurations
This experiment tests the numerical stability of the algorithms with respect to the geometric configuration. The number of views varies from 4 to 12 in steps of 2, and for each number of views, 150 independent trials are carried out. Fig 7 shows the experimental results at noise level σ = 1.5, where only the results of LINa and OPTa-II are plotted since, as analyzed in Section 6.1, the LIN and LINa algorithms yield the same results and OPTa-I and OPTa-II produce very similar results. We can see from this experiment that the RMS errors of all the algorithms decrease as the number of views increases. The two optimal algorithms outperform the two linear algorithms in terms of 3D error, algebraic error, and orthogonal error.

Experiments with Real Images
The proposed algorithms were evaluated on extensive real images. The experimental results on four data sets are reported below. As shown in Fig 8, the images include a calibration cube, a planar checkerboard, and the Oxford datasets "model house" and "corridor" (http://www.robots.ox.ac.uk/~vgg/data/data-mview.html). The lines marked in white and red in these images are used to test the algorithms.
For the calibration cube, six images were taken by a Nikon D40 camera with an image size of 3008×2000. The correspondences between the 3D points on the cube and their images are used to compute the camera matrices. For the planar checkerboard, six images were taken by a Sony HX5C camera with an image size of 2592×1944, and the camera matrices are computed by the calibration toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/). For the model house and corridor images, the camera matrices and the endpoint coordinates of the image lines are provided by the Oxford datasets. Fig 9 shows the 3D errors, algebraic errors, and orthogonal errors of the different algorithms on the four data sets. From these experiments we draw the same conclusion as from the simulation tests: the two optimal algorithms yield similar results, which are better than those of the two linear algorithms. Although the 3D error, algebraic error, and orthogonal error are plotted in one graph in Fig 9, these three errors are not comparable to each other, since they are obtained using different criteria with different units. Fig 10 shows the 3D reconstruction results for the four objects using the OPTa-I algorithm. The 3D line models are correctly recovered by the proposed algorithm.

Conclusion
In this paper, we have investigated triangulations and metrics of 3D lines. First, a new formula for the Plücker correction was introduced, from which a new linear algorithm for line triangulation was proposed. Then, two optimal algorithms were derived from the algebraic optimality criterion. In addition, two metrics on 3D line space, the orthogonal metric and the quasi-Riemannian metric, were proposed for the quality evaluation of line triangulations. The experiments on simulated data and real images validate the proposed algorithms and show that the optimal solutions reconstruct more accurate 3D lines.
The Riemannian distance on S5(1) is d_s(Y0, Y1) = L(ε(Y0, Y1)), where ε(Y0, Y1) is the short arc from Y0 to Y1 on a great circle in S5(1) and L(·) denotes the arc length.
It is not difficult to verify that the Riemannian distance d_s and the Euclidean distance d_E (= ||Y0 − Y1||) satisfy the relation d_E = 2 sin(d_s/2). Next, we introduce the quasi-Riemannian distance on K(1) from the Riemannian metric on S5(1).