
Optimal reduction and conversion of range-difference measurements for positioning

Abstract

For positioning an object with m references, there are m−1 linearly independent range differences and measuring them is essential. However, none of m(m−1) possible range differences should be considered redundant unless their measurements are free of noise and locations of the references are exactly known. From all available range-difference measurements, m range measurements are obtained for positioning based on the least squares principle. The problem formulation treats missing and weighted range-difference measurements simultaneously. The exact relationships among several formulations of least squares positioning are established. A numerical example illustrates the results.

Introduction

Positioning an energy-emitting or reflecting object is an intensively studied topic due to its importance in a wide range of applications [1]. Often it is based on the principle of time differences of arrival (TDOA), that is, the use of indirect measurements of range differences between the object at an unknown location and references at known locations. Positioning with TDOA measurements is different from, but closely related to, positioning using time of arrival (TOA) measurements with or without a bias. With all possible range measurements between each object pair, positioning multiple objects is possible [2].

Positioning an nD object with m references requires m > n, and up to m(m−1) range differences can be formed, but only m−1 of them are linearly independent. Due to noise effects, measurements of all available range differences should be used for positioning in applications. This work shows how to combine all available TDOA measurements to form m TOA measurements for least squares positioning. Examination of the least squares criteria of several types of TOA and TDOA measurement equations establishes equivalences and other exact relationships among these positioning formulations. The cases of positioning with missing and weighted TDOA measurements are treated jointly in this work.

Investigation of the underlying problem is important from both theoretical and practical viewpoints. This kind of study answers the question of whether or not different TDOA and TOA formulations are equivalent for positioning, and provides simplified equations for algorithm development and implementation in applications.

Related work

For positioning an nD object with m known references, most methods have used m TOA equations or m−1 TDOA equations, and normally the minimum number of references m = n + 1 is assumed. In the majority of TDOA methods, measurements of the remaining (m−1)² possible range differences are unused or assumed to be unavailable.

In an early study [3], a large number of TDOA measurements with the minimum number of references were combined to form TDOA triads for improvement of positioning. Nevertheless, optimality of the combination was not addressed.

The problem of TDOA denoising [4] is to find a range-measurement vector that generates the ideally structured TDOA measurement matrix closest, in the least squares sense, to the original noise-corrupted matrix. This problem is related, but not equivalent, to the TDOA positioning problem directly addressed in the current study.

The problem formulation in the current study avoids the assumptions of skew symmetry of the noise-corrupted TDOA measurement matrix and of Gaussian distribution of the noise [4, 5]. This allows a more general coverage of noise conditions and consideration of up to m(m−1) TDOA measurements rather than half of them. Missing and weighted TDOA measurements are normally treated separately, for instance in [4, 6], but simultaneously in the current study.

The focus of this work is on optimal conversion of range-difference equations rather than solving them. This is because, assuming exact solvability, closed-form solutions are known for positioning with m biased TOA measurements [7–12], and with m − 1 TDOA measurements [13–15]. As is well known, m TOA measurements can be trivially converted to m − 1 TDOA measurements, although the optimality of such a conversion is unclear. In real applications, closed-form solutions offer good approximations, and can also be used to initialize an iterative algorithm for improving the solutions. These methods can be applied to the m TOA or further m − 1 TDOA measurement equations converted from the possible m(m − 1) range-difference equations studied in the current work.

Notations

All considered quantities are real numbers. Scalars are lowercase letters; (column) vectors and matrices are boldfaced lowercase and uppercase letters respectively. Set {ai} contains a known number of elements, and they can form a vector a = [ai]. The vector of ones is denoted by e. A = [aij] is a matrix of a known size with aij being its element in the ith row and jth column. Diagonal matrix Da has the elements of a on its diagonal, and De is the identity matrix I. A′, trA, rankA and A+ are the transpose, trace, rank and Moore-Penrose inverse of A respectively. The norm of a is ∥a∥ = √(a′a), and that of A is the Frobenius norm ∥A∥ = √(tr A′A). Denoted by A∘B is the entry-wise multiplication, namely the Hadamard product, of two matrices of the same size. A = B′B is a positive semi-definite matrix decomposed by its square root (matrix) B. A = UΣV′ is the singular value decomposition of A with U′U = I, V′V = I, and diagonal matrix Σ consisting of the (non-negative) singular values of A. Denoted by arg minx f(x) is the argument of the minimum of a scalar function, namely the x minimizing f(x). Denoted by ni ∼ N(n̄i, σi²) is a random variable satisfying the Gaussian distribution with mean n̄i and variance σi². Similarly, n ∼ N(n̄, Σn) stands for a Gaussian random vector with mean n̄ and variance matrix Σn.

Problem formulation

Denote the matrices of range differences and their measurements by R = [rij] and T = [τij], respectively. On the TDOA principle, noisy measurements {τij} of range differences {rij} are described by the scalar equations τij = rij + nij (1) where nij is a random variable with zero mean, m is the number of references, and rij = ri − rj is the difference of the ranges rk = ∥p − pk∥, k = i, j, from an unknown object p to the known references pi and pj. If p is of dimension n, m > n is required. To have a unique p in the noise-free case, {pi} are assumed to be non-coplanar, namely they are not located in an (n − 1)D linear subspace.
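The model in (1) can be illustrated numerically; the reference layout and object below are hypothetical, chosen only to show that R is exactly skew-symmetric and highly structured (R = re′ − er′ has rank at most two), even though its off-diagonal entries contain m − 1 linearly independent range differences.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3D setup: m = 4 known references and one unknown object p.
P = np.array([[0.0, 0.0, 0.0],
              [10.0, 0.0, 0.0],
              [0.0, 10.0, 0.0],
              [0.0, 0.0, 10.0]])
p = np.array([3.0, 4.0, 5.0])

r = np.linalg.norm(p - P, axis=1)            # ranges r_k = ||p - p_k||
R = r[:, None] - r[None, :]                  # range differences r_ij = r_i - r_j
T = R + 0.01 * rng.standard_normal((4, 4))   # noisy TDOA measurements, as in (1)

print(np.allclose(R, -R.T))                  # True: R is exactly skew-symmetric
print(np.linalg.matrix_rank(R))              # 2: R = r e' - e r' has rank <= 2
```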

In applications, it is likely that not all m(m − 1) measurements in (1) are available, even under the assumption that, without loss of generality, all m references have been used in generating the measurements. Available measurements may also be weighted according to a priori knowledge of noise statistics. To consider cases of missing and weighted measurements simultaneously, define a masking matrix E = [eij] with eij = wij if τij is available and eij = 0 otherwise (2) where weight wij > 0, and in the case of non-weighting wij = 1.
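A minimal sketch of the masking matrix in (2), assuming an entry holds the weight wij where τij is available and 0 where it is missing (the availability pattern and weights below are made up):

```python
import numpy as np

m = 4
# Hypothetical availability: tau_ij is measured iff avail[i, j] is True.
avail = np.ones((m, m), dtype=bool)
np.fill_diagonal(avail, False)          # tau_ii is not a measurement
avail[2, 3] = False                     # pretend tau_34 (0-based 2, 3) is missing

W = np.ones((m, m))                     # non-weighted case: w_ij = 1
E = np.where(avail, W, 0.0)             # masking matrix of (2)

print((E >= 0).all() and np.trace(E) == 0.0)   # True: non-negative, zero diagonal
print(int((E > 0).sum()))                      # 11 of the m(m - 1) = 12 entries
```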

Weights {wij} in (2) could be chosen as the components of the inverse variance matrix of the noise {nij} in (1). This resembles the treatment of measurement noise in Kalman filtering. However, the problem considered in this study is not that of tracking a moving target, because the dynamics of the object p, if any, are not considered in the current study. Hence, positioning an object based on the equivalent range equations is generally not the minimum variance estimation intended with a Kalman filter.

Based on all available range-difference measurements, possibly with weighting, positioning an object is to find a least squares solution of p to the matrix equation E ∘ T = E ∘ R (3) where R depends on p through the ranges {ri}. The objective of this study is to convert (3), which may have up to m(m − 1) range-difference equations, to m range equations. The conversion is optimal in the sense of least squares.

In the case where no range-difference measurement is weighted or missing, the scalar equations in (1) are identical to the matrix equation in (3) except that the noise terms in the former are set to zero in the latter. In general, (3) is a compact notation of (1) obtained by setting the unknown noise to zero, but with simultaneous consideration of weighted and missing measurements. Clearly, (3) does not normally have an exact solution for p, and hence an estimate of p is sought in the least squares sense for (3).

Linear dependence of {rij} and properties of E

Range differences {rij} are clearly related to each other, and linear independence of a subset of them is defined conventionally.

Definition 1 Range differences riljl for il, jl ∈ {1, 2, …, m} and l = 1, 2, …, k, with an arbitrary integer k > 0, are said to be linearly independent from each other if α1ri1j1 + ⋯ + αkrikjk = 0 implies coefficient αl = 0 for all l.

Range difference rij can be expressed as a linear combination of any m − 1 linearly independent elements in {rij} for i, j = 1, 2, …, m. It is easy to verify that, among others, each of {ri+1,1}, {r1,i+1} and {ri+1,i} for i = 1, 2, …, m − 1 consists of m − 1 linearly independent elements. Clearly, in general, noise-corrupted range-difference measurements {τij} are linearly independent from each other.
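For instance, with the basis {ri+1,1}, every rij decomposes as rij = ri1 − rj1; a quick numerical check with an arbitrary range vector:

```python
import numpy as np

r = np.array([5.2, 7.1, 3.3, 9.0])      # an arbitrary range vector, m = 4
R = r[:, None] - r[None, :]             # all m(m - 1) range differences

# Basis: the m - 1 differences r_{i1} = r_i - r_1 for i = 2, ..., m.
basis = R[1:, 0]

# Every r_ij equals r_i1 - r_j1, a linear combination of basis elements.
for i in range(4):
    for j in range(4):
        assert np.isclose(R[i, j], R[i, 0] - R[j, 0])

print(basis.size)                       # 3 independent values determine all 12
```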

Masking matrix E is non-negative, namely none of its components is negative. Also, E ≠ 0 because at least two range-difference measurements are available under the necessary condition m > n for unique positioning of an nD object with m references. If every reference has been used in the generation of the TDOA measurements, then for every i, the ith row and column of E cannot be simultaneously zero. This amounts to, for at least one j, eij + eji > 0 (4) The case eij + eji = 0 for a particular i and all j corresponds to non-use of the ith reference, which can be handled by dropping pi and reducing m by one in (1). As implied in (4), measurements of m − 1 linearly independent range differences are automatically available in (3).
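Condition (4) can be checked directly, dropping any reference whose row and column of E are both zero (the helper name and example pattern below are ours, for illustration only):

```python
import numpy as np

def used_references(E):
    # Indices i satisfying (4): row i and column i of E are not both zero.
    S = E + E.T
    return np.flatnonzero(S.sum(axis=1) > 0)

# Hypothetical mask in which reference 4 (index 3) appears in no measurement.
E = np.array([[0., 1., 1., 0.],
              [1., 0., 0., 0.],
              [1., 0., 0., 0.],
              [0., 0., 0., 0.]])

keep = used_references(E)
print(keep.tolist())                  # [0, 1, 2]: drop p_4 and reduce m by one
E_reduced = E[np.ix_(keep, keep)]     # mask for the remaining references
```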

Pseudo range-measurement vector τ and properties of companion matrix

Define a companion matrix of E as (5) and a pseudo range measurement vector as (6) where is the Moore-Penrose inverse of , and e the vector of ones.

In the special case where all m(m − 1) measurements in (1) are available and no weighting is applied to them, it is straightforward to obtain the simplification E = ee′ − I, with corresponding simplifications of the companion matrix and the pseudo range-measurement vector.

The companion matrix is obviously symmetric, and nonzero due to E ≠ 0. To explore its properties, some basic definitions related to matrix irreducibility are needed. These properties are important for the reduction and conversion of the weighted range-difference matrix equation in (3).

Definition 2 (Definition 6.2.25 [16]) Square matrix A = [aij] is said to be irreducibly diagonally dominant if

  1. it is irreducible, namely it is not similar to a block upper triangular matrix by permutation;
  2. it is diagonally dominant, namely |aii| ≥ ∑j≠i|aij| for all i;
  3. there is an i such that |aii| > ∑j≠i|aij|.

Theorem 1

  1. is diagonally dominant;
  2. is positive semi-definite;
  3. for arbitrary ;
  4. is irreducible;
  5. .
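The items of Theorem 1 read as properties of a graph-Laplacian-type matrix. As a sketch, assuming the companion matrix of E takes the Laplacian form D(E+E′)e − (E + E′) (this specific form is our assumption, not stated explicitly above), the properties can be checked numerically in the all-available, unweighted case E = ee′ − I:

```python
import numpy as np

def companion(E):
    # Assumed Laplacian-type form: D_{(E + E')e} - (E + E').
    S = E + E.T
    return np.diag(S @ np.ones(len(E))) - S

m = 4
e = np.ones(m)
E = np.outer(e, e) - np.eye(m)     # all measurements available, unweighted
L = companion(E)

print(np.allclose(L, L.T))                      # True: symmetric
print(np.allclose(L @ e, 0))                    # True: e spans the null space
print(np.linalg.eigvalsh(L).min() > -1e-12)     # True: positive semi-definite
print(np.linalg.matrix_rank(L))                 # 3 = m - 1
```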

Equivalence and optimality of range and range-difference equations

In terms of an arbitrary vector x of dimension m, define a matrix Tx = xe′ − ex′ (7) which has the same structure as R; in fact R = Tr with the range vector r = [ri]. For least squares positioning, exact relationships among (3) and the following three equations (8) are shown in the next theorem.

Theorem 2 There are two least squares positioning equivalences: (9) (10) For arbitrary p, r, and E, the following relations hold (11)

The significance of Theorem 2 lies in establishment of the equivalence of matrix Eq (3) and that in (8) to the vector equations in (8) respectively through (9) and (10) for least squares positioning. Although further equivalence between (9) and (10) cannot be established, (11) implies that if a p diminishes considerably, it is a superb approximation of (9), and confirms the supremacy of over and for determination of p by the least squares principle.

Corollary 1 The denoising problem has the general solution (12) with an arbitrary r.

The result presented in Corollary 1 was first obtained in [4], and is now given without imposing any particular assumptions on T in (1) and E in (4). Theorem 2 and Corollary 1 indicate the exact relationship between the positioning and denoising problems. Basically, for positioning , while for denoising, . In general, , and the equality holds if p satisfies for some r. Note that normally is not exactly solvable for p and r.

It is well known that biased TOA equations are often described by τ = r + r̄e + n, with r̄ representing the clock bias between the transmitter and receiver. Interestingly, in (6), r̄ is specified as the average of the differences between {τi} and {ri}. As implied in the proof of Theorem 2, this average is actually the least squares solution of r̄ to τ = r + r̄e.
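That the averaging is indeed the least squares solution of the bias can be verified with a one-parameter fit (the range values and bias below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
r = np.array([7.07, 8.06, 11.18, 7.55])          # ranges (arbitrary values)
bias = 2.5                                       # hypothetical clock bias
tau = r + bias + 0.01 * rng.standard_normal(m)   # biased, noisy TOA measurements

# Least squares fit of the scalar b in tau = r + b e.
e = np.ones(m)
b, *_ = np.linalg.lstsq(e[:, None], tau - r, rcond=None)

# The fitted bias is exactly the average of the differences {tau_i - r_i}.
print(np.isclose(b[0], (tau - r).mean()))        # True
```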

Theoretical verification

Some primary results on matrix irreducibility are needed for proving Theorem 1.

Lemma 1 (Corollary 6.2.27 [16]) An irreducibly diagonally dominant matrix is non-singular.

Lemma 2 (Proposition 1.1 [17]) If A1 and A2 are irreducible, A12 ≠ 0 and A21 ≠ 0, then is irreducible.

Proof of Theorem 1

Direct calculations give (13) where , and is the diagonal matrix formed by with (14) Obviously, is symmetric, and diagonally dominant, which is a). It is also at least positive semi-definite which is part of b) due to (15) following from some simple properties of Hadamard products [16], such as (16) and identities (17) for arbitrary vectors x and y, and arbitrary matrices A, B and C, all with compatible dimensions. This verifies c) due to the existence of decomposition . It also leads to b) because of rank deficiency of in view of from (13). Moreover, follows from the skew symmetry of , which implies . A deductive verification of irreducibility of and in the following completes the proof of d) and e).

Denote E by Em and by , and set . For m = k > 0, (18) with (19) (20) and (21) where for i = 1, …, k, and is given in (14) for i = m = k + 1. Clearly, is irreducible and due to in view of (4).

Suppose is irreducible and for k > 2. Trivially, irreducibility of implies the same of . In view of (14), and , which ensures irreducibility of according to Lemma 2. Moreover, is irreducibly diagonally dominant, and hence non-singular according to Lemma 1. This verifies , where the inequality cannot hold nevertheless because follows from (13).

Proof of Theorem 2

By definition, the left side of (9) specifies least squares solutions to (3). Using (6), (16), (17), De = I, and some simple properties of the trace of matrix products, the following is obtained (22) (23) Completing the square in (22) has used and due to c) and e) of Theorem 1 respectively. The second term in (22) and that in (23) are obtained from , symmetry of (and hence ), and . Since the first two terms in (23) are independent of p, the equivalence in (9) is then proved.

Direct calculations produce (24) which verifies the equivalence in (10).

From (and hence ) and , follows. Consequently, for arbitrary p and r. By setting and noticing ∥Ab∥ ≤ ∥A∥∥b∥ for arbitrary A and b, the first inequality in (11) is verified. Setting the partial derivative to zero leads to , which implies for arbitrary p and r, and hence confirms the second inequality. From this, the third inequality follows immediately.

Proof of Corollary 1

Noting R = Tr and from (23) with Tx replacing R, it is straightforward to obtain the following. The general least squares solution of x to is where y is arbitrary but subject to . Considering the singular value decomposition and hence , it is ready to verify . Recalling and rank , y must be parallel to e, which leads to the general expression with r being an arbitrary scalar.

Illustrative example

A numerical example of 3D positioning is used to illustrate the developed results. For illustrating the effects of different sets of range and range-difference equations on positioning, simulated datasets are considered more effective than practical datasets. This is because the noise levels of the range-difference measurements and the inaccuracy of the reference locations can easily be set and examined in numerical examples. This is however not the case in a real setup, where inaccuracies of the measurements and reference locations are coupled with the uncertainty of the object location.

Let four references be located precisely at , but inexactly known as with noise vector . Measurements of range differences are produced according to τij = rij + nij in (1) with rij = ri − rj and for k = i, j and noise variable . An object is with its coordinates randomly generated within the range (1, 10) as (25) The standard deviations are set as σp = 0.1% and σn = max{|rij|} × 1%. For instance, one simulation run produced reference and measurement matrices in which, as expected, since the noise levels are low, deviations of {pi} from are insignificant, and, due to rij = −rji, T is approximately skew symmetric.

To find the object position, the unconstrained multivariable minimization algorithm fminsearch in MATLAB has been used. In numerical minimization, referring to Theorem 2, the squared forms of the criteria (26) were used, and the initial estimate of (p, r) was taken as in each simulation run.
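A Python analogue of this setup can be sketched with scipy's Nelder-Mead simplex search as a stand-in for MATLAB's fminsearch. The geometry below is illustrative rather than the paper's data, the noise is set to zero, and the criterion is the squared least squares form of the masked range-difference equation:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative noise-free setup: m = 4 references and one 3D object.
P = np.array([[1.0, 9.0, 2.0],
              [8.0, 2.0, 7.0],
              [3.0, 8.0, 9.0],
              [9.0, 5.0, 1.0]])
p_true = np.array([4.0, 6.0, 3.0])
r_true = np.linalg.norm(p_true - P, axis=1)
T = r_true[:, None] - r_true[None, :]      # exact TDOA measurement matrix
E = np.ones((4, 4)) - np.eye(4)            # all measurements, unweighted

def criterion(p):
    # Squared least squares criterion ||E o (T - (r e' - e r'))||^2.
    r = np.linalg.norm(p - P, axis=1)
    return np.sum((E * (T - (r[:, None] - r[None, :]))) ** 2)

# Nelder-Mead simplex search, started from the centroid of the references.
res = minimize(criterion, x0=P.mean(axis=0), method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 10000})
print(res.fun)                             # objective value at the minimizer
print(np.linalg.norm(res.x - p_true))      # distance to the true position
```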

Fig 1 indicates the geometric setup for positioning a 3D object with four references. Table 1 shows the estimates of the object position in 50 runs of simulations under the noise conditions stated above for the reference locations and range-difference measurements. The estimates using the squared criteria (26a) and (26b) are very close to each other, while those using criteria (26b) to (26e) are indistinguishable from each other. Due to r = −11.81 on average over the 50 simulation runs, the estimates using the squared criterion (26f) are too poor to be useful. This value of r should not be interpreted as a clock bias because the generation of the ranges and their noisy measurements did not introduce an offset in any run of the simulations. In fact, even if introduced, a bias in TOA measurements cannot be recovered from TDOA measurements.

Fig 1. Geometric setup for positioning a 3D object with four references.

https://doi.org/10.1371/journal.pone.0273617.g001

The best and worst estimates are determined with respect to , which cannot be evaluated in real applications nevertheless. Worst cases of the randomly generated reference locations and range-difference measurements ought to be responsible for the worst estimates of p in Table 1. This is because the evaluations of criteria (26b) to (26e) have generated insignificant values at the level of 10⁻⁷, which corresponds to the level of 10⁻¹⁴ produced by least squares. As expected, if no noise is added to the reference locations and range-difference measurements, all estimates, except for those using (26f), recover p up to a computational error at the level of 10⁻¹⁴, close to the machine epsilon 10⁻¹⁶.

Table 1. Estimation of the object position using algorithm fminsearch with initial estimate and use of noise corrupted references and measurements of all range differences.

https://doi.org/10.1371/journal.pone.0273617.t001

It is interesting to know how availability of measurements affects estimations. Consider the following three cases of availability of range-difference measurements in T = [τij], where no weighting is applied to available measurements:

  1. Case 1: [τij] for all i and j ≠ i (12 measurements);
  2. Case 2: [τij] for all i and j > i (6 measurements);
  3. Case 3: τ12, τ14, τ23 and τ24 (4 measurements).
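The three availability patterns can be encoded as unweighted masking matrices (0-based indices in the code below):

```python
import numpy as np

m = 4
E1 = np.ones((m, m)) - np.eye(m)       # Case 1: all tau_ij, i != j

E2 = np.triu(np.ones((m, m)), k=1)     # Case 2: tau_ij for j > i

E3 = np.zeros((m, m))                  # Case 3: tau_12, tau_14, tau_23, tau_24
for i, j in [(0, 1), (0, 3), (1, 2), (1, 3)]:
    E3[i, j] = 1.0

print([int(E.sum()) for E in (E1, E2, E3)])   # [12, 6, 4]
```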

In Table 2, the three averaged values of each minimization criterion and estimation error correspond to the above three cases. On average, use of more measurements is shown to give better estimates. This indicates, as expected, that all available range-difference measurements should be used for positioning. Use of the different squared criteria in (26), except for (26f), in minimization has produced estimates of the object position with similar or identical averaged errors in all cases of measurement availability. This shows the desired close performance of minimizing the biased range Eq (8) and minimizing the original range-difference Eq (3) by least squares. The poor performance with (26f) indicates unsuitability of the range equation for positioning.

Table 2. Evaluations of least squares criteria and estimation errors over 50 simulation runs, with averaged values in brackets corresponding to cases 1, 2 and 3 of measurement availability.

https://doi.org/10.1371/journal.pone.0273617.t002

Concluding remarks

Given m references, following the procedure in [18] on the basis of processing individually received signals, up to m(m − 1)/2 TDOA measurements could be made available. The procedure in [19] on the basis of processing each pair of received signals could produce up to m(m − 1) TDOA measurements. The current work has proposed a general method for use of these multiple TDOA measurements for positioning.

While weighted least squares positioning has been considered in this work, issues of selecting the weighting coefficients have not been addressed. As widely used, for instance in [5, 6] among others, an obvious choice of weights is the inverse variances of the measurement noise components, which could be obtained from a processor generating TDOA measurements such as those in [18, 19].

Minimizing the difference between the noise-corrupted TDOA measurement matrix and the well-structured matrix formed by a range-measurement vector is the denoising problem explored in [4]. The current work directly addresses the problem of minimizing TDOA equations with respect to the object location and automatically obtains the range-measurement vector. The current focus is on positioning in a general setting with a simultaneous treatment of missing and weighted TDOA measurements. The current work has, however, not considered the issue of eliminating outlier measurements, as examined in [4] and comprehensively explored in [20].

A numerical example has been used to illustrate the theoretical results presented in this paper. To evaluate effectiveness of the proposed method in real applications, an experiment could be designed. It would need a high-precision positioning system for referencing, and apply the method to a low-precision TDOA dataset. A practical application of the results in this paper could be in wireless sensor networks [21] with massive low-cost miniature sensors often randomly deployed in a geographical area, where a sensor could be localized by using a huge number of TDOA measurements from references including already localized sensor nodes [22].

References

  1. Dogandzic A, Riba J, Seco-Granados G, Swindlehurst AL (2005) Positioning and navigation with applications to communications. IEEE Signal Process Mag 22:10–11.
  2. Dokmanić I, Parhizkar R, Ranieri J, Vetterli M (2015) Euclidean distance matrices—Essential theory, algorithms, and applications. IEEE Signal Process Mag 32:12–30.
  3. Schmidt R (1996) Least squares range difference location. IEEE Trans Aerosp Electron Syst 32:234–242.
  4. Velasco J, Pizarro D, Macias-Guarasa J, Asaei A (2016) TDOA matrices: Algebraic properties and their application to robust denoising with missing data. IEEE Trans Signal Process 64:5242–5254.
  5. So HC, Chan YT, Chan FKW (2008) Closed-form formulae for time-difference-of-arrival estimation. IEEE Trans Signal Process 56:2614–2620.
  6. Chan YT, Ho KC (1994) A simple and efficient estimator for hyperbolic location. IEEE Trans Signal Process 42:1905–1915.
  7. Bancroft S (1985) An algebraic solution of the GPS equations. IEEE Trans Aerosp Electron Syst 21:56–59.
  8. Abel JS, Chaffee JW (1991) Existence and uniqueness of GPS solutions. IEEE Trans Aerosp Electron Syst 27:952–956.
  9. Leva JL (1996) An alternative closed-form solution to the GPS pseudo-range equations. IEEE Trans Aerosp Electron Syst 32:1430–1439.
  10. Awange JL, Grafarend EW (2002) Algebraic solution of GPS pseudo-ranging equations. GPS Solut 5:20–32.
  11. Caravantes J, Gonzalez-Vega L, Piñera A (2017) Solving positioning problems with minimal data. GPS Solut 21:149–161.
  12. Hou M (2022) Uniqueness and hyperconic geometry of positioning with biased distance measurements. GPS Solut 26:79.
  13. Smith J, Abel J (1987) Closed-form least squares source location estimation from range difference measurements. IEEE Trans Acoust 35:1661–1669.
  14. Gillette MD, Silverman HF (2008) A linear closed-form algorithm for source localization from time-differences of arrival. IEEE Signal Process Lett 15:1–4.
  15. Derevyankin AV, Matasov AI (2016) Finite algorithm for determining a vehicle’s position by differences in measured pseudoranges. Gyroscopy Navig 7:100–106.
  16. Horn RA, Johnson CR (2013) Matrix Analysis. Cambridge University Press.
  17. Shao J (1985) Products of irreducible matrices. Linear Algebra Appl 68:131–143.
  18. Hahn WR, Tretter SA (1973) Optimum processing for delay-vector estimation in passive signal arrays. IEEE Trans Inf Theory 19:608–614.
  19. Knapp CH, Carter GC (1976) The generalized correlation method for estimation of time delay. IEEE Trans Acoust 24:320–327.
  20. Compagnoni M, Pini A, Canclini A, Bestagini P, Antonacci F, Tubaro S, et al (2017) A geometrical–statistical approach to outlier removal for TDOA measurements. IEEE Trans Signal Process 65:3960–3975.
  21. Halili R, Weyn M, Berkvens R (2021) Comparing localization performance of IEEE 802.11p and LTE-V V2I communications. Sensors 21:2031. pmid:33805615
  22. Luo X-L, Li W, Lin J-R (2012) Geometric location based on TDOA for wireless sensor networks. International Scholarly Research Notices. Article ID 710979.