Abstract
In this contribution, we use Gaussian posterior probability densities to characterize local estimates from distributed sensors, and assume that they all belong to the Riemannian manifold of Gaussian distributions. Our starting point is to introduce a proper Lie algebraic structure for the Gaussian submanifold with a fixed mean vector, and then the average dissimilarity between the fused density and local posterior densities can be measured by the norm of a Lie algebraic vector. Under Gaussian assumptions, a geodesic projection based algebraic fusion method is proposed to achieve the fused density by taking the norm as the loss. It provides a robust fixed point iterative algorithm for the mean fusion with theoretical convergence, and gives an analytical form for the fused covariance matrix. The effectiveness of the proposed fusion method is illustrated by numerical examples.
Citation: Chen X, Chen C, Lu X (2024) Algebraic method for multisensor data fusion. PLoS ONE 19(9): e0307587. https://doi.org/10.1371/journal.pone.0307587
Editor: Yan Wang, The Hong Kong Polytechnic University, HONG KONG
Received: October 11, 2023; Accepted: July 9, 2024; Published: September 27, 2024
Copyright: © 2024 Chen et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: Since our study is rooted in theoretical work and includes a detailed presentation of numerical experiments within the paper, we believe that the provided information is sufficient for replicating our results. We share the program codes of our simulation examples in a public repository on Figshare, available at doi.org/10.6084/m9.figshare.25116515. Moreover, the paper encompasses all essential data, such as model equations, parameters, and experimental design, to ensure that our results can be reproduced with ease and precision.
Funding: This work was supported in part by the Sichuan Province Science and Technology Support Program under Grant 2022JDRC0068 received by XC and under Grant 2021JDRC0080 received by CC, the National Natural Science Foundation of China under Grant 12271376, and the Fundamental Research Funds for the Central Universities under Grant 24CAFUC03055 received by CC.
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
In recent decades, multisensor data fusion has been studied widely in many fields, such as target tracking, image processing, remote sensing and robotics [1–3]. The key to data fusion is to make the best use of the information contained in multiple sensors for the purpose of estimating an unknown quantity. Extensive research centers on two basic architectures, i.e., distributed fusion and centralized fusion. The former has received more attention due to its lower communication burden, higher reliability and flexibility, but is relatively more challenging because the correlations among local estimates are often unavailable [4, 5].
In many cases, however, it is difficult to know the correlations exactly. Data fusion under uncertain correlations was first investigated with the covariance intersection (CI) method [6, 7], which utilizes the posterior mean and covariance. As the original CI version, the determinant-minimization CI (DCI) method [8] determines the normalized weights by minimizing the determinant or trace of the estimation error covariance matrix. Considering the differences among the components of the local state estimate from each sensor, the CI fusion weighted by a diagonal matrix (WDCI) [9] achieves higher fusion accuracy than the classical CI method. Recently, a linear estimation fusion method, termed DDF in [10], has been proposed for uncertain diagonal-constrained cross-covariances. Its fused estimator is a linear unbiased combination of the posterior means whose weight coefficients are obtained by semidefinite programming. Intuitively, complete probability density functions (PDFs) of local estimates are preferable to the first two moments for data fusion. The fast CI (FCI) method [11] minimizes the Chernoff information to locate the “halfway” point between two local posterior PDFs, and has been generalized to the case of more sensors. The Kullback–Leibler averaging (KLA) method [12] defines the average PDF as the one that minimizes the sum of KL divergences from the local PDFs. In the context of fusing PDFs, further fusion methods are proposed in [13–18]. As it turns out, the PDF is more suitable for describing each sensor's local information.
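As a concrete baseline for the CI family discussed above, the classical CI fuses two estimates by convexly combining their information (inverse covariance) matrices. The following sketch shows the trace-minimizing variant; the function name is illustrative, and a simple grid search stands in for the one-dimensional weight optimization:

```python
import numpy as np

def ci_fuse(x1, P1, x2, P2, n_grid=1001):
    """Classical covariance intersection of two estimates.

    The fused information matrix is omega*P1^{-1} + (1-omega)*P2^{-1};
    omega is chosen here by grid search minimizing the trace of the
    fused covariance (the trace variant of the DCI criterion).
    """
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for omega in np.linspace(0.0, 1.0, n_grid):
        P_inv = omega * P1_inv + (1.0 - omega) * P2_inv
        P = np.linalg.inv(P_inv)
        if best is None or np.trace(P) < best[0]:
            x = P @ (omega * P1_inv @ x1 + (1.0 - omega) * P2_inv @ x2)
            best = (np.trace(P), x, P, omega)
    return best[1], best[2], best[3]
```

Because the endpoints omega = 0 and omega = 1 recover the two local estimates, the fused covariance trace never exceeds the smaller of the two local traces.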
It is well known that a typical problem in distributed data fusion is how to combine the point estimates of the quantity of interest with the corresponding mean-square error (MSE) matrices. When the local estimation information (an estimate and its MSE matrix) is represented as a PDF, the data fusion is performed on the space of PDFs with a non-Euclidean geometrical structure [19, 20]. Information geometry regards the parametric space composed of PDFs as a Riemannian manifold, and studies information science using basic geometrical quantities such as distance and curvature [21]. This geometrical viewpoint develops an intrinsic understanding of statistical models, and also provides an intuitive perspective for estimation fusion. Under the information-geometric framework, the geodesic projection (GP) method [22] takes the fused PDF in the Gaussian manifold as an informative barycenter of all local posterior PDFs by minimizing the sum of their geodesic projection distances onto the Gaussian submanifold with a fixed mean μ. In practice, the GP is an effective approach because it is invariant under affine transformations of the local estimates, but it has two major disadvantages. One is that the fused covariance estimate is not specifically designed and simply adopts the same form as the CI; the other is that the convergence of the GP cannot be guaranteed theoretically.
In this paper, we propose a novel algebraic fusion method based on Lie algebraic theory. The main contributions are as follows:
- (i) By introducing a proper Lie algebraic structure for the Gaussian submanifold with a fixed mean vector, the dissimilarity between two Gaussian posterior densities can be evaluated with a Lie algebraic vector. Taking the norm of the average of these Lie algebraic vectors as the loss, we construct an algebraic criterion for fusion.
- (ii) The proposed fusion algorithm obtains the fused mean estimate by iteratively solving a fixed point equation, and then yields an explicit expression for the fused covariance estimate. The convergence of this iterative algorithm is proved theoretically, and some good properties for the fused estimates are discussed.
The paper is organized as follows. Section 2 introduces the basic principles of information geometry and some useful results concerning the manifold of multivariate Gaussian distributions. Section 3 formulates a novel Lie algebraic fusion criterion and then develops the geodesic projection based algebraic fusion method. Section 4 provides simulation examples to demonstrate its effectiveness. Section 5 concludes.
1.1 Notations
Throughout this paper, all vectors are column vectors; lightface letters denote scalars and scalar-valued mappings; boldface lowercase letters denote vectors and tangent directions at a point; boldface uppercase letters denote matrices and matrix-valued mappings; ℝm denotes the set of all m-dimensional real vectors, and we also work with the set of m × m real symmetric positive-definite matrices; the symbol Im is the m × m identity matrix; tr(⋅) and (⋅)T denote the trace and the transpose of a matrix, respectively; diag(⋅) denotes the diagonal matrix with a vector as its diagonal elements; Log(⋅) and Exp(⋅) are the matrix logarithm and matrix exponential functions, respectively.
2 Preliminaries
2.1 Statistical manifold and Fisher metric
Consider a family of probability densities

{ p(x; θ) : θ ∈ Θ } (1)

with the global coordinate system θ = (θ1, …, θm), where the parameter space Θ is an open subset of ℝm. For simplicity, we shall abbreviate the probability density p(x; θ) as its coordinate θ. The parameterized family can be regarded as a Riemannian manifold by introducing the Fisher metric g derived from the m × m Fisher information matrix with

gij(θ) = E[ ∂ log p(x; θ)/∂θi ⋅ ∂ log p(x; θ)/∂θj ] (2)

as the (i, j)-th entry, where E[⋅] is the mathematical expectation operator with respect to the PDF p(x; θ).
Denote the vector space of tangent vectors at a point θ as the tangent space at θ; the Fisher metric g defines an inner product of two tangent vectors u1 and u2 there, written as

⟨u1, u2⟩θ = u1T G(θ) u2, (3)

where G(θ) is the Fisher information matrix. Thus the norm of a tangent vector u is given by ∥u∥θ = √⟨u, u⟩θ, and the induced geodesic distance, called the Fisher information distance or the Rao distance, is computed as follows:

d(θ0, θ1) = inf{ L(γ) : γ ∈ Γ }, (4)

where the set Γ consists of all piecewise smooth curves connecting the two endpoints θ0 and θ1, and the length L(γ) of a curve γ(t), t ∈ [0, 1], is obtained by integrating the norm of its tangent vector γ̇(t):

L(γ) = ∫01 ∥γ̇(t)∥γ(t) dt. (5)
Given any tangent vector ν at θ0, there exists a geodesic curve γ(t; θ0, ν), t ∈ [0, 1], starting from θ0 in the direction ν. In differential geometry, the exponential mapping is defined as

Expθ0(ν) = γ(1; θ0, ν). (6)

Conversely, denoting θ1 = γ(1; θ0, ν), the logarithmic mapping Logθ0(θ1) is equal to the tangent vector ν at the initial point θ0. The manifold retraction R(⋅), as a generalization of the exponential mapping, is essentially a smooth mapping from the tangent bundle into the manifold (see, e.g., [23, 24]), and its inverse, called the lifting mapping, exists in some neighborhood of θ. In practice, the logarithmic and exponential mappings are the commonly used lifting and retraction mappings, respectively.
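For the symmetric positive-definite matrices used later in Section 3, the matrix exponential and logarithm form such a mutually inverse retraction/lifting pair at the identity. A small numerical illustration using SciPy's `expm`/`logm` (this concrete choice of manifold and maps is an illustration, not the general definition above):

```python
import numpy as np
from scipy.linalg import expm, logm

# A symmetric matrix playing the role of a tangent vector at the identity.
V = np.array([[0.3, 0.1],
              [0.1, -0.2]])

A = expm(V)        # retraction: maps the tangent vector to a manifold point
V_back = logm(A)   # lifting: recovers the tangent vector from the point
```

For symmetric V, expm(V) is symmetric positive-definite and logm inverts it exactly, so the pair behaves as mutually inverse retraction and lifting maps in a neighborhood of the identity.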
2.2 Information geometry of Gaussian distributions
An m-dimensional random vector x has the Gaussian distribution N(μ, Σ) with mean vector μ ∈ ℝm and symmetric positive-definite covariance matrix Σ. The Gaussian manifold

{ N(μ, Σ) : μ ∈ ℝm, Σ symmetric positive-definite }, (7)

endowed with the Fisher metric and the Levi–Civita connection, has the global coordinate system θ = (μ, Σ). For brevity, we shall abbreviate the Gaussian distribution N(μ, Σ) to its coordinate θ = (μ, Σ) without confusion.
For the frequently discussed Gaussian submanifold

{ N(μ, Σ) : Σ symmetric positive-definite } (8)

with a fixed mean vector μ, the geodesic projection distance from a given point θ0 = (μ0, Σ0) in the Gaussian manifold onto this submanifold is defined as the minimum geodesic distance

min over Σ of d(θ0, (μ, Σ)). (9)

Although the geodesic distance between two points in the Gaussian manifold is not available in closed form in the general case with m > 1, the resulting geodesic projection θp = (μ, Σp) onto the submanifold has been derived in [22, 25] with the explicit expression

Σp = Σ0 + (1/2)(μ − μ0)(μ − μ0)T. (10)
3 Lie algebraic estimation fusion
Consider a distributed dynamic system with N sensors observing a common state. As in the practical scenarios mentioned in [11], the dynamic system is assumed to have Gaussian process and observation noises. Denote the local posterior PDF from the k-th sensor by pk = N(xk, Pk) with known local estimate (xk, Pk), and the sought-for fused PDF by p = N(x, Σ).

According to Eq (10), we project each pk along the orthogonal geodesic curve onto the submanifold with fixed mean μ = x to get the geodesic projection pk′ = N(x, Pk′), where

Pk′ = Pk + (1/2)(x − xk)(x − xk)T. (11)

Moreover, we replace each local posterior density pk with its geodesic projection to measure the dissimilarity between pk and the fused posterior density p = N(x, Σ) with undetermined x and Σ.
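Taking the rank-one projected-covariance form reported in [22, 25] for Eq (11) as given, the projection step reduces to a few lines; the function name and interface below are illustrative:

```python
import numpy as np

def geodesic_projection_cov(P_k, x_k, x):
    """Covariance of the geodesic projection of N(x_k, P_k) onto the
    Gaussian submanifold with fixed mean x, using the rank-one update
    form of [22, 25] (treated as an assumption of this sketch)."""
    d = (np.asarray(x) - np.asarray(x_k)).reshape(-1, 1)
    return np.asarray(P_k) + 0.5 * (d @ d.T)
```

Since the update adds a positive-semidefinite rank-one term, the projected covariance stays symmetric positive-definite, and it reduces to the local covariance when x coincides with the local estimate.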
3.1 Lie algebraic settings
Since the fixed-mean submanifold inherits the topology and metric from the Gaussian manifold and only uses the Σ coordinate system [26], the inner product defined on it is given as

⟨u1, u2⟩g = (1/2) tr(G−1 u1 G−1 u2) (12)

for any fixed g = (x, G) in the submanifold with μ = x, and two arbitrary vectors u1, u2 in the tangent space at the point g. Then the norm of a tangent vector u is defined as

∥u∥g = √((1/2) tr(G−1 u G−1 u)). (13)
We can identify the symmetric positive-definite matrix space with this submanifold owing to the same Riemannian metric induced by Eq (12). Meanwhile, the space of symmetric positive-definite matrices can even be regarded as a Lie group by introducing two operations from [27], which are compatible with its differential structure. One is the logarithmic multiplication ⊕ given by

A ⊕ B = Exp(Log(A) + Log(B)) (14)

for any symmetric positive-definite A and B, and the other is the logarithmic scalar multiplication ⊗ defined by

λ ⊗ A = Exp(λ Log(A)) (15)

for any real scalar λ. Throughout the remainder, we treat the space of symmetric positive-definite matrices as this Lie group to stress its algebraic structure. For any group element A, the left translation by A−1 maps any B to A−1 ⊕ B. In Lie group theory [23], A−1 ⊕ B represents the difference between A and B.
The tangent space at the identity element e = Im, also known as the Lie algebra, is linearly isomorphic to the space of real symmetric matrices [28]. By respectively assimilating ⊕ and ⊗ to the addition + and the scalar multiplication ×, and identifying the Lie algebra with the space of real symmetric matrices, the map Log(⋅) constructs a linear isomorphism from the group onto its Lie algebra, and is also a diffeomorphism from a neighborhood of e in the group to a neighborhood of the null element 0 in the Lie algebra [29]. Also, Re(⋅) = Exp(⋅) and its inverse Log(⋅) are the commonly used retraction and lifting maps at e, respectively. Therefore, the displacement between A and B in the group can be evaluated by the vector Log(A−1 ⊕ B) = Log(B) − Log(A) in the Lie algebra.
3.2 Fusion criterion
For convenience, we denote p = N(x, Σ) and its geodesic projections by their covariance coordinates, respectively. By fusing the geodesic projections, the detailed operations of the Lie algebraic fusion that achieve the fused posterior density p are as follows:

- (i) Move all geodesic projections into a neighborhood of the identity element e = Im via the left translation by the inverse of Σ, and then shift them into the Lie algebra by the lifting map Log(⋅) to get

(16)

Here, vk measures the displacement between the fused posterior density p and the k-th geodesic projection.
- (ii) Motivated by the arithmetic mean, the commonly adopted barycenter of data points in Euclidean space, we can handle the estimation fusion with the Euclidean operation, i.e., arithmetic averaging in the Log-Euclidean domain, owing to the linear structure of the Lie algebra. Let the average displacement vector between the fused posterior density p and all geodesic projections be

(17)

Consider applying the real weight vector c = [c1, …, cN]T to minimize the sum of squared norm distances from the average vector to the N vectors vk:

(18)

This is a convex quadratic programming problem. Setting the partial derivatives of the objective function of Eq (18) with respect to cl to zero, i.e.,

(19)

we obtain the optimal weight coefficients

(20)

and then from Eqs (16), (17) and (20),

(21)
- (iii) Using the Euclidean structure on the Lie algebra, we take the norm of the average displacement vector as the cost, and by minimizing this cost we formulate a novel algebraic fusion criterion:

(22)
3.3 Fusion algorithm
The optimization of Eq (22) can be decomposed into two steps, first over x and then over Σ:

(23)

First, seek the fused mean estimate that minimizes the norm of the average vector for any Σ:

(24)

Second, with the fused mean fixed, seek the fused covariance estimate P that minimizes the norm of the average vector:

(25)
Theorem 1. The fused mean estimate in Eq (24) satisfies an implicit expression
(26)
with the weights
(27)
and the fused covariance P in Eq (25) is
(28)
Proof. Denoting a function of the variable x as

(29)

for any given Σ, we can easily obtain the derivative

(30)

The fused mean estimate in Eq (24) satisfies dψΣ(x)/dx = 0. Considering the arbitrariness of the variable Σ in Eq (24), we let the fused mean estimate satisfy the stationary condition

(31)

Inserting the Sherman–Morrison formula [30] into Eq (11), we have

(32)

Then, substituting Eq (32) into Eq (31) yields

(33)

and thus Eq (26) follows from Eq (33).

Owing to the property of the trace function tr(⋅), the fused covariance estimate (28) for the minimization problem in Eq (25) follows immediately.
Remark 1. As an arithmetic mean in the domain of matrix logarithms, the Log-Euclidean mean as shown in Eq (28) has been successfully applied in many areas such as elasticity theory [31] and image processing [32–34].
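The Log-Euclidean mean of Remark 1, i.e., the arithmetic mean taken in the domain of matrix logarithms, can be sketched as follows; the uniform-weight default is the simplest case, and the function name is illustrative:

```python
import numpy as np
from scipy.linalg import expm, logm

def log_euclidean_mean(mats, weights=None):
    """Log-Euclidean mean of symmetric positive-definite matrices:
    average the matrix logarithms, then map back with the matrix
    exponential."""
    if weights is None:
        weights = np.full(len(mats), 1.0 / len(mats))
    return expm(sum(w * logm(M) for w, M in zip(weights, mats)))
```

Because Log of a matrix inverse is the negated Log, this mean is invariant under inversion: the Log-Euclidean mean of the inverses equals the inverse of the Log-Euclidean mean, matching property (ii) discussed below.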
In Theorem 1, the objective function in Eq (22) attains its minimum value 0 at the fused estimate. Moreover, owing to the structure of the weights in Eq (27), the fusion of the mean in Eq (26) is deemed robust and reliable against outliers. To further solve the implicit Eq (26) for the fused mean estimate, the following theorem provides a rigorous proof of the convergence of the corresponding fixed point iteration.
Theorem 2. By adopting the fixed point iteration

(34)

with

(35)

for k = 1, …, N, the iterative sequence {ξt} is bounded, and its accumulation point, taken as the final fused mean estimate, satisfies the stationary condition shown in Eq (33).
Proof. Denote two functions of the variable ξ as

(36)

(37)

It is easily verified from Eqs (36) and (37) that

(38)

Let T(⋅) denote the iteration map of Eq (34), so that ξt+1 = T(ξt) minimizes the surrogate function U(ξ, ξt) over ξ:

(39)

Since U(ξ, ξt) is convex as a function of ξ, the minimizer ξt+1 in Eq (39) is unique. Further setting the gradient of U(ξ, ξt) with respect to ξ to zero, we have

(40)

Meanwhile, combining ϕ(ξ) ≥ N due to Eq (37) with the fact that

(41)

we know that ϕ(ξt) is decreasing and converges as t tends to +∞, so the sequences {ξt} and {ϕ(ξt)} are bounded. Moreover, for any subsequence of {ξt} converging to an accumulation point ξ*, applying the continuity of the maps ϕ(⋅) and T(⋅) leads to ϕ(T(ξ*)) = ϕ(ξ*). As a result, it follows from the second inequality in Eq (41) that U(T(ξ*), ξ*) = U(ξ*, ξ*), and then ξ* is the unique minimizer of U(ξ, ξ*), i.e., ξ* = T(ξ*), by the definition of T(⋅) in Eq (39). The theorem thus follows.
Note that the two fused estimates shown in Eqs (26) and (28) satisfy the following desirable properties:
- (i) Invariance under any affine transformation Q. If a point is the fused mean estimate of the local estimates, then its image under Q is the fused mean estimate of the transformed local estimates. Also, if Q is an orthogonal matrix and P is the fused covariance estimate of the local covariance matrices, then QPQT is the fused covariance estimate of the correspondingly transformed local covariance matrices.
- (ii) Invariance under inversion. If P is the fused covariance estimate of the local covariance matrices, then P−1 is the fused covariance estimate of their inverses.
In summary, combining the fixed point iteration (34) for the fused mean with the explicit expression (28) for the fused covariance estimate P, we outline the above fusion method in Algorithm 1 and call it the geodesic projection based algebraic (GPA) fusion method.
Remark 2. Many existing fusion methods, including the DCI, WDCI, FCI, KLA and GP, have the same form as the CI, in general with different weights. In particular, when two sensors are used to track a one-dimensional target, the fused estimates of these methods theoretically lie on the Euclidean line segment between the two local estimates, whereas, according to Eqs (26) and (28), the GPA estimate does not. As shown in [19], the Gaussian Riemannian manifold is not flat, so the proposed GPA is more reasonable owing to its use of the geometric and algebraic structures on the Gaussian manifold. In Section 4.1, we take the two-sensor case as an instance to validate this fact.
Algorithm 1: GPA Distributed Fusion Algorithm.
Input: local estimates from the N sensors and tolerances ϵ1, ϵ2
Output: fused mean and covariance estimates
1 Set t = 0, r1 = ϵ1 and r2 = ϵ2;
2 Initialize ξ0 with the CI state estimate;
3 while r1 ≥ ϵ1 and r2 ≥ ϵ2 do
4  Run the mean iteration shown in Eq (34);
5  Compute the Frobenius norm r1 = ∥ξt+1 − ξt∥F;
6  Compute by Eq (33) the Frobenius norm r2 of the stationarity residual;
7  Set t ← t + 1
8 end
9 Take ξt as the fused mean estimate;
10 Compute the geodesic projections by inserting the fused mean estimate into Eq (11);
11 Compute the covariance estimate P using Eq (28);
12 return the fused mean estimate and the covariance estimate P.
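The overall GPA loop can be sketched compactly. The weight form ωk = 1/(2 + (x − xk)T Pk−1 (x − xk)), the equal combination coefficients, and the crude mean initialization below are assumptions of this sketch (the paper's exact weights are given by Eqs (20) and (27), and Algorithm 1 initializes with the CI estimate); the code illustrates the fixed-point / Log-Euclidean structure rather than reproducing the GPA exactly:

```python
import numpy as np
from scipy.linalg import expm, logm

def gpa_fuse(xs, Ps, tol=1e-8, max_iter=200):
    """Simplified sketch of the GPA loop: a fixed-point iteration for the
    fused mean, then a Log-Euclidean mean of the projected covariances.
    The robust weight 1/(2 + quadratic form) is an assumption of this
    sketch, not the paper's exact Eq (27)."""
    Ps_inv = [np.linalg.inv(P) for P in Ps]
    x = np.mean(xs, axis=0)                      # crude initialization
    for _ in range(max_iter):
        ws = [1.0 / (2.0 + (x - xk) @ Pki @ (x - xk))
              for xk, Pki in zip(xs, Ps_inv)]
        A = sum(w * Pki for w, Pki in zip(ws, Ps_inv))
        b = sum(w * Pki @ xk for w, xk, Pki in zip(ws, xs, Ps_inv))
        x_new = np.linalg.solve(A, b)            # weighted fixed point step
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    # Log-Euclidean mean of the geodesic projections (rank-one form assumed)
    projs = [P + 0.5 * np.outer(x - xk, x - xk) for xk, P in zip(xs, Ps)]
    Sigma = expm(np.mean([logm(S) for S in projs], axis=0))
    return x, Sigma
```

For two symmetric sensors with identical covariances, the fixed point is the midpoint of the two local estimates, and the fused covariance remains symmetric positive-definite.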
Remark 3. An extension of the GPA method to non-Gaussian PDFs is far more difficult. For the Riemannian manifold composed of multivariate non-Gaussian distributions, it is a great challenge to obtain the explicit geodesic projection, which is the key requirement for our fusion method.
4 Simulations
In this section, three numerical examples (i.e., one-dimensional static target, and linear and nonlinear dynamic systems) are provided to demonstrate the performance of the proposed GPA method in distributed estimation fusion. All fusion algorithms are implemented in Octave on a computer with Intel Core i7-10870H 2.20 GHz processor and the corresponding program codes are accessible via the doi “10.6084/m9.figshare.25116515”.
4.1 One-dimensional static target
To intuitively compare the performance of these fusion methods, we fuse two local estimates from two sensors. As defined in [22], the informative barycenter is the optimal point on the geodesic segment that minimizes the sum of its squared geodesic distances to the two endpoints. Fig 1 clearly displays all geodesic distances between the informative barycenter and the fused densities. As stated in Remark 2, the fused estimates for the DCI, FCI, KLA, GP and WDCI lie on the (straight) Euclidean segment between the two local estimates, while those of the GPA and the DDF do not. Moreover, compared to the other methods, the estimate of the GPA is closest to the barycenter. Therefore, the GPA is considered more reasonable because the Gaussian manifold is indeed non-Euclidean.
The solid red curve represents the geodesic curve linking two local densities.
4.2 Linear dynamic system
Consider the following dynamic system with one fusion center and N sensors for tracking a two-dimensional target:

(42)

(43)

(44)

where the sampling period T = 1 s, the state xk has two components (i.e., the position and the velocity), and the measurement matrices are specified for each sensor, with the sensors j = 3, …, N sharing a common form. The joint observation noise vk follows N(0, Rk), and its covariance Rk is a positive-definite Toeplitz matrix with main diagonal σ21N and two sub-diagonals ρσ21N−1, where 1n is an n-dimensional vector with all entries 1, and −1/(2 cos(π/(2N − 1))) < ρ < 1/(2 cos(π/(2N − 1))) ensures that Rk is positive-definite. Moreover, the process noise ωk follows N(0, Qk) with Qk = γ · diag([12, 4]), the initial state x0 is generated from a Gaussian prior with covariance P(0|0) = diag([100, 25]), and ωk and vk are mutually uncorrelated at each instant k.
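The joint observation noise covariance above is a tridiagonal Toeplitz matrix; a small helper (illustrative name, assuming main diagonal σ2 and first off-diagonals ρσ2 as in Section 4.3) that builds it and lets one check positive-definiteness under the stated bound on ρ:

```python
import numpy as np
from scipy.linalg import toeplitz

def joint_obs_cov(N, sigma2, rho):
    """Tridiagonal Toeplitz covariance for the joint observation noise:
    main diagonal sigma2, first off-diagonals rho*sigma2, zeros elsewhere."""
    first_col = np.zeros(N)
    first_col[0] = sigma2
    first_col[1] = rho * sigma2
    return toeplitz(first_col)
```

Any ρ strictly inside the stated interval keeps the smallest eigenvalue of the tridiagonal Toeplitz matrix positive, which is easy to verify numerically with `eigvalsh`.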
The local estimate is calculated by the standard Kalman filter at each sensor and then transmitted synchronously to the fusion center. The fusion center applies a specified fusion method to obtain the fused estimate and transmits it back to the local sensors. In order to evaluate the various fusion algorithms (the DCI, WDCI, FCI, KLA, DDF, GP and GPA) under different cross-correlations among local estimation errors, different numbers of sensors, different measurement matrices, and different Qk and Rk, we respectively vary ρ, γ, N and σ2, and compare the averaged root mean squared errors (ARMSEs) of the position and the velocity over 100 time steps and 500 Monte Carlo runs. Specifically, four cases are considered:
- Case I: Fix γ = 1, N = 3 and σ2 = 10, and the correlation coefficient ρ varies from −0.6 to 0.6 with step 0.1.
- Case II: Fix ρ = 0.1, N = 3 and σ2 = 10, and γ varies from 1 to 6 with step 1.
- Case III: Fix ρ = 0.1, γ = 1 and σ2 = 10, and the number N of sensors varies from 2 to 7.
- Case IV: Fix ρ = 0.1, γ = 1 and N = 3, and σ2 varies from 5 to 30 with step 5.
Figs 2 and 3(A)–3(D) illustrate the fusion performance of all compared fusion methods as the correlation coefficient, the process noise, the number of sensors and the measurement noise increase, respectively. It is evident that the proposed fusion algorithm GPA consistently achieves lower ARMSEs of the position and the velocity than the other fusion methods in all four cases. We attribute this to utilizing the geometric and algebraic structures on the Gaussian manifold to fuse local estimates. Note that the WDCI in Figs 2(A) and 3(A) has performance very close to the GPA only when each pair of local estimates is uncorrelated (i.e., ρ = 0), and otherwise performs poorly. Also, the total (100 steps, 500 Monte Carlo runs, ρ = 0.1) computation time of the compared fusion algorithms in Case I is reported in Table 1, which demonstrates that the proposed GPA has a low computation cost.
(A) Case I with ρ ∈ [−0.6, 0.6]; (B) Case II with γ ∈ {1, 2, …, 6}; (C) Case III with N ∈ {2, 3, …, 7}; (D) Case IV with σ2 ∈ {5, 10, …, 30}.
(A) Case I with ρ ∈ [−0.6, 0.6]; (B) Case II with γ ∈ {1, 2, …, 6}; (C) Case III with N ∈ {2, 3, …, 7}; (D) Case IV with σ2 ∈ {5, 10, …, 30}.
4.3 Nonlinear dynamic system
As in [35], we consider a two-dimensional dynamic system with the linear motion model

(45)

and the nonlinear observation model

(46)

where Fk = diag([1, 1]) and the state xk = [xk, yk]T represents the position of the target moving in the xy-plane. The joint observation noise vk follows N(0, Rk), and the Toeplitz matrix Rk has main diagonal σ21N and two sub-diagonals ρσ21N−1. Also, the process noise ωk follows the Gaussian distribution N(0, Qk) with Qk = γ · diag([2, 1]), the initial state x0 is generated from a Gaussian prior with covariance P(0|0) = diag([16, 9]), and ωk and vk are mutually uncorrelated at each instant k.
The local estimate is calculated by the unscented Kalman filter [36] and then transmitted synchronously to the fusion center. As in the linear Gaussian system (42)–(44), we respectively vary ρ, γ, N and σ2, and compare the ARMSEs of the state estimate over 100 time steps and 500 Monte Carlo runs. Specifically, four cases are considered:
- Case I: Fix γ = 1, N = 3 and σ2 = 4, and the correlation coefficient ρ varies from −0.7 to 0.7 with step 0.1.
- Case II: Fix ρ = −0.1, N = 3 and σ2 = 4, and γ varies from 1 to 6 with step 1.
- Case III: Fix ρ = −0.1, γ = 1 and σ2 = 4, and the number N of sensors varies from 2 to 7.
- Case IV: Fix ρ = −0.1, γ = 1 and N = 3, and σ2 varies from 2 to 12 with step 2.
From the comparisons of the ARMSEs of position estimates in Fig 4, we can find that the proposed fusion algorithm GPA is consistently better than the other fusion methods in four different cases, and the KLA, FCI, DCI, and WDCI have nearly the same performance. Moreover, Table 2 reports the total (100 steps, 500 Monte Carlo runs, ρ = −0.1) computation time of all compared fusion algorithms in Case I, indicating the low computation cost of the GPA. It is worth noting that the DDF is not shown in Fig 4 owing to its very poor performance and high computation cost.
(A) Case I with ρ ∈ [−0.7, 0.7]; (B) Case II with γ ∈ {1, 2, …, 6}; (C) Case III with N ∈ {2, 3, …, 7}; (D) Case IV with σ2 ∈ {2, 4, …, 12}.
4.4 Vehicle dynamic model
Consider a three-degree-of-freedom (TDF) model (see, e.g., [37]) describing the coupled dynamics of a preceding vehicle, tracked by one fusion center and N sensors, whose state equations can be formulated as
(47)
(48)
(49)
and measurement equation as
(50)
where the three-dimensional state vector consists of the yaw rate r, the sideslip angle β, and the longitudinal velocity vx, and the lateral acceleration ay is the measurement output. The simulation test environment in CarSim 2016 is set to a typical sinusoidal steering condition with a friction coefficient of 0.85, the longitudinal acceleration ax is taken from the CarSim output, and the curve of the front-wheel steering angle δ is depicted in Fig 5(A) with an amplitude of 0.1744 rad and a period of 2 s. The other parameters in this model are as follows: the distance from the center of gravity to the front axle a = 1.066 m, the distance from the center of gravity to the rear axle b = 1.544 m, the vehicle mass m = 1458.4 kg, the cornering stiffness of the front axle k1 = −10 × 104 N/rad, the cornering stiffness of the rear axle k2 = −11 × 104 N/rad, and the moment of inertia around the Z axis Iz = 2768 kg ⋅ m2.
To estimate the vehicle state vector, the TDF model can be transformed into the following discrete-time state-space model
(51)
(52)
where the sampling period Δt = 0.02 s. The joint observation noise νk follows the Gaussian distribution N(0, Rk), and the Toeplitz matrix Rk has main diagonal 1N and two sub-diagonals ρ1N−1. Also, the process noise ωk follows N(0, Qk) with Qk = diag([π/180, π/180, 0.1]), the initial state x0 is generated from a Gaussian prior with covariance P(0|0) = diag([1, 1, 1]), and ωk and νk are mutually uncorrelated at each instant k.
The local estimate is calculated by the cubature Kalman filter [38] and then transmitted synchronously to the fusion center. We vary the correlation coefficient ρ ∈ [−0.7, 0.7] with step 0.1 and compare the ARMSEs of the state estimates over 500 time steps and 100 Monte Carlo runs. From the comparisons of the ARMSEs in Fig 5(B)–5(D), we find that the proposed fusion algorithm GPA is consistently better than the other fusion methods, that the KLA, FCI, DCI and WDCI have nearly the same performance, and that the GP behaves worst. Moreover, Table 3 reports the total (500 steps, 100 Monte Carlo runs, ρ = −0.1) computation time of all compared fusion algorithms, indicating the low computation cost of the GPA.
5 Conclusion
In this work, we propose a distributed estimation fusion method, the GPA, under Gaussian assumptions. On the Gaussian submanifold with a fixed mean and a specified Lie algebraic structure, the GPA method fuses the geodesic projections of posterior PDFs in the sense of minimizing the norm of the average displacement vector between a sought-for fused density and these projections. Simulation examples have illustrated that the GPA outperforms several existing fusion methods. This shows the significance of introducing the geodesic projection and the Lie algebraic setting into distributed estimation fusion.
Acknowledgments
We thank the anonymous reviewers for their suggestions on improving the quality of this paper.
References
- 1. Liggins ME, Hall DL, Llinas J, editors. Handbook of Multisensor Data Fusion: Theory and Practice. 2nd ed. Boca Raton, FL: CRC Press; 2009.
- 2. Salcedo-Sanz S, Ghamisi P, Piles M, Werner M, Cuadra L, Moreno-Martínez A, et al. Machine learning information fusion in earth observation: A comprehensive review of methods, applications and data sources. Information Fusion. 2020; 63: 256–272.
- 3. Shafran-Nathan R, Etzion Y, Broday DM. Fusion of land use regression modeling output and wireless distributed sensor network measurements into a high spatiotemporally-resolved NO2 product. Environmental Pollution. 2021; 271: 116334. pmid:33388684
- 4. Li XR, Zhu Y, Wang J, Han C. Optimal linear estimation fusion—Part I: Unified fusion rules. IEEE Transactions on Information Theory. 2003; 49(9): 2192–2208.
- 5. Li G, Battistelli G, Yi W, Kong L. Distributed multi-sensor multi-view fusion based on generalized covariance intersection. Signal Processing. 2020; 166: 107246.
- 6. Julier S, Uhlmann J. A non-divergent estimation algorithm in the presence of unknown correlations. In: Proceedings of the 1997 American Control Conference. Albuquerque, NM, USA; 1997. pp. 2369–2373.
- 7. Reinhardt M, Noack B, Arambel PO, Hanebeck UD. Minimum Covariance Bounds for the Fusion under Unknown Correlations. IEEE Signal Processing Letters. 2015; 22(9): 1210–1214.
- 8. Hurley MB. An information theoretic justification for covariance intersection and its generalization. In: Proceedings of the 5th International Conference on Information Fusion. Annapolis, MD, USA; 2002. pp. 505–511.
- 9. Liu J, Hao G. The covariance intersection fusion estimation algorithm weighted by diagonal matrix based on genetic simulated annealing algorithm and machine learning. Asian Journal of Control. 2022; 25(2): 1448–1463.
- 10. Tang J, Zhou J, Rong Y. Estimation fusion for distributed multi-sensor systems with uncertain cross-correlations. International Journal of Systems Science. 2019; 50(7): 1378–1387.
- 11. Wang Y, Li XR. Distributed estimation fusion with unavailable cross-correlation. IEEE Transactions on Aerospace Electronic Systems. 2012; 48(1): 259–278.
- 12. Battistelli G, Chisci L. Kullback–Leibler average, consensus on probability densities, and distributed state estimation with guaranteed stability. Automatica. 2014; 50(3): 707–718.
- 13. Julier S, Bailey T, Uhlmann J. Using exponential mixture models for suboptimal distributed data fusion. In: Nonlinear Statistical Signal Processing Workshop. Cambridge, UK; 2006. pp. 160–163.
- 14. Qu X, Zhou J, Song E, Zhu Y. Minimax robust optimal estimation fusion in distributed multisensor systems with uncertainties. IEEE Signal Processing Letters. 2010; 17(9): 811–814.
- 15. Duan Z, Li XR, Hanebeck UD. Multi-sensor distributed estimation fusion using minimum distance sum. In: 17th International Conference on Information Fusion. Salamanca, Spain; 2014. pp. 1–8.
- 16. Takatsu A. Wasserstein geometry of Gaussian measures. Osaka Journal of Mathematics. 2011; 48(4): 1005–1026.
- 17. Puccetti G, Rüschendorf L, Vanduffel S. On the computation of Wasserstein barycenters. Journal of Multivariate Analysis. 2020; 176: 104581.
- 18. Eldar YC, Beck A, Teboulle M. A minimax Chebyshev estimator for bounded error estimation. IEEE Transactions on Signal Processing. 2008; 56(4): 1388–1397.
- 19. Costa SIR, Santos SA, Strapasson JE. Fisher information distance: A geometrical reading. Discrete Applied Mathematics. 2015; 197: 59–69.
- 20. Rong Y, Tang M, Zhou J. Intrinsic losses based on information geometry and their applications. Entropy. 2017; 19(8): e19080405.
- 21. Chen X, Zhou J, Hu S. Upper bounds for Rao distance on the manifold of multivariate elliptical distributions. Automatica. 2021; 129: 109604.
- 22. Tang M, Rong Y, Zhou J, Li XR. Information geometric approach to multisensor estimation fusion. IEEE Transactions on Signal Processing. 2019; 67(2): 279–292.
- 23. Absil PA, Mahony R, Sepulchre R. Optimization Algorithms on Matrix Manifolds. Princeton, NJ, USA: Princeton University Press; 2009.
- 24. Jost J. Riemannian Geometry and Geometric Analysis. 6th ed. Heidelberg, Germany: Springer; 2011.
- 25. Rong Y, Tang M, Chen X, Zhou J. Correction to “Information Geometric Approach to Multisensor Estimation Fusion”. IEEE Transactions on Signal Processing. 2021; 69: 4556–4556.
- 26. Calvo M, Oller JM. A distance between multivariate normal distributions based in an embedding into the Siegel group. Journal of Multivariate Analysis. 1990; 35(2): 223–242.
- 27. Fiori S, Tanaka T. An Algorithm to Compute Averages on Matrix Lie Groups. IEEE Transactions on Signal Processing. 2009; 57(12): 4734–4743.
- 28. Arsigny V, Fillard P, Pennec X, Ayache N. Geometric means in a novel vector space structure on symmetric positive-definite matrices. Siam Journal on Matrix Analysis and Applications. 2007; 29(1): 328–347.
- 29. Baker A. Matrix Groups: An Introduction to Lie Group Theory. London, UK: Springer-Verlag; 2002.
- 30. Sherman J, Morrison WJ. Adjustment of an Inverse matrix corresponding to changes in the elements of a given column or a given row in the original matrix. Bulletin of the American Mathematical Society. 1949; 55(11): 1077–1078.
- 31. Cowin SC, Yang G. Averaging anisotropic elastic constant data. Journal of Elasticity. 1997; 46(2): 151–180.
- 32. Li P, Wang Q, Zeng H, Zhang L. Local log-Euclidean multivariate Gaussian descriptor and its application to image classification. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2017; 39(4): 803–817.
- 33. Fillard P, Arsigny V, Pennec X, Hayashi KM, Thompson PM, Ayache N. Measuring brain variability by extrapolating sparse tensor fields measured on sulcal lines. Neuroimage. 2007; 34(2): 639–650. pmid:17113311
- 34. Basser PJ, Mattiello J, Lebihan D. MR diffusion tensor spectroscopy and imaging. Biophysical Journal. 1994; 66(1): 259–267. pmid:8130344
- 35. Hu S, Guo L, Zhou J. An iterative nonlinear filter based on posterior distribution approximation via penalized Kullback–Leibler divergence minimization. IEEE Signal Processing Letters. 2022; 29: 1137–1141.
- 36. Julier S, Uhlmann J, Durrant-Whyte HF. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Transactions on Automatic Control. 2000; 45(3): 477–482.
- 37. Wang Y, Yan Y, Shen T, Bai S, Hu J, Xu L, et al. An event-triggered scheme for state estimation of preceding vehicles under connected vehicle environment. IEEE Transactions on Intelligent Vehicles. 2023; 8(1): 583–593.
- 38. Jia B, Xin M, Cheng Y. High-degree cubature Kalman filter. Automatica. 2013; 49(2): 510–518.