Article

Non-Linear Fusion of Observations Provided by Two Sensors

Laboratoire d'Informatique, Signal et Image de la Côte d'Opale (LISIC), Univ Lille Nord de France, Université du Littoral Côte d'Opale (ULCO), 50 rue Ferdinand Buisson, BP719, 62228 Calais Cedex, France
* Author to whom correspondence should be addressed.
Entropy 2013, 15(7), 2698-2715; https://doi.org/10.3390/e15072698
Submission received: 22 April 2013 / Revised: 5 June 2013 / Accepted: 8 June 2013 / Published: 11 July 2013

Abstract

When we try to make the best estimate of some quantity, the problem of combining results from different experiments is encountered. In multi-sensor data fusion, this problem is seen as combining observations provided by different sensors. Sensors provide observations of an unknown quantity, and these observations can differ in precision. We propose a combined estimate that uses prior information. We consider the simplest setting of the problem, in which two sensors provide observations of the same quantity. The standard error of the observations is assumed to be known. The prior information is an interval that bounds the parameter to be estimated. We derive the proposed combined estimate, and we show its efficiency in the minimum mean square sense. The proposed combined estimate is assessed on synthetic data, and an application is presented.

1. Introduction

The problem of making a combined estimate as a weighted mean with weights inversely proportional to the variance was discussed previously in [1]. Research in this area aims to derive a combined estimate when the observations are partially consistent. For example, suppose two or more sensors are used to provide observations of a common response variable. The sensors measure the same parameter, but their measurements differ in precision. The observations are partially consistent, because observations obtained with the same sensor are assumed to be consistent, while observations provided by different sensors are assumed to be non-consistent. We present in Figure 1 the case of two sensors. To make inferences about the common parameter, we wish to make use of the combined sample data. These sensors are assumed to have no systematic biases, but to differ in precision. This is the case, for example, in Global Navigation Satellite System (GNSS) applications, where the same position can be observed with signals broadcast on different frequency bands and by different satellite systems. Heteroscedasticity is the key feature of the combined sample data. Research in this domain focuses on the joint definition of the weights and of the combined estimators. In [2], the efficiency of the weighted mean with estimated variances was studied. The variance of the weighted regression was derived in [3]. An estimator that is more efficient than the maximum likelihood estimate was developed in [4]. Further references and proposed estimators can be found in [5,6].
Figure 1. Heteroscedasticity framework in information fusion.
Most multi-sensor data fusion systems assume that the variances of the processes are known (or estimated from the processes, with the estimation error not taken into account in the definition of the weights), and classical Bayesian estimation is used to design the fusion operators. When the measurements are independent, the least squares (LS) estimate and the maximum likelihood (ML) estimate coincide. This classical fusion operator is the weighted mean with weights inversely proportional to the variance [7]. For dependent or correlated measurements, the weights are defined by the variances and covariance of the observations. They are the solution of a set of linear equations that link the variance of the fusion operator to the variances and covariance of the measurements [8]. Unfortunately, in practice, the covariance is difficult to estimate, and few applications use the optimal fusion operator in this case [9].
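To make these linear operators concrete, the following Python sketch (an illustration under the notation above; the function name and example values are assumptions) computes the minimum-variance fusion of two scalar measurements. For a zero covariance it reduces to the inverse-variance weighted mean of [7]; for a non-zero covariance it uses the closed-form solution of the linear equations for the two-sensor case mentioned in [8].

```python
import numpy as np

def fuse_two_correlated(z1, z2, var1, var2, cov12):
    """Minimum-variance linear fusion of two scalar measurements.

    Solves for the weights w1 + w2 = 1 that minimise
    Var(w1*z1 + w2*z2) = w1^2*var1 + w2^2*var2 + 2*w1*w2*cov12.
    For cov12 = 0 this reduces to the inverse-variance weighted mean.
    """
    denom = var1 + var2 - 2.0 * cov12
    w1 = (var2 - cov12) / denom
    w2 = 1.0 - w1
    fused = w1 * z1 + w2 * z2
    fused_var = w1**2 * var1 + w2**2 * var2 + 2.0 * w1 * w2 * cov12
    return fused, fused_var

# Example: independent measurements with variances 1 and 2 (cf. Table 1):
# the weights are 2/3 and 1/3 and the fused variance is 2/3.
print(fuse_two_correlated(4.1, 3.8, 1.0, 2.0, 0.0))
```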
Maximum a posteriori (MAP) estimators use prior information to estimate a quantity. The Bayesian modeling of a problem allows one to integrate prior information on the data to be fused [10]. The a posteriori distribution that models the problem is defined by the product of the a priori law and the likelihood of the multi-sensor data. The prior information can be an a priori law on the parameter to be estimated [11,12] or on the mutual behavior of the processes [13,14]. In the sequential Bayesian inference framework, the prior law models the information brought by the dynamic state equation of the Bayesian filter [15,16]. In all these cases, the fusion operator that merges the prior information with the observations is a weighted sum.
Some multi-sensor data fusion systems defined in the sequential Bayesian inference framework have to combine distributed estimates. Let us consider estimates of the same state provided by different filters working in parallel. This is the case for filters having different observations provided by sensors of different kinds. It is shown in [17] that for two Kalman filters working in parallel, the process noise introduces a correlation between the estimated states. The Track-to-Track fusion filter proposed in [17] uses a weighted sum to fuse the correlated estimates. The weights are, in this case, functions of the variances and covariance of the estimated states. In the proposed filter, a recursive expression of the estimated state covariance is derived in order to compute the weighted sum. Many different implementations of the Track-to-Track fusion filter can be found in the literature [18]. The extension of the fusion technique to N filters in parallel was proposed in [19], but the filter is difficult to implement in practice. Finally, a sub-optimal version of the filter proposed in [8] solves the problem of covariance estimation and provides a solution that is easy to implement with good accuracy.
In the present paper, we consider a simple instance of the problem: two sensors, i ∈ {1, 2}, and observations z^(i) distributed according to a Gaussian distribution of mean m and variance σ_i^2. We propose a non-linear estimate that uses prior information in order to combine the observations. This information is an approximate value of m, denoted m_a, assumed to be distributed according to a Gaussian law of variance σ_3^2. Furthermore, m is assumed to be inside the interval [m_a − σ_3, m_a + σ_3]. It is shown in [10] that the classical MAP estimate is a weighted sum of the observations and of the prior information. We show that the proposed fusion operator outperforms the MAP estimate in the minimum quadratic error sense. The proposed fusion operator is assessed for a multi-sensor system that estimates the position of an object. The multi-sensor system is composed of a multi-band GNSS receiver, a yaw rate sensor and a speed sensor. In this case, the data fusion system has to combine the estimated positions obtained for each band of the GNSS receiver. The measurements of direction and velocity provided by the dead reckoning sensors are the prior information on the parameter to estimate.
The rest of the paper is organized as follows: In Section 2, we state the problem. The fusion operator is presented in Section 3. Section 4 is dedicated to the experiments. We conclude in Section 5.

2. Problem Statement

Let us consider Z^(1) and Z^(2), two random variables, and z_k^(1) and z_k^(2), their realizations at instant k. We consider the case of two sensors providing the two measurements z_k^(1) and z_k^(2). The random variables Z^(1) and Z^(2) are assumed to be independent with the same mean, m, and distributed according to the Gaussian laws f_1(·) and f_2(·). The variances of these random variables are, respectively, σ_1^2 and σ_2^2. The maximum likelihood estimation (MLE) fusion operator is the following weighted sum:
$z_{k,\mathrm{MLE}} = \alpha_1 z_k^{(1)} + \alpha_2 z_k^{(2)} \quad \text{with} \quad \alpha_1 + \alpha_2 = 1$   (1)
The MAP fusion operator is the following weighted sum:
$z_{k,\mathrm{MAP}} = \alpha_1 z_k^{(1)} + \alpha_2 z_k^{(2)} + \alpha_3 m_a \quad \text{with} \quad \alpha_1 + \alpha_2 + \alpha_3 = 1$   (2)
where m_a is the imprecise prior information, assumed to be distributed according to a Gaussian law f_3(·) of variance σ_3^2 and mean m. The fused measurement at instant k is z_{k,MAP}, the realization of the random variable Z_MAP. The weights are defined as follows:
$\alpha_i = \dfrac{1/\sigma_i^2}{\sum_{j=1}^{3} 1/\sigma_j^2}$   (3)
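A minimal sketch of the MAP weighted sum of Equations (2) and (3) is given below (an illustration; the function name and example values are assumptions).

```python
import numpy as np

def map_fusion(z1, z2, m_a, var1, var2, var3):
    """MAP fusion of two observations and the prior value m_a (Equations (2)-(3)).

    The weights alpha_i are inversely proportional to the variances and sum to one.
    """
    inv = np.array([1.0 / var1, 1.0 / var2, 1.0 / var3])
    alpha = inv / inv.sum()
    z_map = alpha[0] * z1 + alpha[1] * z2 + alpha[2] * m_a
    return z_map, alpha

# Example with sigma_1^2 = 1, sigma_2^2 = 2 and a prior of variance sigma_3^2 = 1
z_map, alpha = map_fusion(z1=4.3, z2=3.6, m_a=4.05, var1=1.0, var2=2.0, var3=1.0)
```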
We can distinguish two kinds of realizations of the random variable pair (Z^(1), Z^(2)):
  • Realizations (Z^(n1), Z^(n2)) situated on both sides of the mean m, such that z_k^(1) < m and z_k^(2) > m, or z_k^(1) > m and z_k^(2) < m.
  • Realizations (Z^(p1), Z^(p2)) situated on the same side of the mean m, such that z_k^(1) < m and z_k^(2) < m, or z_k^(1) > m and z_k^(2) > m.
We show in Appendix 1 that the realizations on the same side of the mean are positively correlated, while the realizations on both sides of the mean are negatively correlated. In order to improve the MAP fusion operator (decrease its output variance), we propose to reduce the covariance between the random variables Z^(1) and Z^(2) by decreasing the covariance between Z^(p1) and Z^(p2). Two problems arise: the selection of the realizations of Z^(p1) and Z^(p2) among the realizations of Z^(1) and Z^(2), and the definition of a transformation that reduces the covariance of Z^(p1) and Z^(p2). The proposed fusion operator is non-linear, because it requires a selection stage.

3. Non-Linear Fusion Operator

3.1. The Non-Linear Transformation

3.1.1. Definition of the Non-Linear Transformation

Let z_{k,NLF} be the non-linear fusion (NLF) of z_k^(1) and z_k^(2). We propose to define the non-linear MAP fusion operator, for σ_2 > σ_1, as follows:
  • If z_k^(1) > B_u and z_k^(2) > B_u:
    $z_{k,\mathrm{NLF}} = \alpha_1 z_k^{(1)} + \alpha_2 z_k^{(2)} - g(z_k^{(1)}, z_k^{(2)}) + \alpha_3 m_a$   (4)
  • If z_k^(1) < B_d and z_k^(2) < B_d:
    $z_{k,\mathrm{NLF}} = \alpha_1 z_k^{(1)} + \alpha_2 z_k^{(2)} + g(z_k^{(1)}, z_k^{(2)}) + \alpha_3 m_a$   (5)
  • otherwise (the observations do not both lie below B_d or both lie above B_u):
    $z_{k,\mathrm{NLF}} = \alpha_1 z_k^{(1)} + \alpha_2 z_k^{(2)} + \alpha_3 m_a, \quad \text{i.e.,} \ g(z_k^{(1)}, z_k^{(2)}) = 0$   (6)
where B_d = m_a − σ_3 is the lower bound and B_u = m_a + σ_3 is the upper bound; they are derived from the prior information and used to select the realizations of Z^(p1) and Z^(p2). In these expressions, g(z_k^(1), z_k^(2)) is a function that reduces the correlation between Z^(p1) and Z^(p2). The fusion operator can be rewritten as follows:
$z_{k,\mathrm{NLF}} = \alpha_1 z_k^{(1)} + \alpha_2 z_k^{(2)} \pm g(z_k^{(1)}, z_k^{(2)}) + \alpha_3 m_a$
where the weights are defined by Equation (3). The non-linear fusion operator is biased: the upper and lower bounds are indeed not necessarily centered on the mean value m. The mean square error (MSE) takes into account both the variance and the bias of an estimator. We derive in the next paragraph a condition on the proposed non-linear transformation that guarantees a smaller MSE for the proposed operator.
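The selection stage can be written compactly as in the sketch below. This is an illustration of Equations (4)-(6) in which the correction function g is left abstract (it is specified in Section 3.2.2); the function name is an assumption.

```python
def nlf_fusion(z1, z2, m_a, alpha, B_d, B_u, g):
    """Non-linear fusion operator of Equations (4)-(6).

    alpha = (alpha_1, alpha_2, alpha_3) are the MAP weights of Equation (3);
    g is the correction function defined in Section 3.2.2.  Only observation
    pairs falling on the same side of the prior interval [B_d, B_u] are modified.
    """
    z_map = alpha[0] * z1 + alpha[1] * z2 + alpha[2] * m_a
    if z1 > B_u and z2 > B_u:      # both above the interval: subtract the correction
        return z_map - g(z1, z2)
    if z1 < B_d and z2 < B_d:      # both below the interval: add the correction
        return z_map + g(z1, z2)
    return z_map                   # otherwise: plain MAP fusion (g = 0)
```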

3.1.2. A Condition on the Non-Linear Transformation

Let us define the following notation for the fused measurements:
  • If z_k^(1) > B_u and z_k^(2) > B_u, or z_k^(1) < B_d and z_k^(2) < B_d, we denote by MSE(Z_NLF^(p)) and MSE(Z_MAP^(p)) the respective MSE of the non-linear fusion operator output and of the linear fusion operator output.
  • Otherwise, we denote by MSE(Z_NLF^(n)) and MSE(Z_MAP^(n)) the respective MSE of the non-linear fusion operator and of the linear fusion operator.
Let us define the following expressions:
$MSE(Z_{\mathrm{MAP}}) = P \, MSE(Z_{\mathrm{MAP}}^{(p)}) + N \, MSE(Z_{\mathrm{MAP}}^{(n)})$
$MSE(Z_{\mathrm{NLF}}) = P \, MSE(Z_{\mathrm{NLF}}^{(p)}) + N \, MSE(Z_{\mathrm{NLF}}^{(n)})$
where the coefficients of normalization, P and N, are defined as follows:
$P = \int_{-\infty}^{B_d}\!\!\int_{-\infty}^{B_d} f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)} + \int_{B_u}^{+\infty}\!\!\int_{B_u}^{+\infty} f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)}$
$N = \iint_{\{(z^{(1)},\, z^{(2)})\ \text{not both below}\ B_d\ \text{and not both above}\ B_u\}} f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)} = 1 - P$
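Since Z^(1) and Z^(2) are independent, P and N can be evaluated directly from Gaussian CDFs, as in the sketch below (an illustration; the function name and example values are assumptions).

```python
from scipy.stats import norm

def selection_probabilities(m, sigma1, sigma2, B_d, B_u):
    """Normalisation coefficients P and N of the two selection regions.

    The double integrals factor into products of Gaussian CDFs because the
    observations are independent; N simply covers the complementary region.
    """
    F1, F2 = norm(m, sigma1).cdf, norm(m, sigma2).cdf
    P = F1(B_d) * F2(B_d) + (1.0 - F1(B_u)) * (1.0 - F2(B_u))
    return P, 1.0 - P

# Example with m = 4, sigma_1 = 1, sigma_2 = sqrt(2) and the interval [3.5, 4.5]
print(selection_probabilities(4.0, 1.0, 2.0 ** 0.5, 3.5, 4.5))
```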
We show in Appendix 2 that the quadratic error MSE(Z_NLF) will be smaller than the quadratic error MSE(Z_MAP) if the following inequality holds:
$\alpha_2^2 E[\pm g(Z^{(p1)}, Z^{(p2)})^2] + 2\alpha_1\alpha_2 E[Z^{(p1)} (\pm g(Z^{(p1)}, Z^{(p2)}))] + 2\alpha_2^2 E[Z^{(p2)} (\pm g(Z^{(p1)}, Z^{(p2)}))] + (2\alpha_2\alpha_3 m_a - 2 m \alpha_2)\, E[\pm g(Z^{(p1)}, Z^{(p2)})] \le 0$   (10)
The terms of this equation are defined by:
Term 1: 
$E[\pm g(Z^{(p1)}, Z^{(p2)})^2] = \dfrac{N_{E[\pm g^2]}}{D}$
$N_{E[\pm g^2]} = \int_{-\infty}^{B_d}\!\!\int_{-\infty}^{B_d} g(z^{(1)}, z^{(2)})^2 f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)} - \int_{B_u}^{+\infty}\!\!\int_{B_u}^{+\infty} g(z^{(1)}, z^{(2)})^2 f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)}$
$D = \int_{-\infty}^{m}\!\!\int_{-\infty}^{m} f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)} + \int_{m}^{+\infty}\!\!\int_{m}^{+\infty} f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)}$
Term 2: 
$E[Z^{(p1)} (\pm g(Z^{(p1)}, Z^{(p2)}))] = \dfrac{N_{E[Z^{(p1)} \pm g]}}{D}$
$N_{E[Z^{(p1)} \pm g]} = \int_{-\infty}^{B_d}\!\!\int_{-\infty}^{B_d} z^{(1)} g(z^{(1)}, z^{(2)}) f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)} - \int_{B_u}^{+\infty}\!\!\int_{B_u}^{+\infty} z^{(1)} g(z^{(1)}, z^{(2)}) f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)}$
Term 3: 
$E[Z^{(p2)} (\pm g(Z^{(p1)}, Z^{(p2)}))] = \dfrac{N_{E[Z^{(p2)} \pm g]}}{D}$
$N_{E[Z^{(p2)} \pm g]} = \int_{-\infty}^{B_d}\!\!\int_{-\infty}^{B_d} z^{(2)} g(z^{(1)}, z^{(2)}) f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)} - \int_{B_u}^{+\infty}\!\!\int_{B_u}^{+\infty} z^{(2)} g(z^{(1)}, z^{(2)}) f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)}$
Term 4: 
$E[\pm g(Z^{(p1)}, Z^{(p2)})] = \dfrac{N_{E[\pm g]}}{D}$
$N_{E[\pm g]} = \int_{-\infty}^{B_d}\!\!\int_{-\infty}^{B_d} g(z^{(1)}, z^{(2)}) f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)} - \int_{B_u}^{+\infty}\!\!\int_{B_u}^{+\infty} g(z^{(1)}, z^{(2)}) f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)}$
where, in Terms 1–4, the difference between the two integrals is due to the sign of ±g(·,·), as defined in Equations (4) and (5).
The condition of Equation (10) depends on the variance of the fused random variables, on the interval size and on its eccentricity, (B_u + B_d)/2 − m. We show in Figure 2 several realizations of Z^(1) and Z^(2), an interval and its associated eccentricity. We highlight in this figure the realizations that will be modified by the non-linear transformation, namely the observations that satisfy z_k^(1), z_k^(2) < B_d or z_k^(1), z_k^(2) > B_u. In the next paragraph, we propose a function ±g(z_k^(1), z_k^(2)) that satisfies the inequality in Equation (10) for an interval that contains the parameter to be estimated.
Figure 2. Example of non-linear transformation.

3.2. Transformation Function

3.2.1. Conditions on the Transformation Function

The function ±g(z_k^(1), z_k^(2)) is defined heuristically. In the centered case, (B_u + B_d)/2 = m, we can state two conditions under which the inequality in Equation (10) holds:
  • Condition 1:
    $E[\pm g(Z^{(p1)}, Z^{(p2)})^2] = E[\pm g(Z^{(p1)}, Z^{(p2)})] = 0$
  • Condition 2:
    $E[Z^{(p1)} (\pm g(Z^{(p1)}, Z^{(p2)}))] < 0 \quad \text{and} \quad E[Z^{(p2)} (\pm g(Z^{(p1)}, Z^{(p2)}))] < 0$
These two conditions are verified for a positive function of the difference between the two measurements, g(z_k^(1) − z_k^(2)) > 0. For z_k^(1), z_k^(2) > B_u and z_k^(1), z_k^(2) < B_d, the function g(z_k^(1) − z_k^(2)) is indeed centered and symmetric, and Terms 1 and 4 are null in this case. Furthermore, if the function g(z_k^(1) − z_k^(2)) is positive, Terms 2 and 3 are negative, and the inequality in Equation (10) holds. In the next paragraph, we propose a non-linear function that matches these conditions, and we show, in the experiments, that the left-hand side of Equation (10) remains negative for this function, even if the interval is not centered on the parameter to be estimated.

3.2.2. Definition of the Transformation Function

The principle of the non-linear transformation is to add or subtract, respectively, a positive offset d > 0 when z_k^(1), z_k^(2) < B_d or z_k^(1), z_k^(2) > B_u. Let us consider the following distributions:
  • Let f_p(z_k^(p1) − z_k^(p2)) be the Gaussian distribution of the difference between the random variables Z^(p1) and Z^(p2). This distribution is zero-mean, and its variance is given by (σ_p)^2 = σ_1^2 + σ_2^2 − 2σ_12^p.
  • Let f_n(z_k^(n1) − z_k^(n2)) be the Gaussian distribution of the difference between the random variables Z^(n1) and Z^(n2). This distribution is zero-mean, and its variance is given by (σ_n)^2 = σ_1^2 + σ_2^2 − 2σ_12^n.
where σ_12^p and σ_12^n are obtained by numerical integration of Equations (15) and (16) of Appendix 1. The value of d is the solution of the following equality for z_k^(1), z_k^(2) < B_d or z_k^(1), z_k^(2) > B_u:
$f_n(d) = \dfrac{f_p(z_k^{(p1)} - z_k^{(p2)})}{C}$
with:
$C = \dfrac{f_p(0)}{f_n(0)} = \dfrac{\sigma_n}{\sigma_p}$
Let us define the following relation:
$\dfrac{1}{\sqrt{2\pi}\,\sigma_n} \exp\!\left(-\dfrac{1}{2}\dfrac{d^2}{\sigma_n^2}\right) = \dfrac{1}{C}\,\dfrac{1}{\sqrt{2\pi}\,\sigma_p} \exp\!\left(-\dfrac{1}{2}\dfrac{(z_k^{(p1)} - z_k^{(p2)})^2}{\sigma_p^2}\right)$
then, we can derive the following expression for d:
$d = \dfrac{\sigma_n}{\sigma_p}\,\left| z_k^{(1)} - z_k^{(2)} \right| \quad \text{for} \quad z_k^{(1)}, z_k^{(2)} < B_d \ \text{or} \ z_k^{(1)}, z_k^{(2)} > B_u$
We illustrate in Figure 3 the principle of the definition of d. The experiments presented in the next section show that the proposed non-linear function satisfies the inequality in Equation (10). We also assess the proposed fusion operator on synthetic data.
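The offset can be computed as in the sketch below (an illustration; the function name is an assumption). Here σ_p and σ_n, which the paper obtains by numerical integration of the Appendix 1 covariances, are simply passed in; plugging this offset into the nlf_fusion sketch of Section 3.1.1 as the function g completes the operator.

```python
def nlf_offset(z1, z2, sigma_p, sigma_n):
    """Offset d of the non-linear transformation (Section 3.2.2).

    sigma_p and sigma_n are the standard deviations of the difference
    z^(1) - z^(2) for same-side and opposite-side pairs, respectively.
    The offset grows with the disagreement between the two observations.
    """
    return (sigma_n / sigma_p) * abs(z1 - z2)

# Example: with sigma_n / sigma_p = 2, two observations 0.4 apart yield d = 0.8
g = lambda a, b: nlf_offset(a, b, sigma_p=1.0, sigma_n=2.0)
```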
Figure 3. Principle of the non-linear function.

4. Experimentation

In this section, we first verify the inequality in Equation (10) for the proposed non-linear function. The fusion operator is then assessed in a second experiment. Finally, in a third experiment, we demonstrate the feasibility of the proposed operator for the fusion of data provided by a multi-sensor system.

4.1. Verification of the Inequality

In order to show that the proposed non-linear fusion operator has an output MSE lower than that of the classical linear fusion operator, we compute the left-hand side of the inequality in Equation (10). This term is defined as a constraint on the non-linear function and is equal to MSE(Z_NLF) − Var(Z_MAP). The constraint is evaluated by numerical integration of Terms 1–4 defined in paragraph 3.1.2. We show that the proposed non-linear function satisfies the inequality in Equation (10), because the constraint is negative or null for all interval sizes and eccentricities.
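As an illustration of this numerical verification (ours, under stated assumptions), the sketch below evaluates Term 4 by numerical integration for the offset function of Section 3.2.2, with the ratio σ_n/σ_p passed as a parameter; Terms 1–3 follow the same pattern with the corresponding integrands.

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.stats import norm

def term4(m, sigma1, sigma2, B_d, B_u, ratio):
    """Term 4, E[+/- g], with g(z1, z2) = ratio * |z1 - z2| (ratio = sigma_n / sigma_p).

    The lower selection region enters with a + sign and the upper region with
    a - sign, following the sign convention of Equations (4) and (5).
    """
    f1, f2 = norm(m, sigma1).pdf, norm(m, sigma2).pdf
    integrand = lambda z2, z1: ratio * abs(z1 - z2) * f1(z1) * f2(z2)
    lower, _ = dblquad(integrand, -np.inf, B_d, -np.inf, B_d)
    upper, _ = dblquad(integrand, B_u, np.inf, B_u, np.inf)
    # D: probability that both observations fall on the same side of m (= 0.5 here)
    F1, F2 = norm(m, sigma1).cdf, norm(m, sigma2).cdf
    D = F1(m) * F2(m) + (1.0 - F1(m)) * (1.0 - F2(m))
    return (lower - upper) / D

# Example for the sigma_1 = sigma_2 = 3 case with an assumed ratio of 2
print(term4(m=4.0, sigma1=3.0, sigma2=3.0, B_d=3.0, B_u=5.0, ratio=2.0))
```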
We show in Figure 4 the constraint as a function of the interval length, B_u − B_d, and of the eccentricity, (B_u + B_d)/2 − m. The curves of Figure 4 are obtained for σ_1 = σ_2 and for σ_2 >> σ_1. When σ_2 >> σ_1, the fusion is less accurate, because the random variable Z^(2) brings little information on the parameter to estimate (so α_2 ≈ 0). In this case, the constraint is close to zero, so the proposed and the MAP fusion operators are similar. When σ_2 = σ_1 = 3, the constraint is negative for all interval sizes and values of eccentricity. We can notice that, for small interval sizes, the constraint is close to zero. In this case, α_3 ≈ 1, because the prior information is very accurate, and the fusion operators barely use the observations. In conclusion, the greater the difference between the variances of Z^(1) and Z^(2), the less beneficial the fusion. This conclusion is also valid for the classical fusion operator. However, the constraint is always negative, which shows that the proposed fusion operator is more accurate than the classical MAP fusion operator.
Figure 4. The constraint as a function of the interval and eccentricity.
We show in Figure 5 the normalized value of the constraint, (MSE(Z_NLF) − Var(Z_MAP)) / Var(Z_MAP), as a function of the interval size and of the eccentricity. The constraint is computed for two values of σ_1 = σ_2, and the normalization allows one to compare the two cases. We can observe in Figure 5 that, as the interval length and the eccentricity increase, the constraint tends towards zero more quickly for σ_1 = σ_2 = 1 than for σ_1 = σ_2 = 5. The probability that z_k^(1) and z_k^(2) both fall in a given selection region (below B_d or above B_u) is indeed higher for σ_1 = σ_2 = 5 than for σ_1 = σ_2 = 1. When this probability decreases, fewer observations are modified by the non-linear transformation, and the output variance of the non-linear fusion operator tends to the output variance of the MAP fusion operator.
We show in this experimentation that from a theoretical point of view, the proposed fusion operator is more accurate than the classical fusion operator, because the proposed non-linear function holds the inequality in Equation (10). In the next paragraph, we assess the proposed operator with synthetic data.
Figure 5. The constraint for two values of σ_1 = σ_2.

4.2. Assessment of the Fusion Operator

In this experiment, we compare the non-linear fusion operator with the classical ML and MAP fusion operators. We consider a constant signal of 1,000 samples. We show in Figure 6a a temporal series of Z^(1). The prior information is an interval, represented in this figure by two horizontal dotted lines.
To compare the operators, we estimate the MSE between the fused measurements and the parameter to be estimated. For the ML fusion operator, the MSE is equal to the variance, because the combined estimate is unbiased. We report in Table 1 the parameters of the signals and the theoretical output variance of the classical ML fusion operator.
In order to assess the MAP and the non-linear fusion operators, we process 1,000 realizations of the temporal series Z^(1) and Z^(2). For each experiment, we randomly generate the prior information m_a according to a Gaussian distribution of mean m and standard deviation σ_3. We report in Table 2, and show in Figure 6b, the MSE of the MAP and non-linear fusion operators as a function of σ_3.
We notice in Figure 6b and in Table 2 that the MSE of the proposed non-linear fusion operator is always less than or equal to the MSE of the MAP and LS fusion operators. When σ_3 is small, the prior information is accurate and α_3 tends to one. The MSE of the MAP estimate and of the non-linear fusion operator tends to zero. In this case, the MAP estimate and the non-linear fusion operator tend to coincide, because α_1 and α_2 tend to zero. When σ_3 is large, the prior information is inaccurate. The MSE of the MAP estimate and of the non-linear fusion operator tends to the MSE of the LS estimate. The three fusion operators tend to be identical, because α_3 is close to zero and the interval length B_u − B_d is large; then, no transformation is applied by the non-linear fusion operator.
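A Monte Carlo sketch of this assessment protocol is given below (our own reconstruction, under assumptions: the correction follows Equations (4)-(6) as written above, and σ_p, σ_n are estimated empirically rather than by numerical integration, so the figures only approximate Table 2).

```python
import numpy as np

rng = np.random.default_rng(0)
m, s1, s2 = 4.0, 1.0, np.sqrt(2.0)          # signal parameters of Table 1
n_real, n_samp = 1000, 1000

def run(s3):
    """Empirical MSE of the MAP and non-linear fusion operators for a given prior accuracy s3."""
    # Calibrate sigma_p / sigma_n from same-side and opposite-side pairs (Appendix 1)
    c1 = rng.normal(m, s1, 200000)
    c2 = rng.normal(m, s2, 200000)
    same = ((c1 > m) & (c2 > m)) | ((c1 < m) & (c2 < m))
    sigma_p, sigma_n = np.std((c1 - c2)[same]), np.std((c1 - c2)[~same])
    inv = np.array([1 / s1**2, 1 / s2**2, 1 / s3**2])
    a1, a2, a3 = inv / inv.sum()
    err_map, err_nlf = [], []
    for _ in range(n_real):
        z1 = rng.normal(m, s1, n_samp)
        z2 = rng.normal(m, s2, n_samp)
        m_a = rng.normal(m, s3)
        B_d, B_u = m_a - s3, m_a + s3
        z_map = a1 * z1 + a2 * z2 + a3 * m_a
        g = (sigma_n / sigma_p) * np.abs(z1 - z2)
        z_nlf = np.where((z1 > B_u) & (z2 > B_u), z_map - g,
                 np.where((z1 < B_d) & (z2 < B_d), z_map + g, z_map))
        err_map.append(np.mean((z_map - m)**2))
        err_nlf.append(np.mean((z_nlf - m)**2))
    return np.mean(err_map), np.mean(err_nlf)

print(run(1.0))   # compare with the sigma_3 = 1.0 row of Table 2
```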
Figure 6. Example of process and mean square error (MSE) of the proposed operator.
Table 1. Theoretical parameters of the signals.

                          Mean    MSE
First signal z^(1)        4       1
Second signal z^(2)       4       2
Linear fusion             4       0.666
The only difference between the MAP and the non-linear fusion operators is the observation weighted by α_2. In the non-linear fusion case, this observation is processed with a non-linear function in order to decrease the correlation between z_k^(1) and z_k^(2). This transformation therefore decreases the MSE of the fusion operator compared to the MSE of the MAP estimate. We notice in Figure 6b that, for σ_3 < 2, the difference between the MSEs increases when α_2 increases, that is, when σ_3 increases (α_3 decreases because the prior information is less accurate). However, for σ_3 > 2, the number of observations modified by the non-linear transformation decreases, and the difference between the MSEs decreases.
Table 2. MSE of the NL (non-linear) and maximum a posteriori (MAP) fusion operators.

Prior accuracy σ_3    MAP fusion MSE    NL fusion MSE
0.0                   0.0000            0.0000
0.1                   0.0030            0.0030
0.3                   0.0294            0.0274
0.5                   0.0876            0.0779
1.0                   0.2884            0.2566
1.4                   0.4106            0.3695
2.0                   0.5135            0.4711
3.0                   0.5902            0.5534
4.0                   0.6218            0.5909
5.0                   0.6369            0.6114

4.3. Multi-Sensor Estimation

In this experiment, we assess a multi-sensor fusion system that uses the proposed non-linear transformation. The system, mounted on a vehicle, is composed of a multi-band GNSS receiver, a speedometer and a magnetometer.
The multi-band GNSS receiver provides position measurements obtained in the GNSS L1 band and in the GNSS L2C band. The measurement rate is equal to 10 Hz, and we assume the standard positioning service, non-differential horizontal (latitude and longitude) positioning with Selective Availability off. The classical L1-band receiver simulated in this experiment has a four-meter root mean square (RMS) error (the error is the distance between the position and its estimate). For a measurement rate of 10 Hz, the estimated position is the average of 100 observations for the L1 band and of five observations for the L2C band: a receiver can indeed compute a position every millisecond at the L1 band and every 20 milliseconds at the L2C band. If the horizontal latitude and longitude errors were not correlated, the RMS error would be inversely proportional to the square root of the number of measurements. However, the errors are correlated, which causes the error of the average to decrease at a slower rate than if the errors were uncorrelated. It is a reasonable assumption to consider a doubled RMS error for the L2C band. A speedometer is installed in the vehicle. The uncertainty of the speedometer is equal to 5% of the vehicle speed. In this context, we assume a bias of 5% and an additive Gaussian noise on the measurements; the noise variance is equal to 0.1 m/s. Finally, we assume an additive Gaussian noise on the measurements of direction, with a variance equal to two degrees.
The plots in Figure 7 show the trajectory and the sensors' measurements. The left plot shows the trajectory and the observed positions provided by the GNSS receiver. The top right plot presents the measurements of direction provided by the magnetometer and the filtered directions obtained with a circular filter [20]. The bottom right plot presents the measurements of speed, V_x and V_y, along the x- and y-axes. These measurements are the projections of the velocity vectors computed from the observed noisy direction and speed (vector magnitude).
In our implementation, the upper and lower bounds of the interval used by the proposed non-linear transformation are defined at time t + Δt from the direction and speed measurements. The non-linear transformation is applied to each component, x and y, of the position. For the x component, the interval center is the estimated x component at time t plus the variation in position during Δt along the x-axis (V_x · Δt). The size of the interval is fixed to B_u − B_d = 0.9; this value is tuned in practice. The same method is used for the non-linear fusion of the y component, as sketched below.
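The sketch below illustrates the interval construction for one time step (our own illustration; the heading convention, axis projection and function name are assumptions).

```python
import numpy as np

def prediction_interval(prev_xy, speed, heading_deg, dt, length=0.9):
    """Prior interval bounds for the x and y position components at time t + dt.

    The interval centre is the previous position estimate advanced by the
    dead-reckoning displacement (speed projected on each axis over dt); the
    width corresponds to the fixed interval size B_u - B_d = 0.9 used here.
    """
    heading = np.deg2rad(heading_deg)
    vx, vy = speed * np.cos(heading), speed * np.sin(heading)
    cx, cy = prev_xy[0] + vx * dt, prev_xy[1] + vy * dt
    half = length / 2.0
    return (cx - half, cx + half), (cy - half, cy + half)

# Example: previous estimate (0, 0), 10 m/s at a 45 degree heading, 0.1 s GNSS update
bounds_x, bounds_y = prediction_interval((0.0, 0.0), 10.0, 45.0, 0.1)
```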
Figure 7. Trajectory and sensors' measurements.
We report in Table 3 the mean square error of the position along the x and y directions and the position RMS error. We report these statistics for the proposed non-linear fusion method, for the ML and MAP fusion operators and for the GPS L1 positions.
Table 3. Error of position.

                     MSE x     MSE y     RMS error
GPS L1               2.0549    2.0141    2.0169
ML fusion            1.3565    1.4039    1.6612
MAP fusion           0.2524    0.2556    0.7128
Non-linear fusion    0.2350    0.2365    0.6867
This experiment shows a possible application of the proposed combined estimate. The previous estimate and the measurements of displacement (provided by the dead reckoning sensors) are used to construct the prior information (the lower and upper bounds of the interval). In this case, the eccentricity is distributed according to a Gaussian law. As shown in the previous paragraph, the MSE of the proposed combined estimate is lower than that of the classical fusion operators. However, the difference between the MSE of the MAP estimate and that of the non-linear fusion operator is not as large as the one obtained in the assessment of the fusion operators. In this practical case, the true position is sometimes not contained in the interval defined by the upper and lower bounds, and the non-linear transformation then adds a bias that increases the MSE. This is the principal limitation of the proposed fusion operator.
The limitation of the non-linear fusion operator is due to the definition of an interval that should contain the parameter to be estimated. If the interval is small, the probability that the parameter lies outside the interval is high, and the MSE of the proposed method can exceed the MSE of the MAP estimate. If the interval is large, the proposed estimate is of little interest, because its MSE is close to the MSE of the LS estimate. There is therefore a trade-off between these two cases, and the size of the interval must, in practice, be tuned by the user in order to improve the estimation.

5. Conclusions

In this article, we propose a new combined estimate that uses a prior law. The proposed method combines observations provided by two sensors. The prior information is an interval that contains the quantity to estimate. The proposed fusion operator is more accurate, in the minimum mean square sense, than the classical ML and MAP fusion operators.
The proposed combined estimate is based on a non-linear transformation of the signals. The aim of this transformation is to decrease the covariance between the observations in order to decrease the output mean square error of the fusion operator. We define a condition under which a non-linear transformation decreases the covariance, and we derive a function and the associated non-linear transformation that satisfy this condition.
As expected from the theoretical derivation provided in the article, we show experimentally that the proposed non-linear combined estimate outperforms the classical fusion operators when the quantity to be estimated is in the interval. We propose an example of a practical implementation of the proposed combined estimate and show that the non-linear fusion improves the positioning of a multi-sensor system composed of a multi-band GNSS receiver and dead reckoning sensors.
Future work concerns the extension of the proposed method to more than two observations.

Conflict of Interest

The authors declare no conflict of interest.

References

  1. Cochran, W.G. Problems arising in the analysis of a series of similar experiments. J. R. Stat. Soc. 1937, 4, 102–118. [Google Scholar] [CrossRef]
  2. Cochran, W.G.; Carroll, S.P. A sampling investigation of the efficiency of weighting inversely as the estimated variance. Biometrics 1953, 9, 447–459. [Google Scholar] [CrossRef]
  3. Bement, T.R.; Williams, J.S. Variance of weighted regression estimators when sampling errors are independent and heteroscedastic. J. Am. Stat. Assoc. 1969, 64, 1369–1382. [Google Scholar] [CrossRef]
  4. Neyman, J.; Scott, E. Consistent estimators based on partially consistent observations. Econometrica 1948, 16, 1–32. [Google Scholar] [CrossRef]
  5. Rao, J.N.K. Estimating the common mean of possibly different normal populations: A simulation study. J. Am. Stat. Assoc. 1980, 75, 447–453. [Google Scholar] [CrossRef]
  6. Tsao, M.; Wu, C. Empirical likelihood inference for a common mean in the presence of heteroscedasticity. Can. J. Stat. 2006, 34, 45–59. [Google Scholar] [CrossRef]
  7. Gustafsson, F. Adaptive Filtering and Change Detection; Wiley: New York, NY, USA, 2001. [Google Scholar]
  8. Shin, V.; Lee, Y.; Choi, T.S. Generalized Millman’s formula and its application for estimation problems. Signal Process. 2006, 86, 257–266. [Google Scholar] [CrossRef]
  9. Kacemi, J.; Reboul, S.; Benjelloun, M. Information Fusion in a Multi-Frequencies GPS Receiver. In Proceedings of the IEEE Aerospace and Electronic Systems Society (AESS) Position Location and Navigation Symposium (PLANS), Monterey, CA, USA, 26–29 April 2004; pp. 399–404.
  10. Lavielle, M.; Lebarbier, E. An application of MCMC methods for the multiple change-points problem. Signal Process. 2001, 81, 39–53. [Google Scholar] [CrossRef]
  11. Robert, C.P. The Bayesian Choice, 2nd ed.; Springer: New York, NY, USA, 2001. [Google Scholar]
  12. Stienne, G.; Reboul, S.; Azmani, M.; Boutoille, S.; Choquel, J.B.; Benjelloun, M. Bayesian change-points estimation applied to GPS signal tracking. ISRN Signal Process. 2011. [Google Scholar] [CrossRef]
  13. Boutoille, S.; Reboul, S.; Benjelloun, M. A hybrid fusion system applied to off-line detection and change-points estimation. Inf. Fusion 2010, 11, 325–337. [Google Scholar] [CrossRef]
  14. Reboul, S.; Benjelloun, M. Joint segmentation of the wind speed and direction. Signal Process. 2006, 86, 744–759. [Google Scholar] [CrossRef]
  15. Mitchell, H.B. Multi-Sensor Data Fusion, An Introduction; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  16. Zhu, Y.; You, Z.; Zhao, J.; Zhang, K.; Li, X.R. The optimality for the distributed Kalman filtering fusion with feedback. Automatica 2001, 37, 1489–1493. [Google Scholar] [CrossRef]
  17. Bar-Shalom, Y.; Campo, L. The effect of the common process noise on the two-sensor fused-track covariance. IEEE Trans. Aerosp. Electron. Syst. 1986, 22, 803–805. [Google Scholar] [CrossRef]
  18. Gao, J.B.; Harris, C.J. Some remarks on Kalman filters for the multisensor fusion. Inf. Fusion 2002, 3, 191–201. [Google Scholar] [CrossRef]
  19. Chen, H.; Kirubarajan, T.; Bar-Shalom, Y. Performance limits of track-to-track fusion vs. centralized estimation: Theory and application. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 386–400. [Google Scholar] [CrossRef]
  20. Azmani, M.; Reboul, S.; Choquel, J.-B.; Benjelloun, M. A Recursive Fusion Filter for Angular Data. In Proceedings of the IEEE International Conference on Robotics and Biomimetics, Guilin, China, 19–23 December 2009; pp. 882–887.

Appendix 1

In this appendix, we show that the realizations of two random variables on the same side of the mean are positively correlated, while the realizations on both sides of the mean are negatively correlated. Let f_1(·) and f_2(·) be the distributions of the random variables Z^(1) and Z^(2). The covariance of Z^(1) and Z^(2) is given by:
$\sigma_{12} = E(Z^{(1)} Z^{(2)}) - E(Z^{(1)})\,E(Z^{(2)}) = \iint z^{(1)} z^{(2)} f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)} - \int z^{(1)} f_1(z^{(1)})\, dz^{(1)} \int z^{(2)} f_2(z^{(2)})\, dz^{(2)}$   (13)
We develop the covariance in Equation (13) for the different kinds of realizations of Z^(1) and Z^(2). It follows that:
$\sigma_{12} = (PC_1 + PC_2) + (NC_1 + NC_2) - m^2$
with:
$PC_1 = \int_{m}^{+\infty}\!\!\int_{m}^{+\infty} z^{(1)} z^{(2)} f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)} \qquad PC_2 = \int_{-\infty}^{m}\!\!\int_{-\infty}^{m} z^{(1)} z^{(2)} f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)}$
$NC_1 = \int_{m}^{+\infty}\!\!\int_{-\infty}^{m} z^{(1)} z^{(2)} f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)} \qquad NC_2 = \int_{-\infty}^{m}\!\!\int_{m}^{+\infty} z^{(1)} z^{(2)} f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)}$
(PC_1 + PC_2) is the term associated with the realizations of Z^(p1) and Z^(p2), and (NC_1 + NC_2) is the term associated with the realizations of Z^(n1) and Z^(n2). Let σ_12^p be the covariance of (Z^(p1), Z^(p2)) and σ_12^n be the covariance of (Z^(n1), Z^(n2)). These covariances are given by the following expressions:
$\sigma_{12}^p = \dfrac{PC_1 + PC_2}{D_{PC}} - m^2$   (15)
$\sigma_{12}^n = \dfrac{NC_1 + NC_2}{D_{NC}} - m^2$   (16)
with:
$D_{PC} = \int_{m}^{+\infty}\!\!\int_{m}^{+\infty} f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)} + \int_{-\infty}^{m}\!\!\int_{-\infty}^{m} f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)}$
$D_{NC} = \int_{m}^{+\infty}\!\!\int_{-\infty}^{m} f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)} + \int_{-\infty}^{m}\!\!\int_{m}^{+\infty} f_1(z^{(1)}) f_2(z^{(2)})\, dz^{(1)} dz^{(2)}$
$D_{PC} + D_{NC} = 1$
It follows:
$\sigma_{12} = D_{PC}\, \sigma_{12}^p + D_{NC}\, \sigma_{12}^n$
Furthermore, defining a such that:
$\int_{m}^{+\infty} z^{(1)} f_1(z^{(1)})\, dz^{(1)} = \dfrac{m + a}{2}$
It follows:
$\int_{-\infty}^{m} z^{(1)} f_1(z^{(1)})\, dz^{(1)} = \dfrac{m - a}{2}$
We can then define:
$PC_1 = \dfrac{(m + a)^2}{4}$
$PC_2 = \dfrac{(m - a)^2}{4}$
$NC_1 = NC_2 = \dfrac{m^2 - a^2}{4}$
$D_{PC} = D_{NC} = \dfrac{1}{2}$
Finally, it follows:
$\sigma_{12}^p = a^2$
$\sigma_{12}^n = -a^2$
According to the previous equations, the covariance of (Z^(p1), Z^(p2)) is positive, and the covariance of (Z^(n1), Z^(n2)) is negative.
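A quick Monte Carlo check of this result (our own, for the equal-variance case, with a evaluated as σ√(2/π) from the half-line mean defined above) is sketched below.

```python
import numpy as np

# Check that same-side pairs have covariance +a^2 and opposite-side pairs -a^2
# for sigma_1 = sigma_2 = sigma, where a = sigma * sqrt(2 / pi) for a Gaussian.
rng = np.random.default_rng(1)
m, sigma, n = 4.0, 1.0, 2_000_000
z1 = rng.normal(m, sigma, n)
z2 = rng.normal(m, sigma, n)
same = ((z1 > m) & (z2 > m)) | ((z1 < m) & (z2 < m))

a = sigma * np.sqrt(2.0 / np.pi)
cov_same = np.mean(z1[same] * z2[same]) - m**2        # ~ +a^2
cov_opposite = np.mean(z1[~same] * z2[~same]) - m**2  # ~ -a^2
print(cov_same, cov_opposite, a**2)
```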

Appendix 2

In this appendix, we derive the condition under which the mean square error MSE(Z_NLF) of the proposed non-linear fusion operator is lower than the mean square error MSE(Z_MAP) of the linear fusion operator. This error is lower if we decrease the covariance between the realizations of the processes that are positively correlated, because MSE(Z_MAP^(n)) = MSE(Z_NLF^(n)). The mean square errors MSE(Z_MAP^(p)) and MSE(Z_NLF^(p)) of the fusion operators are given by:
$MSE(Z_{\mathrm{MAP}}^{(p)}) = \mathrm{Var}(Z_{\mathrm{MAP}}^{(p)}) + \left(\mathrm{Bias}(Z_{\mathrm{MAP}}^{(p)})\right)^2$
and
$MSE(Z_{\mathrm{NLF}}^{(p)}) = \mathrm{Var}(Z_{\mathrm{NLF}}^{(p)}) + \left(\mathrm{Bias}(Z_{\mathrm{NLF}}^{(p)})\right)^2 = \mathrm{Var}(Z_{\mathrm{NLF}}^{(p)}) + \left(E[Z_{\mathrm{NLF}}^{(p)}] - m\right)^2$
$= \mathrm{Var}\!\left(Z_{\mathrm{MAP}}^{(p)} + \alpha_2 (\pm g(Z^{(p1)}, Z^{(p2)}))\right) + \left(E[Z_{\mathrm{MAP}}^{(p)} + \alpha_2 (\pm g(Z^{(p1)}, Z^{(p2)}))] - m\right)^2$
$= \mathrm{Var}(Z_{\mathrm{MAP}}^{(p)}) + \alpha_2^2 \mathrm{Var}(\pm g(Z^{(p1)}, Z^{(p2)})) + 2\alpha_2\, \mathrm{cov}\!\left(Z_{\mathrm{MAP}}^{(p)}, \pm g(Z^{(p1)}, Z^{(p2)})\right) + \left(E[Z_{\mathrm{MAP}}^{(p)}] - m + E[\alpha_2 (\pm g(Z^{(p1)}, Z^{(p2)}))]\right)^2$
$MSE(Z_{\mathrm{NLF}}^{(p)}) = \mathrm{Var}(Z_{\mathrm{MAP}}^{(p)}) + \alpha_2^2 \mathrm{Var}(\pm g(Z^{(p1)}, Z^{(p2)})) + 2\alpha_2\, \mathrm{cov}\!\left(Z_{\mathrm{MAP}}^{(p)}, \pm g(Z^{(p1)}, Z^{(p2)})\right) + \left(E[Z_{\mathrm{MAP}}^{(p)}] - m\right)^2 + \alpha_2^2 \left(E[\pm g(Z^{(p1)}, Z^{(p2)})]\right)^2 + 2\alpha_2 \left(E[Z_{\mathrm{MAP}}^{(p)}] - m\right) E[\pm g(Z^{(p1)}, Z^{(p2)})]$
$MSE(Z_{\mathrm{NLF}}^{(p)}) = \mathrm{Var}(Z_{\mathrm{MAP}}^{(p)}) + \left(E[Z_{\mathrm{MAP}}^{(p)}] - m\right)^2 + 2\alpha_2\, \mathrm{cov}\!\left(Z_{\mathrm{MAP}}^{(p)}, \pm g(Z^{(p1)}, Z^{(p2)})\right) + \alpha_2^2 \left[\mathrm{Var}(\pm g(Z^{(p1)}, Z^{(p2)})) + \left(E[\pm g(Z^{(p1)}, Z^{(p2)})]\right)^2\right] + 2\alpha_2 \left(E[Z_{\mathrm{MAP}}^{(p)}]\, E[\pm g(Z^{(p1)}, Z^{(p2)})] - m\, E[\pm g(Z^{(p1)}, Z^{(p2)})]\right)$
With:
$E[Z_{\mathrm{MAP}}^{(p)}] - m = 0$
$\mathrm{Var}(\pm g(Z^{(p1)}, Z^{(p2)})) + \left(E[\pm g(Z^{(p1)}, Z^{(p2)})]\right)^2 = E[\pm g(Z^{(p1)}, Z^{(p2)})^2]$
$MSE(Z_{\mathrm{MAP}}^{(p)}) = \mathrm{Var}(Z_{\mathrm{MAP}}^{(p)})$
It follows:
$MSE(Z_{\mathrm{NLF}}^{(p)}) = MSE(Z_{\mathrm{MAP}}^{(p)}) + \alpha_2^2 E[\pm g(Z^{(p1)}, Z^{(p2)})^2] + 2\alpha_2\, \mathrm{cov}\!\left(Z_{\mathrm{MAP}}^{(p)}, \pm g(Z^{(p1)}, Z^{(p2)})\right) + 2\alpha_2 \left(E[Z_{\mathrm{MAP}}^{(p)}]\, E[\pm g(Z^{(p1)}, Z^{(p2)})] - m\, E[\pm g(Z^{(p1)}, Z^{(p2)})]\right)$
$MSE(Z_{\mathrm{NLF}}^{(p)}) = MSE(Z_{\mathrm{MAP}}^{(p)}) + \alpha_2^2 E[\pm g(Z^{(p1)}, Z^{(p2)})^2] + 2\alpha_2\alpha_1 E[Z^{(p1)} (\pm g(Z^{(p1)}, Z^{(p2)}))] + 2\alpha_2^2 E[Z^{(p2)} (\pm g(Z^{(p1)}, Z^{(p2)}))] + 2\alpha_2\alpha_3 E[m_a (\pm g(Z^{(p1)}, Z^{(p2)}))]$
$\quad - 2\alpha_2 \left(\alpha_1 E[Z^{(p1)}] + \alpha_2 E[Z^{(p2)}] + \alpha_3 E[m_a]\right) E[\pm g(Z^{(p1)}, Z^{(p2)})] + 2\alpha_2 \left(\alpha_1 E[Z^{(p1)}] + \alpha_2 E[Z^{(p2)}] + \alpha_3 E[m_a]\right) E[\pm g(Z^{(p1)}, Z^{(p2)})] - 2 m \alpha_2 E[\pm g(Z^{(p1)}, Z^{(p2)})]$
$= MSE(Z_{\mathrm{MAP}}^{(p)}) + \alpha_2^2 E[\pm g(Z^{(p1)}, Z^{(p2)})^2] + 2\alpha_2\alpha_1 E[Z^{(p1)} (\pm g(Z^{(p1)}, Z^{(p2)}))] + 2\alpha_2^2 E[Z^{(p2)} (\pm g(Z^{(p1)}, Z^{(p2)}))] + (2\alpha_2\alpha_3 m_a - 2 m \alpha_2)\, E[\pm g(Z^{(p1)}, Z^{(p2)})]$
Then, MSE(Z_NLF^(p)) is lower than MSE(Z_MAP^(p)) if the following condition holds:
$\alpha_2^2 E[\pm g(Z^{(p1)}, Z^{(p2)})^2] + 2\alpha_2\alpha_1 E[Z^{(p1)} (\pm g(Z^{(p1)}, Z^{(p2)}))] + 2\alpha_2^2 E[Z^{(p2)} (\pm g(Z^{(p1)}, Z^{(p2)}))] + (2\alpha_2\alpha_3 m_a - 2 m \alpha_2)\, E[\pm g(Z^{(p1)}, Z^{(p2)})] \le 0$

Share and Cite

Azmani, M.; Reboul, S.; Benjelloun, M. Non-Linear Fusion of Observations Provided by Two Sensors. Entropy 2013, 15, 2698-2715. https://doi.org/10.3390/e15072698