Article

Non-Gaussian Pseudolinear Kalman Filtering-Based Target Motion Analysis with State Constraints

1 School of Automation, Central South University, Changsha 410083, China
2 School of Electrical and Information Engineering, Changsha University of Science and Technology, Changsha 410114, China
3 Department of Computer Science, University of Bradford, Bradford BD7 1DP, UK
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(19), 9975; https://doi.org/10.3390/app12199975
Submission received: 5 September 2022 / Revised: 26 September 2022 / Accepted: 28 September 2022 / Published: 4 October 2022
(This article belongs to the Section Robotics and Automation)

Abstract: For bearing-only target motion analysis (TMA), the pseudolinear Kalman filter (PLKF) solves the complex nonlinear estimation of the motion model parameters but suffers from serious bias problems. The pseudolinear Kalman filter under the minimum mean square error framework (PL-MMSE) has more accurate tracking ability and higher stability than the PLKF. Since bearing signals are corrupted by non-Gaussian noise in practice, we reconstruct the PL-MMSE under Gaussian mixture noise. If prior information, such as state constraints, is available, the performance of the PL-MMSE can be further improved by incorporating the constraints into the filtering process. In this paper, the mean square and estimation projection methods are used to combine the PL-MMSE with linear constraints, and the linear approximation and second-order approximation methods are applied to merge the PL-MMSE with nonlinear constraints. Simulation results show that the constrained PL-MMSE algorithms achieve lower mean square errors and bias norms, which demonstrates the superiority of the constrained algorithms.

1. Introduction

Target motion analysis (TMA) refers to the real-time estimation of the position, velocity, and other motion parameters of a tracked target, using sensors to obtain measured information about the target through signal processing technology [1,2,3]. It has many applications in civilian and military fields, including military reconnaissance, intelligent transportation systems, and satellite navigation systems. The measurement information includes the angle of arrival (AOA) [4], time of arrival (TOA), time difference of arrival (TDOA) [5], and received signal strength (RSS) [6]. In this paper, we focus on AOA-based TMA, i.e., analyzing the target motion based on the bearing-only data emitted from the moving target and collected by the sensors.
The main difficulty of bearing-only TMA is how to handle the nonlinear characteristic of the measurement equation. Methods for dealing with bearing-only problems can generally be divided into three categories. The first category is developed from the perspective of statistics [7]. The maximum likelihood estimator (MLE) uses iterative optimization to solve the nonlinear equations and obtain the target position estimate. Since then, evolved methods [8,9] have been proposed to tackle TMA problems. In [8], optimizing a likelihood function equipped with extra penalized terms yields an estimate with a lower Cramér-Rao bound than the standard estimator. The second category is the Kalman filter (KF) and its related methods. Due to poor initialization, the standard Kalman filter [10] has shortcomings in robustness, convergence speed, and tracking accuracy. Many variant structures of the KF have been proposed to solve the nonlinear estimation problem. For example, Bucy et al. [11] propose the nonlinear extended Kalman filter (EKF), and Julier et al. [12,13] propose the unscented Kalman filter. The particle filter (PF) [14,15,16] is also used for bearing-only target motion analysis. Zheng et al. [17] propose an initial value optimization method for inverse smoothing filtering, which effectively solves the problem that Kalman filtering methods are sensitive to the initial value selection and reduces the estimation error. The third category linearizes the nonlinear angle measurement equation by using the pseudolinear estimator (PLE) method [18]. The pseudolinear Kalman filter (PLKF) [19,20] is produced by combining the Kalman filter with the pseudolinear estimator method. Compared with other filtering methods, the main advantages of the PLKF are high stability, good tracking performance, and small initial error at lower computational complexity [21]. However, the PLKF has a large bias due to the correlation between the measurement matrix and the pseudolinear noise variable. Hence, several methods have been proposed to improve the performance of the PLKF by compensating for or reducing the pseudolinear estimation bias, including the modified pseudolinear estimator (MPLE) [22], the bias-compensated PLKF (BC-PLKF) [23], the instrumental variable (IV) Kalman filter (IVKF) [24] and the IVKF based on the selective-angle-measurement strategy (SAM-IVKF) [25]. These bias-compensation variants of the PLKF are not always satisfactory when the measurement noise is large and the geometry is unfavorable. Based on the PLKF, Bu et al. [26] propose a new pseudolinear filter under the minimum mean square error (PL-MMSE) framework without bias compensation, which shows better tracking performance than the above algorithms under large measurement noise.
If prior information, such as linear or nonlinear constraints on the motion state, is available, these conditions can be taken into consideration to improve the state estimation [27]. For example, tracking a vehicle driven on a straight or curved road is a constrained state estimation problem with available road information [28]. Similar models also appear in other engineering applications, including compartmental models [29], turbofan engine health estimation [30] and so on. To estimate the states in such systems, several methods have been proposed, e.g., the model parameter reduction method [31], the perfect measurements approach [32], estimation projection [33], linear approximation [33], and second-order approximation [34,35]. For linear constraints, the model parameter reduction method [31] transforms the constrained state estimation into an unconstrained one. However, reducing the state-constrained equations makes interpretation, such as the physical meaning of the states, more difficult. The perfect measurements approach [32] adds the state equality constraints to the measurement equation, which increases the dimension of the state estimation problem and hence the computational effort. Estimation projection [33] incorporates the equality constraints into the state estimation framework by projecting the unconstrained state estimate onto the constrained surface. For nonlinear constraints, linear approximation [33] applies a Taylor series expansion to the nonlinear state constraints and keeps only the first-order terms. Distinct from the linear approximation, the second-order approximation [34,35] keeps both the first-order and second-order terms to retain the nonlinearity of the constraints.
In practice, the bearing noise of the sensor is not always Gaussian. For example, the measurement disturbance is described by a distribution with impulsive (heavy-tailed) properties in [36]. Standard Kalman filters based on the MMSE framework do not behave well under such noise [37]. To study such heavy-tailed signals, Ref. [38] proposes a suitable method to approximate the heavy-tailed gamma distribution of random telegraph noise by a Gaussian mixture distribution. Inspired by [38], the PL-MMSE is extended to estimate the bearings-only target motion model parameters in the presence of Gaussian mixture noise as the first contribution of this paper. This contribution can be viewed as the application of the PL-MMSE under heavy-tailed noise with adaptive adjustment of the noise weights. Secondly, we focus on merging the PL-MMSE with constraints through four approaches to address TMA. The mean square method is applied to the PL-MMSE for linear constraints by minimizing the conditional mean square error subject to the state constraints. The estimation projection method is incorporated into the PL-MMSE by projecting the unconstrained estimate onto the constrained surface. For nonlinear constraints, the linear approximation method linearly approximates the nonlinear constraints with a Taylor series expansion, after which the estimation projection method is applied to the PL-MMSE filter. The second-order approximation method treats the nonlinear constraint function as a second-order approximation to the nonlinearity: it constructs an extra optimization step after the PL-MMSE by projecting the unconstrained state estimate onto the nonlinear constrained surface and solves this optimization to obtain the estimate. Finally, the PL-MMSE filter with state constraints is tested for TMA on a straight line and on an arc section. Experimental results confirm that our constrained methods behave better than the other competitors.
The rest of this paper is organized as follows. Section 2 introduces the notation and designs the PL-MMSE under Gaussian mixture noise. Section 3 and Section 4 combine constrained estimation techniques with the PL-MMSE to derive the PL-MMSE filter with linear and nonlinear state constraints, respectively. Section 5 simulates two bearings-only TMA examples to show the sound performance of the constrained PL-MMSE algorithms. Section 6 concludes the paper and points out future research directions.

2. PL-MMSE Kalman Filter Under Gaussian Mixture Noise

In the bearing-only two-dimensional (2D) plane TMA, the target-sensor model is established, as shown in Figure 1.
As shown in Figure 1, the moving target position and velocity are $T_k = [\, t_{x,k} \;\; t_{y,k} \,]^T$ and $V_k = [\, v_{x,k} \;\; v_{y,k} \,]^T$, respectively, where
$$t_{x,k} = t_{x,k-1} + v_{x,k-1} T,$$
$$t_{y,k} = t_{y,k-1} + v_{y,k-1} T.$$
$T$ is the sampling interval. The sensor is located at $S_k = [\, s_{x,k} \;\; s_{y,k} \,]^T$. The true bearing received by the sensor is given by
$$\varphi_k = \tan^{-1}\frac{t_{y,k} - s_{y,k}}{t_{x,k} - s_{x,k}}.$$
The bearing measurement is
$$\hat{\varphi}_k = \varphi_k + e_k,$$
which indicates that the sensor measurement at time $kT$, $k = 1, 2, 3, \ldots$, is corrupted by the zero-mean Gaussian mixture noise $e_k$. The Gaussian mixture noise
$$e_k \sim \sum_{j=1}^{n} \lambda_j \mathcal{N}(0, \sigma_j^2)$$
is composed of $n$ independent Gaussian components with zero mean and variances $\sigma_j^2$, respectively, with
$$\lambda_j > 0 \quad \text{and} \quad \sum_{j=1}^{n} \lambda_j = 1.$$
The moving target state vector $X_k$ is given by
$$X_k = [\, t_{x,k} \;\; t_{y,k} \;\; v_{x,k} \;\; v_{y,k} \,]^T.$$
According to (3) and (4), the pseudolinear measurement equation between the target and the sensor is
$$\sin\hat{\varphi}_k\, t_{x,k} - \cos\hat{\varphi}_k\, t_{y,k} = \sin\hat{\varphi}_k\, s_{x,k} - \cos\hat{\varphi}_k\, s_{y,k} + \| T_k - S_k \| \sin e_k.$$
The pseudolinear state-space model for bearings-only TMA is
$$X_k = F X_{k-1} + \omega_{k-1},$$
$$Z_k = H_k X_k + \tau_k,$$
where $X_k$ and $X_{k-1}$ are the motion states at times $kT$ and $(k-1)T$, respectively. It is assumed that $\omega_{k-1}$ is a Gaussian mixture noise composed of $n$ independent Gaussian components with zero mean and variances $\xi_i^2$, respectively. According to the motion model (9), the state transition matrix $F$ and the process noise Jacobian $D$ are
$$F = \begin{bmatrix} 1 & 0 & T & 0 \\ 0 & 1 & 0 & T \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad D = \begin{bmatrix} 0 & 0 & T^2/2 & 0 \\ 0 & 0 & 0 & T^2/2 \\ 0 & 0 & T & 0 \\ 0 & 0 & 0 & T \end{bmatrix}.$$
In the PLKF algorithm [39], the predicted state of the target at time $kT$ is
$$\hat{X}_{k|k-1} = F \hat{X}_{k-1}.$$
The predicted covariance matrix is
$$P_{k|k-1} = F P_{k-1|k-1} F^T + D Q_{k-1} D^T,$$
where the Gaussian mixture noise variance $Q_{k-1}$ is
$$Q_{k-1} = \sum_{i=1}^{n} \rho_i \xi_i^2$$
with
$$\rho_i > 0 \quad \text{and} \quad \sum_{i=1}^{n} \rho_i = 1.$$
The observation matrix is
$$H_k = [\, \sin\hat{\varphi}_k \;\; -\cos\hat{\varphi}_k \;\; 0 \;\; 0 \,].$$
The Kalman gain is
$$K_k = P_{k|k-1} H_k^T \big( H_k P_{k|k-1} H_k^T + R_k \big)^{-1},$$
where the pseudolinear noise variance $R_k$ is given by
$$R_k = \| T_k - S_k \|^2 \sum_{i=1}^{n} \lambda_i \frac{1 - e^{-2\sigma_i^2}}{2}.$$
The predicted pseudolinear measurement is
$$\hat{Z}_{k|k-1} = H_k \hat{X}_{k|k-1}.$$
Therefore, the target state and covariance update equations at time $kT$ can be written as
$$\hat{X}_{k|k} = \hat{X}_{k|k-1} + K_k \big( Z_k - \hat{Z}_{k|k-1} \big),$$
$$P_{k|k} = P_{k|k-1} - K_k H_k P_{k|k-1}.$$
Based on the PLKF, the PL-MMSE for Gaussian mixture noise can be rewritten as shown in Table 1, where the pseudolinear observation matrix is given by
$$H_{k-1} = \big[\, \tan^{-1}\!\big(\hat{X}_{k|k-1}(2) - S_k(2)\big) \;\;\; \tan^{-1}\!\big(\hat{X}_{k|k-1}(1) - S_k(1)\big) \;\;\; 0 \;\;\; 0 \,\big].$$
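To make the recursion above concrete, the following Python sketch runs one predict/update cycle of the PLKF with the pseudolinear measurement. It is illustrative only; the function and variable names are ours, and the PL-MMSE of Table 1 additionally reweights the gain with the mixture moments.

```python
import numpy as np

def plkf_step(x_est, P, bearing_meas, sensor_pos, T, Q, R_k):
    """One predict/update cycle of the pseudolinear Kalman filter (PLKF).

    x_est : state estimate [t_x, t_y, v_x, v_y] at time (k-1)T
    P     : state covariance at time (k-1)T
    bearing_meas : measured bearing (rad) at time kT
    sensor_pos   : sensor position [s_x, s_y] at time kT
    Q     : 4x4 process noise covariance, used as D Q D^T as in the paper
    R_k   : pseudolinear measurement noise variance (scalar)
    """
    # Constant-velocity transition matrix F and process noise Jacobian D
    F = np.array([[1, 0, T, 0],
                  [0, 1, 0, T],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    D = np.array([[0, 0, T**2 / 2, 0],
                  [0, 0, 0, T**2 / 2],
                  [0, 0, T, 0],
                  [0, 0, 0, T]], dtype=float)

    # Prediction
    x_pred = F @ x_est
    P_pred = F @ P @ F.T + D @ Q @ D.T

    # Pseudolinear measurement: sin(phi) t_x - cos(phi) t_y = sin(phi) s_x - cos(phi) s_y + noise
    s, c = np.sin(bearing_meas), np.cos(bearing_meas)
    H = np.array([s, -c, 0.0, 0.0])             # 1x4 observation row
    z = s * sensor_pos[0] - c * sensor_pos[1]   # pseudolinear measurement

    # Gain and update
    innov_var = H @ P_pred @ H + R_k            # scalar innovation variance
    K = P_pred @ H / innov_var                  # Kalman gain, shape (4,)
    x_upd = x_pred + K * (z - H @ x_pred)
    P_upd = P_pred - np.outer(K, H) @ P_pred
    return x_upd, P_upd
```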

3. PL-MMSE Kalman Filter with Linear State Constraints

Assume that the motion state is bounded by the linear constraints
$$G x_k = g,$$
where $G \in \mathbb{R}^{d \times n}$ and $g \in \mathbb{R}^{d \times 1}$ are known with $d < n$. It is also assumed that $G$ has full rank. Next, we introduce two methods to equip the PL-MMSE with linear state constraints.

3.1. Mean Square Method

The idea of the mean square method is to obtain the state estimate $\tilde{x}$ of the moving target under linear constraints by minimizing the conditional mean square error. Let
$$\tilde{x}_k = \arg\min_{\tilde{x}_k} E\big( \| x_k - \tilde{x}_k \|^2 \mid Z_k \big) \quad \text{s.t.} \quad G \tilde{x}_k = g,$$
where
$$E\big( \| x - \tilde{x} \|^2 \mid Z \big) = \int (x - \tilde{x})^T (x - \tilde{x}) P(x \mid Z)\, dx = \int x^T x\, P(x \mid Z)\, dx - 2 \tilde{x}^T \!\int x\, P(x \mid Z)\, dx + \tilde{x}^T \tilde{x}.$$
A Lagrangian function is constructed to solve the constrained problem as
$$J = E\big( \| x - \tilde{x} \|^2 \mid Z \big) + 2 \lambda^T ( G \tilde{x} - g ) = \int x^T x\, P(x \mid Z)\, dx - 2 \tilde{x}^T \!\int x\, P(x \mid Z)\, dx + \tilde{x}^T \tilde{x} + 2 \lambda^T ( G \tilde{x} - g ).$$
The conditional mean of $x$ is
$$\hat{x} = \int x\, P(x \mid Z)\, dx.$$
After substituting (28) into (27), taking the partial derivatives with respect to $\tilde{x}$ and $\lambda$, respectively, leads to
$$\frac{\partial J}{\partial \tilde{x}} = -2 \hat{x} + 2 \tilde{x} + 2 G^T \lambda = 0,$$
$$\frac{\partial J}{\partial \lambda} = G \tilde{x} - g = 0.$$
Solving (29) and (30) gives
$$\tilde{x} = \hat{x} - G^T ( G G^T )^{-1} ( G \hat{x} - g ),$$
$$\lambda = ( G G^T )^{-1} ( G \hat{x} - g ).$$
From (31), the constrained estimate of the motion state is the unconstrained estimate minus a correction term.

3.2. Estimation Projection Method

As a standard method for dealing with constraints, the estimation projection method obtains the constrained estimate $\tilde{x}$ by projecting the unconstrained estimate $\hat{x}$ onto the constrained surface. Define
$$\tilde{x} = \arg\min_{x} (x - \hat{x})^T W (x - \hat{x}) \quad \text{s.t.} \quad G x = g,$$
where $W$ is a positive definite weighting matrix. The Lagrangian function used to solve this problem is
$$J = (x - \hat{x})^T W (x - \hat{x}) + 2 \lambda^T ( G x - g ).$$
The necessary conditions for a local minimum are
$$\frac{\partial J}{\partial x} = 0, \qquad \frac{\partial J}{\partial \lambda} = 0.$$
Solving (35) and (36) gives
$$\tilde{x} = \hat{x} - W^{-1} G^T ( G W^{-1} G^T )^{-1} ( G \hat{x} - g ),$$
$$\lambda = ( G W^{-1} G^T )^{-1} ( G \hat{x} - g ).$$
It is worthwhile to point out that the result given by the estimation projection method is equal to that of the mean square method when $W = I$. Table 2 summarizes the steps of the PL-MMSE with linear constraints by the estimation projection method.
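The following Python sketch applies the weighted projection above to an unconstrained estimate. It is a minimal illustration in which the function name is ours; the example constraint mirrors the straight-road constraint used later in Section 5, assuming the row pattern $[1, -\tan\theta]$.

```python
import numpy as np

def project_linear(x_hat, G, g, W=None):
    """Project an unconstrained estimate x_hat onto the linear constraint G x = g.

    W is a positive definite weighting matrix; W = I reproduces the mean square
    method of Section 3.1, W = P^{-1} gives the covariance-weighted projection.
    """
    n = x_hat.size
    W = np.eye(n) if W is None else W
    W_inv = np.linalg.inv(W)
    # x_tilde = x_hat - W^{-1} G^T (G W^{-1} G^T)^{-1} (G x_hat - g)
    S = G @ W_inv @ G.T
    correction = W_inv @ G.T @ np.linalg.solve(S, G @ x_hat - g)
    return x_hat - correction

# Example: straight-road constraint with theta = pi/4, i.e. G x = 0.
theta = np.pi / 4
G = np.array([[1.0, -np.tan(theta), 0.0, 0.0],
              [0.0, 0.0, 1.0, -np.tan(theta)]])
g = np.zeros(2)
x_hat = np.array([3.1, 2.9, 8.6, 8.4])     # unconstrained state estimate
x_tilde = project_linear(x_hat, G, g)      # W = I by default
assert np.allclose(G @ x_tilde, g)         # the projected estimate satisfies the constraint
```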

4. PL-MMSE Kalman Filter with Nonlinear State Constraints

Consider a nonlinear constraint on the system state of the form
$$h(x) = q,$$
where $h(\cdot)$ is a nonlinear function and $q$ is a scalar. Next, we address the nonlinear constraint using the linear approximation method and the second-order approximation method, respectively.

4.1. Linear Approximation

Expanding (39) at $\hat{x}$ with a Taylor series gives
$$h(x) - q = h(\hat{x}) + h'(\hat{x})^T (x - \hat{x}) + \frac{1}{2!} (x - \hat{x})^T h''(\hat{x}) (x - \hat{x}) + \cdots - q = 0,$$
where $h'(\cdot)$ denotes the Jacobian of $h(\cdot)$ and $h''(\cdot)$ is the Hessian of $h(\cdot)$. Using only the first-order term to approximate the nonlinear state constraint leads to
$$h'(\hat{x})^T x \approx q - h(\hat{x}) + h'(\hat{x})^T \hat{x}.$$
Through observation, (41) has a similar structure to (24), where $G$ of (24) is replaced by $h'(\hat{x})^T$ in (41) and $g$ by $q - h(\hat{x}) + h'(\hat{x})^T \hat{x}$. After applying the estimation projection method, the constrained estimator for the linear approximation method becomes
$$\tilde{x}_k = \hat{x}_{k|k} - \big( h'(\hat{x}_{k|k}) \big)^T \Big( h'(\hat{x}_{k|k}) \big( h'(\hat{x}_{k|k}) \big)^T \Big)^{-1} \big( h(\hat{x}_{k|k}) - q \big).$$
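As a minimal illustration of the linear approximation step, the sketch below linearizes the circular-road constraint of Section 5 at the current estimate and applies the projection. The helper names are ours, and the residual is only approximately zero because a single first-order projection is used.

```python
import numpy as np

def project_linearized(x_hat, h, grad_h, q):
    """Linear-approximation projection onto the nonlinear constraint h(x) = q.

    The constraint is linearized at x_hat (first-order Taylor term only) and the
    estimate is then projected with the estimation projection formula (W = I).
    """
    Hx = grad_h(x_hat)                               # Jacobian row h'(x_hat), shape (n,)
    # x_tilde = x_hat - h'(x)^T (h'(x) h'(x)^T)^{-1} (h(x_hat) - q)
    return x_hat - Hx * (h(x_hat) - q) / (Hx @ Hx)

# Circular-road example of Section 5: h(x) = (x1 - Rx)^2 + (x2 - Ry)^2 = R^2
Rx, Ry, R = 100.0, 0.0, 100.0
h = lambda x: (x[0] - Rx) ** 2 + (x[1] - Ry) ** 2
grad_h = lambda x: np.array([2 * (x[0] - Rx), 2 * (x[1] - Ry), 0.0, 0.0])

x_hat = np.array([201.0, 2.0, 0.0, 1e-4])            # estimate slightly off the circle
x_tilde = project_linearized(x_hat, h, grad_h, q=R ** 2)
print(h(x_tilde) - R ** 2)                            # residual is much smaller after one projection
```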

4.2. Second-Order Approximation

When both the first-order and second-order terms are kept, (40) can be rewritten as
$$f(x) = \begin{bmatrix} x^T & 1 \end{bmatrix} \begin{bmatrix} M & m \\ m^T & m_0 \end{bmatrix} \begin{bmatrix} x \\ 1 \end{bmatrix} = x^T M x + 2 m^T x + m_0 = 0.$$
Here
$$M = \frac{1}{2} h''(\hat{x}_{k|k}), \qquad m = \big( h'(\hat{x}_{k|k}) - \hat{x}_{k|k}^T h''(\hat{x}_{k|k}) \big)^T / 2, \qquad m_0 = h(\hat{x}_{k|k}) - h'(\hat{x}_{k|k}) \hat{x}_{k|k} + \hat{x}_{k|k}^T M \hat{x}_{k|k} - q.$$
Construct an optimization problem by projecting the unconstrained state estimate onto the nonlinear constrained surface, i.e.,
$$\tilde{x} = \arg\min_{x} (z - H x)^T (z - H x) \quad \text{s.t.} \quad f(x) = 0.$$
The Lagrangian function is formed with the multiplier $\lambda$ as
$$J = (z - H x)^T (z - H x) + \lambda f(x).$$
The optimal solution can be found by solving
$$\frac{\partial J}{\partial x} = -H^T z + \lambda m + ( H^T H + \lambda M ) x = 0,$$
$$\frac{\partial J}{\partial \lambda} = x^T M x + m^T x + x^T m + m_0 = 0.$$
Assume the matrix $H^T H + \lambda M$ is invertible. The constrained solution $\tilde{x}$ can then be expressed as
$$\tilde{x} = ( H^T H + \lambda M )^{-1} ( H^T z - \lambda m ),$$
which reduces to the unconstrained solution when $\lambda = 0$.
Applying the Cholesky factorization to $M$ and $S = H^T H$ gives
$$M = L^T L, \qquad S = E^T E,$$
where $E$ is an upper triangular matrix. We then apply the singular value decomposition (SVD) to the matrix $L E^{-1}$:
$$L E^{-1} = U \Sigma V^T,$$
where $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix with diagonal elements $p_i$. In order to simplify (51), two additional vectors are defined as
$$e(\lambda) = [\, e_i(\lambda) \,]^T = V^T ( E^T )^{-1} ( H^T z - \lambda m ),$$
$$t = [\, t_i \,]^T = V^T ( E^T )^{-1} m.$$
With these new matrix and vector notations, (51) can be expressed as
$$\tilde{x} = E^{-1} V ( I + \lambda \Sigma^T \Sigma )^{-1} e(\lambda).$$
With (55), (56) and (57),
$$\tilde{x}^T M \tilde{x} = e(\lambda)^T ( I + \lambda \Sigma^T \Sigma )^{-T} \Sigma^T \Sigma ( I + \lambda \Sigma^T \Sigma )^{-1} e(\lambda) = \sum_i \frac{e_i^2(\lambda)\, p_i^2}{(1 + \lambda p_i^2)^2},$$
$$m^T \tilde{x} = t^T ( I + \lambda \Sigma^T \Sigma )^{-1} e(\lambda) = \sum_i \frac{e_i(\lambda)\, t_i}{1 + \lambda p_i^2}.$$
After plugging in (58) and (59), $f(x)$ transforms into
$$f(\lambda) = e(\lambda)^T ( I + \lambda \Sigma^T \Sigma )^{-T} \Sigma^T \Sigma ( I + \lambda \Sigma^T \Sigma )^{-1} e(\lambda) + t^T ( I + \lambda \Sigma^T \Sigma )^{-1} e(\lambda) + e(\lambda)^T ( I + \lambda \Sigma^T \Sigma )^{-1} t + m_0 = \sum_i \frac{e_i^2(\lambda)\, p_i^2}{(1 + \lambda p_i^2)^2} + 2 \sum_i \frac{e_i(\lambda)\, t_i}{1 + \lambda p_i^2} + m_0.$$
Since (60) is a nonlinear equation in $\lambda$, it is difficult to obtain a closed-form solution, so numerical root-finding algorithms such as the Newton method [40] are used to solve (60). The derivatives of $f(\lambda)$ and $e(\lambda)$ with respect to $\lambda$ are
$$\dot{f}(\lambda) = 2 \sum_i \frac{e_i(\lambda)\, \dot{e}_i (1 + \lambda p_i^2)\, p_i^2 - e_i^2(\lambda)\, p_i^4}{(1 + \lambda p_i^2)^3} + 2 \sum_i \frac{\dot{e}_i t_i (1 + \lambda p_i^2) - e_i(\lambda)\, t_i\, p_i^2}{(1 + \lambda p_i^2)^2},$$
$$\dot{e} = [\, \dot{e}_i \,]^T = -V^T ( E^T )^{-1} m.$$
Then, the iterative solution of $\lambda$ with the Newton method is given by
$$\lambda_{k+1} = \lambda_k - \frac{f(\lambda_k)}{\dot{f}(\lambda_k)}.$$
The iteration (63) starts with $\lambda_0 = 0$. If $| \lambda_{k+1} - \lambda_k | < \tau$, where $\tau$ is the tolerance, or the number of iterations reaches a preset value, the iteration stops. Then, the constrained state estimate of the moving target is obtained by substituting the solution for $\lambda$ into (57). Table 3 shows the steps of the PL-MMSE with nonlinear constraints by the second-order approximation method.
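A compact Python sketch of the second-order projection follows, assuming $M$ and $H^T H$ are positive definite and using the circular-road constraint of Section 5 on the position sub-state ($H = I$, $z$ equal to the position estimate). The function and variable names are ours.

```python
import numpy as np

def second_order_projection(z, H, M, m, m0, tol=1e-10, max_iter=50):
    """Second-order (quadric) constraint projection of Section 4.2.

    Solves  min_x (z - H x)^T (z - H x)  s.t.  x^T M x + 2 m^T x + m0 = 0
    via the Cholesky/SVD reduction and a Newton iteration on the multiplier lambda.
    """
    L = np.linalg.cholesky(M).T                 # M = L^T L, L upper triangular
    E = np.linalg.cholesky(H.T @ H).T           # S = H^T H = E^T E
    U, p, Vt = np.linalg.svd(L @ np.linalg.inv(E))
    V = Vt.T
    Einv_T = np.linalg.inv(E.T)

    t = V.T @ Einv_T @ m                        # fixed vector t
    e0 = V.T @ Einv_T @ (H.T @ z)               # e(lambda) = e0 - lambda * t, so de/dlambda = -t

    def f_and_fdot(lam):
        e = e0 - lam * t
        d = 1.0 + lam * p ** 2
        f = np.sum(e ** 2 * p ** 2 / d ** 2) + 2 * np.sum(e * t / d) + m0
        fdot = 2 * np.sum((-t * e * p ** 2 * d - e ** 2 * p ** 4) / d ** 3) \
             + 2 * np.sum((-t * t * d - e * t * p ** 2) / d ** 2)
        return f, fdot

    lam = 0.0                                    # start from the unconstrained solution
    for _ in range(max_iter):
        f, fdot = f_and_fdot(lam)
        step = f / fdot
        lam -= step
        if abs(step) < tol:
            break

    e = e0 - lam * t
    return np.linalg.inv(E) @ V @ (e / (1.0 + lam * p ** 2))

# Circular-road constraint of Section 5 applied to the position sub-state
Rx, Ry, R = 100.0, 0.0, 100.0
M = np.eye(2)
m = np.array([-Rx, -Ry])
m0 = Rx ** 2 + Ry ** 2 - R ** 2
pos_hat = np.array([203.0, 4.0])                 # unconstrained position estimate
pos_tilde = second_order_projection(pos_hat, np.eye(2), M, m, m0)
print((pos_tilde - [Rx, Ry]) @ (pos_tilde - [Rx, Ry]) - R ** 2)   # ~0: the result lies on the circle
```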
Remark 1.
The algorithm presented in Table 1 has many potential applications that use the model shown in (9) and (10). For example, in the ocean environment, a moving observer ship monitors noisy sonar bearings to an acoustic target ship and then feeds the measurements into the filter to estimate and predict the source position and velocity [19]. If the waterway of the target is known in advance, the constraint can be brought into the methods in Table 2 and Table 3 to further raise the estimation accuracy.

5. Simulation

This section simulates two examples and compares the performance of the PL-MMSE, PLKF, BC-PLKF, IVKF and the corresponding constrained algorithms for moving targets under linear or nonlinear constraints with Gaussian mixture noise. To clarify the naming, the combination of the PL-MMSE and the mean square method is denoted PL-MMSE-C ($W = I$). Similarly, the PL-MMSE combined with the estimation projection method is denoted PL-MMSE-C ($W = P^{-1}$). The PL-MMSE incorporated with the linear approximation and second-order approximation methods is denoted PL-MMSE-L and PL-MMSE-S, respectively. The other constrained algorithms are named in the same way. Each simulation result is generated from $M_0 = 1000$ Monte Carlo experiments with $N = 200$ sampling time scans per run.

5.1. Performance Metrics

As defined in this subsection, the performance is evaluated using root mean square errors (RMSEs) and bias norms (BNorms). The RMSE and BNorm of the target position estimate are
$$\text{RMSE}_k^{pos} = \sqrt{ \frac{1}{M_0} \sum_{i=1}^{M_0} \big\| \hat{x}_{k|k}^i(1{:}2) - x_k^i(1{:}2) \big\|^2 },$$
$$\text{BNorm}_k^{pos} = \bigg\| \frac{1}{M_0} \sum_{i=1}^{M_0} \big( \hat{x}_{k|k}^i(1{:}2) - x_k^i(1{:}2) \big) \bigg\|,$$
where $\hat{x}_{k|k}^i(1{:}2)$ is the estimated target position and $x_k^i(1{:}2)$ is the true target position at time $kT$ in the $i$th run. The RMSE and BNorm of the target velocity estimate are
$$\text{RMSE}_k^{vel} = \sqrt{ \frac{1}{M_0} \sum_{i=1}^{M_0} \big\| \hat{x}_{k|k}^i(3{:}4) - x_k^i(3{:}4) \big\|^2 },$$
$$\text{BNorm}_k^{vel} = \bigg\| \frac{1}{M_0} \sum_{i=1}^{M_0} \big( \hat{x}_{k|k}^i(3{:}4) - x_k^i(3{:}4) \big) \bigg\|,$$
where $\hat{x}_{k|k}^i(3{:}4)$ is the estimated target velocity and $x_k^i(3{:}4)$ is the true target velocity at time $kT$ in the $i$th run.
Similarly, the time-averaged RMSE and BNorm of the target position and velocity estimates are
$$\text{RMSE}_{avg}^{pos} = \sqrt{ \frac{1}{M_0 B} \sum_{i=1}^{M_0} \sum_{k=L_0}^{N} \big\| \hat{x}_{k|k}^i(1{:}2) - x_k^i(1{:}2) \big\|^2 },$$
$$\text{BNorm}_{avg}^{pos} = \frac{1}{B} \sum_{k=L_0}^{N} \bigg\| \frac{1}{M_0} \sum_{i=1}^{M_0} \big( \hat{x}_{k|k}^i(1{:}2) - x_k^i(1{:}2) \big) \bigg\|,$$
$$\text{RMSE}_{avg}^{vel} = \sqrt{ \frac{1}{M_0 B} \sum_{i=1}^{M_0} \sum_{k=L_0}^{N} \big\| \hat{x}_{k|k}^i(3{:}4) - x_k^i(3{:}4) \big\|^2 },$$
$$\text{BNorm}_{avg}^{vel} = \frac{1}{B} \sum_{k=L_0}^{N} \bigg\| \frac{1}{M_0} \sum_{i=1}^{M_0} \big( \hat{x}_{k|k}^i(3{:}4) - x_k^i(3{:}4) \big) \bigg\|.$$
Here, $B = N - L_0 + 1$ with $L_0 = 50$, where $L_0$ is an offset parameter that excludes the initial tracking errors from the time-averaged metrics.
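These metrics can be computed directly from the stacked Monte Carlo estimates. The following sketch assumes arrays of shape $(M_0, N, 4)$; the array layout and names are ours.

```python
import numpy as np

def position_metrics(x_est, x_true, L0=50):
    """Monte Carlo RMSE and bias-norm curves for the position sub-state.

    x_est, x_true : arrays of shape (M0, N, 4) holding the estimated and true
                    states for M0 runs and N time scans.
    Returns per-scan RMSE_k, BNorm_k and their time averages over k = L0..N.
    """
    err = x_est[:, :, :2] - x_true[:, :, :2]                       # position errors, (M0, N, 2)
    rmse_k = np.sqrt(np.mean(np.sum(err ** 2, axis=-1), axis=0))   # RMSE at each scan, (N,)
    bnorm_k = np.linalg.norm(np.mean(err, axis=0), axis=-1)        # norm of the mean error, (N,)
    rmse_avg = np.sqrt(np.mean(np.sum(err[:, L0 - 1:, :] ** 2, axis=-1)))
    bnorm_avg = np.mean(bnorm_k[L0 - 1:])
    return rmse_k, bnorm_k, rmse_avg, bnorm_avg
```

The velocity metrics follow the same pattern using state components 3 and 4.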

5.2. Simulation Parameters

In order to objectively compare the performance of the constrained PL-MMSE with the other algorithms, we adopt the same sensor trajectory as in [24], as shown in Figure 2. The sensor trajectory is divided into five constant velocity segments, with the segment waypoints set to $[\,60 \ \ 0\,]^T$ m, $[\,0 \ \ 7.5\,]^T$ m, $[\,60 \ \ 15\,]^T$ m, $[\,0 \ \ 22.5\,]^T$ m, $[\,60 \ \ 30\,]^T$ m and $[\,0 \ \ 77.5\,]^T$ m. Starting from the initial position $r_0 = [\,60 \ \ 0\,]^T$ m, the sensor takes a bearing measurement at every sampling interval $T = 0.1$ s. The bearing noise and the process noise are Gaussian mixture noises with zero mean. The estimated initial state $\hat{x}_{1|1}$ is sampled around the true initial state $x_1$ from a Gaussian mixture distribution with initial covariance $P_{1|1}$.

5.3. Simulation Scenarios

The designed algorithms are tested in two scenarios in this subsection. As observed in Figure 3a, the target moves on a straight line in the first scenario, while in the second scenario the target moves on an arc, as shown in Figure 3b, with nearly constant velocity magnitude $V$.
Given the known direction angle $\theta$ of the vehicle, the constraint matrix $G$ and the vector $g$ are
$$G = \begin{bmatrix} 1 & -\tan\theta & 0 & 0 \\ 0 & 0 & 1 & -\tan\theta \end{bmatrix}, \qquad g = [\,0 \ \ 0\,]^T.$$
The constrained estimate can be produced by setting $W = I$ or $W = P^{-1}$. In the simulation, the sampling interval $T$ is set to 0.1 s and the total time span is 20 s. The angle $\theta$ is set to $\pi/4$ and the velocity magnitude $V$ to 12 m/s. The target initial position is $[\,0 \ \ 0\,]^T$ m and the true initial state is $x_1 = [\,0 \ \ 0 \ \ 6\sqrt{2} \ \ 6\sqrt{2}\,]^T$. The initial covariance matrix is $P_{1|1} = \text{diag}([\,1 \ \ 1 \ \ 0.01 \ \ 0.01\,])$. In addition, the distributions of the process noise $\omega_{k-1}$ and the bearing noise $e_k$ are given by
$$\omega_{k-1} \sim \lambda \mathcal{N}(\mu_{x1}, Q_1) + (1 - \lambda) \mathcal{N}(\mu_{x2}, Q_2),$$
$$e_k \sim \rho^2 \lambda \mathcal{N}(\mu_{z1}, R_1) + (1 - \lambda) \mathcal{N}(\mu_{z2}, R_2),$$
respectively, where $\mu_{x1}^T = [\,0 \ \ 0 \ \ 0 \ \ 0\,]$, $\mu_{x2}^T = [\,0 \ \ 0 \ \ 0 \ \ 0\,]$, $\mu_{z1} = 0$, $\mu_{z2} = 0$, $Q_1 = \text{diag}([\,0 \ \ 0 \ \ 0.15 \ \ 0.15\,])$, $Q_2 = \text{diag}([\,0 \ \ 0 \ \ 0.23 \ \ 0.23\,])$, $R_1 = 0.015$, $R_2 = 0.019$ and $\lambda = 0.4$. The variable $\rho$ controls the magnitude of the bearing noise $e_k$. By varying $\rho$ from 1 to 10 in steps of 1, Table 4 lists the standard deviation $\sigma_\theta$ of the bearing noise for each $\rho$.
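As an illustration of how the two-component Gaussian mixture noises can be drawn, the sketch below samples the process noise and the bearing noise with the parameters listed above. The helper is ours, and the additional scaling of the bearing noise by the factor $\rho$ used in the paper is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(means, covs, weights, size, rng=rng):
    """Draw samples from the Gaussian mixture sum_j weights[j] * N(means[j], covs[j])."""
    comp = rng.choice(len(weights), size=size, p=weights)
    return np.array([rng.multivariate_normal(means[j], covs[j]) for j in comp])

# Two-component process noise mixture of the first scenario (lambda = 0.4)
lam = 0.4
Q1 = np.diag([0.0, 0.0, 0.15, 0.15])
Q2 = np.diag([0.0, 0.0, 0.23, 0.23])
omega = sample_mixture([np.zeros(4), np.zeros(4)], [Q1, Q2], [lam, 1 - lam], size=200)

# Scalar bearing noise from the mixture lam*N(0, R1) + (1 - lam)*N(0, R2)
R1, R2 = 0.015, 0.019
comp = rng.choice(2, size=200, p=[lam, 1 - lam])
e_k = rng.normal(0.0, np.sqrt(np.where(comp == 0, R1, R2)))
```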
Simulation results of the mean RMSEs and BNorms of the target position and velocity estimates against σ θ are presented in Figure 4.
It is noticeable in Figure 4 that the performance metric values of all algorithms increase and finally tend to stabilize as $\sigma_\theta$ grows, and the performance of the PL-MMSE is always better than that of the PLKF, BC-PLKF, and IVKF. The $\text{RMSE}_{avg}^{pos}$ of the PL-MMSE stabilizes at 2.5 m at a large bearing noise level, which significantly outperforms the other unconstrained algorithms. The evolution of the RMSEs and BNorms of the target position and velocity estimates over time $kT$ ($k = 1, \ldots, 150$) for $\sigma_\theta = 7^\circ$ is shown in Figure 5.
The tracking performance of all algorithms gradually deteriorates as the scan index increases under a large bearing noise level. The metric values of the PL-MMSE gradually approach $\text{RMSE}_k^{pos} = 3.11$ m, $\text{RMSE}_k^{vel} = 0.147$ m/s, $\text{BNorm}_k^{pos} = 0.053$ m, and $\text{BNorm}_k^{vel} = 0.011$ m/s, which are remarkably lower than those of the other unconstrained algorithms. It can also be observed in Figure 4 and Figure 5 that the constrained PL-MMSE is superior to the unconstrained PL-MMSE at all bearing noise levels, which indicates that the constrained algorithm has better robustness and tracking performance. Comparisons of the four algorithms combined with the mean square method and with the estimation projection method for $\sigma_\theta = 7^\circ$ are presented in Figure 6 and Figure 7, respectively, which demonstrate that PL-MMSE-C ($W = I$) and PL-MMSE-C ($W = P^{-1}$) have smaller errors than the other corresponding constrained algorithms at a large bearing noise level.
Table 5 shows the time-averaged RMSEs and BNorms of the different filters for $\sigma_\theta = 7^\circ$.
The RMSE performance of PL-MMSE-C ($W = I$) is $\text{RMSE}_{avg}^{pos} = 1.781$ m and $\text{RMSE}_{avg}^{vel} = 0.103$ m/s at the large bearing noise level $\sigma_\theta = 7^\circ$, which is better than the other constrained algorithms. Similarly, the BNorm performance of PL-MMSE-C ($W = I$), with $\text{BNorm}_{avg}^{pos} = 0.084$ m and $\text{BNorm}_{avg}^{vel} = 0.001$ m/s, is better than the others. The numerical results of PL-MMSE-C ($W = P^{-1}$) are similar to those of PL-MMSE-C ($W = I$), as observed from the table. From Figure 6, Figure 7 and Table 5, the filters combined with constraints achieve better performance. As shown in Table 5, the constrained algorithms with $W = P^{-1}$ are not necessarily better than the corresponding filters with $W = I$: $\text{RMSE}_{avg}^{pos} = 2.537$ m of IVKF-C ($W = I$) is less than $\text{RMSE}_{avg}^{pos} = 2.842$ m of IVKF-C ($W = P^{-1}$), while $\text{RMSE}_{avg}^{pos} = 2.890$ m of BC-PLKF-C ($W = I$) is greater than $\text{RMSE}_{avg}^{pos} = 2.883$ m of BC-PLKF-C ($W = P^{-1}$). This difference is caused by the discrepancy between the actual error distribution of the state estimate $\hat{x}$ and the covariance $P$. When the error distribution is close to $P$, the constrained algorithms with $W = P^{-1}$ behave better than the corresponding ones with $W = I$.
In the second scenario, the target moves on an arc, as shown in Figure 3b, with a nearly constant velocity magnitude $V$. The turning center is $(R_x, R_y)$ with radius $R$. Hence, the constraint function $h(\cdot)$, the vector $g$, the constraint matrix $M$, the vector $m$ and the scalar $m_0$ are given by
$$h(x) = (x(1) - R_x)^2 + (x(2) - R_y)^2, \qquad g = R^2,$$
$$M = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad m = [\, -R_x \ \ -R_y \,]^T, \qquad m_0 = R_x^2 + R_y^2 - R^2.$$
In addition, a velocity constraint is introduced into the state estimation. The constrained velocity estimate $\tilde{v}$ is
$$\tilde{v} = ( \hat{v}^T \mu )\, \mu,$$
where the unconstrained velocity estimate $\hat{v}$ and the constrained unit direction vector $\mu$ are
$$\hat{v} = [\, \hat{x}(3) \ \ \hat{x}(4) \,]^T, \qquad \mu = [\, \sin\theta \ \ \cos\theta \,]^T$$
with $\theta = \tan^{-1}\big( \hat{x}(2) / \hat{x}(1) \big)$.
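The quadric quantities and the velocity projection for the arc road can be set up as in the sketch below. The helper names are ours, and the road direction is passed in directly rather than built from $\theta$ as in the paper.

```python
import numpy as np

def arc_constraint(Rx, Ry, R):
    """Quadric form x^T M x + 2 m^T x + m0 = 0 describing the circular road."""
    M = np.eye(2)
    m = np.array([-Rx, -Ry])
    m0 = Rx ** 2 + Ry ** 2 - R ** 2
    return M, m, m0

def constrain_velocity(v_hat, direction):
    """Project the velocity estimate onto the unit road direction: v_tilde = (v_hat . mu) mu."""
    mu = direction / np.linalg.norm(direction)
    return (v_hat @ mu) * mu

# Sanity check: a point on the circle satisfies the quadric constraint exactly.
Rx, Ry, R = 100.0, 0.0, 100.0
M, m, m0 = arc_constraint(Rx, Ry, R)
x_on_circle = np.array([Rx + R * np.cos(0.3), Ry + R * np.sin(0.3)])
print(x_on_circle @ M @ x_on_circle + 2 * m @ x_on_circle + m0)   # ~0

# Velocity estimate pulled onto the local road direction (the local tangent of the circle)
v_hat = np.array([0.02, 0.21])
v_tilde = constrain_velocity(v_hat, direction=np.array([-np.sin(0.3), np.cos(0.3)]))
```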
In the simulation, the sampling interval $T$ is set to 0.1 s and the total time span is 20 s. The turning center $(R_x, R_y)$ is set to $(100, 0)$ m and the turning radius $R$ is 100 m. The initial position of the target is $[\,200 \ \ 0\,]^T$ m. To compare the linear approximation with the second-order approximation method, two experiments are carried out, in which the magnitude of $V$ is $10^{-4}$ m/s and 0.2 m/s, respectively.
For $V = 10^{-4}$ m/s, the true initial state is $x_1 = [\,200 \ \ 0 \ \ 0 \ \ 10^{-4}\,]^T$ and the initial covariance matrix is $P_{1|1} = \text{diag}([\,10^{-8} \ \ 10^{-8} \ \ 10^{-10} \ \ 10^{-10}\,])$. The process noise $\omega_{k-1}$ and the bearing noise $e_k$ have the same composition as (74) and (75), with $\mu_{x1}^T = [\,0 \ \ 0 \ \ 0 \ \ 0\,]$, $\mu_{x2}^T = [\,0 \ \ 0 \ \ 0 \ \ 0\,]$, $\mu_{z1} = 0$, $\mu_{z2} = 0$, $Q_1 = \text{diag}([\,0 \ \ 0 \ \ 10^{-9} \ \ 10^{-9}\,])$, $Q_2 = \text{diag}([\,0 \ \ 0 \ \ 2 \times 10^{-9} \ \ 2 \times 10^{-9}\,])$, $R_1 = 1.5 \times 10^{-12}$, $R_2 = 1.909 \times 10^{-12}$ and $\lambda = 0.4$. The standard deviation $\sigma_\theta$ of the bearing noise for the corresponding $\rho$ is set as shown in Table 6.
Simulation results for the time-averaged RMSEs and BNorms of the target position and velocity estimates against the bearing noise standard deviation are shown in Figure 8, where the PL-MMSE has lower errors in the RMSE performance, with $\text{RMSE}_{avg}^{pos} = 2.526 \times 10^{-4}$ m and $\text{RMSE}_{avg}^{vel} = 1.783 \times 10^{-5}$ m/s.
The evolution of the RMSEs and BNorms of the target position and velocity estimates over time $kT$ for $\sigma_\theta = 7$ at $V = 10^{-4}$ m/s is presented in Figure 9.
It is noticeable that the curve trends in Figure 8 and Figure 9 are similar to those in Figure 4 and Figure 5 because of the weak model nonlinearity caused by the small velocity. The performance metric values of the PL-MMSE gradually approach $\text{RMSE}_k^{pos} = 3.099 \times 10^{-4}$ m, $\text{RMSE}_k^{vel} = 2.053 \times 10^{-5}$ m/s, $\text{BNorm}_k^{pos} = 5.774 \times 10^{-5}$ m and $\text{BNorm}_k^{vel} = 3.497 \times 10^{-6}$ m/s, which are superior to those of the other unconstrained algorithms in Figure 9. It is also demonstrated in Figure 8 and Figure 9 that the constrained PL-MMSE is better than the unconstrained PL-MMSE and the other constrained filters at all bearing noise levels on the arc section. Performance comparisons of the four algorithms combined with the linear approximation method and with the second-order approximation method for $\sigma_\theta = 7$ at $V = 10^{-4}$ m/s are shown in Figure 10 and Figure 11, respectively, which show that PL-MMSE-L and PL-MMSE-S have smaller errors than the other constrained algorithms at a large bearing noise level.
Table 7 presents the time-averaged RMSEs and BNorms of the different filters for $\sigma_\theta = 7$ at $V = 10^{-4}$ m/s. The RMSE and BNorm performance of PL-MMSE-L is $\text{RMSE}_{avg}^{pos} = 1.579 \times 10^{-4}$ m, $\text{RMSE}_{avg}^{vel} = 1.415 \times 10^{-5}$ m/s, $\text{BNorm}_{avg}^{pos} = 1.518 \times 10^{-5}$ m and $\text{BNorm}_{avg}^{vel} = 1.590 \times 10^{-6}$ m/s. The PL-MMSE combined with the linear approximation method is better than with the second-order approximation method, since $\text{BNorm}_{avg}^{vel} = 1.708 \times 10^{-6}$ m/s of PL-MMSE-S is greater than $\text{BNorm}_{avg}^{vel} = 1.590 \times 10^{-6}$ m/s of PL-MMSE-L in Table 7, owing to the weak nonlinearity.
For $V = 0.2$ m/s, the true initial state is $x_1 = [\,200 \ \ 0 \ \ 0 \ \ 0.2\,]^T$ and the initial covariance matrix is $P_{1|1} = \text{diag}([\,10^{-3} \ \ 10^{-3} \ \ 10^{-4} \ \ 10^{-4}\,])$. The composition of the process noise $\omega_{k-1}$ and the bearing noise $e_k$ is the same as (74) and (75), where $\mu_{x1}^T = [\,0 \ \ 0 \ \ 0 \ \ 0\,]$, $\mu_{x2}^T = [\,0 \ \ 0 \ \ 0 \ \ 0\,]$, $\mu_{z1} = 0$, $\mu_{z2} = 0$, $Q_1 = \text{diag}([\,0 \ \ 0 \ \ 0.01 \ \ 0.01\,])$, $Q_2 = \text{diag}([\,0 \ \ 0 \ \ 0.02 \ \ 0.02\,])$, $R_1 = 1.5 \times 10^{-3}$, $R_2 = 1.909 \times 10^{-3}$ and $\lambda = 0.4$. Table 8 presents the standard deviation $\sigma_\theta$ of the bearing noise for the corresponding $\rho$.
Simulation results of the time-averaged RMSEs and BNorms of the target position and velocity estimates against σ θ are presented in Figure 12.
The performance of the PL-MMSE and the constrained PL-MMSE is better than that of the other filters, where $\text{RMSE}_{avg}^{pos} = 0.3346$ m of the PL-MMSE and $\text{RMSE}_{avg}^{pos} = 0.2954$ m of PL-MMSE-S are lower than those of the other algorithms for $\sigma_\theta = 3$. The evolution of the RMSEs and BNorms of the target position and velocity estimates over time $kT$ for $\sigma_\theta = 7$ at $V = 0.2$ m/s is shown in Figure 13.
It is evident that the constrained PL-MMSE has more stable and accurate tracking performance at large bearing noise levels on the arc section. Comparisons of the four algorithms combined with the linear approximation method and with the second-order approximation method for $\sigma_\theta = 7$ at $V = 0.2$ m/s are provided in Figure 14 and Figure 15, respectively. The time-averaged RMSEs and BNorms of the different filters for $\sigma_\theta = 7$ are presented in Table 9. It is remarkable that the constrained PL-MMSE has smaller errors than the other filters, with $\text{RMSE}_{avg}^{pos}$ of both PL-MMSE-L and PL-MMSE-S equal to 0.176 m. Contrary to the first experiment with $V = 10^{-4}$ m/s, the algorithms combined with the second-order approximation method perform better than those combined with the linear approximation method in Table 9, where $\text{RMSE}_{avg}^{pos}$ of PL-MMSE-S is less than that of PL-MMSE-L for $\sigma_\theta = 7$ at the scale of $10^{-6}$ because of the stronger nonlinearity.
It is worthwhile to point out that, in Figure 12, the errors of the filters are basically unchanged and even decrease slightly as the noise level rises when the velocity is relatively large. The cause of this phenomenon is the error introduced into the state update equation by linearizing the arc movement. When $V = 0.2$ m/s, the error from the state update equation becomes the main source of the filter estimation error, which relatively reduces the effect of the bearing noise and breaks the usual trend of average algorithm performance.
Remark 2.
It is noticeable that the magnitudes of the velocity in the second scenario are both small. There are two reasons for this setting. Firstly, the PL-MMSE is still a linear Kalman filter under linear and nonlinear constraints, so there is a discrepancy between the TMA result from the linear motion model (9) and the actual arc trajectory on the circular section; the faster the target moves within a sampling interval, the greater the discrepancy. Secondly, the sampling frequency of a real sensor is much higher than in our experiment. For example, the sampling frequency of a radar is generally between 1 and 15 GHz. When the sampling frequency increases, the relative speed of the target rises proportionally to maintain the same traveled distance per interval. Hence, if we set the sampling time to $T = 10^{-5}$ s rather than 0.1 s as in the simulation, the relative speeds of the target become $V = 1$ m/s and 2000 m/s, respectively, which are quite common in practice. This transformation of the sampling frequency and target velocity to a real radar setting demonstrates that the setting is meaningful.
Since the distance the target moves in an interval is small, the corresponding RMSEs are low. Nevertheless, simulation results show that the RMSEs of our constrained algorithms are much smaller than those of other filters.

6. Conclusions and Future Works

In this paper, we propose a new pseudolinear Kalman filtering method for TMA with available state constraints by combining the PL-MMSE and state constraints. The mean square and estimation projection methods are combined with the PL-MMSE to address the linearly constrained state estimation problem, and the linear approximation and second-order approximation methods are used to refine the PL-MMSE estimates under nonlinear constraints. The merged algorithms can effectively solve the bearings-only TMA problem under Gaussian mixture noise. Simulations show that the constrained PL-MMSE has better performance than the other filters. In particular, when the target velocity is small, the algorithms combined with the linear approximation method perform better than those combined with the second-order approximation method on the circular road, and it is the opposite when the velocity is larger.
Analyzing the statistical properties of the PL-MMSE with state constraints will be one topic of our future research. Applying the designed algorithms in actual engineering practice is another direction we will pursue.

7. Patents

Yiqun Zou, Shuang Zou. A target tracking and positioning method, system, device and readable storage medium: ZL202110038824.8[P].2021.1.12.

Author Contributions

Conceptualization, M.L.; methodology, M.L.; software, M.L.; validation, Y.Z., M.L. and X.T.; formal analysis, Y.Z.; investigation, M.L.; resources, Y.Z.; simulation, M.L.; writing—original draft preparation, M.L.; writing—review and editing, M.L. and Y.Z.; visualization, X.T.; supervision, Q.Z.; project administration, X.T.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (NSFC) [grant 61403427] and the Hunan Provincial Natural Science Foundation of China [projects 2020JJ5585 and 2020JJ5777].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank Shuang Zou for his valuable suggestions on this paper.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
TMA	Target motion analysis
PLKF	Pseudolinear Kalman filter
PL-MMSE	Pseudolinear Kalman filter under the minimum mean square error framework
AOA	Angle of arrival
BC-PLKF	Bias-compensated PLKF
IVKF	Instrumental variable (IV) Kalman filter
SAM-IVKF	IVKF based on selective-angle-measurement
2D	Two-dimensional
RMSEs	Root mean square errors
BNorms	Bias norms

References

  1. Li, X.R.; Jilkov, V.P. Survey of maneuvering target tracking. Part I. Dynamic models. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 1333–1364. [Google Scholar]
  2. Li, X.R.; Jilkov, V.P. Survey of maneuvering target tracking: III. Measurement models. In Proceedings of the Signal and Data Processing of Small Targets 2001, San Diego, CA, USA, 29 July–3 August 2001; SPIE: Bellingham, WA, USA, 2001; Volume 4473, pp. 423–446. [Google Scholar]
  3. Yi, W.; Yuan, Y.; Hoseinnezhad, R.; Kong, L. Resource scheduling for distributed multi-target tracking in netted colocated MIMO radar systems. IEEE Trans. Signal Process. 2020, 68, 1602–1617. [Google Scholar] [CrossRef]
  4. Genc, H.; Hocaoglu, A. Bearing-only target tracking based on big bang–big crunch algorithm. In Proceedings of the 2008 The Third International Multi-Conference on Computing in the Global Information Technology (ICCGI 2008), Athens, Greece, 27 July–1 August 2008; pp. 229–233. [Google Scholar]
  5. Alexandri, T.; Walter, M.; Diamant, R. A Time Difference of Arrival Based Target Motion Analysis for Localization of Underwater Vehicles. IEEE Trans. Veh. Technol. 2021, 71, 326–338. [Google Scholar] [CrossRef]
  6. Guo, F.C.; Li, T. Passive localization method and its precision analysis based on TDOA and FDOA of fixed sensors. Syst. Eng. Electron. 2011, 33, 1954–1958. [Google Scholar]
  7. Doğançay, K.; Hashemi-Sakhtsari, A. Target tracking by time difference of arrival using recursive smoothing. Signal Process. 2005, 85, 667–679. [Google Scholar] [CrossRef]
  8. Wang, Z.; Luo, J.A.; Zhang, X.P. A novel location-penalized maximum likelihood estimator for bearing-only target localization. IEEE Trans. Signal Process. 2012, 60, 6166–6181. [Google Scholar] [CrossRef]
  9. Nguyen, N.H. Optimal geometry analysis for target localization with bayesian priors. IEEE Access 2021, 9, 33419–33437. [Google Scholar] [CrossRef]
  10. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef] [Green Version]
  11. Schmidt, S.F. Application of state-space methods to navigation problems. In Advances in Control Systems; Elsevier: Amsterdam, The Netherlands, 1966; Volume 3, pp. 293–340. [Google Scholar]
  12. Julier, S.J.; Uhlmann, J.K.; Durrant-Whyte, H.F. A new approach for filtering nonlinear systems. In Proceedings of the 1995 American Control Conference—ACC’95, Seattle, WA, USA, 21–23 June 1995; Volume 3, pp. 1628–1632. [Google Scholar]
  13. Julier, S.J.; Uhlmann, J.K. New extension of the Kalman filter to nonlinear systems. In Proceedings of the Signal Processing, Sensor Fusion, and Target Recognition VI, Orlando, FL, USA, 28 July 1997; SPIE: Bellingham, WA, USA, 1997; Volume 3068, pp. 182–193. [Google Scholar]
  14. Karlsson, R.; Gustafsson, F. Recursive Bayesian estimation: Bearings-only applications. IEEE Proc. Radar Sonar Navig. 2005, 152, 305–313. [Google Scholar] [CrossRef] [Green Version]
  15. Chang, D.C.; Fang, M.W. Bearing-only maneuvering mobile tracking with nonlinear filtering algorithms in wireless sensor networks. IEEE Syst. J. 2013, 8, 160–170. [Google Scholar] [CrossRef]
  16. Hong, S.; Shi, Z.; Chen, K. Novel roughening algorithm and hardware architecture for bearings-only tracking using particle filter. J. Electromagn. Waves Appl. 2008, 22, 411–422. [Google Scholar] [CrossRef]
  17. Zheng, Y.; Wang, M. An initial value optimization method of bearings-only target tracking based on backward smoothing. Ship Sci. Technol. 2020, 42, 140–147. [Google Scholar]
  18. Zou, Y.; Gao, B.; Tang, X.; Yu, L. Target Localization and Sensor Movement Trajectory Planning with Bearing-Only Measurements in Three Dimensional Space. Appl. Sci. 2022, 12, 6739. [Google Scholar] [CrossRef]
  19. Aidala, V.J. Kalman filter behavior in bearings-only tracking applications. IEEE Trans. Aerosp. Electron. Syst. 1979, 15, 29–39. [Google Scholar] [CrossRef]
  20. Aidala, V.J.; Nardone, S.C. Biased estimation properties of the pseudolinear tracking filter. IEEE Trans. Aerosp. Electron. Syst. 1982, 18, 432–441. [Google Scholar] [CrossRef]
  21. Song, T.; Speyer, J. A stochastic analysis of a modified gain extended Kalman filter with applications to estimation with bearings only measurements. IEEE Trans. Autom. Control 1985, 30, 940–949. [Google Scholar] [CrossRef]
  22. Holtsberg, A.; Holst, J. A nearly unbiased inherently stable bearings-only tracker. IEEE J. Ocean. Eng. 1993, 18, 138–141. [Google Scholar] [CrossRef]
  23. Nguyen, N.H.; Doğançay, K. Improved pseudolinear Kalman filter algorithms for bearings-only target tracking. IEEE Trans. Signal Process. 2017, 65, 6119–6134. [Google Scholar] [CrossRef]
  24. Lindgren, A. Properties of a nonlinear estimator for determining position and velocity from angle-of-arrival measurements. In Proceedings of the 14th Asilomar Conference on Circuits, Systems, and Computers, Pacific Grove, CA, USA, 4–6 November 1980. [Google Scholar]
  25. Doğançay, K.; Arablouei, R. Selective angle measurements for a 3D-AOA instrumental variable TMA algorithm. In Proceedings of the 2015 23rd European Signal Processing Conference (EUSIPCO), Nice, France, 31 August–4 September 2015; pp. 195–199. [Google Scholar]
  26. Bu, S.; Meng, A.; Zhou, G. A New Pseudolinear Filter for Bearings-Only Tracking without Requirement of Bias Compensation. Sensors 2021, 21, 5444. [Google Scholar] [CrossRef]
  27. Yang, C.; Bakich, M.; Blasch, E. Nonlinear constrained tracking of targets on roads. In Proceedings of the 2005 7th International Conference on Information Fusion, Philadelphia, PA, USA, 25–28 July 2005; Volume 1, p. 8. [Google Scholar]
  28. Simon, D. Kalman filtering with state constraints: A survey of linear and nonlinear algorithms. IET Control. Theory Appl. 2010, 4, 1303–1318. [Google Scholar] [CrossRef] [Green Version]
  29. Teixeira, B.O.; Chandrasekar, J.; Tôrres, L.A.; Aguirre, L.A.; Bernstein, D.S. State estimation for linear and non-linear equality-constrained systems. Int. J. Control 2009, 82, 918–936. [Google Scholar] [CrossRef]
  30. Simon, D.; Simon, D.L. Kalman filtering with inequality constraints for turbofan engine health estimation. IEE Proc.-Control Theory Appl. 2006, 153, 371–378. [Google Scholar] [CrossRef] [Green Version]
  31. Wen, W.; Durrant-Whyte, H.F. Model-based multi-sensor data fusion. In Proceedings of the 1992 IEEE International Conference on Robotics and Automation, Nice, France, 12–14 May 1992; pp. 1720–1726. [Google Scholar]
  32. Alouani, A.T.; Blair, W.D. Use of a kinematic constraint in tracking constant speed, maneuvering targets. IEEE Trans. Autom. Control 1993, 38, 1107–1111. [Google Scholar] [CrossRef]
  33. Simon, D.; Chia, T.L. Kalman filtering with state equality constraints. IEEE Trans. Aerosp. Electron. Syst. 2002, 38, 128–136. [Google Scholar] [CrossRef] [Green Version]
  34. Yang, C.; Blasch, E. Kalman filtering with nonlinear state constraints. IEEE Trans. Aerosp. Electron. Syst. 2009, 45, 70–84. [Google Scholar] [CrossRef] [Green Version]
  35. Yang, C.; Blasch, E. Fusion of Tracks with Road Constraints; Technical Report; Air Force Research Lab: Wright-Patterson Air Force Base, OH, USA, 2008. [Google Scholar]
  36. Nguyen, N.H.; Doğançay, K.; Kuruoğlu, E.E. An iteratively reweighted instrumental-variable estimator for robust 3-D AOA localization in impulsive noise. IEEE Trans. Signal Process. 2019, 67, 4795–4808. [Google Scholar] [CrossRef]
  37. Stein, D.W. Detection of random signals in Gaussian mixture noise. IEEE Trans. Inf. Theory 1995, 41, 1788–1801. [Google Scholar] [CrossRef]
  38. Somha, W.; Yamauchi, H.; Zhang, Y. Fitting Mixtures of Gaussians to Heavy-Tail Distributions to Analyze Fail-Bit Probability of Nano-Scaled Static Random Access Memory. Adv. Mater. Res. Trans. Tech. Publ. 2013, 677, 317–325. [Google Scholar] [CrossRef]
  39. Bar-Shalom, Y.; Li, X.R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation: Theory Algorithms and Software; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  40. Süli, E.; Mayers, D.F. An Introduction to Numerical Analysis; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
Figure 1. Schematic diagram of 2D bearing-only TMA.
Figure 2. Sensor trajectory with five constant velocity segments and the initial position marked by a star.
Figure 3. (a) Target travels in a straight line along the direction $\theta$; (b) target travels along the circular road with the turning center $(R_x, R_y)$ and the radius $R$.
Figure 4. Time-averaged RMSEs, BNorms and bearing noise standard deviation for the PL-MMSE, PLKF, BC-PLKF and IVKF algorithms, as well as the PL-MMSE algorithm with linear constraints proposed in the paper.
Figure 5. RMSEs and BNorms at different time scans for $\sigma_\theta = 7^\circ$ for the PL-MMSE, PLKF, BC-PLKF and IVKF algorithms, as well as the proposed PL-MMSE algorithm with linear constraints.
Figure 6. Time-averaged RMSEs, BNorms and bearing noise standard deviation for the four algorithms combined with the mean square method.
Figure 7. Time-averaged RMSEs, BNorms and bearing noise standard deviation for the four algorithms combined with the estimation projection method.
Figure 8. Time-averaged RMSEs, BNorms and bearing noise standard deviation at $V = 10^{-4}$ m/s for the PL-MMSE, PLKF, BC-PLKF and IVKF algorithms, as well as the PL-MMSE algorithm with nonlinear constraints proposed in the paper.
Figure 9. RMSEs and BNorms against time $kT$ for $\sigma_\theta = 7$ at $V = 10^{-4}$ m/s for the PL-MMSE, PLKF, BC-PLKF and IVKF algorithms, as well as the proposed PL-MMSE algorithm with nonlinear constraints.
Figure 10. Time-averaged RMSEs, BNorms and bearing noise standard deviation at $V = 10^{-4}$ m/s for the four algorithms combined with the linear approximation method.
Figure 11. Time-averaged RMSEs, BNorms and bearing noise standard deviation at $V = 10^{-4}$ m/s for the four algorithms combined with the second-order approximation method.
Figure 12. Time-averaged RMSEs, BNorms and bearing noise standard deviation at $V = 0.2$ m/s for the PL-MMSE, PLKF, BC-PLKF and IVKF algorithms, as well as the PL-MMSE algorithm with nonlinear constraints proposed in the paper.
Figure 13. RMSEs and BNorms against time $kT$ for $\sigma_\theta = 7$ at $V = 0.2$ m/s for the PL-MMSE, PLKF, BC-PLKF and IVKF algorithms, as well as the proposed PL-MMSE algorithm with nonlinear constraints.
Figure 14. Time-averaged RMSEs, BNorms and bearing noise standard deviation at $V = 0.2$ m/s for the four algorithms combined with the linear approximation method.
Figure 15. Time-averaged RMSEs, BNorms and bearing noise standard deviation at $V = 0.2$ m/s for the four algorithms combined with the second-order approximation method.
Table 1. PL-MMSE under Gaussian mixture noise.
1. Initialization: $\hat{x}_0 = \bar{x}_0$, $P_0 = E\big[ (x_0 - \bar{x}_0)(x_0 - \bar{x}_0)^T \big]$.
2. State prediction: $\hat{x}_{k|k-1} = F_{k-1} \hat{x}_{k-1}$.
3. Covariance prediction: $P_{k|k-1} = F_{k-1} P_{k-1|k-1} F_{k-1}^T + D Q_k D^T$.
4. Filter gain: $K_1 = \sum_{j=1}^{n} \lambda_j e^{-\sigma_j^2/2}\, P_{k|k-1} H_{k-1}^T$, $K_2 = H_k P_{k|k-1} H_k^T + \sum_{i=1}^{n} \rho_i \frac{1 - e^{-2\xi_i^2}}{2} \big[ P_{k|k-1}(1,1) + P_{k|k-1}(2,2) \big] + R_k$, $K_k = K_1 K_2^{-1}$.
5. State update: $\hat{x}_k = \hat{x}_{k|k-1} + K_k ( z_k - H_k \hat{x}_{k|k-1} )$.
6. Covariance update: $P_{k|k} = P_{k|k-1} - K_k K_1^T$.
7. Set $k = k + 1$ and go to step 2.
Table 2. PL-MMSE with Linear Constraints Algorithm.
1. Initialization: $\hat{x}_0 = \bar{x}_0$, $P_0 = E\big[ (x_0 - \bar{x}_0)(x_0 - \bar{x}_0)^T \big]$.
2. Predict: $\hat{x}_{k|k-1} = F_{k-1} \hat{x}_{k-1}$, $P_{k|k-1} = F_{k-1} P_{k-1|k-1} F_{k-1}^T + D Q_k D^T$.
3. Filter gain: $K_1 = \sum_{j=1}^{n} \lambda_j e^{-\sigma_j^2/2}\, P_{k|k-1} H_{k-1}^T$, $K_2 = H_k P_{k|k-1} H_k^T + \sum_{i=1}^{n} \rho_i \frac{1 - e^{-2\xi_i^2}}{2} \big[ P_{k|k-1}(1,1) + P_{k|k-1}(2,2) \big] + R_k$, $K_k = K_1 K_2^{-1}$.
4. Update: $\hat{x}_k = \hat{x}_{k|k-1} + K_k ( z_k - H_k \hat{x}_{k|k-1} )$, $P_{k|k} = P_{k|k-1} - K_k K_1^T$.
5. For the linear constraint $G x_k = g$, project: $\tilde{x}_k = \hat{x}_{k|k} - W^{-1} G^T ( G W^{-1} G^T )^{-1} ( G \hat{x}_{k|k} - g )$.
6. Set $k = k + 1$ and go to step 2.
Table 3. PL-MMSE with Nonlinear Constraints Algorithm.
1. Initialization: $\hat{x}_0 = \bar{x}_0$, $P_0 = E\big[ (x_0 - \bar{x}_0)(x_0 - \bar{x}_0)^T \big]$.
2. Predict: $\hat{x}_{k|k-1} = F_{k-1} \hat{x}_{k-1}$, $P_{k|k-1} = F_{k-1} P_{k-1|k-1} F_{k-1}^T + D Q_k D^T$.
3. Filter gain: $K_1 = \sum_{j=1}^{n} \lambda_j e^{-\sigma_j^2/2}\, P_{k|k-1} H_{k-1}^T$, $K_2 = H_k P_{k|k-1} H_k^T + \sum_{i=1}^{n} \rho_i \frac{1 - e^{-2\xi_i^2}}{2} \big[ P_{k|k-1}(1,1) + P_{k|k-1}(2,2) \big] + R_k$, $K_k = K_1 K_2^{-1}$.
4. Update: $\hat{x}_k = \hat{x}_{k|k-1} + K_k ( z_k - H_k \hat{x}_{k|k-1} )$, $P_{k|k} = P_{k|k-1} - K_k K_1^T$.
5. For the nonlinear constraint $x_k^T M x_k + m^T x_k + x_k^T m + m_0 = 0$: find $\lambda$ with the Newton iteration (63), then $\tilde{x}_k = E^{-1} V ( I + \lambda_k \Sigma^T \Sigma )^{-1} e(\lambda_k)$.
6. Set $k = k + 1$ and go to step 2.
Table 4. Standard deviation $\sigma_\theta$ against $\rho$ in the first scenario.
$\rho$	1	2	3	4	5	6	7	8	9	10
$\sigma_\theta$ (°)	1	2	3	4	5	6	7	8	9	10
Table 5. $\text{RMSE}_{avg}^{pos}$, $\text{RMSE}_{avg}^{vel}$, $\text{BNorm}_{avg}^{pos}$ and $\text{BNorm}_{avg}^{vel}$ of different filters for $\sigma_\theta = 7^\circ$ at $V = 12$ m/s on the straight line.
Filter	$\text{RMSE}_{avg}^{pos}$ (m)	$\text{RMSE}_{avg}^{vel}$ (m/s)	$\text{BNorm}_{avg}^{pos}$ (m)	$\text{BNorm}_{avg}^{vel}$ (m/s)
PL-MMSE	2.294	0.140	0.139	0.011
PLKF	27.068	2.069	22.202	1.791
BC-PLKF	3.714	0.273	0.270	0.010
IVKF	3.648	0.299	0.219	0.021
PL-MMSE-C ($W = I$)	1.781	0.103	0.084	0.001
PL-MMSE-C ($W = P^{-1}$)	1.781	0.103	0.084	0.001
PLKF-C ($W = I$)	24.350	1.907	18.847	1.557
PLKF-C ($W = P^{-1}$)	23.027	1.804	17.739	1.458
BC-PLKF-C ($W = I$)	2.890	0.207	0.234	0.008
BC-PLKF-C ($W = P^{-1}$)	2.883	0.207	0.267	0.008
IVKF-C ($W = I$)	2.537	0.198	0.104	0.013
IVKF-C ($W = P^{-1}$)	2.842	0.204	0.097	0.016
Table 6. Standard deviation $\sigma_\theta$ against $\rho$ at $V = 10^{-4}$ m/s.
$\rho$	1	2	3	4	5	6	7	8	9	10
$\sigma_\theta$ (°, $\times 10^{-5}$)	1	2	3	4	5	6	7	8	9	10
Table 7. $\text{RMSE}_{avg}^{pos}$ ($\times 10^{-4}$ m), $\text{RMSE}_{avg}^{vel}$ ($\times 10^{-5}$ m/s), $\text{BNorm}_{avg}^{pos}$ ($\times 10^{-5}$ m) and $\text{BNorm}_{avg}^{vel}$ ($\times 10^{-6}$ m/s) of different filters for $\sigma_\theta = 7$ at $V = 10^{-4}$ m/s on the arc section.
Filter	$\text{RMSE}_{avg}^{pos}$	$\text{RMSE}_{avg}^{vel}$	$\text{BNorm}_{avg}^{pos}$	$\text{BNorm}_{avg}^{vel}$
PL-MMSE	2.263	1.726	3.528	3.344
PLKF	2.346	1.949	3.673	3.541
BC-PLKF	2.345	1.949	3.667	3.537
IVKF	2.345	1.949	3.667	3.537
PL-MMSE-L	1.579	1.415	1.518	1.590
PL-MMSE-S	1.579	1.408	1.518	1.708
PLKF-L	1.677	1.650	1.716	1.994
PLKF-S	1.677	1.650	1.716	1.994
BC-PLKF-L	1.677	1.672	1.716	1.794
BC-PLKF-S	1.677	1.650	1.716	1.993
IVKF-L	1.677	1.672	1.716	1.794
IVKF-S	1.677	1.650	1.716	1.993
Table 8. Standard deviation $\sigma_\theta$ against $\rho$ at $V = 0.2$ m/s.
$\rho$	1	2	3	4	5	6	7	8	9	10
$\sigma_\theta$ (°/10)	1	2	3	4	5	6	7	8	9	10
Table 9. $\text{RMSE}_{avg}^{pos}$, $\text{RMSE}_{avg}^{vel}$, $\text{BNorm}_{avg}^{pos}$ and $\text{BNorm}_{avg}^{vel}$ of different filters for $\sigma_\theta = 7$ at $V = 0.2$ m/s on the arc section.
Filter	$\text{RMSE}_{avg}^{pos}$ (m)	$\text{RMSE}_{avg}^{vel}$ (m/s)	$\text{BNorm}_{avg}^{pos}$ (m)	$\text{BNorm}_{avg}^{vel}$ (m/s)
PL-MMSE	0.224	0.017	0.025	0.004
PLKF	0.789	0.058	0.499	0.039
BC-PLKF	0.321	0.025	0.034	0.005
IVKF	0.326	0.025	0.028	0.004
PL-MMSE-L	0.176	0.013	0.011	0.001
PL-MMSE-S	0.176	0.013	0.011	0.001
PLKF-L	0.310	0.024	0.095	0.008
PLKF-S	0.310	0.024	0.095	0.008
BC-PLKF-L	0.271	0.021	0.016	0.001
BC-PLKF-S	0.271	0.021	0.016	0.001
IVKF-L	0.294	0.023	0.017	0.001
IVKF-S	0.294	0.023	0.017	0.001
