Article

Bayesian Cramér-Rao Lower Bounds for Prediction and Smoothing of Nonlinear TASD Systems

1 Center for Information Engineering Science Research, School of Automation Science and Engineering, Xi’an Jiaotong University, Xi’an 710049, China
2 Independent Consultant, Anacortes, WA 98221, USA
* Author to whom correspondence should be addressed.
Sensors 2022, 22(13), 4667; https://doi.org/10.3390/s22134667
Submission received: 18 April 2022 / Revised: 18 June 2022 / Accepted: 18 June 2022 / Published: 21 June 2022
(This article belongs to the Collection Multi-Sensor Information Fusion)

Abstract: The performance evaluation of state estimators for nonlinear regular systems, in which the current measurement only depends on the current state directly, has been widely studied using the Bayesian Cramér-Rao lower bound (BCRLB). However, in practice, the measurements of many nonlinear systems are two-adjacent-states dependent (TASD), i.e., the current measurement depends directly on the current state as well as the most recent previous state. In this paper, we first develop the recursive BCRLBs for the prediction and smoothing of nonlinear systems with TASD measurements. A comparison between the recursive BCRLBs for TASD systems and nonlinear regular systems is provided. Then, the recursive BCRLBs for the prediction and smoothing of two special types of TASD systems, in which the original measurement noises are autocorrelated or cross-correlated with the process noises at one time step apart, are presented, respectively. Illustrative examples in radar target tracking show the effectiveness of the proposed recursive BCRLBs for the prediction and smoothing of TASD systems.

1. Introduction

Filtering, prediction and smoothing have attracted wide attention in many engineering applications, such as target tracking [1,2], signal processing [3], sensor registration [4], econometric forecasting [5], localization and navigation [6,7], etc. For filtering, the Kalman filter (KF) [8] is optimal for linear Gaussian systems in the sense of minimum mean squared error (MMSE). However, most real-world system models are nonlinear, which violates the assumptions of the Kalman filter. To deal with this, many nonlinear filters have been developed. The extended Kalman filter (EKF) [9] is the most well known; it approximates nonlinear systems as linear ones by a first-order Taylor series expansion of the nonlinear dynamic and/or measurement models. The divided difference filter (DDF) was proposed in [10] using the Stirling interpolation formula. DDFs include the first-order divided difference filter (DD1) and the second-order divided difference filter (DD2), depending on the interpolation order. Moreover, some other nonlinear filters have also been proposed, including the unscented Kalman filter (UKF) [11,12], the quadrature Kalman filter (QKF) [13], the cubature Kalman filter (CKF) [14,15], etc. All these nonlinear filters use different approximation techniques, such as function approximation and moment approximation [16]. Another type of nonlinear filter is the particle filter (PF) [17,18], which uses the sequential Monte Carlo method to generate random sample points to approximate the posterior density. Prediction is also very important, since it helps people make decisions in advance and guard against potential risks. Following the same ideas as these filters, various predictors have been studied, e.g., the Kalman predictor (KP) [19], extended Kalman predictor (EKP) [20], unscented Kalman predictor (UKP) [21], cubature Kalman predictor (CKP) [22] and particle predictor (PP) [23]. It is well known that smoothing is, in general, more accurate than the corresponding filtering. To achieve higher estimation accuracy, many smoothers have been proposed, such as the Kalman smoother (KS) [24], extended Kalman smoother (EKS) [25], unscented Kalman smoother (UKS) [26], cubature Kalman smoother (CKS) [27] and particle smoother (PS) [28].
Despite the significant progress in nonlinear filtering, prediction and smoothing, these methods mainly deal with nonlinear regular dynamic systems, in which the current measurement depends only on the current state directly. However, in practice, many systems may have two-adjacent-states dependent (TASD) measurements. For example, nonlinear systems having autocorrelated measurement noises, or measurement and process noises cross-correlated at one time step apart [24], can be regarded as systems with TASD measurements. These types of systems are common in practice. For example, in many radar systems, the autocorrelation of the measurement noises cannot be ignored [29,30] due to the high measurement frequency. In satellite navigation systems, multi-path errors and weak GPS signals make the measurement noise behave as integrated white noise, i.e., colored noise [31]. Further, in signal processing, measurement noises are usually autocorrelated because of time-varying fading and band-limited channels [32,33]. In sensor fusion, the time alignment of different sensors induces dependence between the process and measurement noises [34]. In target-tracking systems, the discretization of continuous systems can induce cross-correlation between the process and measurement noises at one time step apart [35]. In aircraft inertial navigation systems, aircraft vibration has a common effect on the sources of the process and measurement noises, which results in cross-correlation between them [36]. For these systems, some estimators have been studied. To deal with nonlinear systems with autocorrelated measurement noise modeled as a first-order autoregressive sequence, a nonlinear Gaussian filter and a nonlinear Gaussian smoother were proposed in [37,38], respectively. These methods whiten the measurement noise by reformulating a TASD measurement equation. A PF was proposed for nonlinear systems with dependent noises [39], in which the measurement depends on two adjacent states due to the cross-correlation between the process and measurement noises. For nonlinear systems with process and measurement noises cross-correlated at one time step apart, a Gaussian approximate filter and smoother were proposed in [40].
As is well known, assessing the performance of estimators is of great significance. The posterior Cramér-Rao lower bound (PCRLB), defined as the inverse of the Fisher information matrix (FIM) and also called the Bayesian Cramér-Rao lower bound (BCRLB), provides a lower bound on the performance of estimators for nonlinear systems ([41,42], Ch. 4 of [43]). In [44,45], a recursive BCRLB was developed for the filtering of nonlinear regular dynamic systems, in which the current measurement is only dependent on the current state directly. Moreover, the BCRLBs for the prediction and smoothing of nonlinear regular dynamic systems were proposed in [45]. Compared with the conventional BCRLB, a new concept called the conditional PCRLB (CPCRLB) was proposed in [46]. This CPCRLB is conditioned on the actual past measurements and provides an effective online performance bound for filters. In [47], another two CPCRLBs, i.e., the A-CPCRLB and D-CPCRLB, were proposed. Since the auxiliary FIM is discarded, the A-CPCRLB in [47] is more compact than the CPCRLB proposed in [46]. The D-CPCRLB in [47] is not recursive and directly approximates the exact bound through numerical computations.
Some recent work has assessed the filtering performance of TASD systems. In [48], a BCRLB was provided for the filtering of nonlinear systems with higher-order colored noises, together with the BCRLB for a special case in which the measurement model is driven by first-order autocorrelated Gaussian noises. In [49], BCRLBs were proposed for the filtering of nonlinear systems with two types of dependence structures, of which the type II dependency can lead to TASD measurements. However, neither [48] nor [49] generalized these BCRLBs to the general form of TASD systems. In addition, the recursive BCRLBs for the prediction and smoothing of TASD systems were not covered in [48,49]. For the general form of TASD systems, a CPCRLB for filtering was developed in [50], which is dependent on the actual measurements. Compared with the BCRLB, this CPCRLB can provide performance evaluation for a particular state realization of a nonlinear system and better criteria for online sensor selection. In practice, TASD systems may sometimes incorporate unknown nonrandom parameters. For the performance evaluation of joint state and parameter estimation for nonlinear parametric TASD systems, a recursive joint CRLB (JCRLB) was studied in [51].
The BCRLB is as important as the CPCRLB. It only depends on the structures and parameters of the dynamic and measurement models, not on a specific measurement realization. As a result, BCRLBs can be computed offline. The BCRLB for the filtering of the general form of TASD systems has been obtained as a special case of the JCRLB in [51] when the parameter belongs to the empty set. However, the BCRLBs for the prediction and smoothing of the general form of TASD systems have not been studied yet. This paper aims to obtain the BCRLBs for the prediction and smoothing of such nonlinear systems. First, we develop the recursive BCRLBs for the prediction and smoothing of general TASD systems. A comparison between the BCRLBs for TASD systems and regular systems is also made, and specific and simplified forms of the BCRLBs for additive Gaussian noise cases are provided. Second, we study specific BCRLBs for the prediction and smoothing of two special types of TASD systems, with autocorrelated measurement noises and with process and measurement noises cross-correlated at one time step apart, respectively.
The rest of this paper is organized as follows. Section 2 formulates the BCRLB problem for nonlinear systems with TASD measurements. Section 3 develops the recursions of BCRLB for the prediction and smoothing of general TASD systems. Section 4 presents specific BCRLBs for two special types of nonlinear systems with TASD measurements. In Section 5, some illustrative examples in radar target tracking are provided to verify the effectiveness of the proposed BCRLBs. Section 6 concludes the paper.

2. Problem Formulation

Consider the following general discrete-time nonlinear systems with TASD measurements
$$x_{k+1} = f_k(x_k, w_k) \tag{1}$$
$$z_k = h_k(x_k, x_{k-1}, v_k) \tag{2}$$
where $x_k \in \mathbb{R}^n$ and $z_k \in \mathbb{R}^m$ are the state and measurement at time $k$, respectively, and the process noise $w_k$ and the measurement noise $v_k$ are mutually independent white sequences with probability density functions (PDFs) $p(w_k)$ and $p(v_k)$, respectively. We assume that the initial state $x_0$ is independent of the process and measurement noise sequences, with PDF $p(x_0)$.
Definition 1.
Define $X_k = [x_0', \ldots, x_k']'$ and $Z_k = [z_1', \ldots, z_k']'$ as the accumulated state and measurement up to time $k$, respectively. The superscript "$'$" denotes the transpose of a vector or matrix.
Definition 2.
Define $\hat{X}_{j|k}$ and $\hat{x}_{j|k}$ as estimates of $X_j$ and $x_j$ given the measurements $Z_k$, respectively. $\hat{x}_{j|k}$ are state estimates for filtering, prediction and smoothing when $j = k$, $j > k$ and $j < k$, respectively.
Definition 3.
The mean square error (MSE) of $\hat{X}_{j|k}$ is defined as
$$\mathbf{M}_{j|k} \triangleq E[\tilde{X}_{j|k}(\tilde{X}_{j|k})'] = \int_{\mathbb{R}^{km}}\int_{\mathbb{R}^{(j+1)n}} \tilde{X}_{j|k}(\tilde{X}_{j|k})'\, p(X_j, Z_k)\, \mathrm{d}X_j\, \mathrm{d}Z_k$$
The MSE of $\hat{x}_{j|k}$ is defined as
$$M_{j|k} \triangleq E[\tilde{x}_{j|k}(\tilde{x}_{j|k})'] = \int_{\mathbb{R}^{km}}\int_{\mathbb{R}^{n}} \tilde{x}_{j|k}(\tilde{x}_{j|k})'\, p(x_j, Z_k)\, \mathrm{d}x_j\, \mathrm{d}Z_k$$
where $\tilde{X}_{j|k} = X_j - \hat{X}_{j|k}$ and $\tilde{x}_{j|k} = x_j - \hat{x}_{j|k}$ are the associated estimation errors, and $p(X_j, Z_k)$ and $p(x_j, Z_k)$ are the corresponding joint PDFs. $M_{j|k}$ are MSEs for filtering, prediction and smoothing when $j = k$, $j > k$ and $j < k$, respectively.
Definition 4.
Define the FIM $\mathbf{J}_{j|k}$ about the accumulated state $X_j$ as
$$\mathbf{J}_{j|k} \triangleq -E[\Delta_{X_j}^{X_j} \ln p(X_j, Z_k)] = -\int_{\mathbb{R}^{km}}\int_{\mathbb{R}^{(j+1)n}} (\Delta_{X_j}^{X_j} \ln p(X_j, Z_k))\, p(X_j, Z_k)\, \mathrm{d}X_j\, \mathrm{d}Z_k$$
where $\Delta$ denotes the second-order derivative operator, i.e., $\Delta_a^b = \nabla_a \nabla_b'$, and $\nabla$ denotes the gradient operator.
Lemma 1.
The MSE of $\hat{X}_{j|k}$ satisfying certain regularity conditions as in [41] is bounded from below by the inverse of $\mathbf{J}_{j|k}$ as [41,45]
$$\mathbf{M}_{j|k} \triangleq E[\tilde{X}_{j|k}(\tilde{X}_{j|k})'] \ge (\mathbf{J}_{j|k})^{-1}$$
where the inequality means that the difference $\mathbf{M}_{j|k} - (\mathbf{J}_{j|k})^{-1}$ is a positive semidefinite matrix.
Definition 5.
Define $J_{j|k}^{-1}$ as the $n \times n$ right-lower block of $(\mathbf{J}_{j|k})^{-1}$ and $J_{j|k}$ as the FIM about $x_j$, where $n$ is the dimension of the state $x_k$. $J_{j|k}$ are FIMs for filtering, prediction and smoothing when $j = k$, $j > k$ and $j < k$, respectively.
Lemma 2.
The MSE of $\hat{x}_{j|k}$ satisfying certain regularity conditions as in [41] is bounded from below by the inverse of $J_{j|k}$ as [41,44]
$$M_{j|k} \triangleq E[\tilde{x}_{j|k}(\tilde{x}_{j|k})'] \ge J_{j|k}^{-1}$$
Compared with regular systems, the measurement $z_k$ of the nonlinear systems (1) and (2) depends directly not only on the current state $x_k$ but also on the most recent previous state $x_{k-1}$. The main goal of this paper is to obtain the recursive FIMs $J_{j|k}$ for the prediction and smoothing of nonlinear TASD systems without manipulating the larger matrix $\mathbf{J}_{j|k}$.

3. Recursive BCRLBs for Prediction and Smoothing

3.1. BCRLBs for General TASD Systems

For simplicity, the following notation is introduced in advance
$$D_{k+1}^{m,n} = -E[\Delta_{x_n}^{x_m} \ln p(x_{k+1}|x_k)] = (D_{k+1}^{n,m})', \qquad E_{k+1}^{m,n} = -E[\Delta_{x_n}^{x_m} \ln p(z_{k+1}|x_{k+1}, x_k)] = (E_{k+1}^{n,m})' \tag{3}$$
where $m, n \in \{k, k+1\}$, and $D_0^{0,0} = -E[\Delta_{x_0}^{x_0} \ln p(x_0)]$.
To initialize the recursions for the FIMs of prediction and smoothing, the recursion of the FIM $J_{k|k}$ for filtering is required. This can be obtained from Corollary 3 of [51], as shown in the following lemma.
Lemma 3.
The FIM $J_{k|k}$ for filtering obeys the following recursion [51]
$$J_{k+1|k+1} = D_{k+1}^{k+1,k+1} + E_{k+1}^{k+1,k+1} - (D_{k+1}^{k,k+1} + E_{k+1}^{k,k+1})(D_{k+1}^{k,k} + E_{k+1}^{k,k} + J_{k|k})^{-1}(D_{k+1}^{k+1,k} + E_{k+1}^{k+1,k}) \tag{4}$$
with $J_{0|0} = D_0^{0,0} = -E[\Delta_{x_0}^{x_0} \ln p(x_0)]$.
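To make the recursion concrete, the following is a minimal numerical sketch of (4) in Python/NumPy, assuming the blocks of (3) have already been evaluated for the current step. The function name and the dictionary interface for the blocks are illustrative assumptions, not part of the paper.
```python
# Sketch of one step of the filtering FIM recursion in Eq. (4).
import numpy as np

def fim_filtering_step(J, D, E):
    """Compute J_{k+1|k+1} from J_{k|k}.

    J    : (n, n) filtering FIM J_{k|k}.
    D, E : dicts keyed by ('k','k'), ('k+1','k'), ('k','k+1'), ('k+1','k+1'),
           holding the n-by-n blocks D_{k+1}^{m,n} and E_{k+1}^{m,n} of Eq. (3).
    """
    A = D[('k', 'k')] + E[('k', 'k')] + J            # D^{k,k} + E^{k,k} + J_{k|k}
    B = D[('k+1', 'k')] + E[('k+1', 'k')]            # right factor in Eq. (4)
    C = D[('k', 'k+1')] + E[('k', 'k+1')]            # left factor in Eq. (4)
    return (D[('k+1', 'k+1')] + E[('k+1', 'k+1')]
            - C @ np.linalg.solve(A, B))
```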

3.1.1. BCRLB for Prediction

Theorem 1.
The FIMs $J_{j+1|k}$ and $J_{j|k}$ are related to each other through
$$J_{j+1|k} = D_{j+1}^{j+1,j+1} - D_{j+1}^{j,j+1}(D_{j+1}^{j,j} + J_{j|k})^{-1} D_{j+1}^{j+1,j} \tag{5}$$
for $j = k, k+1, k+2, \ldots$
Proof. 
See Appendix A. □
Substituting $j = k, k+1, \ldots, k+m-1$ into (5), the recursion of the FIM for $m$-step prediction is obtained as
$$\begin{aligned} J_{k+1|k} &= D_{k+1}^{k+1,k+1} - D_{k+1}^{k,k+1}(D_{k+1}^{k,k} + J_{k|k})^{-1} D_{k+1}^{k+1,k} \\ J_{k+2|k} &= D_{k+2}^{k+2,k+2} - D_{k+2}^{k+1,k+2}(D_{k+2}^{k+1,k+1} + J_{k+1|k})^{-1} D_{k+2}^{k+2,k+1} \\ &\;\;\vdots \\ J_{k+m|k} &= D_{k+m}^{k+m,k+m} - D_{k+m}^{k+m-1,k+m}(D_{k+m}^{k+m-1,k+m-1} + J_{k+m-1|k})^{-1} D_{k+m}^{k+m,k+m-1} \end{aligned} \tag{6}$$
where $m \ge 1$.
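Since only the $D$ blocks of the dynamic model enter (6), the $m$-step prediction bound can be iterated without touching the measurement model. A sketch under the same illustrative interface as above:
```python
# Sketch of the m-step prediction recursion of Eq. (6) / Theorem 1.
# D_fn(j) is assumed to return the blocks (D^{j,j}, D^{j+1,j}, D^{j+1,j+1})
# of Eq. (3) for the transition from step j to j+1 (illustrative interface).
import numpy as np

def fim_m_step_prediction(J_filt, D_fn, k, m):
    """Iterate Theorem 1 from J_{k|k} to obtain J_{k+m|k}."""
    J = J_filt
    for j in range(k, k + m):
        D_jj, D_j1j, D_j1j1 = D_fn(j)
        # Eq. (5), using D^{j,j+1} = (D^{j+1,j})':
        J = D_j1j1 - D_j1j.T @ np.linalg.solve(D_jj + J, D_j1j)
    return J
```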

3.1.2. BCRLB for Smoothing

Let $\hat{X}_{k|k} = [\hat{x}_{0|k}', \ldots, \hat{x}_{j|k}', \ldots, \hat{x}_{k|k}']'$, $1 \le j \le k-1$, be an estimate of the accumulated state consisting of the smoothing estimates $\hat{x}_{0|k}, \hat{x}_{1|k}, \ldots, \hat{x}_{k-1|k}$ and the filtering estimate $\hat{x}_{k|k}$. The MSE $\mathbf{M}_{k|k}$ of $\hat{X}_{k|k}$ is bounded from below by the inverse of $\mathbf{J}_{k|k}$. Thus $(\mathbf{J}_{k|k})^{-1}$ contains the smoothing BCRLBs $J_{j|k}^{-1}$, $j = 0, 1, \ldots, k-1$, and the filtering BCRLB $J_{k|k}^{-1}$ on its main diagonal. Then we have
$$(\mathbf{J}_{k|k})^{-1} = \begin{bmatrix} J_{0|k}^{-1} & & & & & \\ & \ddots & & & & \\ & & J_{j|k}^{-1} & & & \\ & & & J_{j+1|k}^{-1} & & \\ & & & & \ddots & \\ & & & & & J_{k|k}^{-1} \end{bmatrix} = \begin{bmatrix} [(\mathbf{J}_{k|k})^{-1}]_{11} & \\ & [(\mathbf{J}_{k|k})^{-1}]_{22} \end{bmatrix} \tag{7}$$
where zero blocks have been left empty, $[(\mathbf{J}_{k|k})^{-1}]_{11} = \mathrm{diag}(J_{0|k}^{-1}, \ldots, J_{j|k}^{-1})$, $[(\mathbf{J}_{k|k})^{-1}]_{22} = \mathrm{diag}(J_{j+1|k}^{-1}, \ldots, J_{k|k}^{-1})$, and "diag" denotes a diagonal matrix [52].
Theorem 2.
The FIM $J_{j|k}$ for smoothing can be recursively obtained as
$$J_{j|k} = J_{j|j} + D_{j+1}^{j,j} + E_{j+1}^{j,j} - (D_{j+1}^{j+1,j} + E_{j+1}^{j+1,j})(J_{j+1|k} + D_{j+1}^{j+1,j+1} + E_{j+1}^{j+1,j+1} - J_{j+1|j+1})^{-1}(D_{j+1}^{j,j+1} + E_{j+1}^{j,j+1}) \tag{8}$$
for $j = k-1, k-2, \ldots, 0$. This backward recursion is initialized by the FIM $J_{k|k}$ for filtering.
Proof. 
See Appendix B. □
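A sketch of the backward pass of Theorem 2, assuming the forward filtering FIMs $J_{j|j}$ and the per-step $D$, $E$ blocks have been stored; the interfaces are again illustrative assumptions, not from the paper.
```python
# Sketch of the backward smoothing recursion of Eq. (8).
import numpy as np

def fim_smoothing(J_filt_list, DE_fn, k):
    """Return [J_{0|k}, ..., J_{k|k}] by iterating Eq. (8) backward from j = k-1.

    J_filt_list : list of filtering FIMs J_{0|0}, ..., J_{k|k} from Lemma 3.
    DE_fn(j)    : returns the dicts (D, E) of Eq. (3) for step j -> j+1,
                  keyed as in the filtering sketch above.
    """
    J_smooth = [None] * (k + 1)
    J_smooth[k] = J_filt_list[k]                 # initialization: J_{k|k}
    for j in range(k - 1, -1, -1):
        D, E = DE_fn(j)
        M = (J_smooth[j + 1] + D[('k+1', 'k+1')] + E[('k+1', 'k+1')]
             - J_filt_list[j + 1])               # middle term of Eq. (8)
        B = D[('k+1', 'k')] + E[('k+1', 'k')]    # left factor
        C = D[('k', 'k+1')] + E[('k', 'k+1')]    # right factor
        J_smooth[j] = (J_filt_list[j] + D[('k', 'k')] + E[('k', 'k')]
                       - B @ np.linalg.solve(M, C))
    return J_smooth
```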

3.2. Comparison with the BCRLBs for Nonlinear Regular Systems

For nonlinear regular systems, the measurement $z_k$ only depends on the state $x_k$ directly, i.e., $z_k = h_k(x_k, v_k)$. Clearly, nonlinear regular systems are special cases of nonlinear TASD systems (2), since
$$z_k = h_k(x_k, v_k) = h_k(x_k, v_k) + 0 \cdot x_{k-1} = h_k(x_k, x_{k-1}, v_k) \tag{9}$$
As a result, the likelihood function $p(z_{j+1}|x_{j+1}, x_j)$ for TASD systems in (3) reduces to $p(z_{j+1}|x_{j+1})$ for regular systems. Correspondingly, $E_{j+1}^{j,j}$, $E_{j+1}^{j+1,j}$ and $E_{j+1}^{j+1,j+1}$ in (3) reduce to
$$\begin{aligned} E_{j+1}^{j,j} &= -E[\Delta_{x_j}^{x_j} \ln p(z_{j+1}|x_{j+1}, x_j)] = 0 \\ E_{j+1}^{j+1,j} &= -E[\Delta_{x_j}^{x_{j+1}} \ln p(z_{j+1}|x_{j+1}, x_j)] = 0 \\ E_{j+1}^{j+1,j+1} &= -E[\Delta_{x_{j+1}}^{x_{j+1}} \ln p(z_{j+1}|x_{j+1}, x_j)] = -E[\Delta_{x_{j+1}}^{x_{j+1}} \ln p(z_{j+1}|x_{j+1})] \end{aligned} \tag{10}$$
Substituting $E_{j+1}^{j,j}$, $E_{j+1}^{j+1,j}$ and $E_{j+1}^{j+1,j+1}$ in (10) into (8), the recursion of the FIM for the smoothing of TASD systems reduces to
$$J_{j|k} = J_{j|j} + D_{j+1}^{j,j} - D_{j+1}^{j+1,j}(J_{j+1|k} + D_{j+1}^{j+1,j+1} + E_{j+1}^{j+1,j+1} - J_{j+1|j+1})^{-1} D_{j+1}^{j,j+1} \tag{11}$$
This is exactly the recursion of the FIM for smoothing of nonlinear regular systems in [45]. That is, the recursion of the FIM for the smoothing of nonlinear regular systems is a special case of the recursion of the FIM for the smoothing of nonlinear TASD systems.
For the FIM of prediction, it can be seen that the FIMs for prediction in (5) of TASD systems are governed by the same recursive equations as the FIMs for regular systems in [45], except that $J_{j|k}$, $j = k, k+1, k+2, \ldots$, is different. This is because predictions for both TASD systems and regular systems only depend on the same dynamic Equation (1).
Next, we study specific and simplified BCRLBs for TASD systems with additive Gaussian noises.

3.3. BCRLBs for TASD Systems with Additive Gaussian Noise

Assume that the nonlinear systems (1) and (2) are driven by additive Gaussian noises as
$$x_{k+1} = f_k(x_k) + w_k \tag{12}$$
$$z_k = h_k(x_k, x_{k-1}) + v_k \tag{13}$$
where $w_k \sim \mathcal{N}(0, Q_k)$, $v_k \sim \mathcal{N}(0, R_k)$, and the covariance matrices $Q_k$ and $R_k$ are invertible. Then the $D$'s and $E$'s of (3) used in the recursions of FIMs for prediction and smoothing simplify to
$$\begin{aligned} D_{k+1}^{k,k} &= E\{[\nabla_{x_k} f_k'(x_k)] Q_k^{-1} [\nabla_{x_k} f_k'(x_k)]'\} \\ D_{k+1}^{k+1,k} &= -E[\nabla_{x_k} f_k'(x_k)] Q_k^{-1} \\ D_{k+1}^{k+1,k+1} &= Q_k^{-1} \\ E_{k+1}^{k,k} &= E\{[\nabla_{x_k} h_{k+1}'(x_{k+1}, x_k)] R_{k+1}^{-1} [\nabla_{x_k} h_{k+1}'(x_{k+1}, x_k)]'\} \\ E_{k+1}^{k+1,k} &= E\{[\nabla_{x_k} h_{k+1}'(x_{k+1}, x_k)] R_{k+1}^{-1} [\nabla_{x_{k+1}} h_{k+1}'(x_{k+1}, x_k)]'\} \\ E_{k+1}^{k+1,k+1} &= E\{[\nabla_{x_{k+1}} h_{k+1}'(x_{k+1}, x_k)] R_{k+1}^{-1} [\nabla_{x_{k+1}} h_{k+1}'(x_{k+1}, x_k)]'\} \end{aligned} \tag{14}$$
Assume that the systems (12) and (13) further reduce to a linear Gaussian system as
$$x_{k+1} = F_k x_k + w_k \tag{15}$$
$$z_k = H_k x_k + C_{k-1} x_{k-1} + v_k \tag{16}$$
where $w_k \sim \mathcal{N}(0, Q_k)$, $v_k \sim \mathcal{N}(0, R_k)$, and the covariance matrices $Q_k$ and $R_k$ are invertible. Then the $D$'s and $E$'s of (3) used in the recursions of FIMs for prediction and smoothing further simplify to
$$\begin{aligned} D_{k+1}^{k,k} &= F_k' Q_k^{-1} F_k, & D_{k+1}^{k+1,k} &= -F_k' Q_k^{-1}, & D_{k+1}^{k+1,k+1} &= Q_k^{-1} \\ E_{k+1}^{k,k} &= C_k' R_{k+1}^{-1} C_k, & E_{k+1}^{k+1,k} &= C_k' R_{k+1}^{-1} H_{k+1}, & E_{k+1}^{k+1,k+1} &= H_{k+1}' R_{k+1}^{-1} H_{k+1} \end{aligned} \tag{17}$$
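For the linear Gaussian TASD system (15) and (16), the blocks in (17) can be assembled directly and fed to the recursions sketched above; a minimal version follows (function and variable names are illustrative assumptions).
```python
# Sketch: assemble the blocks of Eq. (17) for the linear TASD system (15)-(16).
import numpy as np

def linear_tasd_blocks(F, Q, H1, C, R1):
    """F = F_k, Q = Q_k, H1 = H_{k+1}, C = C_k, R1 = R_{k+1}."""
    Qi, Ri = np.linalg.inv(Q), np.linalg.inv(R1)
    D = {('k', 'k'):     F.T @ Qi @ F,
         ('k+1', 'k'):   -F.T @ Qi,
         ('k', 'k+1'):   -Qi @ F,
         ('k+1', 'k+1'): Qi}
    E = {('k', 'k'):     C.T @ Ri @ C,
         ('k+1', 'k'):   C.T @ Ri @ H1,
         ('k', 'k+1'):   H1.T @ Ri @ C,
         ('k+1', 'k+1'): H1.T @ Ri @ H1}
    return D, E
```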
Remark 1. 
If we rewrite the linear TASD systems (15) and (16) in the following augmented form
$$\begin{bmatrix} x_{k+1} \\ x_k \end{bmatrix} = \begin{bmatrix} F_k & \\ & F_{k-1} \end{bmatrix} \begin{bmatrix} x_k \\ x_{k-1} \end{bmatrix} + \bar{w}_k \tag{18}$$
$$z_k = \begin{bmatrix} H_k & C_{k-1} \end{bmatrix} \begin{bmatrix} x_k \\ x_{k-1} \end{bmatrix} + v_k \tag{19}$$
where zero blocks have been left empty and $\bar{w}_k = [w_k', w_{k-1}']'$, then the process noise $\bar{w}_k$ in (18) is correlated with its adjacent noises $\bar{w}_{k-1}$ and $\bar{w}_{k+1}$, but uncorrelated with $\{\bar{w}_0, \ldots, \bar{w}_{k-2}, \bar{w}_{k+2}, \ldots\}$. For this special type of linear system, how to obtain its BCRLBs is still unknown.

4. Recursive BCRLBs for Two Special Types of Nonlinear TASD Systems

Two special types of nonlinear systems, in which the measurement noises are autocorrelated or cross-correlated with the process noises at one time step apart, can be viewed as the nonlinear TASD systems described in (1) and (2). These two types of systems are very common in many engineering applications. For example, in target-tracking systems, a high radar measurement frequency results in autocorrelated measurement noises [29], and the discretization of continuous systems can induce cross-correlation between the process and measurement noises at one time step apart [35]. In navigation systems, multi-path errors and weak GPS signals make the measurement noises autocorrelated [31], and aircraft vibration may result in cross-correlation between the process and measurement noises [36]. Next, specific recursive BCRLBs for the prediction and smoothing of these two types of systems are obtained by applying the theorems in Section 3.

4.1. BCRLBs for Systems with Autocorrelated Measurement Noises

Consider the following nonlinear system
$$x_{k+1} = f_k(x_k) + w_k \tag{20}$$
$$y_k = l_k(x_k) + e_k \tag{21}$$
where $l_k$ is a nonlinear measurement function and $e_k$ is an autocorrelated measurement noise satisfying a first-order autoregressive (AR) model [38]
$$e_k = \Psi_{k-1} e_{k-1} + \xi_{k-1} \tag{22}$$
where $\Psi_{k-1}$ is the known correlation parameter, and the process noise $w_k$ and the driven noise $\xi_{k-1}$ are mutually independent white noise sequences, both also independent of the initial state $x_0$.
To obtain the BCRLBs for the prediction and smoothing of nonlinear systems with autocorrelated measurement noises, a TASD measurement equation is first constructed by differencing two adjacent measurements as
$$z_k = y_k - \Psi_{k-1} y_{k-1} \tag{23}$$
Then, we can get a pseudo measurement equation depending on two adjacent states as
$$z_k = l_k(x_k) - \Psi_{k-1} l_{k-1}(x_{k-1}) + e_k - \Psi_{k-1} e_{k-1} = h_k(x_k, x_{k-1}) + v_k \tag{24}$$
where
$$h_k(x_k, x_{k-1}) = l_k(x_k) - \Psi_{k-1} l_{k-1}(x_{k-1}), \qquad v_k = \xi_{k-1}$$
Clearly, the pseudo measurement noise $v_k$ in (24) is white and independent of the process noise $w_k$ and the initial state $x_0$.
From the above, we know that the systems (20)–(22) are equivalent to the TASD systems (20) and (24). Applying Theorems 1 and 2 to this TASD system, we can obtain the BCRLBs for the prediction and smoothing of nonlinear systems with autocorrelated measurement noises.
Next, we discuss some specific and simplified recursions of FIMs for the prediction and smoothing of nonlinear and linear systems with autocorrelated measurement noises when the noises are Gaussian.
Theorem 3.
For the nonlinear systems (20)–(22), if the process noise $w_k \sim \mathcal{N}(0, Q_k)$ and the driven noise $\xi_k \sim \mathcal{N}(0, R_k)$, then the $D$'s and $E$'s of (3) used in the recursions of FIMs for prediction and smoothing simplify to
$$\begin{aligned} D_{k+1}^{k,k} &= E\{[\nabla_{x_k} f_k'(x_k)] Q_k^{-1} [\nabla_{x_k} f_k'(x_k)]'\} \\ D_{k+1}^{k+1,k} &= -E[\nabla_{x_k} f_k'(x_k)] Q_k^{-1} \\ D_{k+1}^{k+1,k+1} &= Q_k^{-1} \\ E_{k+1}^{k,k} &= E\{[\nabla_{x_k} l_k'(x_k)] \Psi_k' R_k^{-1} \Psi_k [\nabla_{x_k} l_k'(x_k)]'\} \\ E_{k+1}^{k+1,k} &= -E\{[\nabla_{x_k} l_k'(x_k)] \Psi_k' R_k^{-1} [\nabla_{x_{k+1}} l_{k+1}'(x_{k+1})]'\} \\ E_{k+1}^{k+1,k+1} &= E\{[\nabla_{x_{k+1}} l_{k+1}'(x_{k+1})] R_k^{-1} [\nabla_{x_{k+1}} l_{k+1}'(x_{k+1})]'\} \end{aligned} \tag{25}$$
Proof. 
See Appendix C. □
Corollary 1.
Assume that the systems (20)–(22) reduce to a linear Gaussian system as
$$x_{k+1} = F_k x_k + w_k \tag{26}$$
$$y_k = L_k x_k + e_k \tag{27}$$
$$e_k = \Psi_{k-1} e_{k-1} + \xi_{k-1} \tag{28}$$
Then the $D$'s and $E$'s of (25) in Theorem 3 simplify to
$$\begin{aligned} D_{k+1}^{k,k} &= F_k' Q_k^{-1} F_k, & D_{k+1}^{k+1,k} &= -F_k' Q_k^{-1}, & D_{k+1}^{k+1,k+1} &= Q_k^{-1} \\ E_{k+1}^{k,k} &= L_k' \Psi_k' R_k^{-1} \Psi_k L_k, & E_{k+1}^{k+1,k} &= -L_k' \Psi_k' R_k^{-1} L_{k+1}, & E_{k+1}^{k+1,k+1} &= L_{k+1}' R_k^{-1} L_{k+1} \end{aligned} \tag{29}$$
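A sketch assembling the blocks of (29); combined with Lemma 3 and Theorems 1 and 2, this yields the filtering, prediction and smoothing BCRLBs for the linear Gaussian autocorrelated-noise case. Variable names are illustrative assumptions.
```python
# Sketch: blocks of Eq. (29) for the linear Gaussian system (26)-(28)
# with first-order autocorrelated measurement noise.
import numpy as np

def autocorr_blocks(F, Q, L, L1, Psi, R):
    """F = F_k, Q = Q_k, L = L_k, L1 = L_{k+1}, Psi = Psi_k, R = R_k (cov of xi_k)."""
    Qi, Ri = np.linalg.inv(Q), np.linalg.inv(R)
    D = {('k', 'k'):     F.T @ Qi @ F,
         ('k+1', 'k'):   -F.T @ Qi,
         ('k', 'k+1'):   -Qi @ F,
         ('k+1', 'k+1'): Qi}
    PL = Psi @ L                                  # Psi_k L_k
    E = {('k', 'k'):     PL.T @ Ri @ PL,
         ('k+1', 'k'):   -PL.T @ Ri @ L1,
         ('k', 'k+1'):   -L1.T @ Ri @ PL,
         ('k+1', 'k+1'): L1.T @ Ri @ L1}
    return D, E
```
These blocks plug directly into the filtering, prediction and smoothing sketches given in Section 3.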
Theorem 4.
For the linear Gaussian systems (26)–(28) with autocorrelated measurement noises, the inverse of the FIM $J_{k+m|k}$ for $m$-step prediction in Corollary 1 is equivalent to the MSE matrix $P_{k+m|k}$ of the optimal prediction, $m \ge 1$, i.e.,
$$P_{k+m|k} = J_{k+m|k}^{-1} \tag{30}$$
Proof. 
See Appendix D. □
Since $P_{k+m|k} = J_{k+m|k}^{-1}$, $m \ge 1$, the optimal predictors can attain the BCRLBs for prediction proposed in Corollary 1, i.e., the optimal predictors are efficient estimators for the linear Gaussian systems (26)–(28) with autocorrelated measurement noises.

4.2. BCRLBs for Systems with Noises Cross-Correlated at One Time Step Apart

Consider the following nonlinear system
$$x_{k+1} = f_k(x_k) + w_k \tag{31}$$
$$z_k = l_k(x_k) + e_k \tag{32}$$
where $w_k \sim \mathcal{N}(0, Q_k)$, $e_k \sim \mathcal{N}(0, E_k)$, and they are cross-correlated at one time step apart [39], satisfying $E[w_k e_j'] = U_k \delta_{k,j-1}$, where $\delta_{k,j-1}$ is the Kronecker delta function. Both $w_k$ and $e_k$ are independent of the initial state $x_0$.
To obtain the BCRLBs for the prediction and smoothing of nonlinear systems with noises cross-correlated at one time step apart, as in [50], a TASD measurement equation is constructed as
$$z_k = l_k(x_k) + e_k + G_k(x_k - f_{k-1}(x_{k-1}) - w_{k-1}) = h_k(x_k, x_{k-1}) + v_k \tag{33}$$
where
$$h_k(x_k, x_{k-1}) = l_k(x_k) + G_k(x_k - f_{k-1}(x_{k-1})), \qquad v_k = e_k - G_k w_{k-1}, \qquad G_k = U_{k-1}' Q_{k-1}^{-1}$$
Clearly, the pseudo measurement noise $v_k$ is uncorrelated with the process noise $w_{k-1}$, and $E[v_k] = 0$, $\mathrm{cov}(v_k) = R_k = E_k - U_{k-1}' Q_{k-1}^{-1} U_{k-1}$.
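A short sketch of this decorrelation step, computing $G_k$ and the pseudo measurement noise covariance $R_k$ from $U_{k-1}$, $Q_{k-1}$ and $E_k$ (illustrative names; $Q_{k-1}$ is assumed symmetric positive definite):
```python
# Sketch of the decorrelation used to build the TASD measurement of Eq. (33).
import numpy as np

def decorrelate(U_prev, Q_prev, E_k):
    """Return G_k = U_{k-1}' Q_{k-1}^{-1} and R_k = E_k - U_{k-1}' Q_{k-1}^{-1} U_{k-1}."""
    G = np.linalg.solve(Q_prev.T, U_prev).T   # U' Q^{-1} without an explicit inverse
    R = E_k - G @ U_prev
    return G, R
```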
Proposition 1.
For the reconstructed TASD systems (31) and (33), $h_k(x_k, x_{k-1})$ is independent of the pseudo measurement noise $v_k$.
Proof. 
First, from the noise independence assumptions, we know that $x_{k-1}$ is independent of $e_k$ and $w_{k-1}$. Therefore, it is obvious that $x_{k-1}$ is independent of $v_k$. Second, because the state $x_k$ in $h_k(x_k, x_{k-1})$ is only determined by $\{x_0, w_0, \ldots, w_{k-1}\}$, which is independent of $v_k$, the state $x_k$ is independent of $v_k$. Therefore, $h_k(x_k, x_{k-1})$ is independent of the pseudo measurement noise $v_k$. This completes the proof. □
Proposition 1 shows that the reconstructed TASD systems (31) and (33) satisfy the independence assumption of the TASD systems in Section 2.
From the above, we know that the systems (31) and (32) are equivalent to the TASD systems (31) and (33). Applying Theorems 1 and 2 to this TASD system, the BCRLBs for the prediction and smoothing of nonlinear systems in which the measurement noise is cross-correlated with the process noise at one time step apart can be obtained.
Next, we discuss some specific and simplified recursions of FIMs for the prediction and smoothing of nonlinear and linear systems with Gaussian process and measurement noises cross-correlated at one time step apart.
Theorem 5.
For the nonlinear systems (31) and (32), if the process noise $w_k \sim \mathcal{N}(0, Q_k)$ and the measurement noise $e_k \sim \mathcal{N}(0, E_k)$, then the $D$'s and $E$'s of (3) used in the recursions of FIMs for prediction and smoothing simplify to
$$\begin{aligned} D_{k+1}^{k,k} &= E\{[\nabla_{x_k} f_k'(x_k)] Q_k^{-1} [\nabla_{x_k} f_k'(x_k)]'\} \\ D_{k+1}^{k+1,k} &= -E[\nabla_{x_k} f_k'(x_k)] Q_k^{-1} \\ D_{k+1}^{k+1,k+1} &= Q_k^{-1} \\ E_{k+1}^{k,k} &= E\{[\nabla_{x_k} f_k'(x_k)] G_{k+1}' R_{k+1}^{-1} G_{k+1} [\nabla_{x_k} f_k'(x_k)]'\} \\ E_{k+1}^{k+1,k} &= -E\{[\nabla_{x_k} f_k'(x_k)] G_{k+1}' R_{k+1}^{-1} [\nabla_{x_{k+1}} l_{k+1}'(x_{k+1}) + G_{k+1}']'\} \\ E_{k+1}^{k+1,k+1} &= E\{[\nabla_{x_{k+1}} l_{k+1}'(x_{k+1}) + G_{k+1}'] R_{k+1}^{-1} [\nabla_{x_{k+1}} l_{k+1}'(x_{k+1}) + G_{k+1}']'\} \end{aligned} \tag{34}$$
Corollary 2.
Assume that the systems (31) and (32) reduce to a linear Gaussian system as
$$x_{k+1} = F_k x_k + w_k \tag{35}$$
$$z_k = L_k x_k + e_k \tag{36}$$
Then the $D$'s and $E$'s of (34) in Theorem 5 simplify to
$$\begin{aligned} D_{k+1}^{k,k} &= F_k' Q_k^{-1} F_k, & D_{k+1}^{k+1,k} &= -F_k' Q_k^{-1}, & D_{k+1}^{k+1,k+1} &= Q_k^{-1} \\ E_{k+1}^{k,k} &= F_k' G_{k+1}' R_{k+1}^{-1} G_{k+1} F_k, & E_{k+1}^{k+1,k} &= -F_k' G_{k+1}' R_{k+1}^{-1} (L_{k+1} + G_{k+1}), & E_{k+1}^{k+1,k+1} &= (L_{k+1} + G_{k+1})' R_{k+1}^{-1} (L_{k+1} + G_{k+1}) \end{aligned} \tag{37}$$
Theorem 6.
For the linear Gaussian systems (35) and (36) with cross-correlated process and measurement noises at one time step apart, the inverse of the FIM $J_{k+m|k}$ for $m$-step prediction in Corollary 2 is equivalent to the MSE matrix $P_{k+m|k}$ of the optimal prediction, $m \ge 1$, i.e.,
$$P_{k+m|k} = J_{k+m|k}^{-1} \tag{38}$$
Proof. 
See Appendix E. □
Since $P_{k+m|k} = J_{k+m|k}^{-1}$, $m \ge 1$, the optimal predictors can attain the BCRLBs for prediction proposed in Corollary 2, i.e., the optimal predictors are efficient estimators for the linear Gaussian systems (35) and (36) with cross-correlated process and measurement noises at one time step apart.

5. Illustrative Examples

In this section, illustrative examples in radar target tracking are presented to demonstrate the effectiveness of the proposed recursive BCRLBs for the prediction and smoothing of nonlinear TASD systems.
Consider a target with nearly constant turn (NCT) motion in a 2D plane [14,40,48,53]. The target motion model is
$$x_{k+1} = \begin{bmatrix} 1 & \frac{\sin\omega T}{\omega} & 0 & -\frac{1-\cos\omega T}{\omega} \\ 0 & \cos\omega T & 0 & -\sin\omega T \\ 0 & \frac{1-\cos\omega T}{\omega} & 1 & \frac{\sin\omega T}{\omega} \\ 0 & \sin\omega T & 0 & \cos\omega T \end{bmatrix} x_k + w_k \tag{39}$$
where $x_k = [x_k, \dot{x}_k, y_k, \dot{y}_k]'$ is the state vector, $T = 1\,\mathrm{s}$ is the sampling interval, $\omega = 2\,\mathrm{s}^{-1}$ is the turning rate, and the process noise $w_k \sim \mathcal{N}(0, Q_k)$ with [53]
$$Q_k = S_w \begin{bmatrix} \frac{2(\omega T - \sin\omega T)}{\omega^3} & \frac{1-\cos\omega T}{\omega^2} & 0 & \frac{\omega T - \sin\omega T}{\omega^2} \\ \frac{1-\cos\omega T}{\omega^2} & T & -\frac{\omega T - \sin\omega T}{\omega^2} & 0 \\ 0 & -\frac{\omega T - \sin\omega T}{\omega^2} & \frac{2(\omega T - \sin\omega T)}{\omega^3} & \frac{1-\cos\omega T}{\omega^2} \\ \frac{\omega T - \sin\omega T}{\omega^2} & 0 & \frac{1-\cos\omega T}{\omega^2} & T \end{bmatrix} \tag{40}$$
where $S_w = 0.1\,\mathrm{m^2\,s^{-3}}$ is the power spectral density.
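For reproducibility, a small sketch that builds the transition matrix of (39) and the covariance of (40) from $\omega$, $T$ and $S_w$ (the function name is an illustrative assumption):
```python
# Sketch: NCT transition matrix of Eq. (39) and process noise covariance of Eq. (40).
import numpy as np

def nct_matrices(om, T, Sw):
    s, c = np.sin(om * T), np.cos(om * T)
    F = np.array([[1, s / om,       0, -(1 - c) / om],
                  [0, c,            0, -s           ],
                  [0, (1 - c) / om, 1, s / om       ],
                  [0, s,            0, c            ]])
    a = 2 * (om * T - s) / om**3    # 2(wT - sin wT)/w^3
    b = (1 - c) / om**2             # (1 - cos wT)/w^2
    d = (om * T - s) / om**2        # (wT - sin wT)/w^2
    Q = Sw * np.array([[a,  b,  0, d],
                       [b,  T, -d, 0],
                       [0, -d,  a, b],
                       [d,  0,  b, T]])
    return F, Q
```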
Assume that a 2D radar is located at the origin of the plane. The measurement model is
$$z_{k+1} = \begin{bmatrix} r_{k+1}^m \\ \theta_{k+1}^m \end{bmatrix} = \begin{bmatrix} \sqrt{x_{k+1}^2 + y_{k+1}^2} \\ \tan^{-1}(y_{k+1}, x_{k+1}) \end{bmatrix} + e_{k+1} \tag{41}$$
where the radar measurement vector $z_{k+1}$ is composed of the range measurement $r_{k+1}^m$ and the bearing measurement $\theta_{k+1}^m$, $\tan^{-1}(\cdot, \cdot)$ denotes the four-quadrant inverse tangent, and $e_{k+1}$ is the measurement noise.
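A sketch of the measurement function (41) and its Jacobian with respect to $[x, \dot{x}, y, \dot{y}]'$; such Jacobians are what enter the Monte Carlo evaluation of the expectations in (25) and (34). Function names are illustrative assumptions.
```python
# Sketch: range-bearing measurement of Eq. (41) and its Jacobian.
import numpy as np

def radar_meas(x):
    px, py = x[0], x[2]
    return np.array([np.hypot(px, py), np.arctan2(py, px)])

def radar_jacobian(x):
    px, py = x[0], x[2]
    r2 = px**2 + py**2
    r = np.sqrt(r2)
    return np.array([[px / r,   0, py / r,  0],
                     [-py / r2, 0, px / r2, 0]])
```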

5.1. Example 1: Autocorrelated Measurement Noises

In this example, we assume that the measurement noise sequence $e_{k+1}$ in (41) is first-order autocorrelated and modeled as
$$e_{k+1} = 0.4 I e_k + \xi_k \tag{42}$$
where $I$ is a $2 \times 2$ identity matrix, and the driven noise $\xi_k \sim \mathcal{N}(0, R_k)$ with $R_k = \mathrm{diag}(\sigma_r^2(\xi), \sigma_\theta^2(\xi))$, $\sigma_r(\xi) = 30\,\mathrm{m}$ and $\sigma_\theta(\xi) = 30\,\mathrm{mrad}$. Further, $w_k$ and $\xi_k$ are mutually independent. The initial state $X_0 \sim \mathcal{N}(\bar{X}_0, P_0)$ with
$$\bar{X}_0 = [1000\,\mathrm{m}, 120\,\mathrm{m\,s^{-1}}, 1000\,\mathrm{m}, 0\,\mathrm{m\,s^{-1}}]', \qquad P_0 = \mathrm{diag}(10{,}000\,\mathrm{m^2}, 100\,\mathrm{m^2\,s^{-2}}, 10{,}000\,\mathrm{m^2}, 10\,\mathrm{m^2\,s^{-2}}) \tag{43}$$
To show the effectiveness of the proposed BCRLBs in this radar target tracking example with autocorrelated measurement noises, we use the cubature Kalman filter (CKF) [37], cubature Kalman predictor (CKP) [37] and cubature Kalman smoother (CKS) [38] to obtain the state estimates. These estimators generate an augmented measurement to decorrelate the autocorrelated measurement noises instead of using the first-order linearization method. Meanwhile, these Gaussian approximate estimators can obtain accurate estimates with very low computational cost, especially in the high-dimensional case with additive Gaussian noises. The RMSEs and BCRLBs are obtained over 500 Monte Carlo runs.
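As a minimal sketch of how the bound curves in the figures can be produced, the following converts a sequence of FIMs into RMS position and velocity bounds by inverting each FIM and reading out the corresponding diagonal entries (an illustrative helper, assuming the state ordering of (39)):
```python
# Sketch: position/velocity BCRLB curves from a sequence of FIMs,
# for comparison against Monte Carlo RMSEs as in Figures 1-6.
import numpy as np

def bcrlb_curves(J_list):
    pos, vel = [], []
    for J in J_list:
        P = np.linalg.inv(J)                    # BCRLB matrix J^{-1}
        pos.append(np.sqrt(P[0, 0] + P[2, 2]))  # RMS position bound
        vel.append(np.sqrt(P[1, 1] + P[3, 3]))  # RMS velocity bound
    return np.array(pos), np.array(vel)
```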
Figure 1 shows the RMSE versus the BCRLB for position and velocity estimation. It can be seen that the proposed BCRLBs provide lower bounds on the MSEs of the CKP and CKS. Moreover, the gaps between the RMSEs of the CKP and CKS and the BCRLBs for one-step prediction and fixed-interval smoothing are very small. This means that the CKP and CKS are close to being efficient. Moreover, it can be seen that the BCRLB for one-step prediction lies above the BCRLB for filtering and the RMSE of the CKP lies above the RMSE of the CKF. This is because prediction only depends on the dynamic model, whereas filtering depends on both the dynamic and measurement models. Since smoothing uses both past and future information, the BCRLB for fixed-interval smoothing is lower than the BCRLB for filtering and the RMSE of the CKS is lower than the RMSE of the CKF.
Figure 2 shows the BCRLBs for multi-step prediction, i.e., 1-step to 5-step prediction. It can be seen that the more steps we predict ahead, the larger the BCRLB for prediction is. This is because if we take more prediction steps, the predictions for position and velocity become less accurate.
Figure 3 shows the BCRLBs for fixed-lag and fixed-interval smoothing. It can be seen that the BCRLB for 1-step fixed-lag smoothing is the worst and the BCRLB for fixed-interval smoothing is the best. This is because the smoothing estimates become more and more accurate as the length of the data interval increases.

5.2. Example 2: Cross-Correlated Process and Measurement Noises at One Time Step Apart

In this example, we assume that the process noise sequence $w_k$ in (39) is cross-correlated with the measurement noise sequence $e_k$ in (41) at one time step apart. The cross-correlation covariance is
$$E[w_k e_{k+1}'] = U_k = \begin{bmatrix} 0.5 & 0.5 \\ 0.3 & 0.3 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}$$
The distribution of $e_k$ is $\mathcal{N}(0, E_k)$ with $E_k = \mathrm{diag}(\sigma_r^2(e), \sigma_\theta^2(e))$, $\sigma_r(e) = 30\,\mathrm{m}$ and $\sigma_\theta(e) = 40\,\mathrm{mrad}$. The initial state $X_0 \sim \mathcal{N}(\bar{X}_0, P_0)$ with
$$\bar{X}_0 = [1000\,\mathrm{m}, 120\,\mathrm{m\,s^{-1}}, 1000\,\mathrm{m}, 10\,\mathrm{m\,s^{-1}}]', \qquad P_0 = \mathrm{diag}(10{,}000\,\mathrm{m^2}, 1000\,\mathrm{m^2\,s^{-2}}, 10{,}000\,\mathrm{m^2}, 10\,\mathrm{m^2\,s^{-2}}) \tag{44}$$
To show the effectiveness of the proposed BCRLBs in this radar target tracking example with cross-correlated process and measurement noises at one time step apart, we use the cubature Kalman filter (CKF), cubature Kalman predictor (CKP) and cubature Kalman smoother (CKS) in [40] to obtain the state estimates. These estimators decorrelate the cross-correlation between the process and measurement noises by reconstructing a pseudo measurement equation. Compared with the Monte Carlo approximation method, these Gaussian approximate estimators give an effective balance between estimation accuracy and computational cost. A total of 500 Monte Carlo runs are performed to obtain the RMSEs and BCRLBs.
Figure 4 shows the RMSEs of the CKF, CKP and CKS versus three types of BCRLBs, i.e., for filtering, one-step prediction and fixed-interval smoothing. It can be seen that the RMSEs of the CKP and CKS are bounded from below by their corresponding BCRLBs. It can also be observed that the gaps between the RMSEs of the CKP and CKS and their corresponding BCRLBs are very small. This indicates that these estimators are close to being efficient. Moreover, we can see that the BCRLB for one-step prediction lies above the BCRLB for filtering, and the RMSE of the CKP lies above the RMSE of the CKF, because prediction uses less information than filtering. Since smoothing uses data within the whole interval, the BCRLB for fixed-interval smoothing is lower than the BCRLB for filtering and the RMSE of the CKS is lower than the RMSE of the CKF.
Figure 5 shows the BCRLBs for multi-step prediction. We can see that the BCRLB for prediction grows as the prediction step increases. This is because if we predict more steps ahead, the predictions for position and velocity become less accurate.
Figure 6 shows the BCRLBs for fixed-lag and fixed-interval smoothing. Clearly, smoothing becomes more accurate as the length of the data interval increases. Hence, the BCRLB for 1-step fixed-lag smoothing is the worst, whereas the BCRLB for fixed-interval smoothing is the best.

6. Conclusions

In this paper, we have proposed recursive BCRLBs for the prediction and smoothing of nonlinear dynamic systems with TASD measurements, i.e., systems in which the current measurement depends directly on both the current and the most recent previous state. A comparison with the recursive BCRLBs for nonlinear regular systems, in which the current measurement only depends on the current state directly, has been made. It is found that the BCRLB for the smoothing of regular systems is a special case of the newly proposed BCRLB, and the recursive BCRLBs for the prediction of TASD systems have the same forms as the BCRLBs for the prediction of regular systems except that the FIMs are different. This is because prediction only depends on the dynamic model, which is the same for both. Specific and simplified forms of the BCRLBs for the additive Gaussian noise cases have also been given. In addition, the recursive BCRLBs for the prediction and smoothing of two special types of nonlinear systems with TASD measurements, in which the original measurement noises are autocorrelated or cross-correlated with the process noises at one time step apart, have been presented, respectively. It is proven that the optimal predictors are efficient estimators when these two special types of TASD systems reduce to linear Gaussian systems.

Author Contributions

Conceptualization, X.L., Z.D. and M.M.; methodology, X.L., Z.D. and M.M.; software, X.L.; validation, X.L., Z.D. and Q.T.; writing—original draft preparation, X.L., Z.D. and M.M.; writing—review and editing, Z.D., Q.T. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported in part by the National Key Research and Development Plan under Grants 2021YFC2202600 and 2021YFC2202603, and the National Natural Science Foundation of China through Grants 62033010, 61773147 and 61673317.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The authors declare that the data that support the findings of this study are available from the authors upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BCRLB: Bayesian Cramér-Rao lower bound
CPCRLB: conditional posterior Cramér-Rao lower bound
CKF: cubature Kalman filter
CKP: cubature Kalman predictor
CKS: cubature Kalman smoother
FIM: Fisher information matrix
MSE: mean square error
MMSE: minimum mean squared error
JCRLB: joint Cramér-Rao lower bound
PCRLB: posterior Cramér-Rao lower bound
PDF: probability density function
RMSE: root mean square error
TASD: two-adjacent-states dependent

Appendix A. Proof of Theorem 1

For the FIM $\mathbf{J}_{j+1|k}$, the joint PDF of $X_{j+1}$ and $Z_k$ is
$$p_k^{j+1} \triangleq p(X_{j+1}, Z_k) = p(X_j, Z_k)\, p(x_{j+1}|X_j, Z_k) = p_k^j\, p(x_{j+1}|x_j) \tag{A1}$$
Partition $X_j$ as $X_j = [(X_{j-1})', x_j']'$ and $\mathbf{J}_{j|k}$ as
$$\mathbf{J}_{j|k} = E_{p_k^j}\begin{bmatrix} -\Delta_{X_{j-1}}^{X_{j-1}} \ln p_k^j & -\Delta_{X_{j-1}}^{x_j} \ln p_k^j \\ -\Delta_{x_j}^{X_{j-1}} \ln p_k^j & -\Delta_{x_j}^{x_j} \ln p_k^j \end{bmatrix} = \begin{bmatrix} \mathbf{J}_{j|k}^{11} & \mathbf{J}_{j|k}^{12} \\ \mathbf{J}_{j|k}^{21} & \mathbf{J}_{j|k}^{22} \end{bmatrix} \tag{A2}$$
Since $J_{j|k}^{-1}$ is equal to the $n \times n$ right-lower block of $(\mathbf{J}_{j|k})^{-1}$, from the inversion of a partitioned matrix [24], the FIM about $x_j$ can be obtained as
$$J_{j|k} = \mathbf{J}_{j|k}^{22} - \mathbf{J}_{j|k}^{21}(\mathbf{J}_{j|k}^{11})^{-1}\mathbf{J}_{j|k}^{12} \tag{A3}$$
Partition $X_{j+1}$ as $X_{j+1} = [(X_{j-1})', x_j', x_{j+1}']'$ and $\mathbf{J}_{j+1|k}$ as
$$\mathbf{J}_{j+1|k} = E_{p_k^{j+1}}\begin{bmatrix} -\Delta_{X_{j-1}}^{X_{j-1}} \ln p_k^{j+1} & -\Delta_{X_{j-1}}^{x_j} \ln p_k^{j+1} & -\Delta_{X_{j-1}}^{x_{j+1}} \ln p_k^{j+1} \\ -\Delta_{x_j}^{X_{j-1}} \ln p_k^{j+1} & -\Delta_{x_j}^{x_j} \ln p_k^{j+1} & -\Delta_{x_j}^{x_{j+1}} \ln p_k^{j+1} \\ -\Delta_{x_{j+1}}^{X_{j-1}} \ln p_k^{j+1} & -\Delta_{x_{j+1}}^{x_j} \ln p_k^{j+1} & -\Delta_{x_{j+1}}^{x_{j+1}} \ln p_k^{j+1} \end{bmatrix} \tag{A4}$$
where
$$E_{p_k^{j+1}}(-\Delta_{X_{j-1}}^{X_{j-1}} \ln p_k^{j+1}) = -\int_{\mathbb{R}^{km}}\int_{\mathbb{R}^{(j+2)n}} p_k^{j+1}(\Delta_{X_{j-1}}^{X_{j-1}} \ln p_k^{j+1})\, \mathrm{d}X_{j+1}\, \mathrm{d}Z_k = -\int_{\mathbb{R}^{km}}\int_{\mathbb{R}^{(j+2)n}} p_k^j\, p(x_{j+1}|x_j)\, [\Delta_{X_{j-1}}^{X_{j-1}}(\ln p_k^j + \ln p(x_{j+1}|x_j))]\, \mathrm{d}X_{j+1}\, \mathrm{d}Z_k = -\int_{\mathbb{R}^{km}}\int_{\mathbb{R}^{(j+1)n}} p_k^j\, (\Delta_{X_{j-1}}^{X_{j-1}} \ln p_k^j)\, \mathrm{d}X_j\, \mathrm{d}Z_k \overset{(A2)}{=} \mathbf{J}_{j|k}^{11} \tag{A5}$$
Similarly, we can obtain
$$\begin{aligned} E_{p_k^{j+1}}(-\Delta_{X_{j-1}}^{x_j} \ln p_k^{j+1}) &= \mathbf{J}_{j|k}^{12} \\ E_{p_k^{j+1}}(-\Delta_{x_j}^{x_j} \ln p_k^{j+1}) &= \mathbf{J}_{j|k}^{22} + D_{j+1}^{j,j} \\ E_{p_k^{j+1}}(-\Delta_{X_{j-1}}^{x_{j+1}} \ln p_k^{j+1}) &= 0 \\ E_{p_k^{j+1}}(-\Delta_{x_j}^{x_{j+1}} \ln p_k^{j+1}) &= D_{j+1}^{j+1,j} \\ E_{p_k^{j+1}}(-\Delta_{x_{j+1}}^{x_{j+1}} \ln p_k^{j+1}) &= D_{j+1}^{j+1,j+1} \end{aligned} \tag{A6}$$
Then, $\mathbf{J}_{j+1|k}$ can be rewritten as
$$\mathbf{J}_{j+1|k} = \begin{bmatrix} \mathbf{J}_{j|k}^{11} & \mathbf{J}_{j|k}^{12} & 0 \\ \mathbf{J}_{j|k}^{21} & \mathbf{J}_{j|k}^{22} + D_{j+1}^{j,j} & D_{j+1}^{j+1,j} \\ 0 & D_{j+1}^{j,j+1} & D_{j+1}^{j+1,j+1} \end{bmatrix} \tag{A7}$$
Since $J_{j+1|k}^{-1}$ is the right-lower $n \times n$ block of $(\mathbf{J}_{j+1|k})^{-1}$, from (A7) and the inversion of a partitioned matrix, we have
$$J_{j+1|k} = D_{j+1}^{j+1,j+1} - \begin{bmatrix} 0 & D_{j+1}^{j,j+1} \end{bmatrix} \begin{bmatrix} \mathbf{J}_{j|k}^{11} & \mathbf{J}_{j|k}^{12} \\ \mathbf{J}_{j|k}^{21} & \mathbf{J}_{j|k}^{22} + D_{j+1}^{j,j} \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ D_{j+1}^{j+1,j} \end{bmatrix} = D_{j+1}^{j+1,j+1} - D_{j+1}^{j,j+1}[\mathbf{J}_{j|k}^{22} + D_{j+1}^{j,j} - \mathbf{J}_{j|k}^{21}(\mathbf{J}_{j|k}^{11})^{-1}\mathbf{J}_{j|k}^{12}]^{-1} D_{j+1}^{j+1,j} \overset{(A3)}{=} D_{j+1}^{j+1,j+1} - D_{j+1}^{j,j+1}(D_{j+1}^{j,j} + J_{j|k})^{-1} D_{j+1}^{j+1,j}$$
This completes the proof.

Appendix B. Proof of Theorem 2

For the FIM $\mathbf{J}_{k|k}$, the joint PDF of $X_k$ and $Z_k$ at arbitrary time $k$ is
$$p(X_k, Z_k) = p(X_{k-1}, Z_{k-1})\, p(x_k|X_{k-1}, Z_{k-1})\, p(z_k|x_k, X_{k-1}, Z_{k-1}) = p(X_{k-1}, Z_{k-1})\, p(x_k|x_{k-1})\, p(z_k|x_k, x_{k-1}) \tag{A8}$$
Similar to (A7), by using (A8), we can partition $\mathbf{J}_{k|k}$ as
$$\mathbf{J}_{k|k} = \begin{bmatrix} \mathbf{T}_{j|j} & \mathbf{S}_{j,k} \\ \mathbf{S}_{j,k}' & \boldsymbol{\Psi}_{j,k} \end{bmatrix} = \begin{bmatrix} K_0 & N_1 & & & & \\ N_1' & \ddots & \ddots & & & \\ & \ddots & K_j & N_{j+1} & & \\ & & N_{j+1}' & K_{j+1} & \ddots & \\ & & & \ddots & K_{k-1} & N_k \\ & & & & N_k' & D_k^{k,k} + E_k^{k,k} \end{bmatrix} \tag{A9}$$
where zero blocks have been left empty, $K_k = D_k^{k,k} + E_k^{k,k} + D_{k+1}^{k,k} + E_{k+1}^{k,k}$, $N_k = D_k^{k,k-1} + E_k^{k,k-1}$, and the block matrix $\mathbf{T}_{j|j}$ is
$$\mathbf{T}_{j|j} = \begin{bmatrix} \mathbf{J}_{j|j}^{11} & \mathbf{J}_{j|j}^{12} \\ \mathbf{J}_{j|j}^{21} & \mathbf{J}_{j|j}^{22} + D_{j+1}^{j,j} + E_{j+1}^{j,j} \end{bmatrix} \tag{A10}$$
Since $J_{j|k}^{-1}$ is the lower-right block of $[(\mathbf{J}_{k|k})^{-1}]_{11}$ defined in (7), we have
$$J_{j|k}^{-1} = [0, I_n]\, [(\mathbf{J}_{k|k})^{-1}]_{11}\, [0, I_n]' \tag{A11}$$
From (A9) and the inversion of a partitioned matrix [24], we have
$$[(\mathbf{J}_{k|k})^{-1}]_{11} = \mathbf{T}_{j|j}^{-1} + \mathbf{T}_{j|j}^{-1}\mathbf{S}_{j,k}[(\mathbf{J}_{k|k})^{-1}]_{22}\mathbf{S}_{j,k}'\mathbf{T}_{j|j}^{-1} \tag{A12}$$
and
$$[0, I_n]\,\mathbf{T}_{j|j}^{-1}\,[0, I_n]' = (\mathbf{J}_{j|j}^{22} + D_{j+1}^{j,j} + E_{j+1}^{j,j} - \mathbf{J}_{j|j}^{21}(\mathbf{J}_{j|j}^{11})^{-1}\mathbf{J}_{j|j}^{12})^{-1} = (J_{j|j} + D_{j+1}^{j,j} + E_{j+1}^{j,j})^{-1} \tag{A13}$$
Substituting (A12) into (A11) and using (A13) yields
$$J_{j|k}^{-1} = (J_{j|j} + D_{j+1}^{j,j} + E_{j+1}^{j,j})^{-1} + (J_{j|j} + D_{j+1}^{j,j} + E_{j+1}^{j,j})^{-1}(D_{j+1}^{j+1,j} + E_{j+1}^{j+1,j})\, J_{j+1|k}^{-1}\, (D_{j+1}^{j,j+1} + E_{j+1}^{j,j+1})(J_{j|j} + D_{j+1}^{j,j} + E_{j+1}^{j,j})^{-1} \tag{A14}$$
Then, using the matrix inversion lemma [24], the FIM $J_{j|k}$ is given by
$$J_{j|k} = J_{j|j} + D_{j+1}^{j,j} + E_{j+1}^{j,j} - (D_{j+1}^{j+1,j} + E_{j+1}^{j+1,j})[(D_{j+1}^{j,j+1} + E_{j+1}^{j,j+1})(J_{j|j} + D_{j+1}^{j,j} + E_{j+1}^{j,j})^{-1}(D_{j+1}^{j+1,j} + E_{j+1}^{j+1,j}) + J_{j+1|k}]^{-1}(D_{j+1}^{j,j+1} + E_{j+1}^{j,j+1}) \tag{A15}$$
Substituting (4) into (A15), we have
$$J_{j|k} = J_{j|j} + D_{j+1}^{j,j} + E_{j+1}^{j,j} - (D_{j+1}^{j+1,j} + E_{j+1}^{j+1,j})(J_{j+1|k} + D_{j+1}^{j+1,j+1} + E_{j+1}^{j+1,j+1} - J_{j+1|j+1})^{-1}(D_{j+1}^{j,j+1} + E_{j+1}^{j,j+1}) \tag{A16}$$
This completes the proof.

Appendix C. Proof of Theorem 3

From the assumption that the noises are additive Gaussian white noises, we have
$$\ln p(x_{k+1}|x_k) = c_2 - \frac{1}{2}(x_{k+1} - f_k(x_k))' Q_k^{-1} (x_{k+1} - f_k(x_k)) \tag{A17}$$
where $c_2$ is a constant.
Thus, the partial derivatives of $\ln p(x_{k+1}|x_k)$ are
$$\nabla_{x_k} \ln p(x_{k+1}|x_k) = \nabla_{x_k}\left[-\frac{1}{2}(x_{k+1} - f_k(x_k))' Q_k^{-1} (x_{k+1} - f_k(x_k))\right] = -\nabla_{x_k}\frac{1}{2}[x_{k+1}' Q_k^{-1} x_{k+1} - x_{k+1}' Q_k^{-1} f_k(x_k) - f_k'(x_k) Q_k^{-1} x_{k+1} + f_k'(x_k) Q_k^{-1} f_k(x_k)] = -[\nabla_{x_k} f_k'(x_k)] Q_k^{-1} (f_k(x_k) - x_{k+1}) \tag{A18}$$
$$\Delta_{x_k}^{x_k} \ln p(x_{k+1}|x_k) = \nabla_{x_k} \nabla_{x_k}' \ln p(x_{k+1}|x_k) = -\nabla_{x_k}\{(f_k(x_k) - x_{k+1})' Q_k^{-1} [\nabla_{x_k} f_k'(x_k)]'\} = -[\nabla_{x_k} f_k'(x_k)] Q_k^{-1} [\nabla_{x_k} f_k'(x_k)]' + \Delta_{x_k}^{x_k}[f_k'(x_k)] Q_k^{-1} (x_{k+1} - f_k(x_k)) \tag{A19}$$
Substituting (A19) into (3), we have
$$D_{k+1}^{k,k} = -E[\Delta_{x_k}^{x_k} \ln p(x_{k+1}|x_k)] = E\{[\nabla_{x_k} f_k'(x_k)] Q_k^{-1} [\nabla_{x_k} f_k'(x_k)]'\} - E\{\Delta_{x_k}^{x_k}[f_k'(x_k)] Q_k^{-1} (x_{k+1} - f_k(x_k))\} = E\{[\nabla_{x_k} f_k'(x_k)] Q_k^{-1} [\nabla_{x_k} f_k'(x_k)]'\} \tag{A20}$$
where the second expectation vanishes since $E[x_{k+1} - f_k(x_k)|x_k] = 0$. The remaining $D_{k+1}^{k+1,k}$, $D_{k+1}^{k+1,k+1}$, $E_{k+1}^{k,k}$, $E_{k+1}^{k+1,k}$ and $E_{k+1}^{k+1,k+1}$ can be obtained similarly. This completes the proof.

Appendix D. Proof of Theorem 4

Applying the optimal filter [24] to the linear Gaussian systems (26)–(28), we have
$$H_k^* = L_{k+1} F_k - \Psi_k L_k \tag{A21}$$
$$R_k^* = L_{k+1} Q_k L_{k+1}' + R_k \tag{A22}$$
$$F_k^* = F_k - Q_k L_{k+1}' (R_k^*)^{-1} H_k^* \tag{A23}$$
$$Q_k^* = Q_k - Q_k L_{k+1}' (R_k^*)^{-1} L_{k+1} Q_k \tag{A24}$$
$$K_k = P_{k|k} (H_k^*)' S_k^{-1} \tag{A25}$$
$$S_k = H_k^* P_{k|k} (H_k^*)' + R_k^* \tag{A26}$$
$$P_{k|k+1} = P_{k|k} - K_k S_k K_k' \overset{(A25)}{=} P_{k|k} - P_{k|k} (H_k^*)' S_k^{-1} H_k^* P_{k|k} \tag{A27}$$
$$P_{k+1|k+1} = F_k^* P_{k|k+1} (F_k^*)' + Q_k^* \tag{A28}$$
For simplicity, we introduce
$$B_k^{11} = D_{k+1}^{k,k} + E_{k+1}^{k,k}, \qquad B_k^{12} = D_{k+1}^{k+1,k} + E_{k+1}^{k+1,k} = (B_k^{21})', \qquad B_k^{22} = D_{k+1}^{k+1,k+1} + E_{k+1}^{k+1,k+1} \tag{A29}$$
From (4) and the matrix inversion lemma [24], we have
$$J_{k+1|k+1}^{-1} = [B_k^{22} - B_k^{21}(B_k^{11} + J_{k|k})^{-1} B_k^{12}]^{-1} = (B_k^{22})^{-1} + (B_k^{22})^{-1} B_k^{21} [B_k^{11} + J_{k|k} - B_k^{12} (B_k^{22})^{-1} B_k^{21}]^{-1} B_k^{12} (B_k^{22})^{-1} \tag{A30}$$
Let
$$P_{k|k} = J_{k|k}^{-1} \tag{A31}$$
The inverse of $B_k^{22}$ in (A30) can be rewritten as
$$(B_k^{22})^{-1} = (Q_k^{-1} + L_{k+1}' R_k^{-1} L_{k+1})^{-1} = Q_k - Q_k L_{k+1}' (R_k^*)^{-1} L_{k+1} Q_k \overset{(A24)}{=} Q_k^* \tag{A32}$$
Rewrite $(B_k^{22})^{-1} B_k^{21}$ in (A30) as
$$(B_k^{22})^{-1} B_k^{21} = -(Q_k - Q_k L_{k+1}' (R_k^*)^{-1} L_{k+1} Q_k)(Q_k^{-1} F_k + L_{k+1}' R_k^{-1} \Psi_k L_k) = -F_k - Q_k L_{k+1}' R_k^{-1} \Psi_k L_k + Q_k L_{k+1}' (R_k^*)^{-1} L_{k+1} F_k + Q_k L_{k+1}' (R_k^*)^{-1} L_{k+1} Q_k L_{k+1}' R_k^{-1} \Psi_k L_k = -F_k + Q_k L_{k+1}' (R_k^*)^{-1} L_{k+1} F_k - Q_k L_{k+1}' (R_k^*)^{-1} [R_k^* - L_{k+1} Q_k L_{k+1}'] R_k^{-1} \Psi_k L_k \overset{(A22)}{=} -F_k + Q_k L_{k+1}' (R_k^*)^{-1} (L_{k+1} F_k - \Psi_k L_k) \overset{(A21),(A23)}{=} -F_k^* \tag{A33}$$
Similarly, rewrite $B_k^{11} + J_{k|k} - B_k^{12} (B_k^{22})^{-1} B_k^{21}$ as
$$B_k^{11} + J_{k|k} - B_k^{12} (B_k^{22})^{-1} B_k^{21} = J_{k|k} + F_k' Q_k^{-1} F_k + L_k' \Psi_k' R_k^{-1} \Psi_k L_k - (F_k' Q_k^{-1} + L_k' \Psi_k' R_k^{-1} L_{k+1})\, Q_k^*\, (Q_k^{-1} F_k + L_{k+1}' R_k^{-1} \Psi_k L_k) = J_{k|k} + (H_k^*)' (R_k^*)^{-1} H_k^* \tag{A34}$$
where the last equality follows after some algebra using (A21) and (A22).
Thus, the inverse of $B_k^{11} + J_{k|k} - B_k^{12} (B_k^{22})^{-1} B_k^{21}$ becomes
$$(B_k^{11} + J_{k|k} - B_k^{12} (B_k^{22})^{-1} B_k^{21})^{-1} = (P_{k|k}^{-1} + (H_k^*)' (R_k^*)^{-1} H_k^*)^{-1} = P_{k|k} - P_{k|k} (H_k^*)' [R_k^* + H_k^* P_{k|k} (H_k^*)']^{-1} H_k^* P_{k|k} = P_{k|k} - P_{k|k} (H_k^*)' S_k^{-1} H_k^* P_{k|k} \overset{(A27)}{=} P_{k|k+1} \tag{A35}$$
Then, from (A30), (A32), (A33) and (A35), the inverse of $J_{k+1|k+1}$ is
$$J_{k+1|k+1}^{-1} = Q_k^* + F_k^* P_{k|k+1} (F_k^*)' \overset{(A28)}{=} P_{k+1|k+1} \tag{A36}$$
Using (6) and Corollary 1, the FIM for one-step prediction can be obtained as
$$J_{k+1|k} = D_{k+1}^{k+1,k+1} - D_{k+1}^{k,k+1}(D_{k+1}^{k,k} + J_{k|k})^{-1} D_{k+1}^{k+1,k} = Q_k^{-1} - Q_k^{-1} F_k (F_k' Q_k^{-1} F_k + J_{k|k})^{-1} F_k' Q_k^{-1} = (Q_k + F_k J_{k|k}^{-1} F_k')^{-1} \tag{A37}$$
For the optimal one-step predictor, the MSE matrix $P_{k+1|k}$ is given by
$$P_{k+1|k} = E[\tilde{x}_{k+1|k} \tilde{x}_{k+1|k}' | Y_k] = F_k P_{k|k} F_k' + Q_k \tag{A38}$$
Then we have
$$P_{k+1|k} \overset{(A31),(A37)}{=} J_{k+1|k}^{-1} \tag{A39}$$
Using (6) and Corollary 1, the FIM for two-step prediction can be written as
$$J_{k+2|k} = D_{k+2}^{k+2,k+2} - D_{k+2}^{k+1,k+2}(D_{k+2}^{k+1,k+1} + J_{k+1|k})^{-1} D_{k+2}^{k+2,k+1} = Q_{k+1}^{-1} - Q_{k+1}^{-1} F_{k+1} (F_{k+1}' Q_{k+1}^{-1} F_{k+1} + J_{k+1|k})^{-1} F_{k+1}' Q_{k+1}^{-1} = (Q_{k+1} + F_{k+1} J_{k+1|k}^{-1} F_{k+1}')^{-1} \tag{A40}$$
For the optimal two-step predictor, one has
$$\tilde{x}_{k+2|k} = x_{k+2} - \hat{x}_{k+2|k} = F_{k+1}(x_{k+1} - \hat{x}_{k+1|k}) + w_{k+1} \tag{A41}$$
and
$$P_{k+2|k} = E[\tilde{x}_{k+2|k} \tilde{x}_{k+2|k}' | Y_k] = F_{k+1} P_{k+1|k} F_{k+1}' + Q_{k+1} \tag{A42}$$
Then it follows from (A39), (A40) and (A42) that
$$P_{k+2|k} = J_{k+2|k}^{-1} \tag{A43}$$
Similarly, we can prove that $P_{k+m|k} = J_{k+m|k}^{-1}$, $m \ge 3$. This completes the proof.

Appendix E. Proof of Theorem 6

For the linear Gaussian systems (35) and (36), from [49], we have $P_{k|k} = J_{k|k}^{-1}$.
Using (6) and Corollary 2, the FIM for one-step prediction can be written as
$$J_{k+1|k} = D_{k+1}^{k+1,k+1} - D_{k+1}^{k,k+1}(D_{k+1}^{k,k} + J_{k|k})^{-1} D_{k+1}^{k+1,k} = Q_k^{-1} - Q_k^{-1} F_k (F_k' Q_k^{-1} F_k + J_{k|k})^{-1} F_k' Q_k^{-1} = (Q_k + F_k J_{k|k}^{-1} F_k')^{-1} \tag{A44}$$
For the optimal one-step predictor, the MSE matrix $P_{k+1|k}$ is given by
$$P_{k+1|k} = F_k P_{k|k} F_k' + Q_k \tag{A45}$$
From (A44), (A45) and $P_{k|k} = J_{k|k}^{-1}$, we can obtain
$$P_{k+1|k} = J_{k+1|k}^{-1} \tag{A46}$$
Using (6) and Corollary 2, the FIM for two-step prediction is given by
$$J_{k+2|k} = D_{k+2}^{k+2,k+2} - D_{k+2}^{k+1,k+2}(D_{k+2}^{k+1,k+1} + J_{k+1|k})^{-1} D_{k+2}^{k+2,k+1} = Q_{k+1}^{-1} - Q_{k+1}^{-1} F_{k+1} (F_{k+1}' Q_{k+1}^{-1} F_{k+1} + J_{k+1|k})^{-1} F_{k+1}' Q_{k+1}^{-1} = (Q_{k+1} + F_{k+1} J_{k+1|k}^{-1} F_{k+1}')^{-1} \tag{A47}$$
For the optimal two-step predictor, one has
$$\tilde{x}_{k+2|k} = x_{k+2} - \hat{x}_{k+2|k} = F_{k+1}(x_{k+1} - \hat{x}_{k+1|k}) + w_{k+1} \tag{A48}$$
and
$$P_{k+2|k} = E[\tilde{x}_{k+2|k} \tilde{x}_{k+2|k}' | Z_k] = F_{k+1} P_{k+1|k} F_{k+1}' + Q_{k+1} \tag{A49}$$
Then it follows from (A46), (A47) and (A49) that
$$P_{k+2|k} = J_{k+2|k}^{-1} \tag{A50}$$
Similarly, we can prove that $P_{k+m|k} = J_{k+m|k}^{-1}$, $m \ge 3$. This completes the proof.

References

1. Bar-Shalom, Y.; Willett, P.; Tian, X. Tracking and Data Fusion: A Handbook of Algorithms; YBS Publishing: Storrs, CT, USA, 2011.
2. Mallick, M.; Tian, X.Q.; Zhu, Y.; Morelande, M. Angle-only filtering of a maneuvering target in 3D. Sensors 2022, 22, 1422.
3. Li, Z.H.; Xu, B.; Yang, J.; Song, J.S. A steady-state Kalman predictor-based filtering strategy for non-overlapping sub-band spectral estimation. Sensors 2015, 15, 110–134.
4. Lu, X.D.; Xie, Y.T.; Zhou, J. Improved spatial registration and target tracking method for sensors on multiple missiles. Sensors 2018, 18, 1723.
5. Ntemi, M.; Kotropoulos, C. Prediction methods for time evolving dyadic processes. In Proceedings of the 26th European Signal Processing Conference (EUSIPCO), Rome, Italy, 3–7 September 2018; pp. 2588–2592.
6. Chen, G.; Meng, X.; Wang, Y.; Zhang, Y.; Tian, P.; Yang, H. Integrated WiFi/PDR/Smartphone using an unscented Kalman filter algorithm for 3D indoor localization. Sensors 2015, 15, 24595–24614.
7. Xu, Y.; Chen, X.Y.; Li, Q.H. Autonomous integrated navigation for indoor robots utilizing on-line iterated extended Rauch-Tung-Striebel smoothing. Sensors 2013, 13, 15937–15953.
8. Kalman, R.E. A new approach to linear filtering and prediction problems. Trans. ASME J. Basic Eng. 1960, 82, 35–45.
9. Schmidt, S.F. The Kalman filter—Its recognition and development for aerospace applications. J. Guid. Control Dyn. 1981, 4, 4–7.
10. Norgaard, M.; Poulsen, N.; Ravn, O. New developments in state estimation of nonlinear systems. Automatica 2000, 36, 1627–1638.
11. Julier, S.J.; Uhlmann, J.K. Unscented filtering and nonlinear estimation. Proc. IEEE 2004, 92, 401–422.
12. Julier, S.; Uhlmann, J.; Durrant-Whyte, H.F. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Trans. Autom. Control 2000, 45, 477–482.
13. Arasaratnam, I.; Haykin, S.; Elliott, R.J. Discrete-time nonlinear filtering algorithms using Gauss-Hermite quadrature. Proc. IEEE 2007, 95, 953–977.
14. Arasaratnam, I.; Haykin, S. Cubature Kalman filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269.
15. Liu, H.; Wu, W. Strong tracking spherical simplex-radial cubature Kalman filter for maneuvering target tracking. Sensors 2017, 17, 741.
16. Li, X.R.; Jilkov, V.P. A survey of maneuvering target tracking: Approximation techniques for nonlinear filtering. In Proceedings of the SPIE Conference on Signal and Data Processing of Small Targets, Orlando, FL, USA, 25 August 2004; pp. 537–550.
17. Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188.
18. Wang, X.; Li, T.; Sun, S.; Corchado, J.M. A survey of recent advances in particle filters and remaining challenges for multitarget tracking. Sensors 2017, 17, 2707.
19. Sun, S.L. Optimal and self-tuning information fusion Kalman multi-step predictor. IEEE Trans. Aerosp. Electron. Syst. 2007, 43, 418–427.
20. Adnan, R.; Ruslan, F.A.; Samad, A.M.; Zain, Z.M. Extended Kalman filter (EKF) prediction of flood water level. In Proceedings of the 2012 IEEE Control and System Graduate Research Colloquium, Shah Alam, Malaysia, 16–17 July 2012; pp. 171–174.
21. Tian, X.M.; Cao, Y.P.; Chen, S. Process fault prognosis using a fuzzy-adaptive unscented Kalman predictor. Int. J. Adapt. Control Signal Process. 2011, 25, 813–830.
22. Han, M.; Xu, M.L.; Liu, X.X.; Wang, X.Y. Online multivariate time series prediction using SCKF-γESN model. Neurocomputing 2015, 147, 315–323.
23. Wang, D.; Yang, F.F.; Tsui, K.L.; Zhou, Q.; Bae, S.J. Remaining useful life prediction of lithium-ion batteries based on spherical cubature particle filter. IEEE Trans. Instrum. Meas. 2016, 65, 1282–1291.
24. Bar-Shalom, Y.; Li, X.R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation; John Wiley & Sons, Inc.: New York, NY, USA, 2001.
25. Leondes, C.T.; Peller, J.B.; Stear, E.B. Nonlinear smoothing theory. IEEE Trans. Syst. Sci. Cybern. 1970, 6, 63–71.
26. Sarkka, S. Unscented Rauch-Tung-Striebel smoother. IEEE Trans. Autom. Control 2008, 53, 845–849.
27. Arasaratnam, I.; Haykin, S. Cubature Kalman smoothers. Automatica 2011, 47, 2245–2250.
28. Lindsten, F.; Bunch, P.; Godsill, S.J.; Schon, T.B. Rao-Blackwellized particle smoothers for mixed linear/nonlinear state-space models. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 6288–6292.
29. Wu, W.R.; Chang, D.C. Maneuvering target tracking with colored noise. IEEE Trans. Aerosp. Electron. Syst. 1996, 32, 1311–1320.
30. Li, Z.; Wang, Y.; Zheng, W. Adaptive consensus-based unscented information filter for tracking target with maneuver and colored noise. Sensors 2019, 19, 3069.
31. Yuan, G.N.; Xie, Y.J.; Song, Y.; Liang, H.B. Multipath parameters estimation of weak GPS signal based on new colored noise unscented Kalman filter. In Proceedings of the 2010 IEEE International Conference on Information and Automation, Harbin, China, 20–23 June 2010; pp. 1852–1856.
32. Jamoos, A.; Grivel, E.; Bobillet, W.; Guidorzi, R. Errors-in-variables based approach for the identification of AR time-varying fading channels. IEEE Signal Process. Lett. 2007, 14, 793–796.
33. Mahmoudi, A.; Karimi, M.; Amindavar, H. Parameter estimation of autoregressive signals in presence of colored AR(1) noise as a quadratic eigenvalue problem. Signal Process. 2012, 92, 1151–1156.
34. Gustafsson, F.; Saha, S. Particle filtering with dependent noise. In Proceedings of the 13th International Conference on Information Fusion, Edinburgh, UK, 26–29 July 2010.
35. Zuo, D.G.; Han, C.Z.; Wei, R.X.; Lin, Z. Synchronized multi-sensor tracks association and fusion. In Proceedings of the 4th International Conference on Information Fusion, Montreal, QC, Canada, 7–10 August 2001; pp. 1–6.
36. Chui, C.K.; Chen, G. Kalman Filtering: With Real-Time Applications; Springer: Cham, Switzerland, 2017.
37. Wang, X.X.; Pan, Q. Nonlinear Gaussian filter with the colored measurement noise. In Proceedings of the 17th International Conference on Information Fusion, Salamanca, Spain, 7–10 July 2014; pp. 1–7.
38. Wang, X.X.; Liang, Y.; Pan, Q.; Zhao, C.; Yang, F. Nonlinear Gaussian smoother with colored measurement noise. IEEE Trans. Autom. Control 2015, 60, 870–876.
39. Saha, S.; Gustafsson, F. Particle filtering with dependent noise processes. IEEE Trans. Signal Process. 2012, 60, 4497–4508.
40. Huang, Y.L.; Zhang, Y.G.; Li, N.; Shi, Z. Design of Gaussian approximate filter and smoother for nonlinear systems with correlated noises at one epoch apart. Circuits Syst. Signal Process. 2016, 35, 3981–4008.
41. Van Trees, H.L.; Bell, K.L.; Tian, Z. Detection, Estimation, and Modulation Theory, Part I: Detection, Estimation, and Filtering Theory, 2nd ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2013.
42. Hernandez, M. Performance bounds for target tracking: Computationally efficient formulations and associated applications. In Integrated Tracking, Classification, and Sensor Management: Theory and Applications; Mallick, M., Krishnamurthy, V., Vo, B.-N., Eds.; Wiley-IEEE Press: Piscataway, NJ, USA, 2012; pp. 255–310.
43. Ristic, B.; Arulampalam, S.; Gordon, N. Beyond the Kalman Filter; Artech House: Norwood, MA, USA, 2004.
44. Tichavsky, P.; Muravchik, C.H.; Nehorai, A. Posterior Cramér-Rao bounds for discrete-time nonlinear filtering. IEEE Trans. Signal Process. 1998, 46, 1386–1396.
45. Simandl, M.; Kralovec, J.; Tichavsky, P. Filtering, predictive, and smoothing Cramér-Rao bounds for discrete-time nonlinear dynamic systems. Automatica 2001, 37, 1703–1716.
46. Zuo, L.; Niu, R.X.; Varshney, P.K. Conditional posterior Cramér-Rao lower bounds for nonlinear sequential Bayesian estimation. IEEE Trans. Signal Process. 2011, 59, 1–14.
47. Zheng, Y.J.; Ozdemir, O.; Niu, R.X.; Varshney, P.K. New conditional posterior Cramér-Rao lower bounds for nonlinear sequential Bayesian estimation. IEEE Trans. Signal Process. 2012, 60, 5549–5556.
48. Wang, Z.G.; Shen, X.J.; Zhu, Y.M. Posterior Cramér-Rao bounds for nonlinear dynamic system with colored noises. J. Syst. Sci. Complex. 2019, 32, 1526–1543.
49. Fritsche, C.; Saha, S.; Gustafsson, F. Bayesian Cramér-Rao bound for nonlinear filtering with dependent noise processes. In Proceedings of the 16th International Conference on Information Fusion, Istanbul, Turkey, 9–12 July 2013; pp. 797–804.
50. Huang, Y.L.; Zhang, Y.G. A new conditional posterior Cramér-Rao lower bound for a class of nonlinear systems. Int. J. Syst. Sci. 2016, 47, 3206–3218.
51. Li, X.Q.; Duan, Z.S.; Hanebeck, U.D. Recursive joint Cramér-Rao lower bound for parametric systems with two-adjacent-states dependent measurements. IET Signal Process. 2021, 15, 221–237.
52. Horn, R.A.; Johnson, C.R. Matrix Analysis, 2nd ed.; Cambridge University Press: New York, NY, USA, 2012.
53. Li, X.R.; Jilkov, V.P. Survey of maneuvering target tracking. Part I: Dynamic models. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 1333–1364.
Figure 1. RMSE versus BCRLB in Example 1.
Figure 2. BCRLBs for prediction in Example 1.
Figure 3. BCRLBs for smoothing in Example 1.
Figure 4. RMSE versus BCRLB in Example 2.
Figure 5. BCRLBs for prediction in Example 2.
Figure 6. BCRLBs for smoothing in Example 2.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
