Article

The Cramér–Rao Bounds and Sensor Selection for Nonlinear Systems with Uncertain Observations

School of Mathematics, Sichuan University, Chengdu 610064, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(4), 1103; https://doi.org/10.3390/s18041103
Submission received: 19 March 2018 / Revised: 31 March 2018 / Accepted: 2 April 2018 / Published: 5 April 2018
(This article belongs to the Section Sensor Networks)

Abstract:
This paper considers the posterior Cramér–Rao bound and sensor selection problems for multi-sensor nonlinear systems with uncertain observations. In order to overcome the difficulties caused by the uncertainty, we investigate two methods to derive the posterior Cramér–Rao bound. The first method is based on the recursive formula of the Cramér–Rao bound and the Gaussian mixture model. However, it requires computing a complex integral with respect to the joint probability density function of the sensor measurements and the target state, so its computational burden is relatively high, especially in large sensor networks. Inspired by the idea of the expectation maximization algorithm, the second method introduces some 0–1 latent variables to deal with the Gaussian mixture model. Since the regularity condition of the posterior Cramér–Rao bound is not satisfied for the discrete uncertain system, we use continuous variables to approximate the discrete latent variables. A new Cramér–Rao bound can then be obtained by a limiting process of the Cramér–Rao bound of the continuous system. It avoids the complex integral, which reduces the computational burden. Based on the new posterior Cramér–Rao bound, the optimal solution of the sensor selection problem can be derived analytically. Thus, it can be used for sensor selection in large-scale sensor networks. Two typical numerical examples verify the effectiveness of the proposed methods.

1. Introduction

In practical problems, we often encounter sensors whose measurements are uncertain, owing to random interference, natural interruptions or sensor failures. Using the model parameters without considering this uncertainty is inadvisable, and many researchers have studied state estimation with uncertain measurements, such as [1,2,3,4]. In this paper, we consider the uncertainty caused by occlusions, i.e., the sensors may not be able to observe the target when it is blocked by some obstacles [5]. For the linear dynamic systems involving uncertainty in [6,7], the authors use a Kalman filter to track the target. However, it is difficult to obtain the optimal estimate for a nonlinear uncertain dynamic system, and we are particularly interested in measuring the efficiency of suboptimal estimators. For this purpose, it is natural to compare them against a lower bound on the estimation error, which gives an indication of performance limitations. Moreover, such a bound can be used to determine whether imposed performance requirements are realistic or not.
The most popular lower bound is the well-known Cramér–Rao bound (CRB). In time-invariant statistical models, the estimated parameter vector is usually assumed to be real-valued (non-random), and the lower bound is given by the inverse of the Fisher information matrix. When we deal with time-varying systems, the estimated parameter vector is modeled as random. A lower bound analogous to the CRB for random parameters is derived in [8]; this bound is also known as the Van Trees version of the CRB, or the posterior CRB (PCRB). The underlying static random system needs to satisfy the regularity condition, namely the absolute integrability of the first two derivatives of all related probability density functions. The first derivation of a sequential PCRB applicable to discrete-time dynamic system filtering is given in [9] and then extended in [10,11,12]. The most general form of the sequential PCRB for discrete-time nonlinear systems is presented in [13]. Together with the original static form of the CRB, these results serve as a basis for a large number of applications [14,15,16].
Most papers on the PCRB do not consider uncertainty in the dynamic systems. When the sensors have uncertain measurements, we need to account for the influence of the uncertainty [17,18]. A CRB for target tracking with detection probability smaller than one is presented in [19,20]. If the uncertain measurement is prone to discretely-distributed faults, a Cramér–Rao-type bound is given in [21]. The authors in [22,23] model the uncertainty with a Gaussian mixture probabilistic model, where the sensor observation is assumed to contain only noise if the sensor cannot sense the target. We therefore aim to derive a recursive PCRB based on the Gaussian mixture model of the uncertainty.
The PCRB requires the Fisher information, which is obtained from the derivatives of the log-likelihood function of the Gaussian mixture model, and this is much more difficult than the case of a single Gaussian distribution. The reason is the summation that occurs inside the logarithm; moreover, the PCRB of the Gaussian mixture model requires a complex integral with respect to the joint probability density function of the sensor measurements and the target state. These difficulties motivate us to investigate another approach to derive the PCRB.
In large wireless sensor networks (WSNs), sensors are battery-powered devices with limited signal processing capabilities [24,25]. In such situations, it is inefficient to utilize all the sensors, including the uninformative ones, which contribute little to the tracking task but still consume resources. This issue has been addressed through the development of sensor selection schemes, whose goal is to select the best non-redundant set of sensors for the tracking task while satisfying the resource constraints [26,27]. Previous research [28,29] on sensor selection assumes that the target tracking process does not suffer any interruptions. Since the sensor observations are quite uncertain, we need to consider sensor selection based on the proposed PCRB.
In this paper, we use two methods to derive the PCRB and thereby overcome the difficulties caused by uncertainty. The first method is based on the recursive formula of the Cramér–Rao bound and the Gaussian mixture model. However, it needs to compute a complex integral with respect to the joint probability density function of the sensor measurements and the target state, so its computational burden is relatively high, especially in large sensor networks; this makes it ill-suited as a criterion for sensor selection. In order to reduce the computational burden and handle sensor selection in large-scale sensor networks, our contributions are as follows:
  • Inspired by the idea of the expectation maximization algorithm, we introduce some 0–1 latent variables to treat the Gaussian mixture model. Since the regularity condition of the posterior Cramér–Rao bound is not satisfied for the discrete uncertain system, we use continuous variables to approximate the discrete latent variables; a new Cramér–Rao bound can then be achieved by a limiting process of the Cramér–Rao bound of the continuous system. This Cramér–Rao bound avoids the complex integral and has a lower computational burden.
  • Based on the proposed posterior Cramér–Rao bound, the sensor selection problem for the nonlinear uncertain dynamic system can be solved efficiently, and its optimal solution can be derived analytically. Thus, it can be used for sensor selection in large-scale sensor networks.
The remainder of this paper is organized as follows. The uncertain system model is defined and the problem is formulated in Section 2. The PCRB for the dynamic system with uncertain observations is derived and justified in Section 3. The optimal sensor selection with uncertain observations is presented in Section 4. Two numerical examples are given in Section 5. Finally, conclusions are offered in the final section.

2. Problem Formulation

Consider the L-sensor nonlinear dynamic system with uncertain observations [5,30],
$$x_k = f_k(x_{k-1}) + w_k, \qquad (1)$$
$$y_k^i = \begin{cases} h_k^i(x_k) + v_k^i & \text{with probability } p_k^i, \\ v_k^i & \text{with probability } 1 - p_k^i, \end{cases} \qquad i = 1, 2, \ldots, L, \qquad (2)$$
where $p_k^i$ is the sensing probability of sensor i, $x_k$ is the state of the system at time k, $y_k^i$ is the measurement at the ith sensor, $i = 1, \ldots, L$, $f_k(x_{k-1})$ is the nonlinear state function, and $h_k^i(x_k)$ is the nonlinear measurement function of $x_k$ at the ith sensor. $w_k$ and $v_k^i$ are the state noise and the measurement noise, respectively, and they are mutually independent. $v_k^i$ is assumed to be independent across time steps and across sensors. The measurement information of the ith sensor is denoted by $Y_k^i = \{y_1^i, y_2^i, \ldots, y_k^i\}$.
Assume that $w_k$ and $v_k^i$ are white Gaussian noises distributed as $\mathcal{N}(0, Q_k)$ and $\mathcal{N}(0, R_k^i)$, $i = 1, \ldots, L$, respectively, where $Q_k$ and $R_k^i$ are the corresponding covariance matrices. We also assume that the initial state $x_0 \sim \mathcal{N}(\hat x_0, \Sigma_0)$, and that, given $x_k$, the measurement $y_k^i$ follows the Gaussian distribution $\mathcal{N}(h_k^i(x_k), R_k^i)$ with probability $p_k^i$ and the Gaussian distribution $\mathcal{N}(0, R_k^i)$ with probability $1 - p_k^i$, i.e.,
$$p(y_k^i \mid x_k) = p_k^i\, \mathcal{N}(h_k^i(x_k), R_k^i) + (1 - p_k^i)\, \mathcal{N}(0, R_k^i). \qquad (3)$$
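As a quick numerical illustration of the uncertain measurement model (2) and its mixture likelihood (3), the following sketch samples and evaluates a single scalar sensor (the measurement function `h`, the sensing probability and the noise variance are hypothetical choices, not values from the paper):

```python
import numpy as np

def gaussian_pdf(y, mean, var):
    """Scalar Gaussian density N(mean, var) evaluated at y."""
    return np.exp(-0.5 * (y - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def measurement_likelihood(y, x, h, p_sense, R):
    """Gaussian-mixture likelihood p(y|x) of Eq. (3) for one scalar sensor:
    with probability p_sense the sensor sees h(x) plus noise, otherwise noise only."""
    return p_sense * gaussian_pdf(y, h(x), R) + (1.0 - p_sense) * gaussian_pdf(y, 0.0, R)

def sample_measurement(x, h, p_sense, R, rng):
    """Draw one measurement from the uncertain observation model (2)."""
    noise = rng.normal(0.0, np.sqrt(R))
    return h(x) + noise if rng.random() < p_sense else noise

rng = np.random.default_rng(0)
h = lambda x: x ** 2                     # hypothetical nonlinear measurement function
y = sample_measurement(1.5, h, p_sense=0.8, R=0.04, rng=rng)
lik = measurement_likelihood(y, 1.5, h, p_sense=0.8, R=0.04)
```

With `p_sense = 1` the mixture collapses to the single Gaussian case, which is the sanity check used below.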
Obviously, the conditional probability density function is a Gaussian mixture distribution, which makes the PCRB hard to calculate. This difficulty motivates us to introduce a hidden state variable, drawing on the idea of the expectation maximization (EM) algorithm [31].
We introduce the 0–1 hidden state variables $I_k^i$, $i = 1, 2, \ldots, L$, which indicate whether the dynamic system has uncertainty. In other words, if $I_k^i = 1$, then $y_k^i = h_k^i(x_k) + v_k^i$; if $I_k^i = 0$, then $y_k^i = v_k^i$. Now, we transform the nonlinear systems (1) and (2) as follows:
$$x_k = f_k(x_{k-1}) + w_k, \qquad I_k^i = 0 \cdot I_{k-1}^i + \tilde w_k^i, \qquad (4)$$
$$y_k^i = I_k^i \cdot h_k^i(x_k) + v_k^i, \qquad i = 1, 2, \ldots, L. \qquad (5)$$
Then, the compact form for Equations (4) and (5) can be written as follows:
$$\breve x_k = F_k(\breve x_{k-1}) + W_k, \qquad (6)$$
$$y_k^i = I_k^i \cdot h_k^i(x_k) + v_k^i, \qquad i = 1, 2, \ldots, L, \qquad (7)$$
where $\breve x_k = [x_k^T\; I_k^T]^T$, $W_k = [w_k^T\; \tilde w_k^1, \ldots, \tilde w_k^L]^T$, $F_k(\breve x_k) = [f_k^T(x_k)\; 0]^T$, $I_k = [I_k^1, \ldots, I_k^L]^T$, and $w_k \sim \mathcal{N}(0, Q_k)$. $\tilde w_k^i \sim B(1, p_k^i)$, a Bernoulli distribution with parameter $p_k^i$, i.e., $P(\tilde w_k^i = 1) = p_k^i$ and $P(\tilde w_k^i = 0) = 1 - p_k^i$. The process noise is independent of the uncertainty, so we assume that $w_k$ and $\tilde w_k^i$, $i = 1, 2, \ldots, L$, are mutually independent.
Since the PCRB is an important criterion for sensor selection, we derive two PCRBs of the uncertain dynamic systems (1), (2) and (6), (7) in Section 3. The former is accurate, but it is difficult to compute. The latter is derived by introducing some hidden state variables, which avoids the complex integral and reduces the computational burden. Finally, based on the second PCRB, we obtain the analytically optimal solution of the sensor selection problem, so that it can be applied to the large-scale sensor selection problem for uncertain dynamic systems.

3. The Posterior Cramér–Rao Bound with Uncertain Observations

In this section, we mainly discuss two methods to calculate the PCRB of multiple sensors. The first method is based on the nonlinear dynamic system with uncertain observations (1) and (2) and Gaussian mixture model [5,13,15,32]. The other approach is based on the nonlinear dynamic system (6) and (7) motivated by the EM algorithm [33].
Let θ be an r-dimensional random parameter to be estimated, let $z$ denote the vector of measured data, let $p(z, \theta)$ be the joint probability density of the pair $(z, \theta)$, and let $g(z)$ be a function of $z$ that estimates θ. Let ∇ and Δ be the operators of the first- and second-order partial derivatives, respectively,
$$\nabla_\eta = \left[\frac{\partial}{\partial \eta_1}, \ldots, \frac{\partial}{\partial \eta_r}\right]^T, \qquad \Delta_\eta^\xi = \nabla_\eta \nabla_\xi^T.$$
The PCRB on the estimate error has the form
$$P = \mathbb{E}\big\{[g(z) - \theta][g(z) - \theta]^T\big\} \succeq J^{-1}, \qquad (8)$$
where $J = \mathbb{E}\{-\Delta_\theta^\theta \log p_{z,\theta}(Z, \Theta)\}$ is the (Fisher) information matrix introduced by Van Trees [8]. For example, if the posterior distribution of θ conditioned on $z$ is Gaussian with mean $\bar\theta_z$ and covariance matrix $\Sigma_z$, then the information matrix in (8) reads $J = \mathbb{E}\{\Sigma_z^{-1}\}$.
Assume now that the parameter θ is decomposed into two parts as $\theta = [\theta_\alpha^T, \theta_\beta^T]^T$, and that the information matrix $J$ is correspondingly divided into blocks
$$J = \begin{pmatrix} J_{\alpha\alpha} & J_{\alpha\beta} \\ J_{\beta\alpha} & J_{\beta\beta} \end{pmatrix}.$$
Then, it can be easily shown that the covariance of estimation of θ β is lower bounded by the right-lower block of J 1 , i.e.,
$$P_\beta = \mathbb{E}\big\{[g_\beta(z) - \theta_\beta][g_\beta(z) - \theta_\beta]^T\big\} \succeq [J_{\beta\beta} - J_{\beta\alpha} J_{\alpha\alpha}^{-1} J_{\alpha\beta}]^{-1},$$
where we assume that $J_{\alpha\alpha}^{-1}$ exists. Denote $J_\beta = J_{\beta\beta} - J_{\beta\alpha} J_{\alpha\alpha}^{-1} J_{\alpha\beta}$, which is called the information submatrix for $\theta_\beta$.
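The information submatrix defined above is the Schur complement of $J_{\alpha\alpha}$ in $J$, and its inverse equals the lower-right block of $J^{-1}$. This identity can be checked numerically (a minimal sketch on a randomly generated positive-definite matrix, purely for illustration):

```python
import numpy as np

def information_submatrix(J, nb):
    """Schur complement J_beta = J_bb - J_ba J_aa^{-1} J_ab for the
    last nb parameters of the information matrix J."""
    na = J.shape[0] - nb
    Jaa, Jab = J[:na, :na], J[:na, na:]
    Jba, Jbb = J[na:, :na], J[na:, na:]
    return Jbb - Jba @ np.linalg.solve(Jaa, Jab)

# random positive-definite information matrix for the check
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
J = A @ A.T + 5.0 * np.eye(5)
J_beta = information_submatrix(J, 2)
```

Inverting `J_beta` reproduces the lower-right 2×2 block of the full inverse, which is exactly the bound statement above.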
Now, for nonlinear dynamic systems with uncertain observations (1) and (2), the following proposition gives a method to compute the information submatrix $J_k$ recursively.
Proposition 1.
The Fisher information submatrix $J_k$ for estimating the state vectors $\{x_k\}$ obeys the recursion:
$$J_{k+1} = D_k^{22} - D_k^{21}(J_k + D_k^{11})^{-1} D_k^{12}, \qquad (9)$$
$$J_0 = \mathbb{E}\{-\Delta_{x_0}^{x_0} \log p(x_0)\}, \qquad (10)$$
with
$$D_k^{11} = \mathbb{E}\{[\nabla_{x_k} f_k^T(x_k)]\, Q_k^{-1}\, [\nabla_{x_k} f_k^T(x_k)]^T\}, \qquad (11)$$
$$D_k^{12} = -\mathbb{E}\{\nabla_{x_k} f_k^T(x_k)\}\, Q_k^{-1}, \qquad (12)$$
$$D_k^{21} = (D_k^{12})^T, \qquad (13)$$
$$D_k^{22} = Q_k^{-1} - \sum_{i=1}^L \mathbb{E}\{\Delta_{x_{k+1}}^{x_{k+1}} \log p(y_{k+1}^i \mid x_{k+1})\}, \qquad (14)$$
where
$$p(y_{k+1}^i \mid x_{k+1}) = p_{k+1}^i\, \mathcal{N}(h_{k+1}^i(x_{k+1}), R_{k+1}^i) + (1 - p_{k+1}^i)\, \mathcal{N}(0, R_{k+1}^i). \qquad (15)$$
Proof. 
Equations (1) and (2) together with $p(x_0)$ determine the joint probability distribution of $X_k = [x_0, x_1, \ldots, x_k]$ and $Y_k = [y_0, y_1, \ldots, y_k]$, where $y_k = (y_k^1, y_k^2, \ldots, y_k^L)^T$,
$$p(X_k, Y_k) = p(X_{k-1}, Y_{k-1})\, p(x_k \mid X_{k-1}, Y_{k-1})\, p(y_k \mid X_k, Y_{k-1}) = p(X_{k-1}, Y_{k-1})\, p(x_k \mid x_{k-1})\, p(y_k \mid x_k) = p(X_{k-1}, Y_{k-1})\, p(x_k \mid x_{k-1}) \prod_{i=1}^L p(y_k^i \mid x_k). \qquad (16)$$
The conditional probability densities $p(x_k \mid x_{k-1})$ and $p(y_k^i \mid x_k)$ can be calculated from Equations (1) and (2), respectively. Denote $p_k = p(X_k, Y_k)$; by Equation (16), we obtain the following formula for $p_{k+1}$:
$$p_{k+1} = p_k \cdot p(x_{k+1} \mid x_k) \cdot \prod_{i=1}^L p(y_{k+1}^i \mid x_{k+1}). \qquad (17)$$
Therefore,
$$\log p_{k+1} = \log p_k + \log p(x_{k+1} \mid x_k) + \sum_{i=1}^L \log p(y_{k+1}^i \mid x_{k+1}). \qquad (18)$$
If we divide $X_k$ into $X_k = [X_{k-1}^T, x_k^T]^T$, then
$$J(X_k) = \mathbb{E}\{-\Delta_{X_k}^{X_k} \log p_k\} = \begin{pmatrix} \mathbb{E}\{-\Delta_{X_{k-1}}^{X_{k-1}} \log p_k\} & \mathbb{E}\{-\Delta_{X_{k-1}}^{x_k} \log p_k\} \\ \mathbb{E}\{-\Delta_{x_k}^{X_{k-1}} \log p_k\} & \mathbb{E}\{-\Delta_{x_k}^{x_k} \log p_k\} \end{pmatrix} \triangleq \begin{pmatrix} A_k & B_k \\ B_k^T & C_k \end{pmatrix}. \qquad (19)$$
The information submatrix J k for x k can be obtained as follows:
$$J_k = C_k - B_k^T A_k^{-1} B_k. \qquad (20)$$
Moreover, let $X_{k+1} = [X_{k-1}^T, x_k^T, x_{k+1}^T]^T$; then, the posterior information matrix for $X_{k+1}$ can be written in the following block form by Equation (18),
$$J(X_{k+1}) = \begin{pmatrix} A_k & B_k & 0 \\ B_k^T & C_k + D_k^{11} & D_k^{12} \\ 0 & D_k^{21} & D_k^{22} \end{pmatrix}, \qquad (21)$$
where 0 stands for zero blocks of appropriate dimensions, and $D_k^{11}$, $D_k^{12}$, $D_k^{22}$ are calculated as follows:
$$D_k^{11} = \mathbb{E}\{-\Delta_{x_k}^{x_k} \log p(x_{k+1} \mid x_k)\}, \quad D_k^{12} = \mathbb{E}\{-\Delta_{x_k}^{x_{k+1}} \log p(x_{k+1} \mid x_k)\} = (D_k^{21})^T,$$
$$D_k^{22} = \mathbb{E}\{-\Delta_{x_{k+1}}^{x_{k+1}} \log p(x_{k+1} \mid x_k)\} - \sum_{i=1}^L \mathbb{E}\{\Delta_{x_{k+1}}^{x_{k+1}} \log p(y_{k+1}^i \mid x_{k+1})\}. \qquad (22)$$
Then, the information submatrix $J_{k+1}$ for $x_{k+1}$ can be computed as
$$J_{k+1} = D_k^{22} - \begin{pmatrix} 0 & D_k^{21} \end{pmatrix} \begin{pmatrix} A_k & B_k \\ B_k^T & C_k + D_k^{11} \end{pmatrix}^{-1} \begin{pmatrix} 0 \\ D_k^{12} \end{pmatrix} = D_k^{22} - D_k^{21}\big[C_k + D_k^{11} - B_k^T A_k^{-1} B_k\big]^{-1} D_k^{12}.$$
Based on the definition of $J_k$ in (20), we obtain the desired recursion (9). The state noise and the measurement noise are Gaussian with zero mean and invertible covariance matrices $Q_k$ and $R_k^i$, $i = 1, \ldots, L$, respectively, and the dynamic systems have uncertain observations. From these assumptions and Equation (3), it follows that
$$\log p(x_{k+1} \mid x_k) = c_1 - \tfrac{1}{2}[x_{k+1} - f_k(x_k)]^T Q_k^{-1} [x_{k+1} - f_k(x_k)],$$
$$\log p(y_{k+1}^i \mid x_{k+1}) = \log\big\{p_{k+1}^i\, \mathcal{N}(h_{k+1}^i(x_{k+1}), R_{k+1}^i) + (1 - p_{k+1}^i)\, \mathcal{N}(0, R_{k+1}^i)\big\},$$
where $c_1$ is a constant. Therefore, $D_k^{11}$, $D_k^{12}$, $D_k^{22}$ can be simplified to (11)–(14). ☐
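The recursion (9) itself is simple once the four $D_k$ blocks are available; for the mixture likelihood (15), the expectations typically have to be estimated by Monte Carlo integration, which is the costly step. Below is a minimal sketch of the recursion alone, checked on a scalar linear-Gaussian system with $p = 1$, where the blocks are known in closed form (the numbers `a`, `h`, `q`, `r` are illustrative, not from the paper):

```python
import numpy as np

def pcrb_recursion(J0, D_blocks, steps):
    """Propagate the Fisher information submatrix of Proposition 1:
    J_{k+1} = D22 - D21 (J_k + D11)^{-1} D12.
    D_blocks(k) returns the (D11, D12, D21, D22) matrices at step k."""
    J, history = J0, [J0]
    for k in range(steps):
        D11, D12, D21, D22 = D_blocks(k)
        J = D22 - D21 @ np.linalg.solve(J + D11, D12)
        history.append(J)
    return history

# scalar linear-Gaussian case: x_{k+1} = a x_k + w_k, y_k = h x_k + v_k, p = 1
a, h, q, r = 0.9, 1.0, 0.1, 0.2
D11 = np.array([[a * a / q]])        # E{grad f Q^{-1} grad f^T}
D12 = np.array([[-a / q]])           # -E{grad f^T} Q^{-1}
D22 = np.array([[1.0 / q + h * h / r]])
hist = pcrb_recursion(np.eye(1), lambda k: (D11, D12, D12.T, D22), 100)
J_inf = hist[-1][0, 0]
```

After enough iterations the information converges to the fixed point of the recursion, i.e., the steady-state PCRB of this time-invariant system.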
From Equations (14) and (15), we see that a summation appears inside the logarithm, and that the computation of $D_k^{22}$ involves the joint probability density function of the sensor measurements $y_{k+1}$ and the target state $x_{k+1}$; hence, $D_k^{22}$ is not easy to calculate. These reasons motivate us to study another approach to derive the PCRB.
Based on the equivalence between the systems (1)–(2) and (6)–(7), we can derive a PCRB for the dynamic systems (6) and (7) by introducing the hidden variable $I_k$, and the new PCRB may be easier to compute. Since the second derivative with respect to the discrete augmented variable $I_k$ does not exist, we bring in a continuous random variable $\tilde I_k$ to approximate the 0–1 variable $I_k$. The augmented state vector $\breve x_k = [x_k, I_k]^T$ is changed into $\tilde x_k = [x_k, \tilde I_k]^T$. Therefore, the new system can be expressed as follows:
$$x_k = f_k(x_{k-1}) + w_k, \qquad (23)$$
$$\tilde I_k^i = 0 \cdot \tilde I_{k-1}^i + \tilde w_k^i, \qquad (24)$$
$$y_k^i = \tilde I_k^i \cdot h_k^i(x_k) + v_k^i, \qquad i = 1, 2, \ldots, L. \qquad (25)$$
Lemma 1
([21]). If $I_k^i \sim B(1, p_k^i)$ and $\tilde I_k^i \sim p_k^i\, \mathcal{N}(1, \sigma^2) + (1 - p_k^i)\, \mathcal{N}(0, \sigma^2)$, $i = 1, \ldots, L$, then the limit of $\tilde I_k$ is the state variable $I_k$ as $\sigma \to 0$, i.e., $\lim_{\sigma \to 0} \tilde I_k = I_k$.
Let $\tilde J_k$ denote the PCRB of the approximated augmented vector $\tilde x_k$ for systems (23)–(25). Then, we can easily obtain the following conclusion:
Lemma 2
([21]). Assume that $\tilde I_k^i \sim p_k^i\, \mathcal{N}(1, \sigma^2) + (1 - p_k^i)\, \mathcal{N}(0, \sigma^2)$, $i = 1, \ldots, L$; then, $P_k(\breve x_k) \succeq \lim_{\sigma \to 0} \tilde J_k^{-1} = \begin{pmatrix} \bar J_k^{-1} & 0 \\ 0 & 0 \end{pmatrix}$, where $P_k(\breve x_k)$ denotes the estimation error covariance matrix of the vector $\breve x_k$ and $\bar J_k$ denotes the Fisher information submatrix of the vector $x_k$.
Based on Lemmas 1 and 2, for the nonlinear dynamic system with uncertain observations (6) and (7), it is easy to see that $\bar J_k^{-1}$ can also serve as a CRB for the estimation error covariance matrix of the vector $x_k$.
Proposition 2.
At time $k + 1$, the Fisher information submatrix $\bar J_{k+1}$ of $x_{k+1}$ for the multi-sensor uncertain systems (6) and (7) is computed according to the following recursion:
$$\bar J_{k+1} = \bar D_k^{22} - \bar D_k^{21}(\bar J_k + \bar D_k^{11})^{-1} \bar D_k^{12}, \qquad (26)$$
$$\bar J_0 = \Sigma_0^{-1}, \qquad (27)$$
with
$$\bar D_k^{11} = \mathbb{E}\{[\nabla_{x_k} f_k^T(x_k)]\, Q_k^{-1}\, [\nabla_{x_k} f_k^T(x_k)]^T\}, \qquad (28)$$
$$\bar D_k^{12} = -\mathbb{E}\{\nabla_{x_k} f_k^T(x_k)\}\, Q_k^{-1}, \qquad (29)$$
$$\bar D_k^{21} = (\bar D_k^{12})^T, \qquad (30)$$
$$\bar D_k^{22} = Q_k^{-1} + \sum_{i=1}^L p_{k+1}^i\, \mathbb{E}\big\{[\nabla_{x_{k+1}} h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} [\nabla_{x_{k+1}} h_{k+1}^i(x_{k+1})]\big\}. \qquad (31)$$
Proof. 
According to Lemma 1 and the derivation of Proposition 1, the new augmented state vector $\tilde x_k$ has the following PCRB for systems (23)–(25):
$$\tilde J_{k+1} = \tilde D_k^{22} - \tilde D_k^{21}(\tilde J_k + \tilde D_k^{11})^{-1} \tilde D_k^{12}, \qquad (32)$$
where $\tilde D_k^{11}$, $\tilde D_k^{12}$, $\tilde D_k^{22}$ are given by
$$\tilde D_k^{11} = \mathbb{E}_{\tilde x_k}\{-\Delta_{\tilde x_k}^{\tilde x_k} \log p(\tilde x_{k+1} \mid \tilde x_k)\}, \quad \tilde D_k^{12} = \mathbb{E}_{\tilde x_k}\{-\Delta_{\tilde x_k}^{\tilde x_{k+1}} \log p(\tilde x_{k+1} \mid \tilde x_k)\}, \quad \tilde D_k^{21} = (\tilde D_k^{12})^T,$$
$$\tilde D_k^{22} = \mathbb{E}_{\tilde x_{k+1}}\{-\Delta_{\tilde x_{k+1}}^{\tilde x_{k+1}} \log p(\tilde x_{k+1} \mid \tilde x_k)\} - \sum_{i=1}^L \mathbb{E}_{\tilde x_{k+1}}\{\Delta_{\tilde x_{k+1}}^{\tilde x_{k+1}} \log p(y_{k+1}^i \mid \tilde x_{k+1})\}.$$
In order to obtain the lower bound for $x_k$, it is necessary to calculate the following probability densities according to Equations (23)–(25),
$$\log p(\tilde x_{k+1} \mid \tilde x_k) = \log\big[p(x_{k+1} \mid x_k)\, p(\tilde I_{k+1})\big] = c_3 - \tfrac{1}{2}[x_{k+1} - f_k(x_k)]^T Q_k^{-1} [x_{k+1} - f_k(x_k)] + \log p(\tilde I_{k+1}), \qquad (33)$$
where $c_3$ is a constant; the first equality follows from the independence, and the second follows from (23). The other probability density is as follows:
$$\log p(y_{k+1}^i \mid \tilde x_{k+1}) = c_4 - \tfrac{1}{2}[y_{k+1}^i - \tilde I_{k+1}^i h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} [y_{k+1}^i - \tilde I_{k+1}^i h_{k+1}^i(x_{k+1})], \qquad (34)$$
where $c_4$ is a constant. Since $\tilde x_k = [x_k, \tilde I_k]^T$, using Equations (33) and (34) and Lemma 1, the suitably partitioned expressions for $\tilde D_k^{11}$, $\tilde D_k^{12}$, $\tilde D_k^{22}$ are obtained:
$$\tilde D_k^{11} = \begin{pmatrix} \bar D_k^{11} & 0 \\ 0 & 0 \end{pmatrix}, \qquad (35)$$
$$\tilde D_k^{12} = \begin{pmatrix} \bar D_k^{12} & 0 \\ 0 & 0 \end{pmatrix}, \qquad (36)$$
$$\tilde D_k^{22} = \begin{pmatrix} \bar D_k^{22} & C^{12} \\ C^{21} & C^{22} \end{pmatrix}, \qquad (37)$$
where $\bar D_k^{11}$, $\bar D_k^{12}$ are given in (28) and (29), while $C^{12}$, $C^{21}$ and $C^{22}$ are calculated as follows:
$$C^{12} = \mathbb{E}_{\tilde x_{k+1}}\{-\Delta_{x_{k+1}}^{\tilde I_{k+1}} \log p(\tilde x_{k+1} \mid \tilde x_k)\} - \sum_{i=1}^L \mathbb{E}_{\tilde x_{k+1}}\{\Delta_{x_{k+1}}^{\tilde I_{k+1}^i} \log p(y_{k+1}^i \mid \tilde x_{k+1})\} = -\sum_{i=1}^L \mathbb{E}_{\tilde x_{k+1}}\big\{[\nabla_{x_{k+1}} h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} y_{k+1}^i\big\} + 2\sum_{i=1}^L \mathbb{E}_{\tilde x_{k+1}}\big\{\tilde I_{k+1}^i [\nabla_{x_{k+1}} h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} h_{k+1}^i(x_{k+1})\big\} = (C^{21})^T,$$
$$C^{22} = -\sum_{i=1}^L \mathbb{E}_{\tilde x_{k+1}}\{\Delta_{\tilde I_{k+1}^i}^{\tilde I_{k+1}^i} \log p(y_{k+1}^i \mid \tilde x_{k+1})\} - \sum_{i=1}^L \mathbb{E}_{\tilde x_{k+1}}\{\Delta_{\tilde I_{k+1}^i}^{\tilde I_{k+1}^i} \log p(\tilde I_{k+1}^i)\} = \sum_{i=1}^L \mathbb{E}_{\tilde x_{k+1}}\big\{[h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} h_{k+1}^i(x_{k+1})\big\} - \sum_{i=1}^L \mathbb{E}_{\tilde x_{k+1}}\{\Delta_{\tilde I_{k+1}^i}^{\tilde I_{k+1}^i} \log p(\tilde I_{k+1}^i)\}.$$
If we divide $\tilde J_k$ into the block matrix
$$\tilde J_k = \begin{pmatrix} \tilde J_k^{11} & \tilde J_k^{12} \\ \tilde J_k^{21} & \tilde J_k^{22} \end{pmatrix},$$
then, according to (32) and (35)–(37), the value of $\tilde J_{k+1}$ is
$$\tilde J_{k+1} = \begin{pmatrix} \bar D_k^{22} - \bar D_k^{21}\big[\bar D_k^{11} + \tilde J_k^{11} - \tilde J_k^{12}(\tilde J_k^{22})^{-1}\tilde J_k^{21}\big]^{-1}\bar D_k^{12} & C^{12} \\ C^{21} & C^{22} \end{pmatrix}.$$
Since the matrix $C^{22}$ is a function of σ, it is shown in [21] that
$$\lim_{\sigma \to 0} \tilde J_{k+1}^{-1} = \begin{pmatrix} \big[\bar D_k^{22} - \bar D_k^{21}(\bar D_k^{11} + \bar J_k)^{-1}\bar D_k^{12}\big]^{-1} & 0 \\ 0 & 0 \end{pmatrix}.$$
Using Lemma 1, we can obtain (26), and the matrix $\bar D_k^{22}$ can be computed as
$$\bar D_k^{22} = Q_k^{-1} + \sum_{i=1}^L \mathbb{E}_{\tilde x_{k+1}}\big\{I_{k+1}^i [\nabla_{x_{k+1}} h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} [\nabla_{x_{k+1}} h_{k+1}^i(x_{k+1})] I_{k+1}^i\big\} = Q_k^{-1} + \sum_{i=1}^L p_{k+1}^i\, \mathbb{E}_{x_{k+1}}\big\{[\nabla_{x_{k+1}} h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} [\nabla_{x_{k+1}} h_{k+1}^i(x_{k+1})]\big\},$$
since $(I_{k+1}^i)^2 = I_{k+1}^i$ and $\mathbb{E}\{I_{k+1}^i\} = p_{k+1}^i$. ☐
Remark 1.
Note that the PCRBs derived in Propositions 1 and 2 have different forms. The first one is exact. The second one is only approximately exact, with a lower computational burden; since the approximation may err from above or below, one cannot judge which bound is lower. The simulations in Section 5 show that they are almost equal and that the computational complexity of the approximate bound is less than that of the exact bound.
Remark 2.
In the case of $p = 1$ and $L = 1$, the multi-sensor dynamic system (1) and (2) has no observation uncertainty. Obviously, the PCRB derived by the method in [13] is then consistent with that derived in Proposition 2.
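As a minimal sketch of Proposition 2, one step of the recursion (26) can be written with the state expectations in (28)–(31) replaced by Jacobians evaluated along a nominal trajectory, a common approximation when closed-form expectations are unavailable. Each sensor's information term is simply scaled by its sensing probability, so no mixture-likelihood integral appears (all matrices below are illustrative, not from the paper):

```python
import numpy as np

def pcrb_step_uncertain(J, F, Qinv, sensors):
    """One step of the Proposition 2 recursion (26):
      D11 = F^T Qinv F,  D12 = -F^T Qinv,  D21 = D12^T,
      D22 = Qinv + sum_i p_i H_i^T Rinv_i H_i,
    where `sensors` is a list of (p_i, H_i, Rinv_i) tuples and each sensor
    contributes information scaled by its sensing probability p_i."""
    D11 = F.T @ Qinv @ F
    D12 = -F.T @ Qinv
    D22 = Qinv + sum(p * H.T @ Rinv @ H for p, H, Rinv in sensors)
    return D22 - D12.T @ np.linalg.solve(J + D11, D12)

F = np.array([[1.0, 0.1], [0.0, 1.0]])    # state Jacobian (illustrative)
Qinv = np.linalg.inv(0.01 * np.eye(2))
H = np.array([[1.0, 0.0]])                 # each sensor observes position only
Rinv = np.array([[1.0 / 0.04]])
J0 = np.eye(2)
J_certain = pcrb_step_uncertain(J0, F, Qinv, [(1.0, H, Rinv)] * 3)
J_uncertain = pcrb_step_uncertain(J0, F, Qinv, [(0.5, H, Rinv)] * 3)
```

With sensing probability below one, each sensor's information shrinks proportionally, so the resulting bound $\bar J_{k+1}^{-1}$ is larger, matching the intuition that uncertain observations are less informative.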

4. Sensor Selection with Uncertain Observations

In large sensor networks, managing the communication resources efficiently is an important problem. The calculation of the PCRB by Proposition 1 requires the joint probability density function of the sensor measurements and the target state, which makes the computational burden heavy, so it is ill-suited as a selection criterion. In this section, we consider the problem of sensor selection based on Proposition 2.
For the nonlinear dynamic system at time k, assume that s sensors are to be selected from the L sensors by maximizing the Fisher information matrix; the selected sensors then send their measurements or local estimates to the fusion center, which computes the state estimate. In order to select the optimal s sensors, we introduce a selection vector $s_k = [s_k^1, \ldots, s_k^L]^T \in \{0, 1\}^L$: if the ith sensor is selected, then $s_k^i = 1$; otherwise, $s_k^i = 0$, $i = 1, \ldots, L$. According to the derivation of the Fisher information matrix in Section 3, the selection vector modifies the log conditional probability density $\log p(y_k \mid x_k)$ as [34]
$$\log \prod_{i=1}^L \big(p(y_k^i \mid x_k)\big)^{s_k^i} = \sum_{i=1}^L s_k^i \log p(y_k^i \mid x_k).$$
In fact, the selection variable only affects $\bar D_k^{22}$ in Proposition 2. Then, $\bar D_k^{22}$ can be written as
$$\bar D_k^{22} = Q_k^{-1} + \sum_{i=1}^L s_{k+1}^i p_{k+1}^i\, \mathbb{E}\big\{[\nabla_{x_{k+1}} h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} [\nabla_{x_{k+1}} h_{k+1}^i(x_{k+1})]\big\}.$$
Therefore, the information matrix of $x_{k+1}$ is a function of the selection variable $s_{k+1}$. Now, the sensor selection problem can be expressed as the following optimization problem:
$$\max_{s_{k+1}} \operatorname{tr}\big(\bar J_{k+1}(s_{k+1})\big), \qquad (43)$$
$$\text{s.t.} \quad \sum_{i=1}^L s_{k+1}^i = s, \qquad (44)$$
$$s_{k+1}^i \in \{0, 1\}, \quad i = 1, \ldots, L, \qquad (45)$$
where "tr" denotes the trace of a matrix, i.e., the sum of the eigenvalues of the Fisher information matrix, and "s.t." means "subject to".
Remark 3.
In fact, the objective function in (43) should be the matrix $\bar J_{k+1}(s_{k+1})$. Then, the problem (43)–(45) is a matrix optimization problem, understood in the sense that if $s_{k+1}^*$ is an optimal solution, then, for an arbitrary feasible solution $s_{k+1}$, we have $\bar J_{k+1}(s_{k+1}^*) \succeq \bar J_{k+1}(s_{k+1})$, i.e., $\bar J_{k+1}(s_{k+1}^*) - \bar J_{k+1}(s_{k+1})$ is a positive semidefinite matrix. There are two reasons to choose the trace as the objective function. First, it is a linear function, which helps us to derive the optimal solution easily. Second, some researchers [26,27,28] have shown that it has many advantages for sensor selection; for example, if the primal matrix optimization problem has an optimal solution and $\bar D_k^{12}$ in (29) is invertible, then the matrix optimization problem for sensor selection can be equivalently transformed into the convex optimization problem (43)–(45).
Let the information measure corresponding to the ith sensor at time k + 1 be denoted by
$$b_{k+1}^i = p_{k+1}^i \operatorname{tr}\Big(\mathbb{E}\big\{[\nabla_{x_{k+1}} h_{k+1}^i(x_{k+1})]^T (R_{k+1}^i)^{-1} [\nabla_{x_{k+1}} h_{k+1}^i(x_{k+1})]\big\}\Big), \quad i = 1, \ldots, L. \qquad (46)$$
Let $\{b_{k+1}^{r_1}, \ldots, b_{k+1}^{r_L}\}$ denote the rearrangement of $\{b_{k+1}^1, \ldots, b_{k+1}^L\}$ in descending order, i.e., $b_{k+1}^{r_1} \ge \cdots \ge b_{k+1}^{r_L}$. The optimal solution of the optimization problem (43)–(45) is given by the following proposition.
Proposition 3.
For the multisensor nonlinear dynamic system with uncertain observations (1) and (2), the optimal sensor selection scheme for the problem (43)–(45) is $s_{k+1}^{r_1} = \cdots = s_{k+1}^{r_s} = 1$ and $s_{k+1}^{r_{s+1}} = \cdots = s_{k+1}^{r_L} = 0$.
Proof. 
Since $\bar D_k^{11}$ and $\bar D_k^{12}$ do not depend on $s_{k+1}$, by Proposition 2 the optimization problem (43)–(45) is equivalent to
$$\max_{s_{k+1}} \sum_{i=1}^L b_{k+1}^i s_{k+1}^i, \qquad (47)$$
$$\text{s.t.} \quad \sum_{i=1}^L s_{k+1}^i = s, \qquad (48)$$
$$s_{k+1}^i \in \{0, 1\}, \quad i = 1, \ldots, L, \qquad (49)$$
where $b_{k+1}^i$ is given by (46). Since $b_{k+1}^{r_1} \ge \cdots \ge b_{k+1}^{r_L}$ and $s_{k+1}^i$, $i = 1, \ldots, L$, must satisfy (48) and (49), we have
$$\sum_{i=1}^L b_{k+1}^i s_{k+1}^i \le \sum_{i=1}^s b_{k+1}^{r_i}.$$
Equality holds with $s_{k+1}^{r_1} = \cdots = s_{k+1}^{r_s} = 1$ and $s_{k+1}^{r_{s+1}} = \cdots = s_{k+1}^{r_L} = 0$. Thus, the optimal solution is obtained. ☐
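Proposition 3 reduces sensor selection to sorting the per-sensor information measures (46) and keeping the top s. A minimal sketch with hypothetical scores:

```python
import numpy as np

def select_sensors(b, s):
    """Proposition 3: choose the s sensors with the largest information
    measures b_i = p_i * tr(E{ (grad h_i)^T Rinv_i (grad h_i) });
    returns the 0-1 selection vector s_{k+1}."""
    order = np.argsort(b)[::-1]           # indices sorted by descending score
    sel = np.zeros(len(b), dtype=int)
    sel[order[:s]] = 1
    return sel

b = np.array([0.3, 1.2, 0.7, 0.1, 0.9])   # hypothetical information scores
sel = select_sensors(b, 2)
```

The achieved objective value is the sum of the s largest scores, and no relaxation of the 0–1 variables to the interval [0, 1] is needed, which is why the solution is analytical.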

5. Simulation

In this section, we provide two examples to compare the different PCRBs given by Proposition 1 and Proposition 2, and to select the optimal sensors by Proposition 3.
Example 1: Consider an uncertain nonlinear dynamic system for a mobile robot. At time k, the mobile robot pose is described by the state vector $x_k = [x_k\ y_k\ \theta_k]$, where $x_k$ and $y_k$ are the coordinates on a 2D plane relative to an external coordinate frame, and $\theta_k$ is the heading angle. We use the control commands $u_k = [\Delta d_k, \Delta\theta_k]$ to determine the motion of the mobile robot, where $\Delta d_k$ is the incremental distance traveled (in meters) and $\Delta\theta_k$ is the incremental change in heading angle (in degrees). The robot motion can be described as follows [35]:
$$f_{1,k} = x_{k-1} + \Delta d_k \cos(\theta_{k-1} + \tfrac{1}{2}\Delta\theta_k), \quad f_{2,k} = y_{k-1} + \Delta d_k \sin(\theta_{k-1} + \tfrac{1}{2}\Delta\theta_k), \quad f_{3,k} = \theta_{k-1} + \Delta\theta_k,$$
where $\Delta d_k = 5$, $\Delta\theta_k = 5$. The state equation is defined as $f_k = [f_{1,k}, f_{2,k}, f_{3,k}]^T$, and then the state model is
$$x_k = f_k(x_{k-1}, u_k) + w_k. \qquad (50)$$
The measurement equation is
$$y_k^i = \begin{cases} h^i(x_k) + v_k^i & \text{with probability } p_k^i, \\ v_k^i & \text{with probability } 1 - p_k^i, \end{cases} \qquad i = 1, \ldots, L, \quad p_k^i = 0.8, \qquad (51)$$
where
$$h^i(x_k) = \begin{pmatrix} \sqrt{(x_k(1) - z_k^i(1))^2 + (x_k(2) - z_k^i(2))^2} \\ \arctan\dfrac{x_k(2) - z_k^i(2)}{x_k(1) - z_k^i(1)} \end{pmatrix},$$
and $z_k^i = [z_k^i(1)\ z_k^i(2)]^T$ is the position of the ith sensor. In the simulation, we consider the WSN shown in Figure 1, which has $L = 6 \times 6 = 36$ sensors deployed in a $100 \times 100\ \mathrm{m}^2$ area [5]. The noise covariances are set as $Q_k = \mathrm{diag}([0.1^2, 0.1^2, 3^2])$, $R_k^i = \mathrm{diag}([1^2, 1^2])$.
In this example, the initial state of the robot is $[8, 8, 1]$ and the initial covariance matrix is $P_0 = \mathrm{diag}([10, 10, 2])$ [35]. The sampling length is set to 50 time steps, and the number of Monte Carlo (MC) runs is 200.
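The setup of Example 1 can be reproduced with a short simulation of the motion model (50) (a sketch; the noise standard deviations follow the diagonal of $Q_k$ above, and the heading is stored in degrees as in the example):

```python
import numpy as np

def robot_step(x, dd, dth_deg, rng, w_std):
    """One step of the odometry motion model (50): move dd meters along a
    heading advanced by half the turn, then apply the full turn; additive
    Gaussian state noise with per-component standard deviations w_std."""
    th = np.deg2rad(x[2])                  # heading is stored in degrees
    dth = np.deg2rad(dth_deg)
    pred = np.array([x[0] + dd * np.cos(th + 0.5 * dth),
                     x[1] + dd * np.sin(th + 0.5 * dth),
                     x[2] + dth_deg])
    return pred + rng.normal(0.0, w_std, size=3)

rng = np.random.default_rng(0)
x = np.array([8.0, 8.0, 1.0])              # initial pose from the example
w_std = np.array([0.1, 0.1, 3.0])          # sqrt of diag(Q_k)
traj = [x]
for _ in range(50):                         # 50 time steps
    x = robot_step(x, dd=5.0, dth_deg=5.0, rng=rng, w_std=w_std)
    traj.append(x)
traj = np.array(traj)
```

The resulting `traj` array plays the role of the true trajectory against which the PCRBs and the mean squared errors of the selected-sensor estimators are evaluated.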
The following simulation results include three parts: the first part is about the trajectory of the mobile robot and PCRB of the state estimation, the second part is about the average computation time, and the third part is about the PCRB with different sensing probability p.
  • Figure 1 shows the trajectory of the mobile robot and the locations of the L sensors. Figure 2 and Figure 3 show the PCRB of the position along the x- and y-directions based on Proposition 1 and Proposition 2, respectively. From Figure 2 and Figure 3, we can see that the different PCRBs converge to the same values. However, the PCRB changes considerably in the first few seconds, for which there are two possible reasons. First, the dynamic system is nonlinear, which may cause the recursion to require some time to converge. Second, the initial variance may be poorly chosen, so that it is far from the convergence point.
  • The average computation time of calculating the PCRB based on Proposition 1 and Proposition 2 is presented in Figure 4. From Figure 4, as the number of sensors increases, the computational complexity of Proposition 1 is clearly much higher than that of Proposition 2, and the average computation time of the PCRB by Proposition 2 increases slowly. The reason may be that the expression of the PCRB based on Proposition 2 has a more concise form, in which $\bar D_k^{22}$ is easier to compute. Thus, Proposition 2 is more suitable for sensor selection in large-scale sensor networks.
  • In Figure 5, the average PCRB over 20 time steps is plotted as a function of the number of sensors. It shows that the PCRB obtained by Proposition 1 is the same as that based on Proposition 2. The larger p is, the smaller the number of required sensors. The reason may be that the sensors gather more observation information when the sensing probability p is larger.
Example 2: In order to manage the communication resources efficiently in large wireless sensor networks, we need to select some appropriate sensors. Thus, let us consider the above dynamic system (50) and (51) and the WSNs [35]. In general, sensors close to the target may have higher sensing probabilities than other sensors in the WSN, so they are highly likely to be selected, owing to being both closer to the target and more likely to sense it. Here, we consider a relatively difficult case in which the sensors around the target have relatively low sensing probabilities. Then, we compare our algorithm in Proposition 3 with two recent methods given in [5,28].
In this example, we present two cases with different numbers of sensors in the WSN. First, we consider $L = 36$ and $s_k = 15$, let $p_k^i = 0.1$ for $i = 7, 8, 9, 10, 13, 14, 15, 16, 20, 21, 22, 23, 24$, and set the other sensing probabilities between 0.8 and 1. Second, we consider $L = 49$ and $s_k = 15$, let $p_k^i = 0.1$ for $i = 14, \ldots, 18, 22, \ldots, 26, 30, \ldots, 34$, and set the other sensing probabilities between 0.8 and 1. The following simulation results contain three parts: the first part concerns sensor selection in the wireless sensor network application, the second part concerns the mean squared error based on the selected sensors, and the third part concerns the computation time.
  • Figure 6 and Figure 7 present the locations of the $L = 36$ and $L = 49$ sensors, respectively. The target is shown at time 10 s, and our algorithm in Proposition 3 is used to select the optimal $s_k = 15$ sensors. When the uncertainty in the dynamic system is ignored, the recent method in [28] can be used to select the required sensors; the results are shown in Figure 8 and Figure 9. Comparing Figure 6 with Figure 8, some sensors close to the target, such as sensor 8 and sensor 15, are not selected in Figure 6, whereas in Figure 8 they are selected merely because they are the closer sensors. The reason is that the sensing probabilities of sensors 8 and 15 are very low, so they may not provide much useful information even though they are close to the target. Comparing Figure 7 with Figure 9, a similar phenomenon occurs: sensors 16 and 31 are not selected in Figure 7, but they are selected in Figure 9.
  • In Figure 10 and Figure 11, the mean squared errors of position in the x- and y-directions are plotted for the algorithm given in Proposition 3 and the algorithms in [5,28]. Our algorithm achieves the best performance, since it accounts for the influence of the uncertain observations and obtains the optimal set of selected sensors. Although the algorithm in [5] also considers the uncertain observations, it can hardly obtain the optimal set of selected sensors, since it relaxes the selection variables from $\{0, 1\}$ to the interval $[0, 1]$. From Figure 12 and Figure 13, we can see that the proposed method also performs best in the case of $L = 49$; thus, its performance remains stable as the number of sensors increases.
  • The computation times for obtaining the PCRB with the three algorithms are plotted in Figure 14 and Figure 15, respectively. Both figures show that the computation time of the method in Proposition 3 is much shorter than that of the other two methods, because the method in Proposition 3 is an analytical solution. Therefore, the proposed algorithm in Proposition 3 is more suitable for large sensor networks.
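The qualitative behavior in Figures 6–9 — a reliable sensor slightly farther away beating a nearby sensor with a low sensing probability — can be sketched by ranking sensors on a probability-weighted information score. The Gaussian distance decay below is only a hypothetical stand-in for the measurement information term; Proposition 3's analytic solution performs the ranking on the actual PCRB contributions.

```python
import numpy as np

def select_sensors(positions, target, p, s_k, sigma=2.0):
    """Rank sensors by sensing probability times a distance-decaying
    information proxy and keep the top s_k. This is an illustrative
    stand-in for the analytic PCRB-based ranking of Proposition 3."""
    d = np.linalg.norm(positions - target, axis=1)
    score = p * np.exp(-(d / sigma) ** 2)   # expected information proxy
    return np.argsort(score)[::-1][:s_k]

# A reliable sensor two units away beats an unreliable one at the target:
pos = np.array([[0.0, 0.0], [2.0, 0.0]])
probs = np.array([0.1, 0.9])
chosen = select_sensors(pos, np.array([0.0, 0.0]), probs, s_k=1)
# chosen[0] == 1: the distant but reliable sensor is preferred
```

With the probabilities swapped, the nearby sensor wins again, mirroring how sensors 8 and 15 drop out of the selection in Figure 6 only because of their low sensing probability.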

6. Conclusions

This paper has proposed two methods for deriving the PCRB that effectively overcome the difficulties caused by uncertain observations. The first method is based on the recursive formula of the Cramér–Rao bound and the Gaussian mixture model. However, it requires computing a complex integral based on the joint probability density function of the sensor measurements and the target state, so its computational burden is relatively high, especially in large sensor networks. Inspired by the idea of the expectation maximization algorithm, the second method introduces 0–1 latent variables to treat the Gaussian mixture model. Since the regularity condition of the posterior Cramér–Rao bound is not satisfied for the discrete uncertain system, we use continuous variables to approximate the discrete latent variables. A new Cramér–Rao bound can then be achieved by a limiting process on the Cramér–Rao bound of the continuous system. This avoids the complex integral, which reduces the computational burden. As a result, the sensor selection problems for the nonlinear uncertain dynamic system with linear equality or inequality constraints can be solved efficiently, and the optimal solution of the sensor selection problem can be derived analytically, which makes the approach applicable to large-scale sensor networks. Two typical numerical examples verify the effectiveness of the proposed methods.

Acknowledgments

This work was supported in part by the NSFC No. 61673282, the open research funds of BACC-STAFDL of China under Grant No. 2015afdl010 and the PCSIRT16R53.

Author Contributions

Zhiguo Wang and Xiaojing Shen proposed the algorithm; Ping Wang performed the experiments; Yunmin Zhu contributed analysis tools.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nahi, N.E. Optimal recursive estimation with uncertain observation. IEEE Trans. Inf. Theory 1969, 15, 457–462. [Google Scholar] [CrossRef]
  2. Hadidi, M.T.; Schwartz, S.C. Linear recursive state estimators under uncertain observations. IEEE Trans. Autom. Control 1979, 24, 944–948. [Google Scholar] [CrossRef]
  3. Lin, H.; Sun, S. State estimation for a class of non-uniform sampling systems with missing measurements. Sensors 2016, 16, 1155. [Google Scholar] [CrossRef] [PubMed]
  4. Luo, Y.; Zhu, Y.; Luo, D.; Zhou, J.; Song, E.; Wang, D. Globally optimal multisensor distributed random parameter matrices Kalman filtering fusion with applications. Sensors 2008, 8, 8086–8103. [Google Scholar] [CrossRef] [PubMed]
  5. Cao, N.; Choi, S.; Masazade, E.; Varshney, P.K. Sensor selection for target tracking in wireless sensor networks with uncertainty. IEEE Trans. Signal Process. 2016, 64, 5191–5204. [Google Scholar] [CrossRef]
  6. Costa, O.L.V.; Guerra, S. Stationary filter for linear minimum mean square error estimator of discrete-time Markovian jump systems. IEEE Trans. Autom. Control 2002, 47, 1351–1356. [Google Scholar] [CrossRef]
  7. Sinopoli, B.; Guerra, S.; Schenato, L.; Franceschetti, M.; Poolla, K.; Jordan, M.I.; Sastry, S.S. Kalman filtering with intermittent observations. IEEE Trans. Autom. Control 2004, 49, 1453–1464. [Google Scholar] [CrossRef]
  8. Van Trees, H.L. Detection, Estimation, and Modulation Theory, Part I; Wiley: New York, NY, USA, 1968. [Google Scholar]
  9. Bobrovsky, B.Z.; Zakai, M. A lower bound on the estimation error for Markov processes. IEEE Trans. Autom. Control 1975, 20, 785–788. [Google Scholar] [CrossRef]
  10. Taylor, J. The Cramér–Rao estimation error lower bound computation for deterministic nonlinear systems. IEEE Trans. Autom. Control 1979, 24, 343–344. [Google Scholar] [CrossRef]
  11. Galdos, J.I. A Cramér–Rao bound for multidimensional discrete-time dynamical systems. IEEE Trans. Autom. Control 1980, 25, 117–119. [Google Scholar] [CrossRef]
  12. Chang, C.B. Two lower bounds on the covariance for nonlinear estimation problems. IEEE Trans. Autom. Control 1981, 26, 1294–1297. [Google Scholar] [CrossRef]
  13. Tichavský, P.; Muravchik, C.H.; Nehorai, A. Posterior Cramér–Rao bounds for discrete-time nonlinear filtering. IEEE Trans. Signal Process. 1998, 46, 1386–1396. [Google Scholar]
  14. Kirubarajan, T.; Bar-Shalom, Y. Low observable target motion analysis using amplitude information. IEEE Trans. Aerosp. Electron. Syst. 1996, 32, 1367–1384. [Google Scholar] [CrossRef]
  15. Zheng, Y.; Ozdemir, O.; Niu, R.; Varshney, P.K. New conditional posterior Cramér–Rao lower bounds for nonlinear sequential Bayesian estimation. IEEE Trans. Signal Process. 2012, 60, 5549–5556. [Google Scholar] [CrossRef]
  16. Šimandl, M.; Královec, J.; Tichavský, P. Filtering, predictive, and smoothing Cramér–Rao bounds for discrete-time nonlinear dynamic systems. Automatica 2001, 37, 1703–1716. [Google Scholar] [CrossRef]
  17. Hernandez, M.L.; Marrs, A.D.; Gordon, N.J.; Maskell, S.R.; Reed, C.M. Cramér–Rao bounds for non-linear filtering with measurement origin uncertainty. In Proceedings of the 5th International Conference on Information Fusion, Annapolis, MD, USA, 8–11 July 2002. [Google Scholar]
  18. Zhang, X.; Willett, P.; Bar-Shalom, Y. The Cramér–Rao bound for dynamic target tracking with measurement origin uncertainty. In Proceedings of the 41st IEEE Conference on Decision and Control, Las Vegas, NV, USA, 10–13 December 2002. [Google Scholar]
  19. Farina, A.; Ristic, B.; Timmoneri, L. Cramér–Rao bound for nonlinear filtering with $P_d < 1$ and its application to target tracking. IEEE Trans. Signal Process. 2002, 50, 1916–1924. [Google Scholar]
  20. Hernandez, M.; Ristic, B.; Farina, A. A comparison of two Cramér–Rao bounds for nonlinear filtering with $P_d < 1$. IEEE Trans. Signal Process. 2004, 52, 2361–2370. [Google Scholar]
  21. Rapoport, I.; Oshman, Y. A Cramér–Rao-type estimation lower bound for systems with measurement faults. IEEE Trans. Autom. Control 2005, 50, 1234–1245. [Google Scholar] [CrossRef]
  22. Hounkpevi, F.O.; Yaz, E.E. Robust minimum variance linear state estimators for multiple sensors with different failure rates. Automatica 2007, 43, 1274–1280. [Google Scholar] [CrossRef]
  23. Zhang, H.; Shi, Y.; Mehr, A.S. Robust weighted H∞ filtering for networked systems with intermittent measurements of multiple sensors. Int. J. Adapt. Control Signal Process. 2011, 25, 313–330. [Google Scholar] [CrossRef]
  24. Yick, J.; Mukherjee, B.; Ghosal, D. Wireless sensor network survey. Comput. Netw. 2008, 52, 2292–2330. [Google Scholar] [CrossRef]
  25. Gungor, V.C.; Hancke, G.P. Industrial wireless sensor networks: challenges, design principles, and technical approaches. IEEE Trans. Ind. Electron. 2009, 56, 4258–4265. [Google Scholar] [CrossRef]
  26. Shen, X.; Varshney, P.K. Sensor selection based on generalized information gain for target tracking in large sensor networks. IEEE Trans. Signal Process. 2014, 62, 363–375. [Google Scholar] [CrossRef]
  27. Joshi, S.; Boyd, S. Sensor selection via convex optimization. IEEE Trans. Signal Process. 2009, 57, 451–462. [Google Scholar] [CrossRef]
  28. Shen, X.; Liu, S.; Varshney, P.K. Sensor selection for nonlinear systems in large sensor networks. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 2664–2678. [Google Scholar] [CrossRef]
  29. Liu, S.; Fardad, M.; Masazade, E.; Varshney, P.K. Optimal periodic sensor scheduling in networks of dynamical systems. IEEE Trans. Signal Process. 2014, 62, 3055–3068. [Google Scholar] [CrossRef]
  30. Wang, P.; Wang, Z.G.; Shen, X.J.; Zhu, Y.M. The estimation fusion and Cramér–Rao bounds for nonlinear systems with uncertain observations. In Proceedings of the 20th International Conference on Information Fusion, Xi’an, China, 10–13 July 2017. [Google Scholar]
  31. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006. [Google Scholar]
  32. Yang, X.J.; Niu, R.X. Sparsity-promoting sensor selection for nonlinear target tracking with quantized data. In Proceedings of the 20th International Conference on Information Fusion, Xi’an, China, 10–13 July 2017. [Google Scholar]
  33. Bilmes, J.A. A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models. Int. Comput. Sci. Inst. 1998, 4, 126. [Google Scholar]
  34. Chepuri, S.P.; Leus, G. Sparsity-promoting sensor selection for non-linear measurement models. IEEE Trans. Signal Process. 2015, 63, 684–698. [Google Scholar] [CrossRef]
  35. Pak, J.M.; Ahn, C.K.; Shmaliy, Y.S.; Lim, M.T. Improving reliability of particle filter-based localization in wireless sensor networks via hybrid particle/FIR filtering. IEEE Trans. Ind. Inform. 2015, 11, 1089–1098. [Google Scholar] [CrossRef]
Figure 1. The trajectory of the mobile robot and the location of the L sensors.
Figure 2. The PCRB of position in the x-direction is plotted as a function of time steps.
Figure 3. The PCRB of position in the y-direction is plotted as a function of time steps.
Figure 4. The average computation time of the PCRB is plotted as a function of the number of sensors.
Figure 5. The average PCRB over 20 time steps is plotted as a function of the number of sensors for different probabilities p.
Figure 6. Placement of $L = 36$ sensors and the $s_k = 15$ sensors selected by the algorithm in Proposition 3.
Figure 7. Placement of $L = 49$ sensors and the $s_k = 15$ sensors selected by the algorithm in Proposition 3.
Figure 8. Placement of $L = 36$ sensors and the $s_k = 15$ sensors selected by the algorithm in [28].
Figure 9. Placement of $L = 49$ sensors and the $s_k = 15$ sensors selected by the algorithm in [28].
Figure 10. The mean squared error of position in the x-direction is plotted as a function of time steps with $L = 36$ sensors.
Figure 11. The mean squared error of position in the y-direction is plotted as a function of time steps with $L = 36$ sensors.
Figure 12. The mean squared error of position in the x-direction is plotted as a function of time steps with $L = 49$ sensors.
Figure 13. The mean squared error of position in the y-direction is plotted as a function of time steps with $L = 49$ sensors.
Figure 14. The computation time of obtaining the PCRB is plotted as a function of time steps with $L = 36$ sensors.
Figure 15. The computation time of obtaining the PCRB is plotted as a function of time steps with $L = 49$ sensors.

Share and Cite

MDPI and ACS Style

Wang, Z.; Shen, X.; Wang, P.; Zhu, Y. The Cramér–Rao Bounds and Sensor Selection for Nonlinear Systems with Uncertain Observations. Sensors 2018, 18, 1103. https://doi.org/10.3390/s18041103
