Article

Signal Source Localization with Long-Term Observations in Distributed Angle-Only Sensors

1 National Laboratory of Radar Signal Processing, Xidian University, Xi’an 710071, China
2 China Academy of Launch Vehicle Technology, Beijing 100076, China
3 Beijing Aerospace Automatic Control Institute, Beijing 100854, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(24), 9655; https://doi.org/10.3390/s22249655
Submission received: 8 November 2022 / Revised: 29 November 2022 / Accepted: 5 December 2022 / Published: 9 December 2022

Abstract

Angle-only sensors cannot provide range information of targets. To determine the accurate position of a signal source, one can connect distributed passive sensors with communication links and implement a fusion algorithm to estimate the target position. To measure moving targets with sensors on moving platforms, most existing algorithms resort to filtering methods. In this paper, we present two fusion algorithms to estimate both the position and velocity of a moving target with distributed angle-only sensors in motion. The first algorithm, termed the gross least squares (LS) algorithm, takes all observations from the distributed sensors together to form an estimate of the position and velocity, and thus incurs high communication and computation costs. The second algorithm, termed the linear LS algorithm, approximates the locations of sensors, the locations of targets, and the angle-only measures of each sensor by linear models, and thus does not require each local sensor to transmit raw angle-only observations, resulting in a lower communication cost between sensors and a lower computation cost at the fusion center. Based on the second algorithm, a truncated LS algorithm, which estimates the target velocity through an average operation, is also presented. Numerical results indicate that the gross LS algorithm, without the linear approximation operation, often benefits from more observations, whereas the linear LS algorithm and the truncated LS algorithm, which both bear lower communication and computation costs, may endure performance loss if the observations are collected over a long period such that the linear approximation model becomes mismatched.

1. Introduction

Passive sensors, such as infrared sensors, photoelectric sensors and cameras, detect targets by receiving electromagnetic signals. As they do not emit signals, they can probe targets in a stealthy manner [1]. However, such sensors can measure only the angles of signals and are thus termed angle-only sensors subsequently. The signal position information, which is of great concern in many situations, cannot be obtained with a single sensor. To determine the position of signal sources, one can connect distributed sensors with communication links and then estimate the position through a fusion algorithm. This has been a hot topic in recent years and has gained wide attention from scholars in different fields [2,3,4,5].
In the 3-dimensional (3D) scenario, the angle information measured by each passive sensor includes the azimuth and elevation of the signal. From a mathematical perspective, each angle observation can be represented by a straight line in space passing through the sensor and a target. If no error occurs in this process, all the lines will intersect at a point in space, which is the location of a signal source. In practice, both sensor location measures and angle measures are inevitably contaminated by measurement noises, and then the lines may not intersect at a point in space. However, if the signals are from the same target, the lines will intersect within a small volume, whose center can be deemed the location of the target. Following this concept, an angle-only positioning algorithm is presented in [6] and a closed-form solution is derived.
In real applications, if the targets of interest are static, or if the sampling frequency is high relative to the velocities of possible targets, the algorithms can be developed under the assumption that the target is static. The least squares (LS) algorithm has been applied to target position estimation based on angle-only measurements by linearizing the angle observation equations [7,8,9,10]. The intersection localization algorithm is obtained by considering that the straight lines formed by the angle observations will intersect within a small volume in space [11,12,13]. Real sensors often make observations in an asynchronous manner, namely the observations are not obtained at the same instants. The stationary target assumption also makes the sensor synchronization problem easier, because we can totally drop the timing information of the observations. If the target is stationary, even if the sensors are moving, the straight lines formed by the angle measurements of multiple sensors at different times will converge to a small area near the target location. Therefore, in this scenario, one just needs to solve the target positioning problem in an asynchronous manner.
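As background for the static-target case discussed above, the classical LS fix from bearing lines can be sketched as follows. This is a generic orthogonal-projection formulation, not the specific algorithms of [7,8,9,10]; the function names are ours.

```python
import numpy as np

def direction(az, el):
    """Unit direction vector from azimuth/elevation angles (radians)."""
    return np.array([np.cos(az) * np.cos(el),
                     np.sin(az) * np.cos(el),
                     np.sin(el)])

def static_ls_fix(sensors, angles):
    """LS intersection of bearing lines for a stationary target.

    sensors: (K, 3) array of sensor positions; angles: (K, 2) array of
    (azimuth, elevation) measurements. Minimizes the sum of squared
    distances from the estimate to each bearing line.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, (az, el) in zip(sensors, angles):
        e = direction(az, el)
        P = np.eye(3) - np.outer(e, e)   # projector orthogonal to the line
        A += P
        b += P @ p
    return np.linalg.solve(A, b)
```

With noiseless angles the bearing lines intersect exactly and the solver recovers the true position; with noisy angles it returns the point closest (in the LS sense) to all lines.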
Once target motion must be considered at different observation instants, target location estimation will face greater biases, so one has to take the target motion into consideration. Meanwhile, for moving targets, the observation instants should be taken into account, and as the distributed fusion algorithm must incorporate this timing information, the fusion algorithm becomes more complicated. There are mainly two strategies available so far. The first strategy is to use filtering algorithms, such as the Kalman filter, which can estimate target velocity through observations from different instants. In the target tracking theoretical framework, the angle-only observations can be described by a measurement equation, although it is heavily nonlinear. Therefore, a nonlinear filtering algorithm should be used [14,15]. For instance, the extended Kalman filter (EKF) linearizes the angle measurement function through a first-order Taylor approximation, and then uses the standard Kalman filtering algorithm for the angle-only target tracking problem [16]. The cubature Kalman filter (CKF) [17], the unscented Kalman filter (UKF) [18], the pseudo-linear Kalman filter (PLKF) [19,20], the particle filter [21] and a series of sigma-point based algorithms can also be used in the target tracking problem with distributed angle-only sensors.
Although tracking algorithms have been widely used to estimate moving target positions, they require the noise distribution parameters to be known a priori. They also face convergence problems if the initial state is set improperly [22,23]. In distributed sensor networks, if each observation undergoes a tracking process, the computation cost will also be high since the volume of observations is often large in practice. Therefore, a good positioning algorithm should be implemented before filtering. For instance, in [15], short-term angle-only observations are fused by a distributed positioning algorithm, whose outputs are then processed by a tracking algorithm.
In the other strategy, the target position and velocity are estimated together, so that the result is valid over a longer period. In this case, the tracking operation can be performed over a longer period and the computation cost can be further reduced. However, if the velocity is estimated, more optimization variables are involved and the optimization problem is more complicated. Meanwhile, in a distributed sensor configuration, the communication cost between the sensors may be high if all the observations are transmitted to a fusion center. In this paper, we study the distributed positioning of moving targets with distributed asynchronous angle-only sensors. We consider the scenario where multiple asynchronous passive sensors are linked with the fusion center through communication links. First, we formulate an algorithm, termed the gross LS algorithm, that takes all angle observations of multiple sensors together with their positions over a certain period to estimate the position and velocity of the target. Different observations contribute different lines and, with many lines available, both the target position and its velocity can be estimated. The classical LS solution is formulated such that the computation cost is greatly reduced.
Due to the huge amount of data, this algorithm still has high computational complexity and high communication cost. In order to reduce the communication and computation costs, we further present a distributed positioning algorithm, termed the linear LS algorithm, that can implement the fusion in a parallel computation manner. In detail, both positions and angle observations of local sensors are processed by LS operations, whose outputs are the zeroth and first order coefficients of the Taylor series of the corresponding parameters. The outputs are then transmitted to a fusion center, for which we derive a fusion algorithm to efficiently combine position and velocity estimates for a higher parameter estimation accuracy. The latter algorithm can greatly compress the data rate from local sensors to the fusion center, such that the communication cost is greatly reduced. Meanwhile, local observations are represented by a few parameters and thus the fusion algorithm also needs a lower computation cost. The sensor location can be recorded asynchronously with the angle observations, which makes the algorithm easier to apply. Meanwhile, a truncated LS algorithm, which replaces the velocity estimation of the linear LS algorithm by a simple average operation, is also presented.
Numerical results are obtained with distributed asynchronous angle-only sensors measuring a moving target with a certain velocity. The convergence performance of the algorithms is presented first, in order to examine the impact of the number of observations on the positioning performance. Then the impact of the linear approximation of position and angle measures on the estimation accuracy is analyzed. It will be found that the gross LS algorithm often benefits from more observations. However, although the linear LS algorithm and the truncated LS algorithm perform well if the number of observations is small, as the number of observations increases, their performances degrade as a result of the linear model mismatch. The truncated LS algorithm performs better than the linear LS algorithm over a short period but worse over a longer one. In the extreme, the estimation performance of the linear LS algorithm may deteriorate with more observations if the model mismatch is severe. We also verify that the linear LS algorithm has a lower communication cost in most situations and examine the performance loss due to inaccurate platform velocity estimates. Numerical angle distortion errors under the linear approximations are also analyzed.

2. Localization with Angle-Only Passive Sensors

2.1. Signal Model of Passive Observations

Consider a passive sensor network with N widely separated sensors and M targets in the surveillance volume. All the N passive sensors can measure only direction of arrival (DOA) of signals, based on which real position of a signal emitter can be estimated. Assume that all the sensors operate in the same coordinate system through some inherent position and attitude measurement devices, such as the Global Positioning System (GPS) and inertial sensors. A typical coordinate system is the earth-centered earth-fixed (ECEF) of the World Geodetic System 84 (WGS84). Both the targets and the sensors are in motion by assumption. The real position of the nth sensor at instant t is denoted by p n o ( t ) = [ x n , s o ( t ) , y n , s o ( t ) , z n , s o ( t ) ] T , n = 1 , 2 , , N , where ( · ) T denotes the transpose operation, and x n , s o ( t ) , y n , s o ( t ) , z n , s o ( t ) denote the x , y , z coordinates of the nth sensor in the common coordinate system at instant t, respectively. The real position of the mth target at instant t is denoted by g m o ( t ) = [ x m , g o ( t ) , y m , g o ( t ) , z m , g o ( t ) ] T , m = 1 , , M , where x m , g o ( t ) , y m , g o ( t ) , z m , g o ( t ) denote the x , y , z coordinates of the mth target at instant t, respectively. The topology of the passive sensors and targets are shown in Figure 1.
For the nth sensor, signals are detected and their DOAs are measured at instants denoted by t k , n , k = 1 , , N n , where N n denotes the number of observations of the nth sensor. At the instant t k , n , assume that the position of the nth sensors is measured as
p k , n ( t k , n ) = p n o ( t k , n ) + Δ p n ( t k , n ) = [ x n , s ( t k , n ) , y n , s ( t k , n ) , z n , s ( t k , n ) ] T , k = 1 , , N n
where Δ p n ( t k , n ) denotes the sensor self-positioning error. For simplicity, we assume that the sensor self-positioning error follows zero mean Gaussian distributions with covariance matrices C s ( k , n ) = E ( Δ p n ( t k , n ) Δ p n T ( t k , n ) ) , where E denotes the expectation operation.
At instant t k , n , the real position of the mth signal source is denoted by
g m o ( t k , n ) = [ x m , g o ( t k , n ) , y m , g o ( t k , n ) , z m , g o ( t k , n ) ] T , m = 1 , , M .
Assume that all the observations regarding the same target are obtained in a short period T = [ T 1 , T 2 ] . In this period, assume that the location of the mth target can be expressed by
g m o ( t ) g m o ( t 0 ) + v m o ( t t 0 )
where g m o ( t 0 ) denotes the location of the target at the reference instant t 0 , and v m o denotes the velocity over T . The signal model in use depends on the velocity of the target and the period of observations. If all the observations are collected in a short period and the velocity is small, then one can simply assume g m o ( t ) ≈ g m o ( t 0 ) as in [6]. Under the signal model (3), more observations can be used to estimate the target space location. If the observations are obtained in a long period and the velocity is large, then this model may be mismatched and higher order approximations may be needed.
For the nth angle-only passive sensor, the lth observation at t k , n is indexed by a triple ( l , k , n ) , l = 1 , , L k , n . For simplicity, we also encode all the triples available, corresponding to all the observations available, with a one-to-one function Ω : ( l , k , n ) ↦ i . Then we define a set L k , n by
L k , n = { i | i = ( l , k , n ) , l = 1 , , L k , n } , k = 1 , , N n , n = 1 , , N
which denotes the set of signal indices detected at the instant t k , n by the nth sensor. Therefore, L k , n = | L k , n | , where | · | over a set denotes the cardinality of the set. Due to the possibility of missed detections, false alarms and overlapping of signal sources, L k , n may not be equal to M. Denote
L n = k = 1 N n L k , n , L = n = 1 N L n
where ∪ denotes the union operation. The total number of observations by N sensors is denoted by
N s = | L | = n = 1 N k = 1 N n L k , n .
Each observation is associated with one of the M targets or with false alarms, indexed by 0 and represented by the set M = { 0 , 1 , , M } . The association can be considered as a mapping ψ : L → M , which is the correct association and is typically unknown in practice. According to our setting, the index set L can be partitioned into M + 1 disjoint sets A 0 , A 1 , , A M , where A m is defined by
A m = { i | ψ ( i ) = m , i L } ,
where A 0 denotes the index set of observations corresponding to false alarms, and A m denotes the index set of observations from the mth signal source. As a partition of L , we have A i ∩ A j = ∅ , i , j ∈ M , i ≠ j , and L = ∪ i = 0 M A i , where ∩ denotes the intersection operation of sets. Assume that | A m | = L m , so that there are in total ∑ m = 0 M L m observations available.
The signal indices in A m are composed of signal indices from all the sensors and the sub set for the nth sensor is denoted by
A n , m = L n A m
which indicates the observations from the nth sensor probing the mth target. Denote K n , m = | A n , m | and then we have L m = ∑ n = 1 N K n , m and A m = ∪ n = 1 N A n , m .
For simplicity, we first assume that the mapping ψ is exactly known, so that the observations associated with A m are exactly known. For observation i ∈ A m , the real azimuth angle and elevation angle, regarding the nth sensor at t k , n , can be expressed by
θ i o = atan2 ( y m , g o ( t k , n ) − y n , s o ( t k , n ) , x m , g o ( t k , n ) − x n , s o ( t k , n ) )
φ i o = arctan ( ( z m , g o ( t k , n ) − z n , s o ( t k , n ) ) / √ ( ( x m , g o ( t k , n ) − x n , s o ( t k , n ) ) 2 + ( y m , g o ( t k , n ) − y n , s o ( t k , n ) ) 2 ) )
respectively, where θ i o ∈ ( − π , π ) , φ i o ∈ ( − π / 2 , π / 2 ) , atan2 ( · , · ) is the two-argument inverse tangent function [24,25] and arctan ( · ) is the inverse tangent function. Denote η i o = [ θ i o , φ i o ] T . The azimuth angle and elevation angle measures can be written as
η i = [ θ i , φ i ] T = η i o + Δ η i
θ i = θ i o + Δ θ i
φ i = φ i o + Δ φ i
Δ η i = [ Δ θ i , Δ φ i ] T
where Δ θ i and Δ φ i represent the measurement noise of the azimuth angle and elevation angle, respectively.
For simplicity, we assume that the observation noises Δ θ i and Δ φ i are statistically independent and follow zero-mean Gaussian distributions. The covariance matrix of Δ η i is denoted by C η ( i ) = E ( Δ η i Δ η i T ) ∈ R 2 × 2 , namely Δ η i ∼ N ( 0 , C η ( i ) ) , which is typically affected by the SNR of the signal, where N ( 0 , C η ( i ) ) denotes the Gaussian distribution with mean 0 and covariance matrix C η ( i ) .

2.2. Estimation of Target Track

Each angle-only observation contributes a line in 3D space and, without measurement error, the target will lie on this line. With many angle-only observations, the real position of the target can be determined. The line associated with the ith observation can be expressed by
L i : x ( t i ) = p i + α i e i , α i R
where p i denotes the sensor location regarding the ith observation, α i is a parameter indicating the distance to the origin p i , e i = e ( η i ) = [ e i , x , e i , y , e i , z ] T ∈ R 3 × 1 is the normalized direction vector associated with the angle observation η i , namely ‖ e i ‖ = 1 , where ‖ · ‖ over a vector denotes the ℓ 2 -norm, and
e i , x = cos ( θ i ) cos ( φ i )
e i , y = sin ( θ i ) cos ( φ i )
e i , z = sin ( φ i ) .
In what follows, we consider the observations in A m , m 0 . From (3), we can rewrite
x ( t i ) = p i + α i e i = g 0 , m + v g , m ( t i t 0 ) + ϵ e ( i ) = A e ( i ) q m + ϵ e ( i )
where q m = [ g 0 , m T , v g , m T ] T , g 0 , m = g m o ( t 0 ) , v g , m = v m o ( t 0 ) , ϵ e ( i ) denotes the bias term,
A e ( i ) = [ I , ( t i t 0 ) I ] R 3 × 6
and I denotes the identity matrix. In (18), there are in total 7 unknown parameters, while one observation provides only 3 equations. Each additional observation provides another 3 equations while increasing the number of unknowns by only 1. Unless specified, we always refer to the mth target and drop the subscript m in situations without ambiguity subsequently, e.g., denote g 0 , m → g 0 , v g , m → v g , q m → q .
In order to determine the location and velocity of the target, the optimization problem can be formulated as
min α , q ‖ P + E diag ( α ) − A h ( I L m ⊗ q ) ‖
where ‖ · ‖ refers to the ℓ 2 norm subsequently unless explicitly specified, α = [ α 1 , , α L m ] T , P is a matrix whose columns are p i , i ∈ A m , P = [ p 1 , , p L m ] , I L m denotes the L m × L m identity matrix, E is a matrix whose columns are e i , i ∈ A m , ⊗ denotes the Kronecker product operation,
A h = [ A e ( i ) , ] R 3 × 6 L m , i A m ,
and diag ( · ) with a vector entry denotes a diagonal matrix with the vector as diagonal elements.

2.3. The Gross LS Algorithm

In practice, the observations are generally contaminated by measurement noise, and then the lines often do not intersect at a single point in space. With L m observations, there are in total L m + 6 unknown parameters and 3 L m equations. Therefore, with at least 3 observations from angle-only sensors (9 equations against 9 unknowns), all the unknown parameters can be found together. For that purpose, let q ¯ = [ q T , α T ] T and then we can reformulate (18). In order to minimize the mismatch, the optimization problem can be rewritten as
min α , q ‖ p + D α − A d q ‖ .
The combination of equations for all observation indices in A m can be formulated as
A a q ¯ = p + ϵ e
where p = vec ( P ) , vec ( · ) denotes the vectorization operation.
ϵ e = [ ϵ T ( 1 ) , , ϵ T ( L m ) ] T
A a = [ A d , D ] R 3 L m × ( 6 + L m )
A d = [ A e T ( 1 ) , , A e T ( L m ) ] T
= [ 1 , t ] ⊗ I ∈ R 3 L m × 6
D = diag ( e 1 , , e L m ) R 3 L m × L m
where t = [ t 1 t 0 , t 2 t 0 , , t L m t 0 ] T , and diag ( · ) with some matrices inputs denotes a block diagonal matrix with the input matrices as block diagonal elements.
For this optimization problem, we can find the classical LS solution as
q ^ a = ( A a T A a ) 1 A a T p .
It can be proved that for symmetric invertible matrices A and B and a matrix C , all of appropriate sizes,
[ A , C T ; C , B ] − 1 = [ ( A − C T B − 1 C ) − 1 , − ( A − C T B − 1 C ) − 1 C T B − 1 ; − B − 1 C ( A − C T B − 1 C ) − 1 , ( B − C A − 1 C T ) − 1 ]
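The block matrix inversion identity can be checked numerically. The following sketch builds a symmetric block matrix from arbitrary random blocks (an illustration of ours, not part of the paper) and verifies the formula:

```python
import numpy as np

rng = np.random.default_rng(0)
# Symmetric invertible blocks A (3x3) and B (4x4), and a coupling block C (4x3).
A = rng.standard_normal((3, 3)); A = A @ A.T + 5 * np.eye(3)
B = rng.standard_normal((4, 4)); B = B @ B.T + 5 * np.eye(4)
C = rng.standard_normal((4, 3))

M = np.block([[A, C.T], [C, B]])

# Schur-complement-based block inverse.
S = np.linalg.inv(A - C.T @ np.linalg.inv(B) @ C)        # top-left block
top_right = -S @ C.T @ np.linalg.inv(B)
bottom_left = -np.linalg.inv(B) @ C @ S
bottom_right = np.linalg.inv(B - C @ np.linalg.inv(A) @ C.T)

Minv = np.block([[S, top_right], [bottom_left, bottom_right]])
assert np.allclose(Minv, np.linalg.inv(M))
```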
Consequently, with the fact that D T D = I , we have
( A a T A a ) − 1 = [ A d T A d , A d T D ; D T A d , D T D ] − 1 = [ X , Z T ; Z , Y ]
where
X = ( A d T A d − A d T D ( D T D ) − 1 D T A d ) − 1
= ( A d T A d − A d T D D T A d ) − 1 ∈ R 6 × 6
Y = ( D T D − D T A d ( A d T A d ) − 1 A d T D ) − 1
= ( I − D T A d ( A d T A d ) − 1 A d T D ) − 1 ∈ R L m × L m
Z = − ( D T D ) − 1 D T A d X
= − D T A d ( A d T A d − A d T D D T A d ) − 1 ∈ R L m × 6
and the block matrix inversion formula (30) is used in the above formulations.
It can also be proved that
A d T A d = [ L I , T d I ; T d I , T D I ] = M ⊗ I
A d T D = [ E ; E t ] ,
where T d = ∑ i = 1 L ( t i − t 0 ) , T D = ∑ i = 1 L ( t i − t 0 ) 2 , E t = E diag ( t ) ,
M = [ L , T d ; T d , T D ] , M − 1 = 1 / ( L T D − T d 2 ) [ T D , − T d ; − T d , L ]
and ⊗ denotes the Kronecker product operation.
Invoking (30) again, we have
( A d T A d ) − 1 = M − 1 ⊗ I
and thus
Y − 1 = I − 1 / ( L T D − T d 2 ) [ E T , E t T ] [ T D I , − T d I ; − T d I , L I ] [ E ; E t ]
= I − 1 / ( L T D − T d 2 ) ( T D E T E − T d E t T E − T d E T E t + L E t T E t ) .
Meanwhile,
A a T p = [ A d T p ; D T p ] = [ q t ; p e ]
where
q t = P [ 1 , t T ] T
p e = diag ( E T P ) .
Therefore, the estimates for q m and α can be written separately as
q ^ m = [ g ^ 0 T , v ^ g T ] T = X q t − X [ E ; E t ] p e
α ^ m = − D T A d X q t + Y p e
where X can be written in the concise form
X − 1 = [ L I − E E T , T d I − E E t T ; T d I − E t E T , T D I − E t E t T ] .
Under the assumption that all the samples under consideration are from the same target, the track, parameterized by q , is identical for all observations hereafter.
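The construction of the stacked system (23) and its LS solution can be sketched with a generic solver. This is an illustration of the gross LS idea, not the paper's closed-form implementation; the function name `gross_ls` and the test geometry are ours.

```python
import numpy as np

def gross_ls(times, sensor_pos, dirs, t0=0.0):
    """Gross LS sketch: solve for [g0, v, alpha_1..alpha_L] from all lines.

    times: (L,) observation instants; sensor_pos: (L, 3) sensor positions;
    dirs: (L, 3) unit bearing vectors. Each observation i contributes the
    3 equations  g0 + v*(t_i - t0) - alpha_i * e_i = p_i.
    """
    L = len(times)
    A = np.zeros((3 * L, 6 + L))
    b = np.zeros(3 * L)
    for i, (t, p, e) in enumerate(zip(times, sensor_pos, dirs)):
        r = slice(3 * i, 3 * i + 3)
        A[r, 0:3] = np.eye(3)              # coefficient of g0
        A[r, 3:6] = (t - t0) * np.eye(3)   # coefficient of v
        A[r, 6 + i] = -e                   # coefficient of alpha_i
        b[r] = p
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3], sol[3:6], sol[6:]      # position, velocity, ranges
```

With at least 3 observations (9 equations against 9 unknowns) the system is generically solvable; noiseless data is recovered exactly.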

2.4. The Linear LS Algorithm

In a long period, one can obtain a long sequence of η = ( θ , φ ) observations. If we take all the observations into consideration for optimization, a huge computation cost may be required, which in some cases is entirely unnecessary. We can instead extract information from local observations and then transmit only the estimated parameters to the fusion center for target location estimation. The target location has been approximated by a linear model; next, we express the DOA and the sensor location by linear models as well.
For DOA measures from a sensor, we can approximate a series of angle measures by
θ ( t ) ≈ θ 0 + θ ˙ 0 ( t − t 0 ) , φ ( t ) ≈ φ 0 + φ ˙ 0 ( t − t 0 ) .
Now consider the nth sensor and let
θ 0 → θ 0 , n , θ ˙ 0 → θ ˙ 0 , n , θ ¯ n = [ θ 0 , n , θ ˙ 0 , n ] T , φ 0 → φ 0 , n , φ ˙ 0 → φ ˙ 0 , n , φ ¯ n = [ φ 0 , n , φ ˙ 0 , n ] T .
For all observations in A n , m , we can write a first-order polynomial regression problem as
θ n = A a ( n ) θ ¯ n + ϵ θ , φ n = A a ( n ) φ ¯ n + ϵ φ
where θ n = [ θ i ] i ∈ A n , m , φ n = [ φ i ] i ∈ A n , m , ϵ θ denotes the bias for θ n , ϵ φ denotes the bias for φ n , and
A a ( n ) = [ 1 , t i − t 0 ] i ∈ A n , m ∈ R K n , m × 2 .
It can be proved that the LS estimates of the direction parameters for the nth sensor can be directly written as
θ ^ n = ( A a T ( n ) A a ( n ) ) − 1 A a T ( n ) θ n = 1 / ( L T D − T d 2 ) [ T D θ n T 1 − T d θ n T t ; − T d θ n T 1 + L θ n T t ]
φ ^ n = ( A a T ( n ) A a ( n ) ) − 1 A a T ( n ) φ n = 1 / ( L T D − T d 2 ) [ T D φ n T 1 − T d φ n T t ; − T d φ n T 1 + L φ n T t ] .
In this case, θ ^ n and φ ^ n , instead of θ n and φ n , will be transmitted to the fusion center, such that the communication cost will be greatly reduced.
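The local compression step is an ordinary linear LS fit per angle channel: each sensor sends only four fitted coefficients instead of the raw angle series. A minimal sketch, with the function name `compress_angles` being ours:

```python
import numpy as np

def compress_angles(times, thetas, phis, t0=0.0):
    """Fit theta(t) ~ theta0 + theta_dot*(t - t0), and likewise for phi.

    Returns ((theta0, theta_dot), (phi0, phi_dot)); only these four
    coefficients need to be transmitted to the fusion center.
    """
    A = np.column_stack([np.ones(len(times)), np.asarray(times) - t0])
    th, *_ = np.linalg.lstsq(A, np.asarray(thetas), rcond=None)
    ph, *_ = np.linalg.lstsq(A, np.asarray(phis), rcond=None)
    return (th[0], th[1]), (ph[0], ph[1])
```

The same regression form applies to the sensor self-position measurements in (68): replace the angle series by the x, y, z coordinate series.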
In (18), θ ^ n and φ ^ n affect the positioning accuracy through the normalized direction vector e i , i ∈ A n , m . With a linear approximation model, e can be rewritten as
e ( t ) e 0 + E ˙ 0 T η ˙ 0 ( t t 0 )
where e 0 = e ( t 0 ) ,
θ ˙ = θ t , φ ˙ = φ t
η 0 = [ θ 0 , φ 0 ] T , η = [ θ , φ ] T
η ˙ 0 = η ˙ | t = t 0 = [ θ ˙ 0 , φ ˙ 0 ] T , η ˙ = [ θ ˙ , φ ˙ ] T
E ˙ 0 = E ˙ ( η ) | t = t 0 = E ˙ ( η 0 )
and E ˙ ( η ) denotes the Jacobian matrix defined by
E ˙ T ( η ) = ∂ e / ∂ η T = [ − sin ( θ ) cos ( φ ) , − cos ( θ ) sin ( φ ) ; cos ( θ ) cos ( φ ) , − sin ( θ ) sin ( φ ) ; 0 , cos ( φ ) ] .
For the nth sensor, denote e 0 e 0 , n , E ˙ 0 E ˙ 0 , n , η ˙ 0 η ˙ 0 , n , η 0 η 0 , n and so on.
In practice, the position of the platform is also measured by a device, such as an inertial system or a positioning system. In either case, if the nth sensor is moving with speed v s at t = t 0 , the position can be expressed by
p n ( t ) = p 0 , n + v s , n ( t t 0 ) + ϵ s ( n )
where p 0 , n denotes the position of the nth sensor and v s , n denotes its velocity, both at t = t 0 , and ϵ s ( n ) denotes the bias of the linear position model.
For simplicity, we assume that the sensor location is measured at t = t ¯ k , n , k = 1 , , N ¯ n . In practice, in this configuration, the sensor location can be measured at instants other than t k , n , k = 1 , , N n . Now we can construct equations as
p k , n = p 0 , n + v s , n ( t ¯ k , n t 0 ) + ϵ s ( k , n ) , k = 1 , , N ¯ n , n = 1 , , N
or in another form as
p s , n = A s p ¯ n + ϵ s ( n )
where p ¯ n = [ p 0 , n T , v s , n T ] T , p s , n = [ p 1 , n T , , p N ¯ n , n T ] T , and
A s ( n ) = I , ( t ¯ k , n t 0 ) I k = 1 , , N ¯ n = [ 1 , t s ] I
for which
( A s T ( n ) A s ( n ) ) 1 = M s 1 I
where M s is defined similarly to M in (40).
The LS estimate of p ¯ n is
p ^ n = [ p ^ 0 , n T , v ^ s , n T ] T = ( A s T ( n ) A s ( n ) ) − 1 A s T ( n ) p s , n = ( M s − 1 ⊗ I ) [ P 1 ; P t s ]
with
p ^ 0 , n = 1 / ( L T D − T d 2 ) ( T D P 1 − T d P t s )
and
v ^ s , n = 1 / ( L T D − T d 2 ) ( − T d P 1 + L P t s ) ,
where P = [ p 1 , n , , p N ¯ n , n ] is the matrix of measured sensor positions and L , T d , T D are computed over the instants t ¯ k , n .
With the above operations, we obtain linear sensor location parameters p ^ n and linear DOA parameters θ ^ n , φ ^ n , while the linear target location parameters q m remain to be estimated. The accuracy depends on the interval of observations, the speed of the target, and the speed of the sensors. A series of observations can now be approximated by a few parameters, and it is unnecessary to transmit all the local observations to the fusion center anymore.
With only these linearized parameters from each sensor available, the fusion center can estimate the target position and velocity with the following equation
x ( t ) = g 0 + v g ( t − t 0 ) + ϵ a ( n ) = p 0 , n + v s , n ( t − t 0 ) + α n ( t ) e n ( t ) ≈ p 0 , n + v s , n ( t − t 0 ) + ( α 0 , n + α ˙ 0 , n ( t − t 0 ) ) ( e 0 , n + E ˙ 0 , n T η ˙ 0 , n ( t − t 0 ) )
where α ˙ 0 , n represents the change rate of α 0 , n . By expanding Equation (71) and ignoring the second-order term, ϵ a ( n ) can be reformulated as
ϵ a ( n ) = p 0 , n + α 0 , n e 0 , n − g 0 + ( α 0 , n E ˙ 0 , n T η ˙ 0 , n + v s , n + α ˙ 0 , n e 0 , n − v g ) ( t − t 0 )
where n = 1 , 2 , , N and ϵ a ( n ) denotes the bias term in approximating the target position and direction of arrival by the linear models.
The following equation can be obtained using (72) for N sensors,
ϵ a = p 0 + D 0 α 0 − I ¯ g 0 + ( X d α 0 + v s + D 0 α ˙ 0 − I ¯ v g ) ( t − t 0 )
where
α 0 = [ α 0 , 1 , α 0 , 2 , , α 0 , N ] T
α ˙ 0 = [ α ˙ 0 , 1 , α ˙ 0 , 2 , , α ˙ 0 , N ] T
p 0 = [ p 0 , 1 T , p 0 , 2 T , , p 0 , N T ] T
D 0 = diag ( e 0 , 1 , e 0 , 2 , , e 0 , N )
X d = diag ( E ˙ 0 , 1 T η ˙ 0 , 1 , E ˙ 0 , 2 T η ˙ 0 , 2 , , E ˙ 0 , N T η ˙ 0 , N )
v s = [ v s , 1 T , v s , 2 T , , v s , N T ] T
I ¯ = 1 N I 3 R 3 N × 3
and
ϵ a = [ ϵ a T ( 1 ) , ϵ a T ( 2 ) , , ϵ a T ( N ) ] T .
In order to minimize the total bias ϵ a , we can minimize
min q c , q v ‖ p 0 + D 0 α 0 − I ¯ g 0 + ( X d α 0 + v s + D 0 α ˙ 0 − I ¯ v g ) ( t − t 0 ) ‖ , t ∈ T ,
where q c = [ g 0 T , α 0 T ] T and q v = [ v g T , α ˙ 0 T ] T .
To ensure the bias is minimized for t ∈ T , both the initial position bias, p 0 + D 0 α 0 − I ¯ g 0 , and the speed bias, X d α 0 + v s + D 0 α ˙ 0 − I ¯ v g , should be minimized. Therefore, we can solve the optimization problem through solving the following two optimization problems of smaller scale,
min q c ‖ p 0 + D 0 α 0 − I ¯ g 0 ‖
min q v ‖ X d α 0 + v s + D 0 α ˙ 0 − I ¯ v g ‖ .
The solutions to the problems can be found directly through the LS algorithm as
q ^ c = [ g ^ 0 T , α ^ 0 T ] T = ( A c T A c ) 1 A c T p 0
q ^ v = [ v ^ g T , α ˙ ^ 0 T ] T = ( A c T A c ) 1 A c T ( X d α 0 + v s )
where
A c = [ 1 N ⊗ I 3 , − D 0 ] .
It can be proved that
A c T A c = [ I ¯ T I ¯ , − I ¯ T D 0 ; − D 0 T I ¯ , D 0 T D 0 ] = [ N I 3 , − E 0 ; − E 0 T , I ]
and then
( A c T A c ) − 1 = [ ( N I − E 0 E 0 T ) − 1 , ( N I − E 0 E 0 T ) − 1 E 0 ; E 0 T ( N I − E 0 E 0 T ) − 1 , I + E 0 T ( N I − E 0 E 0 T ) − 1 E 0 ] ,
where E 0 = [ e 0 , 1 , e 0 , 2 , , e 0 , N ] and P 0 = [ p 0 , 1 , p 0 , 2 , , p 0 , N ] .
Meanwhile,
A c T p 0 = [ P 0 1 ; − D 0 T p 0 ]
and thus,
q ^ 0 = [ g ^ 0 T , α ^ 0 T ] T = ( A c T A c ) − 1 A c T p 0
g ^ 0 = ( N I − E 0 E 0 T ) − 1 P 0 1 − ( N I − E 0 E 0 T ) − 1 E 0 D 0 T p 0 = ( N I − E 0 E 0 T ) − 1 ( P 0 1 − E 0 D 0 T p 0 )
α ^ 0 = D 0 T I ¯ g ^ 0 − D 0 T p 0 = D 0 T I ¯ ( N I − E 0 E 0 T ) − 1 ( P 0 1 − E 0 D 0 T p 0 ) − D 0 T p 0
which is identical to the solution of (82). A minor difference is that the matrix inverse operation is over the 3 × 3 matrix N I − E 0 E 0 T .
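The fusion-center position estimate can equivalently be obtained by solving the stacked system Ī g0 − D0 α0 = p0 with a generic LS solver. The following sketch uses an illustrative helper `fuse_positions` (our naming), taking each sensor's reference position p 0,n and unit bearing e 0,n at the common reference instant:

```python
import numpy as np

def fuse_positions(p0s, e0s):
    """Fusion-center LS sketch: solve g0 - alpha_n * e_n = p_n for all n.

    p0s: (N, 3) per-sensor reference positions; e0s: (N, 3) unit bearing
    vectors at t0. Returns the target position estimate and the per-sensor
    range estimates alpha_n.
    """
    N = len(p0s)
    A = np.zeros((3 * N, 3 + N))
    b = np.asarray(p0s, float).ravel()
    for n, e in enumerate(e0s):
        r = slice(3 * n, 3 * n + 3)
        A[r, :3] = np.eye(3)   # coefficient of g0
        A[r, 3 + n] = -e       # coefficient of alpha_n
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3], sol[3:]
```

This solves the same normal equations as (85); the closed form above merely exploits the block structure so that only a 3 × 3 inverse is needed.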
The solution to q v can be expressed by
A c T ( X d α 0 + v s ) = [ 1 N T ⊗ I 3 ; − D 0 T ] ( vec ( X diag ( α 0 ) ) + vec ( V s ) ) = [ X α 0 + V s 1 ; − v a − v e ]
where
X = [ E ˙ 0 , 1 T η ˙ 0 , 1 , E ˙ 0 , 2 T η ˙ 0 , 2 , , E ˙ 0 , N T η ˙ 0 , N ]
V s = [ v s , 1 , v s , 2 , , v s , N ]
v a = diag ( α 0 ) [ e 0 , n T E ˙ 0 , n T η ˙ 0 , n ] n = 1 N = [ α 0 , 1 e 0 , 1 T E ˙ 0 , 1 T η ˙ 0 , 1 , , α 0 , N e 0 , N T E ˙ 0 , N T η ˙ 0 , N ] T
and
v e = [ e 0 , 1 T v s , 1 , , e 0 , N T v s , N ] T .
Consequently, we can obtain
v ^ g = ( N I − E 0 E 0 T ) − 1 ( X α 0 + V s 1 − E 0 v a − E 0 v e )
α ˙ ^ 0 = E 0 T ( N I − E 0 E 0 T ) − 1 ( X α 0 + V s 1 ) − v a − v e − E 0 T ( N I − E 0 E 0 T ) − 1 E 0 ( v a + v e )
= E 0 T ( N I − E 0 E 0 T ) − 1 ( X α 0 + V s 1 − E 0 v a − E 0 v e ) − v a − v e
= E 0 T v ^ g − v a − v e .
One can also derive it in another way. According to (93), the change rate α ˙ 0 of α 0 can be expressed as
α ˙ 0 = X d T I ¯ g ^ 0 + D 0 T I ¯ v g − X d T p 0 − D 0 T v s .
Substituting (103) into (84), the problem can be rewritten as
min v g ‖ X d α ^ 0 + v s + D 0 X d T I ¯ g ^ 0 − D 0 D 0 T v s − D 0 X d T p 0 + ( D 0 D 0 T − I ) I ¯ v g ‖ .
The solution of (104) can also be found through the LS algorithm, which can be expressed as
v ^ g = ( A v T A v ) 1 A v T b v
where
A v = ( I D 0 D 0 T ) I ¯
b v = X d α ^ 0 + v s + D 0 X d T I ¯ g ^ 0 D 0 D 0 T v s D 0 X d T p 0 .

2.5. The Truncated LS Algorithm

For better performance, it is necessary to estimate the change rate α ˙ ; if we ignore this term, the optimization problem becomes
min v g ‖ X d α 0 + v s − I ¯ v g ‖
whose solution, termed the truncated LS algorithm subsequently, is
v ^ g = 1 / N ( V s 1 + X α ^ 0 )
which is an average operation. Note that the truncated LS algorithm shares the same position estimate with the linear LS algorithm.
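The truncated LS velocity estimate (109) is just a per-sensor average. A minimal sketch, where `ang_rate_terms` stands for the columns E ˙ 0,n T η ˙ 0,n of X (the naming is ours):

```python
import numpy as np

def truncated_ls_velocity(v_s, alpha0, ang_rate_terms):
    """Truncated LS: average the per-sensor velocity contributions.

    v_s: (N, 3) sensor velocities; alpha0: (N,) range estimates;
    ang_rate_terms: (N, 3) rows, the n-th being E_dot_{0,n}^T eta_dot_{0,n}.
    Implements v_g = (1/N) * sum_n (v_{s,n} + alpha0_n * ang_rate_terms[n]).
    """
    v_s = np.asarray(v_s, float)
    contrib = v_s + np.asarray(alpha0, float)[:, None] * np.asarray(ang_rate_terms, float)
    return contrib.mean(axis=0)
```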
The estimate of the target location at t T can be written as
g ^ m ( t ) = g ^ 0 + v ^ g ( t t 0 ) , t T .
In (99) and (109), the velocity terms v s , n , n = 1 , , N are unknown and should be replaced by their estimates, typically v ^ s , n estimated by the LS algorithm as in (70). In practice, besides the linear regression performed at local sensors, there may be other methods that can output more accurate velocity and angle rate information. For instance, some inertial devices can measure the velocity more accurately than the LS algorithm in use. With a more accurate velocity estimate, it is possible to obtain a better positioning performance.
Both the linear LS algorithm and the truncated LS algorithm estimate the velocity of the target and can therefore allow the time-consuming nonlinear filtering operation to update at a longer interval. The performance of these algorithms is analyzed in the numerical results below.

3. Numerical Results

In order to evaluate the performance of the positioning algorithms under consideration, we first consider a scenario in which four angle-only sensors estimate the position of a target from their angle-only observations. Both the sensors and the target are assumed to move at constant velocity during the observation period. The initial positions and the constant velocities of the sensors and the target are listed in Table 1, and the scenario is illustrated in Figure 2.
All the sensors output observations at a rate of 50 Hz, i.e., with a period of 20 ms, but they operate in an asynchronous manner: the sensors record observations at independent instants, and the differences between the sampling instants are randomly generated within 20 ms. This assumption matters in real situations because it allows distributed sensors to operate asynchronously. We also assume that the instants of the observations are recorded without error and that, for all the sensors, no signal is missed in detection during the observation period.
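The asynchronous sampling scheme can be mimicked as follows: each sensor keeps the common 50 Hz period but starts with its own random offset inside one 20 ms period. The function name and array layout are illustrative, not from the paper.

```python
import numpy as np

def asynchronous_instants(n_sensors, n_samples, period=0.02, seed=0):
    """Sampling instants for sensors at a common rate (1/period Hz)
    whose clocks are offset independently within one period."""
    rng = np.random.default_rng(seed)
    offsets = rng.uniform(0.0, period, size=n_sensors)
    grid = np.arange(n_samples) * period
    return offsets[:, None] + grid[None, :]    # shape (n_sensors, n_samples)
```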
Assume that the self-positioning error follows a zero-mean normal distribution whose variance is 1 m$^2$ per coordinate for all the sensors, namely
$C_s(k,n) = E\left( \Delta p_n(t_{k,n}) \Delta p_n^{T}(t_{k,n}) \right) = I, \quad n = 1, \ldots, N.$
The angle measurement error also follows a zero-mean normal distribution with a variance of 0.5 deg$^2$ for all the observations, namely
$C_\eta(i) = E\left( \Delta \eta_i \Delta \eta_i^{T} \right) = 0.5 I, \quad i \in A_m.$
At the current stage, we do not consider the measurement errors from the gyroscopes installed on the platforms along with the sensors. Therefore, the angle measurement error is caused by the sensors only.
In order to evaluate the performance of the algorithms, we run $N_e = 20$ random experiments and take the root mean square error (RMSE) as the performance metric. The RMSE of position, the RMSE of velocity, and the gross RMSE at instant $t$, in linear scale and in dB, are defined by
$\mathrm{RMSE}_p(t) = \sqrt{ \frac{1}{N_e} \sum_{k=1}^{N_e} \left| \hat{g}_0(t;k) - g_1^{o}(0) \right|^2 }, \quad \mathrm{RMSE}_p(t)~\text{in dB} = 20 \log_{10} \mathrm{RMSE}_p(t)$
$\mathrm{RMSE}_v(t) = \sqrt{ \frac{1}{N_e} \sum_{k=1}^{N_e} \left| \hat{v}_g(t;k) - v_{g,1}^{o}(0) \right|^2 }, \quad \mathrm{RMSE}_v(t)~\text{in dB} = 20 \log_{10} \mathrm{RMSE}_v(t)$
$\mathrm{RMSE}_g(t) = \sqrt{ \mathrm{RMSE}_p^2(t) + \mathrm{RMSE}_v^2(t) }, \quad \mathrm{RMSE}_g(t)~\text{in dB} = 20 \log_{10} \mathrm{RMSE}_g(t)$
respectively, where $\hat{g}_0(t;k)$ denotes the estimated initial position of the target in the $k$th experiment, $\hat{v}_g(t;k)$ denotes the estimate of the target velocity in the $k$th experiment, and $g_1^{o}(0)$ and $v_{g,1}^{o}(0)$ are constant across experiments. In each random experiment, the position error and the angle measurement error are generated anew.
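The RMSE definitions above translate directly into code. Here `estimates` stacks one Monte Carlo estimate per row and `truth` is the constant reference; these shapes are our assumptions for the sketch.

```python
import numpy as np

def rmse_db(estimates, truth):
    """RMSE over Monte Carlo runs, returned in linear scale and in dB
    (20 log10), matching the definitions of RMSE_p and RMSE_v."""
    err = np.linalg.norm(estimates - truth, axis=1)   # |x_hat(k) - x|
    rmse = np.sqrt(np.mean(err ** 2))
    return rmse, 20.0 * np.log10(rmse)
```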

3.1. The Convergence Curves

As the number of observations increases, the localization performance improves. Figure 3 shows the RMSE of position, the RMSE of velocity, and the gross RMSE of the gross LS algorithm, the linear LS algorithm, and the truncated LS algorithm.
From Figure 3a,b, it can be seen that over a short window, roughly the first 0.8 s corresponding to 40 snapshots of observations, all the algorithms have close gross RMSE curves. However, as more observations become available, the linear LS algorithm and the truncated LS algorithm perform worse, and the position RMSE even increases with the sample number. This result is predictable: the linear approximations of the target and platform motion gradually become inaccurate, resulting in a deteriorated positioning performance. The gross LS algorithm always benefits from additional observations because it does not rely on the linear approximation, and more observations contribute more information about the target position.
From Figure 3c,d, with the initial observations, the truncated LS algorithm performs the best, while the linear LS algorithm performs the worst, close to the gross LS algorithm. As more observations are involved, the truncated LS algorithm converges to a level much higher than that achievable by the gross LS algorithm and the linear LS algorithm; hence, ignoring the term $\dot{\alpha}_0$ causes a performance loss for long-term observations. The gross LS algorithm keeps benefitting from additional observations and is slightly better than the linear LS algorithm for short-term observations. The linear LS algorithm can reach a lower RMSE level but still suffers performance degradation due to the linear model mismatch. At about 40 snapshots of observations, corresponding to 160 observations and a 0.8 s period, the velocity estimation performances of the two algorithms diverge.
The gross RMSEs of all the algorithms are shown in Figure 3e,f, which appear very close to Figure 3c,d because the velocity estimation errors are much greater than the positioning errors. Therefore, although the algorithms can estimate the velocity of targets, the accuracy is low over a short observation period. To estimate the velocity with higher accuracy, one needs observations from a longer period, which can be achieved through a filtering operation.
In order to show how the algorithms converge to the true value, Figure 4a,b present the estimated target positions and velocities at the first 20 snapshots and the last 20 snapshots, respectively. With a few observations, the linear LS algorithm converges to the real position of the target with high accuracy. However, as more observations become available, the gross LS algorithm moves closer to the real target position while the linear LS algorithm converges to other locations. The gross LS algorithm is therefore more robust in real applications.

3.2. Computation Cost

The advantage of the linear LS algorithm and the truncated LS algorithm lies in their computation and communication costs. In these algorithms, the locations of a sensor are approximated by a linear model described by an initial position and a velocity term, six parameters in total. It is therefore unnecessary to transmit all observations to the fusion center, and the communication cost is reduced: if 100 position estimates are described by 6 parameters, the data to transmit shrink to 2%. Of course, there is a limit to how far the data can be reduced, and this limit depends on the platform speeds of the sensors, the positions of the sensors and the target, and the periods of the observations.
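The 2% figure follows from simple counting, under the assumption that each raw position estimate is a 3-vector:

```python
# 100 raw position estimates, 3 coordinates each, versus the linear
# model's 6 parameters (initial position + velocity, 3 numbers each).
raw_numbers = 100 * 3
model_numbers = 3 + 3
ratio = model_numbers / raw_numbers
print(ratio)   # 0.02, i.e., 2% of the raw data volume
```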
The computation cost reduction, for both the linear LS algorithm and the truncated LS algorithm, stems from the reduced number of multiplication and summation operations at the fusion center. Both algorithms can be implemented in a parallel-computation-like structure: the linear regression of the platform positions and the local DOA measures are performed at the local sensors, and the fusion center operates only on the results from the local sensors. To illustrate this, we record the computation times of the 20 random experiments for the algorithms and show them in Figure 5a,b, in linear scale and in dB, respectively. As the number of observations increases, the gross LS algorithm requires a longer computation time, whereas the linear LS algorithm and the truncated LS algorithm have much flatter slopes. Meanwhile, the linear LS algorithm needs more computation than the truncated LS algorithm, as a result of estimating $v_a$ and $v_e$. In fact, the computation cost of the linear LS algorithm does not vary much with the sample number because it always computes with the same number of parameters, namely the number of sensors $N$; the computation cost increase due to more observations is now borne by the local sensors.

3.3. The Impact of Velocity Estimation Error

In the theoretical derivations, we assumed that the velocities of the sensors are estimated from position measures provided by a device on the platform. In practice, the platform may offer other means to measure the velocity with higher accuracy. Meanwhile, to check whether the performance degradation of the linear LS algorithm over a long period results from inaccurate estimation of the platform velocity, we run a simulation in which the estimated velocity $\hat{v}_g$ is replaced by its real value $v_g$. In this case, there is no velocity error, and the only measurement bias comes from the position measurement. Note that in the following, the linear LS algorithm shares the same position estimate as the truncated LS algorithm.
The RMSE of position estimation is shown in Figure 6. In Figure 6a,b, the sensor location uncertainty is zero-mean normally distributed with $C_s(k,n) = I$. For the linear LS algorithm, the RMSE curve with the real sensor velocity is clearly very close to the RMSE curve using the estimated sensor velocity. To examine whether a larger position measurement error makes a difference, we run another simulation with $C_s(k,n) = 10I$; the results are shown in Figure 6c,d, and the two RMSE curves remain very close. After further experiments with other position measurement errors, we find that the platform location and velocity regression algorithms can reach a high accuracy and thus do not cause much performance degradation. This conclusion depends heavily on the fact that the number of observations under consideration is large in our configurations.
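The platform location and velocity regression mentioned above amounts to a linear fit of the noisy position measures against time. A sketch follows; the helper name and array shapes are ours, not the paper's.

```python
import numpy as np

def fit_platform_track(t, positions):
    """Fit p(t) ~ p0 + v t by least squares, returning the interpolated
    position at t = 0 and the velocity.

    t: (K,) sampling instants; positions: (K, 3) noisy position measures.
    """
    A = np.column_stack([np.ones_like(t), t])   # design matrix [1, t_k]
    coef, *_ = np.linalg.lstsq(A, positions, rcond=None)
    return coef[0], coef[1]                     # p0, v
```

With many samples the fit averages out the per-sample position noise, which is why the regression reaches high accuracy in the configurations above.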
In fact, from (92), the position estimate of the target does not depend much on the velocity estimate of the sensor platform. There is still a minor impact, however, because in our simulation configuration the sensor location at $t = 0$ is obtained by an interpolation operation; if the platform velocity were exactly known a priori, the position estimation would be more accurate.
From (105), the target velocity estimation depends more strongly on the sensor velocity. To examine the impact of the sensor velocity estimation on the target velocity estimation performance, we run a simulation with $C_s(k,n) = 10I$; the results are shown in Figure 7a,b, in linear scale and in dB, respectively. Using the real sensor velocity instead of the estimated one makes a noticeable difference only for the first few observations; as more observations are taken into account, the replacement matters less, because more observations make the velocity estimation more accurate. However, accurate sensor velocity information does not necessarily improve the target velocity estimation, and sometimes its impact is slightly negative. As the target also moves at a constant velocity, it is reasonable to infer that the linear approximation of the signal DOA has a great impact on the target position and velocity estimation accuracy, as analyzed in the subsequent results.

3.4. Nonlinearity of the DOA Approximation

In order to examine the impact of the DOA nonlinearity on the final performance, Figure 7c,d show the azimuth and elevation angles of the target at the four sensors. The azimuth and elevation angles change by about 15° at most over 100 snapshots. Over 100 observations, corresponding to 2 s, the nonlinearity of both DOA angles becomes obvious; whether the nonlinearity is acceptable should be judged from explicit numerical quantities.
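The degree of DOA nonlinearity over a window can be quantified by fitting a line to the azimuth history and measuring the residual. The routine below uses a toy constant-velocity geometry of our own; it is not the paper's exact configuration.

```python
import numpy as np

def azimuth_nonlinearity_deg(t, sensor_p0, sensor_v, target_p0, target_v):
    """Azimuth (deg) of a constant-velocity target seen from a moving
    sensor, and the maximum deviation from the best linear-in-time fit."""
    rel = (target_p0 - sensor_p0) + np.outer(t, target_v - sensor_v)
    az = np.degrees(np.arctan2(rel[:, 1], rel[:, 0]))
    coef = np.polyfit(t, az, 1)               # linear DOA approximation
    return az, float(np.max(np.abs(az - np.polyval(coef, t))))
```

For a constant relative velocity of zero the azimuth is constant and the deviation vanishes; for moving geometries the deviation grows with the window length, which is the model mismatch discussed above.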

4. Conclusions

This paper studies the target position and velocity estimation problem with distributed passive sensors. The problem is formulated for distributed asynchronous sensors connected to a fusion center by communication links. We first present a gross LS algorithm that takes all angle observations from the distributed sensors into account to form an LS estimate. The algorithm is simplified after some matrix manipulations, but because it requires the local sensors to transmit all local observations to the fusion center, the communication cost is high, as is the computation cost at the fusion center; the communication cost is mainly a result of the high-dimensional received data. To reduce the communication and computation costs, we present a linear LS algorithm that approximates the local sensor locations and angle observations with linear models and then estimates the target position and velocity from the parameters of those models. To simplify the velocity estimation, we also present a truncated LS algorithm that uses a plain average operation to estimate the target velocity. In this manner, both the communication cost and the computation cost at the fusion center are reduced significantly. However, the linear LS algorithm and the truncated LS algorithm face a model mismatch problem: if the linear approximation is no longer accurate, the performance may degrade greatly. This is a difference from the gross LS algorithm, which always benefits from more observations as long as the linear target motion model holds.
The performance of the algorithms is verified with numerical results. With few observations, the truncated LS algorithm performs the best. As the number of observations increases, the linear LS algorithm and the gross LS algorithm perform better. With even more observations available, the linear model mismatches and the gross LS algorithm performs the best. The gross LS algorithm always benefits from more local observations, unlike the other two algorithms, at the price of higher communication and computation costs at the fusion center. We also examined the angle distortion problem, which is the only nonlinear term in the simulation configurations. Our matrix manipulations often reduce the computation cost of the estimation.
Compared to a localization-and-tracking framework, the algorithms with velocity estimation need a much lower rate of tracking operations, whose matrix inverse operations often carry a huge computation cost. Meanwhile, they can provide more accurate measures of the target states, from which the tracking algorithm will also benefit. In our simulations, the sensor location error is not taken into account, although in practice it is inevitable. If the self-positioning error follows a non-zero-mean Gaussian distribution, one may incorporate it in a distributed angle-only positioning algorithm, which will be considered in our future work.

Author Contributions

Conceptualization, S.Z. and Y.C.; derivation, S.Z.; numerical simulation, R.L., L.W.; simulation configuration, Y.C., X.P. and X.X.; supervision, X.P.; writing, S.Z. and L.W.; proof reading, S.G., X.S. and J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

This study did not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Measurement scenario of the passive sensors.
Figure 2. The topology of the sensors and the target.
Figure 3. The convergence curves of the positioning errors with different numbers of observations. (a) The RMSE of position estimation and its dB form (b); (c) The RMSE of velocity estimation and its dB form (d); (e) The gross RMSE and its dB form (f).
Figure 4. The first (a) and last (b) 20 estimates of the position and velocity of the gross LS, the linear LS and the truncated LS algorithms.
Figure 5. The computation time in 10 random simulations of the gross LS, linear LS and truncated LS algorithms, in scale (a) and dB (b). The program is run on a computer with an Intel™i7-10700 CPU and 16 GB memory.
Figure 6. The RMSE of position with velocity estimated replaced by real velocity in scale (a) and dB (b) for C s ( k , n ) = I , and in scale (c) and dB (d) for C s ( k , n ) = 10 I .
Figure 7. The target velocity estimation results with real and estimated sensor velocity are shown in (a) and (b) for C s ( k , n ) = 10 I . The azimuth (c) and elevation (d) angles of the target in the four sensors.
Table 1. Positions and velocities of sensors and the target.
            Position (m) at t = 0 s            Velocity (m/s)
Sensor #1   p_1^o(0) = [1000, 1000, 0]^T       [100, 0, 0]^T
Sensor #2   p_2^o(0) = [1000, 2000, 0]^T       [100, 80, 0]^T
Sensor #3   p_3^o(0) = [2000, 1000, 0]^T       [100, 50, 0]^T
Sensor #4   p_4^o(0) = [1500, 1500, 0]^T       [100, 60, 0]^T
Target #1   g_1^o(0) = [0, 100, 1000]^T        [200, 100, 0]^T