
Extended Target Tracking and Feature Estimation for Optical Sensors Based on the Gaussian Process

1 College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
2 Institute of Systems Engineering, AMS, PLA, Beijing 100141, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(7), 1704; https://doi.org/10.3390/s19071704
Submission received: 22 January 2019 / Revised: 30 March 2019 / Accepted: 8 April 2019 / Published: 10 April 2019
(This article belongs to the Special Issue Multiple Object Tracking: Making Sense of the Sensors)

Abstract
The problem of tracking a shape-shifting extended target by using grayscale pixels on an optical image is considered. Measurements with amplitude information (AI) are available to the proposed method. The target is regarded as a convex hemispheric object, and the amplitude distribution of the extended target is represented by a solid radial function. The Gaussian process (GP) is applied, and the covariance function of the GP is modified to fit the convex hemispheric shape. The points to be estimated on the target surface are selected in the hemispheric space along the azimuth and pitch directions. An analytical representation of the estimated target extent is provided, and the recursion is implemented by the extended Kalman filter (EKF). To demonstrate the algorithm's ability to track complex-shaped targets, a trailing target characterized by two feature parameters is simulated, and the two feature parameters are extracted from the estimated points. The simulations verify the validity of the proposed method compared with classical algorithms.

1. Introduction

The extended target tracking (ETT) problem has been widely studied to estimate target motion and shape. With the development of sensors, more information, such as amplitude and color, can be collected. This additional information should be fully utilized, since it improves tracking precision and carries richer target characteristics. Therefore, a demand has emerged for an effective algorithm that can take advantage of this additional information.
Many scholars have studied ETT problems from different aspects, such as shape modeling, measurement modeling and dynamic modeling. In this letter, we focus on the first aspect, i.e., shape models for an extended object. The current research can be divided into three categories, corresponding to three levels of shape complexity. No extent: the simplest approach is to ignore the target extent. Under the assumption that the measurements are distributed in the neighborhood of the target with a Poisson-distributed number, the Gaussian mixture probability hypothesis density (GM-PHD) filter was proposed [1,2]. Simple extent: a better modeling approach is to assume the target is a basic geometric shape, such as an ellipse, a circle or a rectangle. The most widely adopted method is based on a random matrix (RM), which represents the target extent as an ellipse [3,4,5,6,7]. For targets with non-elliptic shapes, a combination of multiple elliptical subobjects can be employed to approximate the real appearance [8,9,10]. A rectangular shape is usually adopted when tracking cars and other box-shaped objects [11,12,13]. Arbitrary extent: a more accurate and general approach is the random hypersurface model (RHM) [14,15,16], which models the extent as a star-convex shape [17]. RHMs can adapt to more flexible shapes than the RM. The Gaussian process (GP) method can also be employed to estimate a target with a highly nonlinear extent [18,19]. A more complete review of shape model methods for the ETT problem is presented in [20]. As mentioned under Simple extent, the classical RM, RHMs and their variations only output the two-dimensional contour of the target and cannot utilize extra information, while the GP has the potential to do so. The GP-based method can, in principle, estimate a target of any dimension owing to its ability to fit highly nonlinear functions.
GP has been adopted to learn the shape of a stationary three-dimensional object in [21,22], but neither work provides a recursive solution for a tracking application. A recursive solution for a cube-shaped object is provided in [23] using point measurements collected from the target surface, but the target shape is relatively simple and no comparisons to classical methods are given. In this letter, we extend the GP method to the ETT problem on the image plane of optical sensors. Since the traditional RM, RHMs and GP are unable to utilize amplitude information, we present a novel approach to overcome this limitation. The surface of an extended target is described with a solid radial function rather than a two-dimensional contour. To adapt to three-dimensional target tracking, the spherical distance between solid angles is adopted rather than the difference between two planar angles. The RM and RHMs need to parameterize the target extent and estimate those parameters, whereas this letter estimates selected points on the target surface, which can represent the target distribution feature; thus, the proposed method does not require any parametric model. The measurements with amplitude are converted into polar coordinates and used to estimate these points by the GP method with a modified covariance function. In addition, we use the estimated points to extract the feature parameters of the target distribution, which will benefit target identification and classification in future work.

2. GP Theory

The GP is a stochastic process in probability theory and mathematical statistics that combines a series of normally distributed random variables over an index set. Unknown complicated functions can be learned from training data. In this section, GP theory is directly applied to the tracking application; more details on the theory can be found in [18].

2.1. Basic GP

The GP is a typical stochastic process and is often used to model spatially correlated measurements [24]. With $t$ being the input, the distribution of a GP $f(t)$ is uniquely determined by its mean function $\mu(t)$ and covariance function $k(t, t')$ as
$$\mu(t) = \mathrm{E}[f(t)],$$
$$k(t, t') = \mathrm{E}\big[(f(t) - \mu(t))(f(t') - \mu(t'))^{\mathrm{T}}\big].$$
Then, the GP can be marked as
$$f(t) \sim \mathcal{GP}\big(\mu(t), k(t, t')\big).$$
For function values at the finite inputs $t_1, \dots, t_N$, the Gaussian process reduces to a multivariate Gaussian distribution, i.e.,
$$[f(t_1), \dots, f(t_N)]^{\mathrm{T}} \sim \mathcal{N}(\boldsymbol{\mu}, \mathbf{K}),$$
where
$$\boldsymbol{\mu} = [\mu(t_1), \dots, \mu(t_N)]^{\mathrm{T}}, \qquad \mathbf{K} = \begin{bmatrix} k(t_1, t_1) & \cdots & k(t_1, t_N) \\ \vdots & \ddots & \vdots \\ k(t_N, t_1) & \cdots & k(t_N, t_N) \end{bmatrix}.$$
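The finite-input view above can be sketched numerically. The following snippet (an illustrative aid; the squared-exponential kernel and its hyperparameters are assumed choices, not the modified kernel introduced later in this letter) builds the covariance matrix $\mathbf{K}$ for $N$ inputs and draws one realization of the GP:

```python
import numpy as np

# Squared-exponential covariance k(t, t') -- an illustrative kernel choice.
def sq_exp_kernel(t1, t2, sigma_f=1.0, ell=0.5):
    d = np.subtract.outer(t1, t2)
    return sigma_f**2 * np.exp(-d**2 / (2 * ell**2))

# For a finite set of inputs t_1 ... t_N, the GP reduces to an N-variate Gaussian.
t = np.linspace(0.0, 1.0, 50)
mu = np.zeros_like(t)                          # zero mean function
K = sq_exp_kernel(t, t)                        # N x N covariance matrix
rng = np.random.default_rng(0)
f = rng.multivariate_normal(mu, K + 1e-9 * np.eye(len(t)))  # one GP realization
```

The small diagonal jitter keeps the (nearly singular) kernel matrix numerically positive semi-definite when sampling.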

2.2. GP Regression

The distribution of a highly nonlinear function can be fitted with Gaussian process regression (GPR). In this subsection, GPR is adopted to model the target extent. In the tracking application, the function values $\mathbf{f} = [f(t_1^f), \dots, f(t_N^f)]^{\mathrm{T}}$ at the corresponding inputs $\mathbf{t}^f = [t_1^f, \dots, t_N^f]^{\mathrm{T}}$ are learned using the measurements $\mathbf{z} = [z_1, \dots, z_N]^{\mathrm{T}}$ and the corresponding inputs $\mathbf{t} = [t_1, \dots, t_N]^{\mathrm{T}}$. For a target with amplitude information, shown in Figure 1, the measurement $z$ is defined by $\{r_z, c_z, I_z\}$, where $r_z$, $c_z$ and $I_z$ represent the row, column and amplitude of the measurement in the grayscale image, respectively. In our application, the input is $t = \psi$, where $\psi \triangleq (\theta, \varphi)$ comprises the azimuth and pitch angles of a solid angle. The radius value $z_k$ and the function values $\mathbf{f}$ satisfy the joint Gaussian distribution
$$\begin{bmatrix} z_k \\ \mathbf{f} \end{bmatrix} \sim \mathcal{N}\left( \mathbf{0}, \begin{bmatrix} k(\psi_k, \psi_k) + R & \mathbf{K}(\psi_k, \boldsymbol{\psi}^f) \\ \mathbf{K}(\boldsymbol{\psi}^f, \psi_k) & \mathbf{K}(\boldsymbol{\psi}^f, \boldsymbol{\psi}^f) \end{bmatrix} \right).$$
According to GP theory, the likelihood and the initial prior are given as follows:
$$p(z_k \mid \mathbf{f}) = \mathcal{N}(z_k;\, \mathbf{H}_k^f \mathbf{f},\, R_k^f),$$
$$p(\mathbf{f}) = \mathcal{N}(\mathbf{0}, \mathbf{P}_0^f),$$
where
$$\mathbf{H}^f(\psi_k) = \mathbf{K}(\psi_k, \boldsymbol{\psi}^f)\,\mathbf{K}(\boldsymbol{\psi}^f, \boldsymbol{\psi}^f)^{-1},$$
$$R^f(\psi_k) = k(\psi_k, \psi_k) + R - \mathbf{K}(\psi_k, \boldsymbol{\psi}^f)\,\mathbf{K}(\boldsymbol{\psi}^f, \boldsymbol{\psi}^f)^{-1}\,\mathbf{K}(\boldsymbol{\psi}^f, \psi_k),$$
$$\mathbf{P}_0^f = \mathbf{K}(\boldsymbol{\psi}^f, \boldsymbol{\psi}^f), \qquad \boldsymbol{\psi}^f = (\boldsymbol{\theta}^f, \boldsymbol{\varphi}^f).$$
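The quantities $\mathbf{H}^f(\psi_k)$ and $R^f(\psi_k)$ above can be sketched as follows. This is an illustrative implementation; the squared-exponential kernel on scalar angles and its hyperparameters are assumptions for demonstration, not taken from this letter:

```python
import numpy as np

def kernel(a, b, sigma_f=2.0, ell=np.pi / 4):
    """Illustrative squared-exponential covariance on scalar angle inputs."""
    d = np.subtract.outer(a, b)
    return sigma_f**2 * np.exp(-d**2 / (2 * ell**2))

def gpr_matrices(psi_k, psi_f, R=1.0):
    """Observation model H^f(psi_k) and residual variance R^f(psi_k)."""
    K_kf = kernel(np.array([psi_k]), psi_f)                  # 1 x N_f
    K_ff = kernel(psi_f, psi_f) + 1e-9 * np.eye(len(psi_f))  # jitter for invertibility
    K_ff_inv = np.linalg.inv(K_ff)
    H = K_kf @ K_ff_inv                       # H^f = K(psi_k, psi^f) K(psi^f, psi^f)^-1
    k_kk = kernel(np.array([psi_k]), np.array([psi_k]))
    Rf = k_kk + R - K_kf @ K_ff_inv @ K_kf.T  # unexplained variance plus noise
    return H, Rf
```

At a training input, $R^f$ reduces (up to the jitter) to the measurement noise $R$, since the kernel then explains all of the prior variance.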

2.3. Recursive GPR

To realize real-time recursive processing, an approximation is usually adopted. Applying the Bayes formula to the posterior probability $p(\mathbf{f} \mid z_{1:N})$ gives
$$p(\mathbf{f} \mid z_{1:N}) \propto p(z_N \mid \mathbf{f}, z_{1:N-1}) \cdot p(\mathbf{f} \mid z_{1:N-1}) \propto \prod_{k=1}^{N} p(z_k \mid \mathbf{f}, z_{1:k-1}) \cdot p(\mathbf{f}).$$
It is then approximated that, conditioned on $\mathbf{f}$, the measurement $z_k$ is statistically independent of the previous measurements $z_{1:k-1}$, i.e.,
$$p(z_k \mid \mathbf{f}, z_{1:k-1}) = p(z_k \mid \mathbf{f}).$$
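Under this conditional-independence approximation, each measurement can be folded in sequentially with a Kalman-style update of the mean and covariance of $\mathbf{f}$. A minimal sketch (variable names are illustrative assumptions):

```python
import numpy as np

def recursive_gp_update(mu_f, P_f, z_k, H, Rf):
    """One Kalman-style measurement update of the GP mean and covariance."""
    S = H @ P_f @ H.T + Rf              # innovation covariance
    G = P_f @ H.T @ np.linalg.inv(S)    # gain
    mu_f = mu_f + G @ (z_k - H @ mu_f)  # updated mean of f
    P_f = P_f - G @ S @ G.T             # updated covariance of f
    return mu_f, P_f
```

Each call shrinks the posterior covariance in the direction observed by $H$, which is how the recursion accumulates information from the measurement stream.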

2.4. Covariance Function Modification

In the covariance function used in Ref. [18], the squared Euclidean distance between two planar angles measures the correlation, i.e.,
$$k(\psi, \psi') = \sigma_f^2 \exp\left( -\frac{d(\psi, \psi')}{2 l^2} \right) + \sigma_r^2,$$
$$d(\psi, \psi') = \lVert \psi - \psi' \rVert^2,$$
where $\sigma_f^2$ denotes the variance of the measurement amplitude, $l$ denotes the length scale of the function, and $\lVert \cdot \rVert^2$ denotes the squared Euclidean distance. Since the target amplitude surface is considered a convex hemisphere, the spherical distance is clearly more reasonable for measuring the correlation of two solid angles; it is expressed as
$$d(\psi, \psi') = \big( \arccos\left[ \cos\varphi \cos\varphi' \cos(\theta - \theta') + \sin\varphi \sin\varphi' \right] \big)^2.$$
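The modified covariance can be sketched directly from the two formulas above (the hyperparameter values are the ones used later in the simulations; the function names are our own):

```python
import numpy as np

def spherical_dist_sq(theta1, phi1, theta2, phi2):
    """Squared great-circle distance between two solid angles
    (theta: azimuth, phi: pitch)."""
    c = (np.cos(phi1) * np.cos(phi2) * np.cos(theta1 - theta2)
         + np.sin(phi1) * np.sin(phi2))
    return np.arccos(np.clip(c, -1.0, 1.0)) ** 2   # clip guards against rounding

def modified_kernel(psi, psi2, sigma_f=2.0, ell=np.pi / 4, sigma_r=2.0):
    """Covariance with the spherical distance substituted for the Euclidean one."""
    d2 = spherical_dist_sq(psi[0], psi[1], psi2[0], psi2[1])
    return sigma_f**2 * np.exp(-d2 / (2 * ell**2)) + sigma_r**2
```

Clipping the cosine to $[-1, 1]$ is a practical necessity: floating-point rounding can push the sum slightly outside the domain of $\arccos$ for nearly coincident angles.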

3. EKF Model

Kalman filters such as the EKF and the unscented Kalman filter (UKF) [25,26] are the most commonly used filters for nonlinear filtering. Since the EKF has lower computational complexity than the UKF, we employ the EKF as the Bayesian approximation in this section.

3.1. Measurement Amplitude Modification

Since the measurement amplitude and the target size differ in both units and magnitude, and since a better extent estimate requires the inputs $\boldsymbol{\psi}^f$ to be distributed as uniformly as possible over the target surface, the measurement amplitude and target size should be of the same order of magnitude. This means the amplitude information must be suitably rescaled. Let $S_k$ be the number of pixels covered by the target, and let $I_{z_k}$ denote the amplitude values of all target measurements collected from the optical image plane. We define a modification factor $\beta$ as
$$\beta = \frac{S_k}{\max(I_{z_k})}.$$
With the amplitude values multiplied by the modification factor $\beta$, the maximum modified amplitude is $\max(\tilde{I}_{z_k}) = \beta \cdot \max(I_{z_k}) = S_k$; after modification, the amplitude values and the target scale $S_k$ are thus at the same order of magnitude. The modified measurement is re-defined as
$$\tilde{z}_k \triangleq \{ r_{z_k}, c_{z_k}, \beta I_{z_k} \},$$
where $r_{z_k}$, $c_{z_k}$ and $I_{z_k}$ represent the row, column and amplitude of the measurement $z_k$. We replace $z_k$ with the modified measurement $\tilde{z}_k$ for the rest of this letter.
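The rescaling step above amounts to a few lines. In this sketch the amplitude values are hypothetical and serve only to show the effect of $\beta$:

```python
import numpy as np

def modify_amplitudes(I_z, S_k):
    """Rescale amplitudes so that their maximum equals the target pixel count S_k."""
    beta = S_k / np.max(I_z)       # modification factor
    return beta * I_z, beta

# Hypothetical grayscale amplitudes of the pixels covered by a target
I_z = np.array([0.2, 0.5, 1.0, 0.8])
S_k = len(I_z)                     # number of pixels covered by the target
I_mod, beta = modify_amplitudes(I_z, S_k)
# After modification, max(I_mod) equals S_k, so amplitude and target
# scale are at the same order of magnitude.
```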

3.2. Measurement Model with Amplitude

With amplitude information considered, the target dimension is augmented to three, and the target extent is regarded as a hemispherical convex surface. The extent can thus be represented in polar coordinates by a function of the solid angle $\psi = (\theta, \varphi)$ and radius $r = f(\psi)$. Two coordinate systems are adopted: the sensor coordinate system S is fixed to the sensor, and the local coordinate system L is fixed to the body of the extended target. The origin $o_L$ of the local system coincides with the center of mass of the target and lies on the image plane; the $x_L$ axis lies in the image plane and points in the target's moving direction, the $z_L$ axis is perpendicular to the plane, and $x_L$, $y_L$ and $z_L$ follow the right-hand rule. The target extent is represented in the local coordinate system, and the target center is represented in the sensor coordinate system.
As shown in Figure 1, the target measurement $\tilde{z}_{k,l}$ can be expressed in terms of the direction vector $p(x_k^c)$ and the radius $f(\psi_{k,l}^L(x_k^c, \zeta_k))$ relative to the centroid $x_k^c$:
$$\tilde{z}_{k,l} = x_k^c + p(x_k^c)\, f\big(\psi_{k,l}^L(x_k^c, \zeta_k)\big) + \bar{e}_{k,l},$$
where $\bar{e}_{k,l} \sim \mathcal{N}(0, \bar{R})$ and $p(x_k^c) = p\big(\psi_{k,l}^S(x_k^c)\big) = \frac{\tilde{z}_{k,l} - x_k^c}{\lVert \tilde{z}_{k,l} - x_k^c \rVert}$ is the unit vector pointing from the centroid $x_k^c$ to the measurement $\tilde{z}_{k,l}$.
Reviewing the likelihood given in Equation (7), $\mathbf{H}_k^f \mathbf{f}$ is the mean of the random variable $z_k$ and $R_k^f$ is its variance. Thus, the measurement $z_k$ can be expressed as
$$z_k = \mathbf{H}_k^f \mathbf{f} + e_k^f, \qquad e_k^f \sim \mathcal{N}(0, R_k^f).$$
In the tracking application, the target extent $x_k^f$ is an instantiation of the function values $\mathbf{f}$, i.e., $\mathbf{f} = x_k^f$. Similar to Equation (20), the radius in the direction $\psi_{k,l}^L(x_k^c, \zeta_k)$ can be expressed as
$$f\big(\psi_{k,l}^L(x_k^c, \zeta_k)\big) = \mathbf{H}^f\big(\psi_{k,l}^L(x_k^c, \zeta_k)\big)\, x_k^f + e_{k,l}^f,$$
where $\mathbf{H}^f(\cdot)$ denotes the observation model and $e_{k,l}^f$ its noise. Substituting Equation (21) into Equation (19), the measurement equation can be rewritten as
$$\begin{aligned} \tilde{z}_{k,l} &= x_k^c + p_{k,l}(x_k^c)\big[\mathbf{H}^f\big(\psi_{k,l}^L(x_k^c, \zeta_k)\big)\, x_k^f + e_{k,l}^f\big] + \bar{e}_{k,l} \\ &= \underbrace{x_k^c + \tilde{\mathbf{H}}_l(x_k^c, \zeta_k)\, x_k^f}_{\tilde{h}(x_k^c,\, \zeta_k,\, x_k^f)} + \underbrace{p_{k,l}(x_k^c)\, e_{k,l}^f + \bar{e}_{k,l}}_{\tilde{e}_{k,l}} \\ &= \tilde{h}_{k,l}(x_k) + \tilde{e}_{k,l}, \end{aligned}$$
where
$$\tilde{\mathbf{H}}_l(x_k^c, \zeta_k) = p_{k,l}(x_k^c)\, \mathbf{H}^f\big(\psi_{k,l}^L(x_k^c, \zeta_k)\big),$$
$$\mathbf{H}^f\big(\psi_{k,l}^L(x_k^c, \zeta_k)\big) = \mathbf{K}(\psi_{k,l}^L, \boldsymbol{\psi}^f)\,\big[\mathbf{K}(\boldsymbol{\psi}^f, \boldsymbol{\psi}^f)\big]^{-1},$$
$$x_k \triangleq (x_k^c, \zeta_k, x_k^f).$$
According to Equation (22), it is easy to obtain the new noise covariance as follows:
$$\tilde{R}_{k,l} = p_{k,l}\, R_{k,l}^f\, p_{k,l}^{\mathrm{T}} + \bar{R}_{k,l},$$
$$R_{k,l}^f = R^f\big(\psi_{k,l}^L(x_k^c, \zeta_k)\big),$$
where $R_{k,l}^f$ is the covariance of the noise $e_{k,l}^f$, and $\bar{R}_{k,l}$ is the covariance of the noise $\bar{e}_{k,l}$.

3.3. Motion Model

We model the target state as $x_k = (\bar{x}_k, x_k^f)^{\mathrm{T}}$, where $\bar{x}_k = (x_k^c, \zeta_k)$ contains the centroid $x_k^c$ and the orientation $\zeta_k$ of the extended target, and $x_k^f$ is the extent. Because the dynamics of the centroid differ from those of the extent, the two states are modeled separately. The centroid $\bar{x}_k$ follows the constant-velocity model:
$$\bar{x}_{k+1} = \bar{F}\bar{x}_k + \bar{\omega}_k, \qquad \bar{\omega}_k \sim \mathcal{N}(0, \bar{Q}),$$
where the transition matrix F ¯ and covariance matrix Q ¯ for target centroid x ¯ k are given as
$$\bar{F} = \begin{bmatrix} 1 & T_s \\ 0 & 1 \end{bmatrix} \otimes I_3, \qquad \bar{Q} = \begin{bmatrix} T_s^3/3 & T_s^2/2 \\ T_s^2/2 & T_s \end{bmatrix} \otimes \begin{bmatrix} \sigma_q^2 & 0 & 0 \\ 0 & \sigma_q^2 & 0 \\ 0 & 0 & \sigma_{q\zeta}^2 \end{bmatrix},$$
where $T_s$ denotes the sampling period, $\otimes$ denotes the Kronecker product, $I_3$ denotes the third-order identity matrix, and $\sigma_q^2$ and $\sigma_{q\zeta}^2$ represent the process variances of the centroid and orientation, respectively. The dynamics of the target extent $x_k^f$ are as follows:
$$x_{k+1}^f = F^f x_k^f + \omega_k^f, \qquad \omega_k^f \sim \mathcal{N}(0, Q^f),$$
where the extent transition matrix and covariance matrix are presented as
$$F^f = \exp(-\alpha T_s)\, I,$$
$$Q^f = \big(1 - \exp(-2\alpha T_s)\big) \cdot \mathbf{K}(\boldsymbol{\psi}^f, \boldsymbol{\psi}^f),$$
where α is the forgetting factor.
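The forgetting-factor dynamics above can be sketched in a few lines. Note the design property of this pair: the prior kernel matrix is the stationary covariance, i.e., $F^f \mathbf{K} (F^f)^{\mathrm{T}} + Q^f = \mathbf{K}$, so old shape information decays while the prior uncertainty is preserved (function and variable names here are our own):

```python
import numpy as np

def extent_dynamics(K_ff, alpha=1e-4, Ts=1.0):
    """Extent transition F^f and process covariance Q^f with forgetting factor alpha."""
    n = K_ff.shape[0]
    Ff = np.exp(-alpha * Ts) * np.eye(n)            # exponential forgetting
    Qf = (1.0 - np.exp(-2.0 * alpha * Ts)) * K_ff   # keeps K as stationary covariance
    return Ff, Qf
```

With $\alpha = 0$ the extent is static ($F^f = I$, $Q^f = 0$); larger $\alpha$ lets the filter track a shape-shifting target at the cost of discarding old measurements faster.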
Finally, the transition model of the augmented state, i.e., the combined dynamics of the centroid state $\bar{x}_k$ and the extent $x_k^f$, is given as
$$x_{k+1} = F x_k + \omega_k, \qquad \omega_k \sim \mathcal{N}(0, Q),$$
where
$$x_k = [\bar{x}_k, x_k^f]^{\mathrm{T}}, \qquad F = \begin{bmatrix} \bar{F} & 0 \\ 0 & F^f \end{bmatrix}, \qquad Q = \begin{bmatrix} \bar{Q} & 0 \\ 0 & Q^f \end{bmatrix}.$$
The initial state $x_0$ is distributed as
$$x_0 \sim \mathcal{N}(\mu_0, P_0), \qquad \mu_0 = [\bar{\mu}_0, \mathbf{0}]^{\mathrm{T}}, \qquad P_0 = \begin{bmatrix} \bar{P}_0 & 0 \\ 0 & P_0^f \end{bmatrix},$$
where $\bar{\mu}_0$ and $\bar{P}_0$ denote the mean and covariance of $\bar{x}_0$, and $P_0^f$ denotes the covariance of $x_k^f$, which can be calculated by Equation (22).
So far, the state-space description has been provided in Equations (22) and (33). The estimation can now be performed using a nonlinear filter; the EKF is employed in this letter. The calculation of the Jacobian matrix is discussed in detail in the appendix of Ref. [18].
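The centroid transition matrices above might be assembled as follows; this is an illustrative sketch of the Kronecker-product construction with our own function names, using the noise values from the later simulation setting as defaults:

```python
import numpy as np

def cv_transition(Ts=1.0, sigma_q=1.0, sigma_qz=0.1):
    """Constant-velocity transition and process covariance for the centroid state
    (position, orientation and their rates), built with Kronecker products."""
    Fb = np.kron(np.array([[1.0, Ts], [0.0, 1.0]]), np.eye(3))
    Qb = np.kron(np.array([[Ts**3 / 3, Ts**2 / 2], [Ts**2 / 2, Ts]]),
                 np.diag([sigma_q**2, sigma_q**2, sigma_qz**2]))
    return Fb, Qb
```

The Kronecker form makes the block structure explicit: the upper-right block of $\bar{F}$ is $T_s I_3$, coupling each position/orientation component to its rate.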

4. Simulation and Analysis

4.1. Simulation Setting

An extended target with grayscale information moves in a $300 \times 300$ surveillance area. The total simulation time is 100 s with a scan cycle of $T_s = 1$ s. The grayscale amplitude along the velocity direction of the target follows the type I extreme value distribution with scale parameter $\sigma_{ex}$, and the amplitude in the direction perpendicular to the velocity obeys a Gaussian distribution with standard deviation $\sigma_{gau}$. A view of the amplitude distribution model of the target is shown in Figure 2.
The standard deviations of the process noise on position and angle are set to $\sigma_q = 1$ and $\sigma_{q\zeta} = 0.1$, respectively, and the forgetting factor is $\alpha = 0.0001$. The standard deviations of the measurement noise on position and amplitude are set to $\sigma_r = 1$ and $\sigma_I = 0.1$. The hyperparameters of the GP are set to $\sigma_f = 2$, $\sigma_r = 2$ and $l = \pi/4$. Pixels with amplitude greater than $Th_{gray} = 0.01$ are retained as target measurements $z$; the others are removed. Each simulation is evaluated over 50 Monte Carlo runs.

4.2. Feature Estimation Steps

In addition to estimating the target centroid $x_k^c$, another important goal is to extract the target feature parameters from the estimated values $x_k^f$ in all directions. In this study, the steps shown in Figure 3 are designed to extract the target distribution feature parameters $\sigma_{ex}$ and $\sigma_{gau}$. In this feature estimation method, the estimated points along the velocity direction and the direction perpendicular to it are extracted and fitted with the functions $f_1$ and $f_2$ given below. After the fitting step, the center estimate $(\hat{x}_c^L, \hat{y}_c^L)$ and the feature parameters $\sigma_{ex}$ and $\sigma_{gau}$ are easily obtained.
For the estimation step, too many inputs $\boldsymbol{\psi}^f$ increase the computational cost and can make the covariance matrix $\mathbf{K}(\boldsymbol{\psi}^f, \boldsymbol{\psi}^f)$ non-invertible; however, if the number of inputs is small, the estimation accuracy of the target extent cannot be guaranteed. As a compromise, we set the input numbers as follows. For the proposed GP with AI, eight points are uniformly sampled over $[0, 2\pi)$ as azimuths $\boldsymbol{\theta}^f$ and five points are uniformly sampled over $[0, \pi/2]$ as pitches $\boldsymbol{\varphi}^f$. For the traditional GP, eight points are uniformly sampled over $[0, 2\pi)$. Since the number of Fourier coefficients in RHMs must be odd, and to keep the number of estimated parameters as close as possible to that of the GP, we set the number of coefficients to 9.
In the extraction and fitting step, the feature parameters $\sigma_{ex}$ and $\sigma_{gau}$ are obtained by fitting the estimated points along the velocity direction and the direction perpendicular to it. Since the distribution of the trailing target along these directions follows the type I extreme value distribution and the Gaussian distribution, respectively, the functions $f_1$ and $f_2$ are given as
$$f_1 = A_1 \cdot \frac{1}{\sigma_{ex}} \exp\left( -\frac{x - \mu_{ex}}{\sigma_{ex}} \right) \exp\left( -\exp\left( -\frac{x - \mu_{ex}}{\sigma_{ex}} \right) \right) \quad \text{(type I extreme value distribution)},$$
$$f_2 = A_2 \cdot \frac{1}{\sqrt{2\pi}\,\sigma_{gau}} \exp\left( -\frac{(x - \mu_{gau})^2}{2\sigma_{gau}^2} \right) \quad \text{(Gaussian distribution)}.$$
Subsequently, $\hat{x}_c^L$ and $\hat{y}_c^L$, corresponding to the maximum value of each fitting curve in Figure 1, give the target position in the local coordinate system.
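The two profile functions and the fitting idea can be sketched as follows. The fitting here is a coarse least-squares grid search over synthetic points, a simple stand-in for the letter's fitting step; the data and the search range are assumptions:

```python
import numpy as np

def f1(x, A1, mu_ex, s_ex):
    """Type I extreme value (Gumbel) profile along the velocity direction."""
    z = (x - mu_ex) / s_ex
    return A1 / s_ex * np.exp(-z) * np.exp(-np.exp(-z))

def f2(x, A2, mu_g, s_g):
    """Gaussian profile perpendicular to the velocity direction."""
    return A2 / (np.sqrt(2.0 * np.pi) * s_g) * np.exp(-(x - mu_g)**2 / (2.0 * s_g**2))

# Recover sigma_ex from synthetic "estimated points" by least-squares grid search.
x = np.linspace(-10.0, 10.0, 81)
y = f1(x, A1=200.0, mu_ex=0.0, s_ex=3.5)    # noise-free synthetic points
grid = np.arange(0.5, 6.0, 0.01)
errs = [np.sum((f1(x, 200.0, 0.0, s) - y)**2) for s in grid]
s_hat = grid[int(np.argmin(errs))]          # close to the true 3.5
```

A nonlinear least-squares routine would normally replace the grid search; the sketch only illustrates that the scale parameter is identifiable from the fitted profile.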

4.3. Shape Deformation

The target moves at a constant speed with initial centroid state $[31, 23, 1.2, 1.1]^{\mathrm{T}}$. The scale parameter is $\sigma_{ex} = 3.5$ and becomes 2.5 at the 50th second; the standard deviation is $\sigma_{gau} = 1.5$ and becomes 2 at the 50th second. The sum of the target grayscale is 200. Performance is measured by tracking precision and feature parameter estimation accuracy. Figure 4 shows a snapshot of the tracking process; for display convenience, a bias of 20 pixels is added on the y-axis of the estimate. It can be observed that the shape estimate adapts progressively to the measurements.
Figure 5 compares the tracking precision of the RHMs, the GP and the proposed GP with AI. It can be seen that the proposed method outperforms the RHMs and the GP. Because the RHMs and the traditional GP can only use the position information of the measurements, each pixel contributes equally to the target centroid, so the traditional methods can only estimate the center of form; the proposed GP with AI, by contrast, can estimate the center of mass once amplitude information is introduced. The estimates of the target features $\sigma_{ex}$ and $\sigma_{gau}$ are given in Figure 6, where the red dotted lines denote the ground truth and the blue lines denote the estimates. The proposed method tracks the ground truth well during the first 50 s. Even though the target extent changes abruptly at the 50th second, the proposed method approximates the ground truth gradually as new measurements accumulate, which demonstrates its ability to adapt to a shape-shifting extended target. A deviation between the estimates and the ground truth remains, since systematic error is introduced by truncating the measurement amplitudes at $Th_{gray}$ and by the fitting step.

4.4. Motion Change

The scale parameter is $\sigma_{ex} = 3.5$ and the standard deviation is $\sigma_{gau} = 1.5$; the sum of the target grayscale is 200. The target follows the motion model of Equation (38) from the 20th to the 70th second and the constant-velocity model during the rest of the simulation:
$$F = \begin{bmatrix} 1 & 0 & \sin(\omega T_s)/\omega & -(1 - \cos(\omega T_s))/\omega \\ 0 & 1 & (1 - \cos(\omega T_s))/\omega & \sin(\omega T_s)/\omega \\ 0 & 0 & \cos(\omega T_s) & -\sin(\omega T_s) \\ 0 & 0 & \sin(\omega T_s) & \cos(\omega T_s) \end{bmatrix},$$
where $\omega$ is the turn rate. Figure 7 shows a snapshot of a target with turn rate $\omega = \pi/80$. Figure 8 shows the feature parameter estimation results for targets moving with $\omega = \pi/80$, $\pi/100$ and $\pi/140$, respectively.
It can be seen that the feature parameter estimation performance degrades as the target maneuverability increases.
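The coordinated-turn transition used in this scenario can be sketched directly from the matrix above (the minus signs follow the standard coordinated-turn model for a state ordered as position then velocity; the function name is our own):

```python
import numpy as np

def ct_transition(omega, Ts=1.0):
    """Coordinated-turn transition matrix for state [x, y, vx, vy]."""
    s, c = np.sin(omega * Ts), np.cos(omega * Ts)
    return np.array([[1.0, 0.0, s / omega, -(1.0 - c) / omega],
                     [0.0, 1.0, (1.0 - c) / omega, s / omega],
                     [0.0, 0.0, c, -s],
                     [0.0, 0.0, s, c]])
```

The lower-right $2 \times 2$ block is a rotation of the velocity by $\omega T_s$; as $\omega \to 0$ the matrix approaches the constant-velocity model.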

4.5. Intensity Change

The scale parameter is $\sigma_{ex} = 3.5$ and the standard deviation is $\sigma_{gau} = 1.5$. The sum of the target grayscale changes from 200 to 500 at the 50th second. Figure 9 shows a snapshot of the intensity change and Figure 10 shows the feature estimation results.
It can be seen that, although the parameter estimates deviate considerably when the intensity changes abruptly at the 50th second, they converge back to the ground truth as long as the target intensity subsequently holds steady.

5. Conclusions

In this letter, a GP-based method for tracking an extended target using amplitude information is proposed, which estimates the surface shape and kinematic state simultaneously. The benefit of utilizing amplitude information is demonstrated in comparison with traditional RHMs and GP methods: the tracking error of the proposed method is about 0.5 pixels lower than that of the RHMs and GP. Furthermore, using the estimated points on the target surface, we extract the target feature parameters, which can be used for target classification in further study; the estimation errors of the feature parameters $\sigma_{ex}$ and $\sigma_{gau}$ converge to 0.17 and 0.1, respectively. In the challenging scenarios of Section 4.4 and Section 4.5, estimation performance is temporarily degraded by abrupt changes in target maneuvering and intensity, but the method recovers, which demonstrates its robustness. The tracking performance improvement is mainly due to the introduction of amplitude information; if the amplitude information is ignored, the method degrades to the traditional GP, and its performance is no different from that of the GP and RHM methods. Since the proposed method does not require any parametric model, it is flexible enough to estimate any convex hemisphere-shaped target.

Author Contributions

H.Y. and W.A. conceived and designed the experiments; H.Y. performed the experiments; H.Y. and R.Z. analyzed the data; H.Y. wrote the paper; R.Z. and W.A. revised the paper.

Funding

This work was supported by the National Natural Science Foundation of China (No. 61605242).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gilholm, K.; Maskell, S.; Salmond, D.; Godsill, S. Poisson models for extended target and group tracking. In Proceedings of the Signal and Data Processing of Small Targets 2005, San Diego, CA, USA, 31 July–4 August 2005. [Google Scholar]
  2. Granström, K.; Lundquist, C.; Orguner, U. A Gaussian mixture PHD filter for extended target tracking. In Proceedings of the IEEE International Conference on Information Fusion, Edinburgh, UK, 26–29 July 2011. [Google Scholar]
  3. Orguner, U. A variational measurement update for extended target tracking with random matrices. IEEE Trans. Signal Process. 2012, 60, 3827–3834. [Google Scholar] [CrossRef]
  4. Feldmann, M.; Franken, D.; Koch, W. Tracking of extended objects and group targets using random matrices. IEEE Trans. Signal Process. 2011, 59, 1409–1420. [Google Scholar] [CrossRef]
  5. Koch, W. Bayesian approach to extended object and cluster tracking using random matrices. IEEE Trans. Aerosp. Electron. Syst. 2008, 44, 1042–1059. [Google Scholar] [CrossRef]
  6. Lan, J.; Li, X.R. Tracking of extended object or target group using random matrix—Part I: New model and approach. In Proceedings of the International Conference on Information Fusion, Singapore, 9–12 July 2012; pp. 2177–2184. [Google Scholar]
  7. Schuster, M.; Reuter, J.; Wanielik, G. Probabilistic data association for tracking extended group targets under clutter using random matrices. In Proceedings of the International Conference on Information Fusion, Washington, DC, USA, 6–9 July 2015; pp. 961–968. [Google Scholar]
  8. Lan, J.; Li, X.R. Tracking of extended object or target group using random matrix—Part II: Irregular object. In Proceedings of the International Conference on Information Fusion, Singapore, 9–12 July 2012; pp. 2185–2192. [Google Scholar]
  9. Lan, J.; Li, X.R. Tracking of Maneuvering Non-Ellipsoidal Extended Object or Target Group Using Random Matrix. IEEE Trans. Signal Process. 2014, 62, 2450–2463. [Google Scholar] [CrossRef]
  10. Granström, K.; Willett, P.; Bar-Shalom, Y. An extended target tracking model with multiple random matrices and unified kinematics. In Proceedings of the International Conference on Information Fusion, Washington, DC, USA, 6–9 July 2015. [Google Scholar]
  11. Granström, K.; Lundquist, C.; Orguner, U. Tracking rectangular and elliptical extended targets using laser measurements. In Proceedings of the International Conference on Information Fusion, Chicago, IL, USA, 5–8 July 2011. [Google Scholar]
  12. Granström, K.; Reuter, S.; Meissner, D.; Scheel, A. A multiple model PHD approach to tracking of cars under an assumed rectangular shape. In Proceedings of the International Conference on Information Fusion, Salamanca, Spain, 7–10 July 2014. [Google Scholar]
  13. Granström, K.; Lundquist, C. On the Use of Multiple Measurement Models for Extended Target Tracking. In Proceedings of the International Conference on Information Fusion, Istanbul, Turkey, 9–12 July 2013. [Google Scholar]
  14. Baum, M.; Hanebeck, U. Random hypersurface models for extended object tracking. In Proceedings of the IEEE International Symposium on Signal Processing and Information Technology, Ajman, UAE, 14–17 December 2010. [Google Scholar]
  15. Baum, M.; Hanebeck, U. Shape tracking of extended objects and group targets with starconvex RHMs. In Proceedings of the International Conference on Information Fusion, Chicago, IL, USA, 5–8 July 2011. [Google Scholar]
  16. Steinbring, J.; Baum, M.; Zea, A.; Faion, F.; Hanebeck, U. A Closed-Form Likelihood for Particle Filters to Track Extended Objects with Star-Convex RHMs. In Proceedings of the IEEE International Conference on Multisensor Fusion and Information Integration, San Diego, CA, USA, 14–16 September 2015. [Google Scholar]
  17. Özkan, E.; Wahlström, N.; Godsill, S. Rao-blackwellised particle filter for star-convex extended target tracking models. In Proceedings of the International Conference on Information Fusion, Heidelberg, Germany, 5–8 July 2016. [Google Scholar]
  18. Wahlström, N.; Özkan, E. Extended target tracking using Gaussian processes. IEEE Trans. Signal Process. 2015, 63, 4165–4178. [Google Scholar] [CrossRef]
  19. Hirscher, T.; Scheel, A.; Reuter, S.; Dietmayer, K. Multiple extended object tracking using Gaussian processes. In Proceedings of the International Conference on Information Fusion, Heidelberg, Germany, 5–8 July 2016. [Google Scholar]
  20. Granström, K.; Baum, M.; Reuter, S. Extended Object Tracking: Introduction, Overview and Applications. arXiv, 2016; arXiv:1604.00970. [Google Scholar]
  21. Mahler, J.; Patil, S.; Kehoe, B.; Van Den Berg, J.; Ciocarlie, M.; Abbeel, P.; Goldberg, K. Gp-gpis-opt: Grasp planning with shape uncertainty using Gaussian process implicit surfaces and sequential convex programming. In Proceedings of the IEEE International Conference on Robotics and Automation, Seattle, WA, USA, 26–30 May 2015; pp. 4919–4926. [Google Scholar]
  22. Martens, W.; Poffet, Y.; Soria, P.R.; Fitch, R.; Sukkarieh, S. Geometric priors for Gaussian process implicit surfaces. IEEE Robot. Autom. Lett. 2017, 2, 373–380. [Google Scholar] [CrossRef]
  23. Kumru, M.; Özkan, E. 3D Extended Object Tracking Using Recursive Gaussian Processes. In Proceedings of the 21st International Conference on Information Fusion, Cambridge, UK, 10–13 July 2018; pp. 1–8. [Google Scholar]
  24. Huber, M.F. Recursive Gaussian process regression. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 3362–3366. [Google Scholar]
  25. Chen, X.; Wang, X.; Xuan, J. Tracking Multiple Moving Objects Using Unscented Kalman Filtering Techniques. arXiv, 2018; arXiv:1802.01235. [Google Scholar]
  26. Chen, X.; Wang, W.; Meng, W.; Zhang, Z. A novel UKF based scheme for GPS signal tracking in high dynamic environment. In Proceedings of the International Symposium on Systems and Control in Aeronautics and Astronautics, Harbin, China, 8–10 June 2010. [Google Scholar]
Figure 1. Sketch of the target surface described by a solid radial function.
Figure 2. A view of the target model and its projection on the image plane.
Figure 3. Flow chart of feature extraction.
Figure 4. Snapshots of tracking at time step k = 30, 45, 60 and 75.
Figure 5. Comparison of tracking precision between RHMs, GP and the proposed GP with AI.
Figure 6. Estimation of $\sigma_{ex}$ and $\sigma_{gau}$ vs. ground truth for a shape deformation scenario.
Figure 7. Snapshots of target motion with turn rate $\omega = \pi/80$.
Figure 8. Estimation of $\sigma_{ex}$ and $\sigma_{gau}$ with a different turn rate.
Figure 9. Snapshots of a tracking scene of target intensity change.
Figure 10. Estimation of $\sigma_{ex}$ and $\sigma_{gau}$ for an intensity change scenario.

Share and Cite

MDPI and ACS Style

Yu, H.; An, W.; Zhu, R. Extended Target Tracking and Feature Estimation for Optical Sensors Based on the Gaussian Process. Sensors 2019, 19, 1704. https://doi.org/10.3390/s19071704
