An Iterative Nonlinear Filter Using Variational Bayesian Optimization
Abstract
1. Introduction
2. Problem Formulation
3. Evidence Lower Bound Maximization
4. Proximal Iterative Nonlinear Filter
4.1. Penalty Function Based on KL Divergence
4.2. The Proximal Iterative Nonlinear Filter
Algorithm 1. The implementation of the PEKF-VB algorithm.
4.3. Remarks
- The VB method approximates the true posterior PDF by selecting from a parameterized family of variational distributions. In each iteration of the PEKF-VB, the ELBO (9) increases monotonically, so the ELBO is a proper criterion for measuring the progress of the variational optimization. The standard form of the ELBO maximized by the proposed nonlinear filter is recalled in the first sketch following these remarks.
- Apart from the KL divergence, Calvo and Oller's distance (COD) [28] can be used as the penalty function in Equation (13); the corresponding filter is denoted CODEKF. The COD measures the distance between two Gaussian distributions through an embedding into the Siegel group [28]; a hedged sketch of this construction is given after these remarks.
- Since both PEKF-VB and CODEKF iterate within the VB framework to minimize the divergence between the variational distribution and the posterior PDF, their computational complexity is increased by the need to recompute the Jacobian in each iteration (a concrete re-linearization example is sketched at the end of Section 5).
- In PEKF-VB, we use the KL divergence to measure the similarity between two distributions. Under Gaussian assumptions, a closed-form solution for the variational distribution has been derived. The VB framework with the KL divergence also applies to non-Gaussian distributions; if no closed form exists, a Monte Carlo method can be used to approximate the divergence (both cases are illustrated in the last sketch below). Other measures of dissimilarity between probability distributions, such as the alpha divergence, the Rényi divergence and the alpha-beta divergence, can also be used in the VB framework; see [29] and references therein. In general, however, no computationally tractable form of the variational distribution can then be derived, and a Monte Carlo method has to be employed.
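For reference, the generic ELBO decomposition underlying Equation (9) can be written as follows (the symbols here are generic, with y the measurements and x the state, and are not the paper's exact notation):

$$
\mathcal{L}(q) \;=\; \mathbb{E}_{q(x)}\bigl[\log p(y, x)\bigr] \;-\; \mathbb{E}_{q(x)}\bigl[\log q(x)\bigr]
\;=\; \log p(y) \;-\; \mathrm{KL}\bigl(q(x)\,\|\,p(x \mid y)\bigr),
$$

so increasing the ELBO in each PEKF-VB iteration is equivalent to decreasing the KL divergence between the variational distribution and the true posterior, which is why the ELBO is a sensible convergence criterion.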
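The following is a minimal numerical sketch of the COD, assuming the usual Calvo-Oller embedding of a Gaussian N(mu, P) into a symmetric positive-definite matrix and a spectral distance between the embedded matrices; the exact definition and normalization are those of [28], and the function name and the 1/2 factor below are assumptions of this sketch rather than the paper's notation.

```python
import numpy as np
from scipy.linalg import eigvals

def calvo_oller_distance(mu1, P1, mu2, P2):
    """Sketch of Calvo and Oller's distance (COD) between N(mu1, P1) and N(mu2, P2).

    Each Gaussian is embedded as the SPD matrix
        S = [[P + mu mu^T, mu],
             [mu^T,          1]],
    and the distance is computed from the generalized eigenvalues of (S1, S2).
    The normalization (the 1/2 factor) is an assumption; see [28] for the exact form.
    """
    def embed(mu, P):
        mu = np.asarray(mu, dtype=float).reshape(-1, 1)
        top = np.hstack([P + mu @ mu.T, mu])
        bottom = np.hstack([mu.T, np.ones((1, 1))])
        return np.vstack([top, bottom])

    S1, S2 = embed(mu1, P1), embed(mu2, P2)
    lam = np.real(eigvals(S1, S2))  # eigenvalues of S2^{-1} S1, all positive for SPD inputs
    return float(np.sqrt(0.5 * np.sum(np.log(lam) ** 2)))
```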
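Below is a minimal sketch of the two computations mentioned in the last remark: the closed-form KL divergence between two multivariate Gaussians, and a generic Monte Carlo estimate for the case where no closed form is available. Variable names are illustrative and not the paper's notation.

```python
import numpy as np

def kl_gaussian(mu0, P0, mu1, P1):
    """Closed-form KL( N(mu0, P0) || N(mu1, P1) ) between multivariate Gaussians."""
    k = mu0.size
    P1_inv = np.linalg.inv(P1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(P1_inv @ P0)
                  + diff @ P1_inv @ diff
                  - k
                  + np.log(np.linalg.det(P1) / np.linalg.det(P0)))

def kl_monte_carlo(log_q, log_p, sample_q, n_samples=10_000, rng=None):
    """Monte Carlo estimate of KL(q || p) = E_q[ log q(x) - log p(x) ].

    log_q, log_p : callables returning the log densities at a point x
    sample_q     : callable drawing one sample from q given a numpy Generator
    """
    rng = np.random.default_rng() if rng is None else rng
    samples = [sample_q(rng) for _ in range(n_samples)]
    return float(np.mean([log_q(x) - log_p(x) for x in samples]))
```

In an iterative filter, the closed-form routine can be used to monitor how the divergence between successive Gaussian variational iterates decreases, while the Monte Carlo version covers non-Gaussian models at the cost of sampling.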
5. Numerical Simulations
- As expected, with a fixed sensor trajectory, PEKF-VB extracts the most target information from the sensor measurements and achieves a lower RMSE than the EKF and UKF. Under the VB framework, the variational distribution approaches the true posterior PDF through the iterations of the proximal filter.
- The RMSE of CODEKF is also lower than that of the EKF and UKF because the Jacobian matrix of Equation (37) is updated in each iteration to minimize the COD. However, the RMSE of CODEKF is slightly higher than that of PEKF-VB.
- In the first few scans, the performance of the four filters is comparable. This is because, in this bearings-only tracking problem, the measurements accumulated over these scans do not yet provide enough information to any of the filters. The performance of CODEKF and PEKF-VB suffers when measurement data are very limited. As more measurements accumulate, both CODEKF and PEKF-VB extract more information through the iteration process, resulting in superior performance, as the sketch below illustrates.
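To make the re-linearization point of Section 4.3 concrete, the following sketch performs an iterated Gaussian measurement update for a bearings-only measurement, re-evaluating the Jacobian at each iterate. This is a generic iterated-EKF-style update, not the authors' exact PEKF-VB or CODEKF recursion; the state layout [px, py, vx, vy], the sensor position argument, and the iteration count are assumptions of the sketch. A single pass reduces to the standard EKF update, which is consistent with the EKF's time ratio of 1 in the comparison table.

```python
import numpy as np

def bearing_h(x, sensor_pos):
    """Bearings-only measurement: angle from the sensor to the target."""
    dx, dy = x[0] - sensor_pos[0], x[1] - sensor_pos[1]
    return np.arctan2(dy, dx)

def bearing_jacobian(x, sensor_pos):
    """Jacobian of the bearing w.r.t. the state [px, py, vx, vy]."""
    dx, dy = x[0] - sensor_pos[0], x[1] - sensor_pos[1]
    r2 = dx ** 2 + dy ** 2
    return np.array([[-dy / r2, dx / r2, 0.0, 0.0]])

def iterated_update(x_pred, P_pred, z, R, sensor_pos, n_iter=5):
    """Iterated, re-linearized Gaussian measurement update (iterated-EKF style).

    Each pass re-evaluates the Jacobian at the current estimate; this is the
    per-iteration cost noted in Section 4.3. With n_iter=1 the update reduces
    to the standard EKF measurement update.
    """
    x, P = x_pred.copy(), P_pred.copy()
    for _ in range(n_iter):
        H = bearing_jacobian(x, sensor_pos)             # re-linearize at the current iterate
        S = H @ P_pred @ H.T + R                        # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)             # Kalman gain
        resid = (z - bearing_h(x, sensor_pos) + np.pi) % (2 * np.pi) - np.pi  # wrapped bearing residual
        x = x_pred + (K @ (resid - H @ (x_pred - x))).ravel()  # Gauss-Newton style iterate
        P = P_pred - K @ S @ K.T                        # equivalent to (I - K H) P_pred
    return x, P
```

For example, with z a noisy bearing, R = np.array([[sigma ** 2]]), and a predicted state and covariance from the motion model, calling iterated_update(x_pred, P_pred, z, R, sensor_pos) performs five re-linearized passes.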
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
Appendix A. Derivations of and
Appendix B. Derivation of ELBO
References
1. Bar-Shalom, Y.; Li, X.R. Estimation and Tracking: Principles, Techniques, and Software; Artech House: Norwood, MA, USA, 1996.
2. Van Trees, H.L. Detection, Estimation, and Modulation Theory, Part I; John Wiley & Sons: New York, NY, USA, 2003.
3. Farina, A.; Ristic, B.; Benvenuti, D. Tracking a ballistic target: Comparison of several nonlinear filters. IEEE Trans. Aerosp. Electron. Syst. 2002, 38, 854–867.
4. Hu, J.; Wang, Z.; Gao, H.; Stergioulas, L.K. Extended Kalman filtering with stochastic nonlinearities and multiple missing measurements. Automatica 2012, 48, 2007–2015.
5. Hu, X.; Bao, M.; Zhang, X.; Guan, L.; Hu, Y. Generalized iterated Kalman filter and its performance evaluation. IEEE Trans. Signal Process. 2015, 63, 3204–3217.
6. García-Fernández, Á.F.; Svensson, L.; Morelande, M.R.; Särkkä, S. Posterior linearization filter: Principles and implementation using sigma points. IEEE Trans. Signal Process. 2015, 63, 5561–5573.
7. Tronarp, F.; García-Fernández, Á.F.; Särkkä, S. Iterative filtering and smoothing in non-linear and non-Gaussian systems using conditional moments. IEEE Signal Process. Lett. 2018, 25, 408–412.
8. Khan, Z.; Balch, T.; Dellaert, F. MCMC-based particle filtering for tracking a variable number of interacting targets. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1805–1819.
9. Merwe, R.V.D.; Doucet, A.; Freitas, N.D.; Wan, E. The unscented particle filter. In Proceedings of the International Conference on Neural Information Processing Systems, Denver, CO, USA, 4–6 December 2000; pp. 563–569.
10. Cappe, O.; Godsill, S.J.; Moulines, E. An overview of existing methods and recent advances in sequential Monte Carlo. Proc. IEEE 2007, 95, 899–924.
11. Julier, S.; Uhlmann, J.; Durrant-Whyte, H.F. A new method for nonlinear transformation of means and covariances in filters and estimators. IEEE Trans. Autom. Control 2000, 45, 477–482.
12. Gustafsson, F.; Hendeby, G. Some relations between extended and unscented Kalman filters. IEEE Trans. Signal Process. 2012, 60, 545–555.
13. Arasaratnam, I.; Haykin, S. Cubature Kalman filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269.
14. Arasaratnam, I.; Haykin, S.; Hurd, T.R. Cubature Kalman filtering for continuous-discrete systems: Theory and simulations. IEEE Trans. Signal Process. 2010, 58, 4977–4993.
15. Merwe, R.V.D.; Wan, E.A. Sigma-point Kalman filters for integrated navigation. In Proceedings of the 60th Annual Meeting of the Institute of Navigation, Dayton, OH, USA, 7–9 June 2004; pp. 641–654.
16. Gultekin, S.; Paisley, J. Nonlinear Kalman filtering with divergence minimization. IEEE Trans. Signal Process. 2017, 65, 6319–6331.
17. Beal, M.J. Variational Algorithms for Approximate Bayesian Inference. Ph.D. Thesis, University College London, London, UK, 2003.
18. Khan, M.E.; Baqué, P.; Fleuret, F.; Fua, P. Kullback-Leibler proximal variational inference. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 3402–3410.
19. Blei, D.M.; Kucukelbir, A.; McAuliffe, J.D. Variational inference: A review for statisticians. J. Am. Stat. Assoc. 2017, 112, 1–32.
20. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006.
21. Salimans, T.; Kingma, D.; Welling, M. Markov chain Monte Carlo and variational inference: Bridging the gap. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 1218–1226.
22. Mnih, A.; Rezende, D. Variational inference for Monte Carlo objectives. In Proceedings of the International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; pp. 2188–2196.
23. Sain, R.; Mittal, V.; Gupta, V. A comprehensive review on recent advances in variational Bayesian inference. In Proceedings of the International Conference on Advances in Computer Engineering and Applications, Ghaziabad, India, 19–20 March 2015; pp. 488–492.
24. Tseng, P. An analysis of the EM algorithm and entropy-like proximal point methods. Math. Oper. Res. 2004, 29, 27–44.
25. Chrétien, S.; Hero, A.O. Kullback proximal algorithms for maximum-likelihood estimation. IEEE Trans. Inf. Theory 2000, 46, 1800–1810.
26. Khan, M.E.; Babanezhad, R.; Lin, W.; Schmidt, M.; Sugiyama, M. Convergence of proximal-gradient stochastic variational inference under non-decreasing step-size sequence. J. Comp. Neurol. 2015, 319, 359–386.
27. Petersen, K.B.; Pedersen, M.S. The Matrix Cookbook; Technical University of Denmark: Copenhagen, Denmark, 2012.
28. Calvo, M.; Oller, J.M. A distance between multivariate normal distributions based in an embedding into the Siegel group. J. Multivar. Anal. 1990, 35, 223–242.
29. Regli, J.B.; Silva, R. Alpha-Beta Divergence for Variational Inference. arXiv 2018, arXiv:1805.01045.
30. Arulampalam, M.S.; Ristic, B.; Gordon, N.; Mansell, T. Bearings-only tracking of manoeuvring targets using particle filters. EURASIP J. Adv. Signal Process. 2004, 2004, 1–15.
31. Wang, X.; Morelande, M.; Moran, B. Target motion analysis using single sensor bearings-only measurements. In Proceedings of the International Congress on Image and Signal Processing, Tianjin, China, 17–19 October 2009; pp. 2094–2099.
32. Gordon, N.J.; Salmond, D.J.; Smith, A.F.M. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proc. F Radar Signal Process. 1993, 140, 107–113.
33. Gustafsson, F. Particle filter theory and practice with positioning applications. IEEE Aerosp. Electron. Syst. Mag. 2010, 25, 53–82.
34. Arulampalam, S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188.
35. Rezende, D.J.; Mohamed, S.; Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 1278–1286.
36. Price, R. A useful theorem for nonlinear devices having Gaussian inputs. IRE Trans. Inf. Theory 1958, 4, 69–72.
Table: Relative computation time and mean RMSE of the four filters in the bearings-only tracking simulation.

| Algorithm | EKF | UKF | CODEKF | PEKF-VB |
|---|---|---|---|---|
| Time ratio (relative to EKF) | 1 | 3.64 | 3.96 | 6.76 |
| Mean RMSE | 17.8129 | 17.8455 | 16.7758 | 16.3958 |
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Citation: Hu, Y.; Wang, X.; Lan, H.; Wang, Z.; Moran, B.; Pan, Q. An Iterative Nonlinear Filter Using Variational Bayesian Optimization. Sensors 2018, 18, 4222. https://doi.org/10.3390/s18124222