Article

An Application of the Kalman Filter Recursive Algorithm to Estimate the Gaussian Errors by Minimizing the Symmetric Loss Function

1 Faculty of Management, Bucharest University of Economic Studies, 6 Piata Romana, 1st District, 010374 Bucharest, Romania
2 Faculty of Business Administration in Foreign Languages, Bucharest University of Economic Studies, 6 Piata Romana, 1st District, 010374 Bucharest, Romania
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(2), 240; https://doi.org/10.3390/sym13020240
Submission received: 29 December 2020 / Revised: 25 January 2021 / Accepted: 28 January 2021 / Published: 31 January 2021
(This article belongs to the Special Issue Symmetry in Nonlinear Functional Analysis and Optimization Theory)

Abstract

Kalman filtering is a linear quadratic estimation (LQE) algorithm that uses a time series of observed data to produce estimates of unknown variables. The Kalman filter (KF) concept is widely used in applied mathematics and signal processing. In this study, we developed a methodology for estimating Gaussian errors by minimizing the symmetric loss function. Relevant applications to kinetic models are described at the end of the manuscript.

1. Introduction

Linear quadratic estimation (LQE) is a statistical method with many technological applications. The most common applications of LQE are the navigation, guidance, and control of vehicles [1]. Other applications [2,3] exploit the lag between issuing a signal and receiving a response, during which the Kalman filter (KF) assesses the state of the system and issues commands.
In general, a KF assumes that the noise is normally (Gaussian) distributed and the observation models are linear [4]. Furthermore, the filter gives the posterior conditional probability estimate in the case when the errors follow a Gaussian distribution.
Over the past few decades, generalizations of and extensions to the KF method have been established, such as the unscented Kalman filter (UKF) and the extended Kalman filter (EKF), which both work on nonlinear systems [5,6]. EKF is probably the most widely used estimation algorithm for nonlinear systems [7].
The KF uses a dynamic model, known control inputs, and multiple consecutive measurements to make an estimate of its varying quantities, which leads to a better estimation than if they were obtained by using one measurement alone [8,9,10].
The Kalman filter is an efficient recursive filter that estimates the internal state of a linear dynamic system from a series of noisy measurements. It is used in a wide range of econometric and engineering applications, from radar and computer vision to the estimation of structural macroeconomic models [11,12], and is an important tool in control system engineering and control system design. Together with the linear quadratic regulator (LQR), the KF solves the linear quadratic Gaussian control problem.
Other related works in the mathematics field have been carried out as well [13,14,15]. The most representative works were [16,17,18]. Some mathematical researchers [19,20] proposed the use of a KF for the elimination of noise. An interesting approach [21] uses the extended KF for linearizing the nonlinear systems while improving the applicability of these methods.
Anderson et al. [22] presented the theory and applications of Kalman filtering in a unified and organized fashion, which encompassed major contributions by a substantial number of researchers of this topic. Jazwinski [23] took the Bayesian point of view regarding filtering, while other economists [24,25] set a procedure for the frequency domain design of statically optimal filters.
Starting from what is already known in this area, the novelty of this study consists of using a recursive algorithm of the Kalman filter, the semigroup of the Hamiltonian matrices, and the contraction property to estimate the Gaussian errors. The main difference between our work and classical approaches, such as the EKF and LQE, is that our approach handles time-varying measurement uncertainty. For this, the covariance matching approach is used to update the observation covariance matrix.
The rest of this paper is organized as follows. Section 2 describes the methodology and the formulation of the system equations, including the KF model, the general case of the KF, and the Gaussian case. Section 3 describes the semigroup of the Hamiltonian matrices and the contraction property, and presents the results, including the need for KF-based techniques under different conditions in the numerical and experimental study. Based on the observed results, conclusions, directions for future work, and possible applications of the proposed methodology are identified in the last section of the paper.

2. Materials and Methods

2.1. The Kalman Filter Model

The KF is based on the following mathematical theory. Consider Bm = 1, 2, 3, … data observed from p × 1 vectors. Bm is a sequence that is presumed to have the following distribution:
Bm = Dm + φm = ZmAm + φm,
where
  • φm—white noise
  • Dm—carrier signal.
In this regard, the dynamic representation is as follows:
Dm = ZmAm, m = 1, 2, 3….
whose factors are defined below.
Next, we continue with an observed data model:
Bm = ZmAm + φm.
Equation (1) relates the p × 1 vector of data points Bm to an r × 1 state vector Am. In this case, Zm is a p × r matrix, while φm is a p × 1 vector with E[φm] = 0 and Var[φm] = Mm.
The vector Am is not straightforwardly observable, but we could assume that it follows a Markov chain process of order 1, i.e.,
Am = XmAm−1 + θm, m = 1, 2, 3, …
The states’ behavior is described by Equation (2). In this case, Xm is an r × r matrix and θm is an r × 1 error vector with E[θm] = 0 and Var[θm] = Pm, which is not correlated with φm.
A0, the initial vector, was assumed to have:
E[A0] = x0 and Var[A0] = Q0.
Furthermore, we assumed that the error terms were not two-by-two correlated, i.e.,
E[θnφ’d] = 0 for all n, d = 1, 2, 3, …
and
E[θnA0′] = 0 and E[φnA0′] = 0 for all n = 1, 2, 3, …
We also assumed that the matrices Xm, Mm, Pm, and Zm, as defined before, are not random matrices. The above two statements (1) and (2) reveal that Am could be constructed as a linear combination of θd and φd, for d ≤ m − 1 and A0 for any m.
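As a concrete illustration, the state-space model defined by Equations (1) and (2) can be simulated directly. The following sketch (in Python with NumPy) generates a state sequence Am and observations Bm; the specific matrices X, Z, and the noise covariances are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

r, p, T = 2, 1, 200                     # state dim, observation dim, sample size
X = np.array([[1.0, 1.0],
              [0.0, 0.9]])              # state transition matrix X_m (held constant)
Z = np.array([[1.0, 0.0]])              # observation matrix Z_m
P_cov = 0.1 * np.eye(r)                 # Var[theta_m], process noise covariance
M_cov = 0.5 * np.eye(p)                 # Var[phi_m], observation noise covariance

A = np.zeros(r)                         # initial state with E[A_0] = 0
A_seq = np.empty((T, r))
B_seq = np.empty((T, p))
for m in range(T):
    theta = rng.multivariate_normal(np.zeros(r), P_cov)
    phi = rng.multivariate_normal(np.zeros(p), M_cov)
    A = X @ A + theta                   # state equation (2): A_m = X_m A_{m-1} + theta_m
    A_seq[m] = A
    B_seq[m] = Z @ A + phi              # observation equation (1): B_m = Z_m A_m + phi_m
```

Only B_seq would be available to the filter; A_seq is the hidden state that the KF below estimates.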

2.2. The General Case of the Kalman Filter

From the least-squares class of estimators, the KF is a recursive estimator.
The matrices Xm, Mm, Pm, and Zm are presumed to be known for any m.
Consider a sequence Γm, which includes all the data observations recorded until moment m, inclusively.
Consider now an optimal estimator xm|m of Am based on the data observations of Γm. The r × r mean squared error (MSE) matrix of xm|m is:
Qm|m = E[(Am − xm|m)(Am − xm|m)’],
where “ ’ ” stands for the transpose of a matrix.
Considering xm−1|m−1 and Qm−1|m−1, the estimators xm|m−1 and Qm|m−1 of Am and Qm|m, which are based on the data points of Γm−1, can be calculated from the prediction formulas:
xm|m−1 = Xmxm−1|m−1, for m = 1, 2, 3, … (the noise term θm drops out of the prediction since E[θm] = 0),
and
Qm|m−1 = E[(Am − xm|m−1)(Am − xm|m−1)’]
= E[(XmAm−1 + θm − Xmxm−1|m−1)(XmAm−1 + θm − Xmxm−1|m−1)’]
= E[(Xm(Am−1 − xm−1|m−1) + θm)(Xm(Am−1 − xm−1|m−1) + θm)’]
= XmE[(Am−1 − xm−1|m−1)(Am−1 − xm−1|m−1)’]X’m + E[θmθ’m],
i.e.,
Qm|m−1 = XmQm−1|m−1X’m + Pm, for m = 1, 2, 3, …
The expectation of the cross terms in the product above is zero by (4) and (5), since Am−1 is a linear combination of the terms θd and φd, d ≤ m − 1, and of A0, none of which is correlated with θm.
Equations (7) and (8) above represent the KF. Given x0 and Q0, the KF gives the optimal estimator of the state as new data points become accessible.
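In code, one recursion of the filter (prediction followed by an update with the new observation) can be sketched as follows. This is a generic implementation of the standard KF recursion in the notation of this section; the function signature and variable names are my own:

```python
import numpy as np

def kf_step(x_prev, Q_prev, B_m, X_m, Z_m, P_m, M_m):
    """One KF recursion: from (x_{m-1|m-1}, Q_{m-1|m-1}) and B_m to (x_{m|m}, Q_{m|m})."""
    # prediction step
    x_pred = X_m @ x_prev                          # x_{m|m-1}
    Q_pred = X_m @ Q_prev @ X_m.T + P_m            # Q_{m|m-1}
    # update step with the new observation B_m
    S = Z_m @ Q_pred @ Z_m.T + M_m                 # innovation covariance
    K = Q_pred @ Z_m.T @ np.linalg.inv(S)          # Kalman gain
    x_filt = x_pred + K @ (B_m - Z_m @ x_pred)     # x_{m|m}
    Q_filt = (np.eye(len(x_prev)) - K @ Z_m) @ Q_pred  # Q_{m|m}
    return x_filt, Q_filt
```

For a scalar random walk with Xm = Zm = Pm = Mm = 1, iterating kf_step drives Qm|m to the fixed point of the Riccati recursion Q = (Q + 1)/(Q + 2), i.e., (√5 − 1)/2 ≈ 0.618, regardless of the initial Q0.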

2.3. The Gaussian Case

Now, we consider the following system:
Am = XmAm−1 + θm,
Bm = ZmAm + φm, m ≥ 1,
where
  • Xm and Zm are deterministic r × r and p × r matrices, respectively, and
  • Am ∈ Rr, θm ∈ Rq, and φm, Bm ∈ Rp are random vectors.
Now, we assume that the pairs of vectors (θm, φm), m ≥ 1, are independent and have a Gaussian distribution with an average of 0 and a covariance matrix equal to J, where J is the unit matrix, and that A0 has a Gaussian distribution with an average of a0 and a covariance matrix Q0. We then assume that the determinant of Xm is not zero, m ≥ 1.
For any m ≥ 1, let Nm be the σ-algebra generated by B1, …, Bm [26] and
Âm = E[Am | Nm],
Qm = E[(Am − Âm)(Am − Âm)’ | Nm].
If Nm is known, Âm is the best estimator of Am and Qm is the conditional variance–covariance matrix of the errors.
Now, we consider Q (respectively, Q0), the set of r × r symmetric matrices that are nonnegative definite (respectively, positive definite). For any Q ∈ Q and m ≥ 1, we then define:
ξm(Q) = (XmQX’m + J)(J + Sm + SmXmQX’m)⁻¹,
where Sm = Z’mZm and J is the unit matrix of order r. It follows that ξm maps Q0 into itself.
The basic result shown above is related to the fact that the maps ξm, m ≥ 1, are contractions on Q0 when Q0 is endowed with the Riemannian metric δ [27], which is invariant under conjugacy. Furthermore, the contractions can be strict or uniform, depending on the controllability and observability properties of system (9).
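The contraction property can be checked numerically. The sketch below works in the scalar case r = p = 1, where the invariant Riemannian metric reduces to δ(P, Q) = |log P − log Q|; the values chosen for Xm, Zm, and the two starting points are illustrative assumptions:

```python
import math

def xi(q, x=1.2, z=0.7):
    """Scalar Riccati map xi_m(Q) = (X Q X' + J)(J + S + S X Q X')^(-1) with S = Z'Z."""
    s = z * z
    return (x * x * q + 1.0) / (1.0 + s + s * x * x * q)

def delta(p, q):
    """Invariant metric on the positive reals (scalar case of the Riemannian metric)."""
    return abs(math.log(p) - math.log(q))

p0, q0 = 2.0, 0.5
d_before = delta(p0, q0)            # distance between the two starting "matrices"
d_after = delta(xi(p0), xi(q0))     # distance after one application of xi
```

Here d_after < d_before, and iterating xi shrinks the distance further; this is the sense in which the maps ξm are contractions on Q0.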
To demonstrate the results in this view, we considered the symplectic group of matrices, which leave Q0 invariant. In this view, the results can be considered a generalization of the Perron–Frobenius theorem. In some sense, the approach of Bougerol [28] is elementary.
The symplectic matrices build on some already existing ideas [29]. The semigroup Σ was introduced by Wojtkowski [30] in a similar context. Finally, we note that the Riccati transformation [31] has a contraction property, which generalizes that of the positive matrices under the Hilbert projective metric [32].

3. Applications to Kinetic Models

3.1. A Recursive KF Algorithm Application

We considered an object moving along a straight line with random acceleration at, where Δt is the time interval; its movements are captured by a video camera, and its location is described by the x-coordinate. We assumed that the acceleration has a Gaussian distribution with zero mean and constant variance σa². The measured position of the object, denoted yt, is also noisy (Gaussian with zero mean and constant variance σy²).
The state vector of this motion is xt = (xt, vt)’, where the speed is the first derivative of the position, i.e., vt = ẋt.
We also assumed that there was no process control and get:
xt = [1 Δt; 0 1]xt−1 + at(Δt²/2, Δt)’ = Ftxt−1 + wt,  Qt = Var(wt),
where
  • F t is a known state transition matrix applied to the t − 1 state xt−1,
  • Q t is the variance matrix, and
  • wt is a process noise vector, distributed as a multivariate Gaussian with variance matrix Qt and μi,t = E[wi,t] = 0 for i = 1, …, n.
Now, let Gt = (Δt²/2, Δt)’. Then, we get:
Qt = E[wtwt’] = GtE[at²]Gt’ = σa²GtGt’ = σa²[Δt⁴/4 Δt³/2; Δt³/2 Δt²].
This means that Ft, Qt, and Gt are independent of t. Hence, we will call them just F, Q, and G.
To measure the object’s position at time t, we get:
yt = [1 0; 0 0]xt + (vt, 0)’ = Htxt + vt,
where
  • vt is the observation noise and
  • Ht is an observation matrix.
Hence, the variance matrix is:
R = E[vtvt’] = [σy² 0; 0 0].
We denote x̂t|t as the estimate of the state xt at time t and Pt|t as the variance matrix of the error xt − x̂t|t. Our goal was to minimize Pt|t.
The initial position of the object is x̂0|0 = (0, 0)’. If this position vector is known accurately, then the variance matrix is zero, i.e.,
P0|0 = [0 0; 0 0];
otherwise, we will have:
P0|0 = [a 0; 0 a],
where a is real and suitably large, a > 0.
When t = 1, we first predict x̂1|0 and compute its variance matrix P1|0 by using the prediction equations:
x̂t|t−1 = Ftx̂t−1|t−1 + Btut = Fx̂t−1|t−1,
Pt|t−1 = FtPt−1|t−1Ft’ + Qt = FPt−1|t−1F’ + Q,
where Bt is the control matrix applied to a control vector ut (here Btut = 0, since there is no process control).
We then calculate the innovation z̃1 and its covariance S1 from the equations:
z̃t = yt − Htx̂t|t−1 = yt − Hx̂t|t−1,
St = HtPt|t−1Ht’ + Rt = HPt|t−1H’ + R,
where Rt is the variance matrix of the multivariate Gaussian observation noise vt.
Then, we compute the optimal Kalman gain K1 and the updated state estimate x̂1|1 using the equations:
Kt = Pt|t−1Ht’St⁻¹ = Pt|t−1H’St⁻¹,
x̂t|t = x̂t|t−1 + Ktz̃t.
Finally, we compute P1|1 to prepare for time t = 2 by using the following equation:
Pt|t = (I − KtHt)Pt|t−1 = (I − KtH)Pt|t−1,
which minimizes the variance matrix Pt|t.
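The full tracking loop above can be sketched as follows; the noise levels, number of steps, and random seed are illustrative assumptions rather than values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, sigma_a, sigma_y, T = 1.0, 0.2, 2.0, 100

F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition matrix
G = np.array([dt ** 2 / 2, dt])
Q = sigma_a ** 2 * np.outer(G, G)         # process noise covariance sigma_a^2 G G'
H = np.array([[1.0, 0.0]])                # only the position is measured
R = np.array([[sigma_y ** 2]])

# ground truth trajectory and noisy position measurements
x_true, truth, meas = np.zeros(2), [], []
for _ in range(T):
    x_true = F @ x_true + G * rng.normal(0.0, sigma_a)
    truth.append(x_true[0])
    meas.append(x_true[0] + rng.normal(0.0, sigma_y))

# Kalman filter loop: predict, then update with each measurement
x_hat = np.zeros(2)
P = 1000.0 * np.eye(2)                    # 'a' large: initial state poorly known
est = []
for y in meas:
    x_hat = F @ x_hat                     # state prediction
    P = F @ P @ F.T + Q                   # error covariance prediction
    z = y - H @ x_hat                     # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # optimal Kalman gain
    x_hat = x_hat + K @ z                 # state update
    P = (np.eye(2) - K @ H) @ P           # covariance update
    est.append(x_hat[0])

truth, meas, est = map(np.array, (truth, meas, est))
rmse_meas = np.sqrt(np.mean((meas - truth) ** 2))
rmse_kf = np.sqrt(np.mean((est - truth) ** 2))
```

With these settings, the filtered position error (rmse_kf) comes out well below the raw measurement error (rmse_meas), which is the smoothing effect reported in the simulations below.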

3.2. EKF Recursive Algorithm Application

We extended our previous example of an object moving along a straight line by adding the orientation of the vehicle θ to the position vector (x,y). Then, we estimated the state vector p, i.e., p = (x,y,θ).
The kinematic model is given by the following system:
xt = xt−1 + vtΔt cos(θt−1 + wtΔt/2),
yt = yt−1 + vtΔt sin(θt−1 + wtΔt/2),
θt = θt−1 + wtΔt,
where Δt is the time interval, while v and w are the speed and the yaw rate of the vehicle. We also denote v̂ and ŵ as the speed and yaw rate measurements. We assumed that their errors follow Gaussian distributions, Δvt ~ N(0, σv²) and Δwt ~ N(0, σw²).
Now, suppose that the vehicle is equipped with a GPS, and the GPS measurements are denoted by vector z with a measurement model given by:
zt = Hpt + γt,  H = [1 0 0; 0 1 0],
where γt ~ N(0, σγ²) is the measurement error, which follows a Gaussian distribution with zero mean and variance σγ².
This is an EKF model, which could be locally linearized as follows:
(xt, yt, θt)’ ≈ (x̄t, ȳt, θ̄t)’ + A(pt−1, ut)(Δxt−1, Δyt−1, Δθt−1)’ + B(pt−1, ut)(Δvt, Δwt)’,
x̄t = xt−1 + vtΔt cos(θt−1 + wtΔt/2),
ȳt = yt−1 + vtΔt sin(θt−1 + wtΔt/2),
θ̄t = θt−1 + wtΔt,
A(pt−1, ut) = [1 0 −vtΔt sin(θt−1 + wtΔt/2); 0 1 vtΔt cos(θt−1 + wtΔt/2); 0 0 1],
B(pt−1, ut) = [Δt cos(θt−1 + wtΔt/2) −(vtΔt²/2) sin(θt−1 + wtΔt/2); Δt sin(θt−1 + wtΔt/2) (vtΔt²/2) cos(θt−1 + wtΔt/2); 0 Δt],
where ut = (vt, wt)’.
As we can see, the matrices A(pt−1, ut) and B(pt−1, ut) are the Jacobians of the state evolution function with respect to pt−1 and ut, respectively.
Hence, we have a locally linearized EKF model. The prediction of the state vector is:
(x̄t, ȳt, θ̄t)’ = (x̂t−1 + v̂tΔt cos(θ̂t−1 + ŵtΔt/2), ŷt−1 + v̂tΔt sin(θ̂t−1 + ŵtΔt/2), θ̂t−1 + ŵtΔt)’.
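The prediction step and the two Jacobians above can be written out directly. The function below is a sketch of this linearization (the function and variable names are my own), and its Jacobians can be verified against finite differences of the predicted state:

```python
import numpy as np

def ekf_predict(p, v, w, dt=1.0):
    """Predicted state and Jacobians A, B of the unicycle model at (p, u), u = (v, w)."""
    x, y, th = p
    c = np.cos(th + w * dt / 2)               # cos(theta_{t-1} + w_t dt / 2)
    s = np.sin(th + w * dt / 2)
    p_pred = np.array([x + v * dt * c,        # predicted x
                       y + v * dt * s,        # predicted y
                       th + w * dt])          # predicted heading
    A = np.array([[1.0, 0.0, -v * dt * s],    # Jacobian w.r.t. the previous state
                  [0.0, 1.0,  v * dt * c],
                  [0.0, 0.0,  1.0]])
    B = np.array([[dt * c, -v * dt ** 2 / 2 * s],   # Jacobian w.r.t. the input (v, w)
                  [dt * s,  v * dt ** 2 / 2 * c],
                  [0.0,     dt]])
    return p_pred, A, B
```

In a full EKF loop, A and B propagate the state covariance as P ← A P A’ + B diag(σv², σw²) B’ before the GPS update.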

3.3. Simulation Using Synthetic Data

We tested the KF performance using synthetic data generated using model (24) and the measurement model (25). Now let:
Δt = 1, σw² = 0.02² rad²/s², σv² = 0.5² m²/s², σγ² = diag(10², 10²) m².
Now, we set the ground truth:
p0 = (0 m, 0 m, π/2 rad)’, wt = 0.04 rad/s, vt = 10 m/s.
The yaw rate measurements and the speed measurements were synthesized according to w ^ t ~ N ( w t , σ w 2 ) and v ^ t ~ N ( v t , σ v 2 ) .
The GPS measurements were synthesized according to z ^ t ~ N ( p t , σ γ 2 ) .
Then, the KF was applied to the synthetic data and estimates of the state of the vehicle were obtained. The results can be seen in Figure 1, in which:
  • The red line represents the estimated vehicle trajectory.
  • The black line stands for the ground truth of the vehicle trajectory.
  • The blue crosses are the GPS measurements of the position of the vehicle.
As we can see from the above figure, the estimated trajectory of the vehicle is smoother than the scattered measurements. Figure 2 shows the results of 100 Monte Carlo trials. As we can observe, the measurement errors were larger than the estimation errors, just like in the previous example.
We also calculated the orientation estimate errors and the results of the same 100 Monte Carlo simulations are shown in Figure 3. As we can observe, these errors varied around 0.05 radians.
Hence, we concluded that, without using estimation in this application, we could not obtain any estimate of the vehicle’s orientation except a random guess, whose error could be as large as π/2 radians.
This last example shows another important utility of the estimation, i.e., a proper estimation of the Gaussian errors could reveal the state part, which is not directly measurable.

4. Discussion and Conclusions

The Kalman filter is an efficient recursive filter that estimates the internal state of a linear dynamic system (LDS) from a series of noisy measurements. Possible applications of the KF are in economics, biology, and the environmental sciences, and it is an important topic in control systems engineering and control theory. Together with the LQR, the KF solves the linear quadratic Gaussian (LQG) control problem. The linear quadratic Gaussian controller, linear quadratic regulator, and Kalman filter are solutions to some of the most important problems in control theory.
In many applications, the number of measurable parameters is smaller than the dimension of the internal state. In this situation, the Kalman filter can estimate the entire internal state by combining a series of measurements.
The Kalman filter can be adapted to a wide variety of economic applications. It is essentially an ordinary least-squares (OLS) procedure that gives the minimum mean square error (MMSE) estimators under the Gaussian assumption. Even when the normality assumption is waived, the Kalman filter minimizes any symmetric loss function. The KF is not only used directly in economic problems, but it can also serve as a building block for other estimation techniques, such as the estimation of Markov switching models or the quasi-maximum likelihood estimation procedure.
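The claim that, under normality, the same estimator is optimal for any symmetric loss can be illustrated numerically: for a Gaussian sample, the value minimizing the quadratic loss (the mean) and the value minimizing the absolute loss (the median) coincide. A minimal sketch, in which the sample size, grid, and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(loc=3.0, scale=1.5, size=200_000)   # Gaussian errors around 3.0

grid = np.linspace(2.0, 4.0, 401)                       # candidate estimates
mse = [np.mean((sample - g) ** 2) for g in grid]        # quadratic (symmetric) loss
mae = [np.mean(np.abs(sample - g)) for g in grid]       # absolute (symmetric) loss

best_mse = grid[np.argmin(mse)]                         # minimizer of the quadratic loss
best_mae = grid[np.argmin(mae)]                         # minimizer of the absolute loss
```

Both minimizers land on (essentially) the same grid point, near the true mean of 3.0; for a skewed error distribution the two would differ.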
In the environmental sciences, the Kalman filter has wide applications in the areas of the pollution of aquatic systems, measuring atmospheric pollution, environmental policy management, and forecasting hurricanes [33,34]. The KF could also be used to estimate the temporal variations in associations between the radial growth of black oak and physiologically based climate indices [35,36]. In biology, biological samples and bacteriological infections could be detected using a multi-process Kalman filter to provide an online estimate of the inflection point of changes in the growth of an organism [37].
A wide collection of Kalman filters has been developed in the past few decades from the original formulation of the Kalman filter. The information filter, Schmidt’s extended filter, and the Kalman–Bucy filter are among the variety of filters developed by Thornton, Bierman, and many others. Probably the most widely used type of Kalman filter is the phase-locked loop (PLL), which is now omnipresent in television sets, radios, satellite communications receivers, and almost any other communications equipment. In this paper, we presented the KF model, the general case of the KF, and the Gaussian case. The semigroup of the Hamiltonian matrices and the contraction property were used in the KF algorithm to estimate the Gaussian errors.
The main result of this study was the use of a recursive algorithm of the Kalman filter to estimate the Gaussian errors.
Several possible variations of our approach exist: modeling the herring inflow as a state variable, modeling it as pure noise with a non-zero mean, or modeling the noise as pure white (non-geometric) noise while assuming perfect observations of the control variables (catch and inflow). These variations were all tested and led to only small changes in the fit of the recursive algorithm and small variations in the parameter estimates.
The main limitation of this study is that there are not yet many applications of the algorithms described in the previous sections. Future work could apply the methodology in mathematics, physics, and engineering research areas. Moreover, a future research topic is how to treat more general noise terms, for example, terms including unknown parameters.

Author Contributions

Conceptualization, M.B.; Data curation, M.B.; Formal analysis, M.B.; Funding acquisition, C.B.; Methodology, M.B.; Project administration, C.B. and M.B.; Resources, C.B.; Software, M.B.; Supervision, C.B.; Validation, C.B. and M.B.; Writing – original draft, M.B.; Writing – review & editing, C.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Musoff, H.; Zarchan, P. Fundamentals of Kalman Filtering: A Practical Approach; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 2009. [Google Scholar]
  2. Wolpert, D.; Ghahramani, Z. Computational Principles of Movement Neuroscience. Nat. Neurosci. 2000, 3, 1212–1217. [Google Scholar] [CrossRef] [PubMed]
  3. Hameed, Z.; Hong, Y.S.; Cho, Y.M.; Ahn, S.H.; Song, C.K. Condition monitoring and fault detection of wind turbines and related algorithms: A review. Renew. Sustain. Energy Rev. 2009, 13, 1–39. [Google Scholar] [CrossRef]
  4. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. 1960, 82, 35. [Google Scholar] [CrossRef] [Green Version]
  5. Tseng, C.H.; Lin, S.F.; Jwo, D.J. Fuzzy adaptive cubature Kalman filter for integrated navigation systems. Sensors 2016, 16, 1167. [Google Scholar] [CrossRef] [Green Version]
  6. Yamauchi, T. Modeling Mindsets with Kalman Filter. Mathematics 2018, 6, 205. [Google Scholar] [CrossRef] [Green Version]
  7. Julier, S.J.; Uhlmann, J.K. Unscented filtering and nonlinear estimation. Proc. IEEE 2004, 92, 401–422. [Google Scholar] [CrossRef] [Green Version]
  8. Singh Sidhu, H.; Siddhamshetty, P.; Kwon, J. Approximate Dynamic Programming Based Control of Proppant Concentration in Hydraulic Fracturing. Mathematics 2018, 6, 132. [Google Scholar] [CrossRef] [Green Version]
  9. Song, H.; Yu, K.; Li, X.; Han, J. Error estimation of load identification based on linear sensitivity analysis and interval technique. Struct. Multidiscip. Optim. 2017, 55, 423–436. [Google Scholar] [CrossRef]
  10. Ma, C.; Tuan, P.; Lin, D.; Liu, C. A study of an inverse method for the estimation of impulsive loads. Int. J. Syst. Sci. 1998, 29, 663–672. [Google Scholar] [CrossRef]
  11. Ma, C.; Ho, C. An inverse method for the estimation of input forces acting on non-linear structural systems. J. Sound Vib. 2004, 275, 953–971. [Google Scholar] [CrossRef]
  12. Lin, D. Input estimation for nonlinear systems. Inverse Probl. Sci. Eng. 2010, 18, 673–689. [Google Scholar] [CrossRef]
  13. Kim, J.; Kim, K.; Sohn, H. Autonomous dynamic displacement estimation from data fusion of acceleration and intermittent displacement measurements. Mech. Syst. Signal. Process. 2014, 42, 194–205. [Google Scholar] [CrossRef]
  14. Casciati, F.; Casciati, S.; Faravelli, L.; Vece, M. Validation of a data-fusion based solution in view of the real-time monitoring of cable-stayed bridges. Procedia Eng. 2014, 199, 2288–2293. [Google Scholar] [CrossRef]
  15. Zhang, C.; Xu, Y. Structural damage identification via multi-type sensors and response reconstruction. Struct. Health Monit. 2016, 15, 715–729. [Google Scholar] [CrossRef]
  16. Zhu, W.; Wang, W.; Yuan, G. An improved interacting multiple model filtering algorithm based on the cubature Kalman filter for maneuvering target tracking. Sensors 2016, 16, 805. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Amin, M.; Rahman, M.; Hossain, M.; Islam, M.; Ahmed, K.; Miah, B. Unscented Kalman Filter Based on Spectrum Sensing in a Cognitive Radio Network Using an Adaptive Fuzzy System. Big Data Cogn. Comput. 2018, 2, 39. [Google Scholar] [CrossRef] [Green Version]
  18. Giffin, A.; Urniezius, R. The Kalman Filter Revisited Using Maximum Relative Entropy. Entropy 2014, 16, 1047–1069. [Google Scholar] [CrossRef]
  19. Cao, M.; Qiu, Y.; Feng, Y.; Wang, H.; Li, D. Study of Wind Turbine Fault Diagnosis Based on Unscented Kalman Filter and SCADA Data. Energies 2016, 9, 847. [Google Scholar] [CrossRef]
  20. Kim, D.-W.; Park, C.-S. Application of Kalman Filter for Estimating a Process Disturbance in a Building Space. Sustainability 2017, 9, 1868. [Google Scholar] [CrossRef] [Green Version]
  21. Feng, K.; Li, J.; Zhang, X.; Zhang, X.; Shen, C.; Cao, H.; Yang, Y.; Liu, J. An Improved Strong Tracking Cubature Kalman Filter for GPS/INS Integrated Navigation Systems. Sensors 2018, 18, 1919. [Google Scholar] [CrossRef] [Green Version]
  22. Anderson, B.D.; Moore, J.B. Optimal Filtering; Courier Corporation: Chelmsford, MA, USA, 2012. [Google Scholar]
  23. Jazwinski, A.H. Stochastic Processes and Filtering Theory; Courier Corporation: Chelmsford, MA, USA, 2007. [Google Scholar]
  24. Gelb, A. Applied Optimal Estimation; MIT press: Cambridge, MA, USA, 1974. [Google Scholar]
  25. Tanizaki, H. Nonlinear Filters: Estimation and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  26. Mihail, B.; Sorin, C. An Application of the Kalman Filter for Market Studies. Ovidius Univ. Ann. Ser. Econ. Sci. 2013, 13, 726–731. [Google Scholar]
  27. Shelkovich, V.M. The Riemann problem admitting δ-, δ′-shocks, and vacuum states (the vanishing viscosity approach). J. Differ. Equ. 2006, 231, 459–500. [Google Scholar] [CrossRef] [Green Version]
  28. Bougerol, P. Kalman filtering with random coefficients and contractions. Siam J. Control. Optim. 1993, 31, 942–959. [Google Scholar] [CrossRef]
  29. Shayman, M.A. Phase portrait of the matrix Riccati equation. Siam J. Control Optim. 1986, 24, 1–65. [Google Scholar] [CrossRef]
  30. Wojtkowski, M. Invariant families of cones and Lyapunov exponents. Ergod. Theory Dyn. Syst. 1985, 5, 145–161. [Google Scholar] [CrossRef] [Green Version]
  31. Smith, D.R. Decoupling Order Reduction via the Riccati Transformation. Siam Rev. 1987, 29, 91–113. [Google Scholar] [CrossRef]
  32. Birkhoff, G. Extensions of Jentzsch’s theorem. Trans. Am. Math. Soc. 1957, 85, 219–227. [Google Scholar]
  33. Piegorsch, W.W.; Smith, E.P.; Edwards, D.; Smith, R.L. Statistical advances in environmental science. Stat. Sci. 1998, 13, 186–208. [Google Scholar] [CrossRef]
  34. Bril, G. Forecasting hurricane tracks using the Kalman filter. Environmetrics 1995, 6, 7–16. [Google Scholar] [CrossRef]
  35. LeBlanc, D.C. Spatial and temporal variations in the prevalence of growth decline in red spruce populations of the northeastern United States: Reply. Can. J. Res. 1993, 23, 1494–1496. [Google Scholar] [CrossRef]
  36. Cook, E.R.; Johnson, A.H. Climate change and forest decline: A review of the red spruce case. Water Air Soil Pollut. 1989, 48, 127–140. [Google Scholar] [CrossRef]
  37. Whittaker, J.; Fruhwirth-Schnatter, S. A dynamic change point model for detecting the onset of growth in bacteriological infections. Appl. Stat. 1994, 43, 625–640. [Google Scholar] [CrossRef]
Figure 1. Estimates and position measurements for one trial.
Figure 2. Measurement errors and position estimates of 100 Monte Carlo trials.
Figure 3. Orientation estimate errors for 100 Monte Carlo trials.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Busu, C.; Busu, M. An Application of the Kalman Filter Recursive Algorithm to Estimate the Gaussian Errors by Minimizing the Symmetric Loss Function. Symmetry 2021, 13, 240. https://doi.org/10.3390/sym13020240