Article

Design Method for a Higher Order Extended Kalman Filter Based on Maximum Correlation Entropy and a Taylor Network System

1 School of HDU-ITMO, Joint Institute, Hangzhou Dianzi University, Hangzhou 310018, China
2 School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
3 School of Automation, Guangdong University of Petrochemical Technology, Maoming 525000, China
* Author to whom correspondence should be addressed.
Sensors 2021, 21(17), 5864; https://doi.org/10.3390/s21175864
Submission received: 11 June 2021 / Revised: 28 August 2021 / Accepted: 28 August 2021 / Published: 31 August 2021
(This article belongs to the Section Intelligent Sensors)

Abstract

This paper proposes a new design method for a higher order extended Kalman filter, combining maximum correlation entropy with a Taylor network system, for nonlinear random dynamic systems with modeling errors of unknown statistical properties. Firstly, the transfer function and measurement function are transformed into a nonlinear random dynamic model in polynomial form via system identification with a multidimensional Taylor network. Secondly, the higher order polynomials in the transformed state model and measurement model are defined as hidden variables of the system, and the state model and measurement model are rewritten as an equivalent pseudolinear model in the combined original and hidden variables. Thirdly, the higher order hidden variables are treated as additional parameters of the system, and an extended-dimensional linear state model and a measurement model combining states and parameters are established from the random dynamic model above. Finally, since only a finite number of samples of the random modeling error are available, the maximum correlation entropy estimator is combined with the Kalman filter to establish a new higher order extended Kalman filter. The effectiveness of the new filter is verified by digital simulation.

1. Introduction

Filtering plays an important role in many fields, and advances in filtering underpin applications in economic construction and, especially, national defense, such as real-time estimation and target tracking. In 1960, Kalman proposed a filtering method for linear systems under the minimum mean squared error criterion, and it was soon widely adopted [1]. To handle nonlinear problems, extended Kalman filters (EKFs) [2], unscented Kalman filters (UKFs) [3], and cubature Kalman filters (CKFs) have since emerged. However, these filtering methods require the modeling error to be Gaussian white noise, so their performance is likely to degrade in non-Gaussian situations, especially when the system is disturbed by impulsive noise. Impulsive noise arises from heavy-tailed distributions [4] (such as some Gaussian mixture distributions) and is common in many real scenarios of automatic control and target tracking; for instance, the measurement noise in radar systems is often not Gaussian but heavy-tailed non-Gaussian noise [5]. In 1993, Gordon and Salmond proposed particle filtering for the case in which the density function is known [6]; it approximates the distribution function by sampling a large number of particles from it. However, this method is computationally expensive: it requires a large number of particles, suffers from particle degeneracy after re-sampling, and in general the density function is difficult to obtain. For this reason, for linear systems, Chen designed the corresponding Kalman filter under the maximum correlation entropy criterion based on a finite number of realizations of the random variables [7]; this is called the maximum correlation entropy Kalman filter (MCKF) [8].
On this basis, the maximum correntropy extended Kalman filter (MCEKF) and the maximum correntropy unscented Kalman filter (MCUKF), which can handle nonlinear non-Gaussian systems, have since emerged [9]. However, in the MCEKF, all higher order terms in the Taylor expansion are discarded; a large truncation error is therefore generated, and the filtering performance degrades or even diverges as the nonlinearity of the system increases. In addition, each step of the state estimation needs to recalculate the Taylor expansion coefficients, which undoubtedly increases the computational complexity. The MCUKF uses the unscented transformation (UT) with sigma-point sampling [10], a form of deterministic sampling in which only 2n + 1 sampling points are used for an n-dimensional system; it has no strong claim to superiority in either low- or high-dimensional systems. A large number of experiments have shown that both EKFs and UKFs achieve at most second-order polynomial approximation accuracy [11], which produces a large approximation error. Hence, both eventually face degraded filtering performance and divergence as the nonlinearity increases [12].
This paper proposes a higher order extended Kalman filter method based on maximum correlation entropy, under the assumption that both the state and measurement equations can be modeled and are strongly nonlinear functions. The main contributions of this paper are as follows: (1) using multidimensional Taylor networks to convert the general expression of nonlinear functions into higher order polynomials; (2) defining each order of polynomial in the system as hidden variables of the corresponding order, and treating them as time-varying parameters; (3) establishing the dynamic relationship between the time-varying parameters and combining them with the original variables to further establish the expanded-dimension state model; (4) based on the expanded linear state variables, equivalently rewriting the measurement model into the corresponding linear form; and (5) according to the established linear state and measurement models of the new extended-dimension system, establishing a higher order extended Kalman filter method based on maximum correlation entropy.
The remainder of this paper is organized as follows: Section 2 introduces the definition of correntropy; Section 3 presents a method for identifying nonlinear functions based on multidimensional Taylor networks; Section 4 presents the higher order extended Kalman filter method; Section 5 presents the detailed design process of the higher order extended Kalman filter based on maximum correlation entropy; Section 6 concerns simulation verification; and Section 7 presents a summary and outlook.

2. Description of Correntropy

Correntropy is a generalized similarity measure between two random variables [13]. Given two one-dimensional random variables $\phi, \xi \in \mathbb{R}$, let their joint distribution function be $F_{\phi\xi}(\varphi, \zeta)$; then, the correlation entropy is defined as follows:
$$V(\phi, \xi) = \varepsilon[\alpha(\varphi, \zeta)] = \int \alpha(\varphi, \zeta)\, \mathrm{d}F_{\phi\xi}(\varphi, \zeta)$$
where $\varepsilon$ is the expectation operator and $\alpha(\cdot,\cdot)$ is a translation-invariant Mercer kernel. In this article, unless otherwise emphasized, the kernel function is the Gaussian kernel, defined as follows:
$$\alpha(\varphi, \zeta) = G_\tau(e) = \exp\!\left(-\frac{e^2}{2\tau^2}\right)$$
where $e = \varphi - \zeta$ and $\tau > 0$ is the kernel bandwidth.
By expanding Equation (2) with a Taylor series, we can obtain the following:
$$\alpha(\varphi, \zeta) = G_\tau(e) = \exp\!\left(-\frac{e^2}{2\tau^2}\right) = \sum_{k=0}^{\infty} \frac{(-1)^k}{2^k \tau^{2k} k!}\, (\varphi - \zeta)^{2k}$$
and then the correlation entropy of Equation (1) has the following expression:
$$V(\phi, \xi) = \varepsilon[\alpha(\varphi, \zeta)] = \sum_{k=0}^{\infty} \frac{(-1)^k}{2^k \tau^{2k} k!} \int (\varphi - \zeta)^{2k}\, \mathrm{d}F_{\phi\xi}(\varphi, \zeta) = \sum_{k=0}^{\infty} \frac{(-1)^k}{2^k \tau^{2k} k!}\, \varepsilon\{(\phi - \xi)^{2k}\}$$
where $\varepsilon\{(\phi - \xi)^{2k}\} = \int (\varphi - \zeta)^{2k}\, \mathrm{d}F_{\phi\xi}(\varphi, \zeta)$ is the $2k$-th order statistic of the random variable pair $\phi, \xi \in \mathbb{R}$.
However, in most practical cases, the joint distribution $F_{\phi\xi}$ is unknown, and only a finite number of realizations $(\varphi(j), \zeta(j)),\ j = 1, 2, \ldots, N$, of $(\phi, \xi)$ are available. In these cases, the sample mean estimator can be used to estimate the expectation:
$$\hat{\varepsilon}\{(\phi - \xi)^{2k}\} = \frac{1}{N} \sum_{j=1}^{N} \big(\varphi(j) - \zeta(j)\big)^{2k}$$
Then, the correntropy of the random variable pair $(\phi, \xi)$ estimated from the finite data is:
$$\hat{V}(\phi, \xi) = \sum_{k=0}^{\infty} \frac{(-1)^k}{2^k \tau^{2k} k!} \cdot \frac{1}{N} \sum_{j=1}^{N} \big(\varphi(j) - \zeta(j)\big)^{2k} = \frac{1}{N} \sum_{j=1}^{N} G_\tau(e(j))$$
When $\phi, \xi \in \mathbb{R}^n$ and the components of the vector $e = \phi - \xi$ are independent of one another, the multidimensional correlation entropy is estimated in the same way from $N$ samples.
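The finite-sample estimator above can be sketched numerically as follows. This is an illustrative sketch; the function names are ours, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(e, tau):
    """Gaussian (Mercer) kernel G_tau(e) = exp(-e^2 / (2 * tau^2))."""
    return np.exp(-np.square(e) / (2.0 * tau ** 2))

def sample_correntropy(phi, zeta, tau):
    """Finite-sample correntropy estimate: (1/N) * sum_j G_tau(phi(j) - zeta(j))."""
    e = np.asarray(phi, dtype=float) - np.asarray(zeta, dtype=float)
    return float(np.mean(gaussian_kernel(e, tau)))
```

Note that a larger bandwidth `tau` makes the measure more tolerant of large errors, which is the mechanism that later suppresses impulsive-noise outliers.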

3. Non-Linear Model Identification Based on Multidimensional Taylor Networks

Given that the state model and observation model are complex dynamic systems with nonlinear characteristics [14]:
$$\alpha(\tau+1) = \sigma(\alpha(\tau)) + \gamma(\tau)$$
$$\Gamma(\tau+1) = \delta(\alpha(\tau+1)) + \theta(\tau+1)$$
where $\alpha(\tau) \in \mathbb{R}^h$ is the $h$-dimensional state vector; $\Gamma(\tau+1) \in \mathbb{R}^d$ is the $d$-dimensional measurement vector; and $\sigma_i(\alpha(\tau)),\ i = 1, 2, \ldots, h$, and $\delta_j(\alpha(\tau+1)),\ j = 1, 2, \ldots, d$, are the components of the state function and the measurement function, respectively. The modeling errors $\gamma(\tau)$ and $\theta(\tau+1)$ are non-Gaussian, while $\vartheta = \mathrm{diag}\{\vartheta_1, \vartheta_2, \ldots, \vartheta_h\}$ and $\eta = \mathrm{diag}\{\eta_1, \eta_2, \ldots, \eta_d\}$ are the process noise variance and the measurement noise variance, respectively.
Lemma 1.
Any continuous function defined on a closed interval can be approximated to arbitrary accuracy by a polynomial function [15].
Lemma 2.
A continuous function $\sigma(\alpha(\tau))$ defined on a closed interval can be approximated by the following [16]:
$$\sum_{i=1}^{N(h,l)} \psi_i(\tau) \prod_{t=1}^{h} \alpha_t^{\lambda_{i,t}}(\tau)$$
where $N(h,l)$ denotes the total number of terms in the expansion and $\lambda_{i,t}$ denotes the power of the variable $\alpha_t$ in the $i$-th product term.

3.1. Multidimensional Taylor Network Structure

The multidimensional Taylor network model can replace the traditional neural network in modeling a dynamic system and controlling it under certain conditions; it is characterized by a nonlinear autoregressive moving-average model composed of polynomials. The multidimensional Taylor network (MTN) uses a feed-forward structure with a single intermediate layer, comprising an input layer, an intermediate layer, and an output layer. Suppose that the input layer comprises $h$ nodes, $\alpha(\tau) = [\alpha_1(\tau)\ \alpha_2(\tau)\ \cdots\ \alpha_h(\tau)]^T \in \mathbb{R}^h$, and that the output layer is $\alpha(\tau+1)$. The middle layer is the network processing layer, in which a weighted summation of the power-product terms of the input variables is realized. The middle layer is composed of the various power-product terms and the corresponding connection weight vector $\psi_j(\tau)$:
$$\psi_j(\tau) = \big[\psi_{j,1}(\tau), \psi_{j,2}(\tau), \ldots, \psi_{j,N(h,l)}(\tau)\big]^T$$
which represents the output weight vector connecting the intermediate layer and the output node of the network.
According to the multivariate Taylor formula, if a function is differentiable up to order $m+1$ at a certain point, then the function can be expanded into a power series in which the degree of the variables is not greater than $m$. The model can be expressed as a dynamic equation, as follows:
$$\alpha_j(\tau+1) = \sigma_j(\alpha(\tau)) = \sum_{i=1}^{N(h,l)} \psi_{j,i}(\tau) \prod_{t=1}^{h} \alpha_t^{\lambda_{i,t}}(\tau) + \Delta\sigma_j(\tau)$$
where $\sigma(\cdot)$ is the nonlinear function described by the multidimensional Taylor network model, $\psi_{j,i}$ represents the weight of the $i$-th product term, $N(h,l)$ denotes the total number of terms in the expansion, $\lambda_{i,t}$ denotes the power of the variable $\alpha_t$ in the $i$-th product term, and $\Delta\sigma(\tau)$ is the error, also known as the remainder, produced when a function is identified by a multidimensional Taylor network.
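The MTN basis above can be enumerated mechanically. The sketch below counts the terms and builds the power-product features for an input vector; it assumes (as a convention of ours, not necessarily the paper's) that the degree-zero constant term is included in the count.

```python
from math import comb
from itertools import combinations_with_replacement

def mtn_term_count(h, l):
    """N(h, l): number of monomials in h variables of total degree <= l
    (constant term included; this counting convention is an assumption)."""
    return comb(h + l, h)

def mtn_features(x, l):
    """All power-product terms alpha_1^l1 * ... * alpha_h^lh with l1 + ... + lh <= l."""
    h = len(x)
    feats = []
    for deg in range(l + 1):
        # each multiset of variable indices of size `deg` is one monomial
        for idx in combinations_with_replacement(range(h), deg):
            term = 1.0
            for i in idx:
                term *= x[i]
            feats.append(term)
    return feats
```

The network output is then a weighted sum of these features, which is what the Kalman-filter-based parameter identification in Section 3.2 estimates.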

3.2. Parameter Identification Method Based on Kalman Filtering

Model Establishment of a Kalman Filter

A Kalman filter can be regarded as an optimized autoregressive data processing method that describes the entire system through a state equation and an observation equation.
State equation:
$$\psi_{j,i}(\tau+1) = \psi_{j,i}(\tau) + w_{j,i}(\tau)$$
where $i = 1, 2, \ldots, N(h,l)$ and $j = 1, 2, \ldots, h$.
Observation equation:
From Figure 1, it is not difficult to obtain:
$$\alpha_j(\tau+1) = \sum_{i=1}^{N(h,l)} \psi_{j,i}(\tau+1) \prod_{t=1}^{h} \alpha_t^{\lambda_{i,t}}(\tau+1) = H_j(\tau+1)\,\big[\psi_{j,1}(\tau+1), \psi_{j,2}(\tau+1), \ldots, \psi_{j,N(h,l)}(\tau+1)\big]^T + v_j(\tau+1) = H_j(\tau+1)\,\psi_j(\tau+1) + v_j(\tau+1)$$
Thus,
$$\alpha(\tau+1) = \big[\alpha_1(\tau+1), \alpha_2(\tau+1), \ldots, \alpha_h(\tau+1)\big]^T = H(\tau+1)\,\psi(\tau+1) + v(\tau+1)$$
where $H(\tau+1) = [H_1(\tau+1), H_2(\tau+1), \ldots, H_h(\tau+1)]^T$ and $\psi(\tau+1) = [\psi_1(\tau+1), \psi_2(\tau+1), \ldots, \psi_h(\tau+1)]$; $\psi_{j,i}(\tau+1)$ represents the parameter state value at time $\tau+1$, and $\alpha(\tau+1)$ represents the output value of the network. It is assumed during the analysis that the process noise $w(\tau)$ and the measurement noise $v(\tau+1)$ are both Gaussian white noise, with $Q_j = \mathrm{diag}(Q_{j,1}, Q_{j,2}, \ldots, Q_{j,N(h,l)})$ and $R_j$ denoting the process noise variance and the measurement noise variance, respectively. Here, we use a Kalman filter to estimate the parameters of the dynamic model; as the filtering principle of the Kalman filter is given later in this article, please refer to Equations (20)–(24) for the detailed process.
Figure 1. Model of a multidimensional Taylor network.

3.3. Approximation Analysis

Given a nonlinear function $\sigma(\alpha(\tau))$, assume that it is differentiable up to order $r$, where $r$ is relatively large, making it impractical to approximate the function with a full $r$-th order Taylor network expansion. A practical approach is to choose $m$, $1 \le m \le r$, use the Taylor network to expand the nonlinear function to the $m$-th order, obtain the result of Equation (16), and simultaneously ensure that the higher order error term satisfies $\Delta\sigma \le \theta$, where $\theta$ is an acceptable error threshold. This not only makes the Taylor network fitting process easier, but also ensures the accuracy of the fit.

4. Higher Order Extended Kalman Filter

4.1. Pseudolinearized Representation of Nonlinear Functions

For ease of description and understanding, let $h = d = 2$; we can then expand Equation (7) through a multidimensional Taylor network to the $m$-th order, as follows:
$$\begin{aligned} \sigma_i(\alpha(\tau)) ={}& \big(\omega_{i,1,0}\,\alpha_1(\tau) + \omega_{i,0,1}\,\alpha_2(\tau)\big) \\ &+ \big(\omega_{i,2,0}\,\alpha_1^2(\tau) + \omega_{i,1,1}\,\alpha_1(\tau)\,\alpha_2(\tau) + \omega_{i,0,2}\,\alpha_2^2(\tau)\big) \\ &+ \big(\omega_{i,3,0}\,\alpha_1^3(\tau) + \omega_{i,2,1}\,\alpha_1^2(\tau)\,\alpha_2(\tau) + \omega_{i,1,2}\,\alpha_1(\tau)\,\alpha_2^2(\tau) + \omega_{i,0,3}\,\alpha_2^3(\tau)\big) \\ &+ \cdots + \sum_{l_1 + l_2 = l} \omega_{i,l_1,l_2}\,\alpha_1^{l_1}(\tau)\,\alpha_2^{l_2}(\tau) + \cdots + \sum_{m_1 + m_2 = m} \omega_{i,m_1,m_2}\,\alpha_1^{m_1}(\tau)\,\alpha_2^{m_2}(\tau) + \Delta\sigma_i(\tau) \end{aligned}$$
where $\sum_{l_1 + l_2 = l} \omega_{i,l_1,l_2}\,\alpha_1^{l_1}(\tau)\,\alpha_2^{l_2}(\tau)$ is the sum of all terms of the $l$-th order and $\omega_{i,l_1,l_2}$ represents the weight corresponding to each term of that order.
Definition 1.
$\alpha^{(l)}(\tau) = \big\{\alpha_1^{l_1}(\tau)\,\alpha_2^{l_2}(\tau) \cdots \alpha_h^{l_h}(\tau) : l_1 + l_2 + \cdots + l_h = l;\ 0 \le l_j \le l\big\},\ l = 1, 2, \ldots, m$, is the set of hidden variables of the $l$-th order.
Definition 2.
$\omega_i^{(l)} = \big[\omega_{i;1}^{(l)}, \omega_{i;2}^{(l)}, \ldots, \omega_{i;n_l}^{(l)}\big] = \big[\omega_{i;l,0}, \omega_{i;l-1,1}, \ldots, \omega_{i;0,l}\big],\ i = 1, 2$, is the weight vector of the $i$-th output corresponding to the $l$-th order hidden variables.
A detailed pseudolinearization process is given in [17], so we do not repeat it in this article. In order to make the model more accurate, we treat the remainder $\Delta\sigma(\tau)$ of the state equation as a hidden variable. According to Definitions 1 and 2, the pseudolinear extended-dimension form using the remainder as a hidden variable is as follows:
$$\alpha^{(1)}(\tau+1) = W^{(1)}(\tau+1, \tau)\,\alpha^{(1)}(\tau) + \sum_{l=2}^{m} W^{(l)}(\tau+1, \tau)\,\alpha^{(l)}(\tau) + C\,\Delta\sigma(\tau) + \gamma^{(1)}(\tau)$$
where $\alpha^{(1)}(\tau) = \begin{bmatrix} \alpha_1^{(1)}(\tau) \\ \alpha_2^{(1)}(\tau) \end{bmatrix}$, $W^{(l)} = \begin{bmatrix} \omega_1^{(l)} \\ \omega_2^{(l)} \end{bmatrix}$, $\gamma(\tau) = \gamma^{(1)}(\tau) = \begin{bmatrix} \gamma_1^{(1)}(\tau) \\ \gamma_2^{(1)}(\tau) \end{bmatrix}$, and $C = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$.
Similarly, Equation (8) can be rewritten as follows:
$$\Gamma^{(1)}(\tau+1) = \chi^{(1)}(\tau+1)\,\alpha^{(1)}(\tau+1) + \sum_{l=2}^{m} \chi^{(l)}(\tau+1)\,\alpha^{(l)}(\tau+1) + D\,\Delta\sigma(\tau+1) + \theta^{(1)}(\tau+1)$$
where $\Gamma^{(1)}(\tau+1) = \begin{bmatrix} \Gamma_1^{(1)}(\tau+1) \\ \Gamma_2^{(1)}(\tau+1) \end{bmatrix}$, $\chi^{(l)} = \begin{bmatrix} \chi_1^{(l)} \\ \chi_2^{(l)} \end{bmatrix}$, $\theta^{(1)}(\tau+1) = \begin{bmatrix} \theta_1^{(1)}(\tau+1) \\ \theta_2^{(1)}(\tau+1) \end{bmatrix}$, and $D = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$.

4.2. Linearized Representation of Nonlinear Functions

In order to transform the pseudolinear model established in Section 4.1 into a truly linear form, it is necessary to establish a dynamic relationship between the $l$-th order hidden variables and the $u$-th order hidden variables [18]:
$$\alpha^{(l)}(\tau+1) = W_l^{(u)}(\tau)\,\alpha^{(u)}(\tau), \quad l, u = 2, 3, \ldots, m$$
where W can be identified based on the multidimensional Taylor network in its original state; without any prior information, it can be set as follows:
$$W_l^{(u)}(\tau) = \begin{cases} I, & l = u \\ 0, & l \neq u \end{cases}$$
Combining Definition 1, Definition 2, and Equation (19), the state model Equation (7) has the following linear matrix form:
If $A(\tau) = \big[(\alpha^{(1)}(\tau))^T, (\alpha^{(2)}(\tau))^T, \ldots, (\alpha^{(l)}(\tau))^T, \ldots, (\alpha^{(m)}(\tau))^T, \Delta\sigma^T(\tau)\big]^T$,
$$W(\tau+1, \tau) = \begin{bmatrix} W_1^{(1)}(\tau) & W_1^{(2)}(\tau) & \cdots & W_1^{(m)}(\tau) & C \\ W_2^{(1)}(\tau) & W_2^{(2)}(\tau) & \cdots & W_2^{(m)}(\tau) & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ W_m^{(1)}(\tau) & W_m^{(2)}(\tau) & \cdots & W_m^{(m)}(\tau) & 0 \\ 0 & 0 & \cdots & 0 & C \end{bmatrix}, \qquad \gamma(\tau) = \begin{bmatrix} \gamma^{(1)}(\tau) \\ \gamma^{(2)}(\tau) \\ \vdots \\ \gamma^{(m)}(\tau) \end{bmatrix}$$
then, Equation (7) has the following linearized form:
$$A(\tau+1) = W(\tau+1, \tau)\,A(\tau) + \gamma(\tau)$$
where $\gamma(\tau)$ is the modeling error.
In the same way, the linear matrix form of the measurement model can be obtained:
$$\Gamma(\tau+1) = \chi(\tau+1)\,A(\tau+1) + \theta(\tau+1)$$
where $\chi(\tau+1) = \begin{bmatrix} \chi_1^{(1)}(\tau+1) & \chi_1^{(2)}(\tau+1) & \cdots & \chi_1^{(m)}(\tau+1) & 0 \\ \chi_2^{(1)}(\tau+1) & \chi_2^{(2)}(\tau+1) & \cdots & \chi_2^{(m)}(\tau+1) & 0 \end{bmatrix}$, $\Gamma(\tau+1) = \begin{bmatrix} \Gamma_1(\tau+1) \\ \Gamma_2(\tau+1) \end{bmatrix}$, and $\theta(\tau+1) = \begin{bmatrix} \theta_1(\tau+1) \\ \theta_2(\tau+1) \end{bmatrix}$ is the modeling error.

4.3. Design of Higher Order Extended Kalman Filter

For the above linear models, a KF-based filter is given. Given the initial value $A(0)$, when $\gamma(\tau)$ and $\theta(\tau+1)$ are Gaussian white noise with zero mean, their variances are denoted $\vartheta$ and $\eta$, respectively.
A recursive filter can be designed as follows:
$$\hat{A}(\tau+1|\tau) = W(\tau+1, \tau)\,\hat{A}(\tau|\tau)$$
$$\lambda(\tau+1|\tau) = W(\tau+1, \tau)\,\lambda(\tau|\tau)\,W^T(\tau+1, \tau) + \vartheta(\tau)$$
$$K(\tau+1) = \lambda(\tau+1|\tau)\,\chi^T(\tau+1)\,\big(\chi(\tau+1)\,\lambda(\tau+1|\tau)\,\chi^T(\tau+1) + \eta(\tau+1)\big)^{-1}$$
$$\hat{A}(\tau+1|\tau+1) = \hat{A}(\tau+1|\tau) + K(\tau+1)\,\big(\Gamma(\tau+1) - \chi(\tau+1)\,\hat{A}(\tau+1|\tau)\big)$$
$$\lambda(\tau+1|\tau+1) = \big(I - K(\tau+1)\,\chi(\tau+1)\big)\,\lambda(\tau+1|\tau)$$
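The predict/gain/update cycle above can be sketched directly with matrix operations. This is a generic illustrative implementation of the recursion, not the authors' code; the function name and argument order are ours.

```python
import numpy as np

def kf_step(A_est, P, W, chi, Q, R, Gamma):
    """One predict/update cycle of the standard Kalman filter for the
    expanded linear model A(t+1) = W A(t) + gamma, Gamma = chi A + theta."""
    # Prediction of the state and its error covariance
    A_pred = W @ A_est
    P_pred = W @ P @ W.T + Q
    # Kalman gain
    S = chi @ P_pred @ chi.T + R
    K = P_pred @ chi.T @ np.linalg.inv(S)
    # Measurement update
    A_new = A_pred + K @ (Gamma - chi @ A_pred)
    P_new = (np.eye(len(A_est)) - K @ chi) @ P_pred
    return A_new, P_new
```

For a scalar system with unit dynamics and unit measurement noise, one step moves the estimate halfway toward the measurement, as expected from a gain of 0.5.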

5. Higher Order Extended Kalman Filter Design Based on Maximum Correlation Entropy

5.1. Non-Gaussian Modeling of State Vector Based on Multivariate Information Observation

Based on the Kalman filter [19], the state estimate $\hat{A}(\tau|\tau)$ of the system state $A(\tau)$ and the estimation error covariance $\lambda(\tau|\tau)$ are obtained. The filtering equations then yield the one-step prediction $\hat{A}(\tau+1|\tau)$ and the corresponding one-step prediction error covariance matrix $\lambda(\tau+1|\tau)$.
The one-step prediction error of the system state $A(\tau+1)$ is as follows:
$$\tilde{A}(\tau+1|\tau) = A(\tau+1) - \hat{A}(\tau+1|\tau) = W(\tau+1, \tau)\,\tilde{A}(\tau|\tau) + \gamma(\tau)$$
and it can be rearranged into a measurement model for the system state $A(\tau+1)$ as follows:
$$\hat{A}(\tau+1|\tau) = A(\tau+1) - \tilde{A}(\tau+1|\tau)$$
where $\hat{A}(\tau+1|\tau)$ acts as a measurement of the system state $A(\tau+1)$, while $-\tilde{A}(\tau+1|\tau)$ is the measurement error. Finally, the combined measurement model is as follows:
$$\begin{bmatrix} \hat{A}(\tau+1|\tau) \\ \Gamma(\tau+1) \end{bmatrix} = \begin{bmatrix} I \\ \chi(\tau+1) \end{bmatrix} A(\tau+1) + \varpi^{(j)}(\tau+1)$$
where $I$ is the identity matrix of the corresponding dimension, $\varpi^{(j)}(\tau+1) = \begin{bmatrix} -\tilde{A}(\tau+1|\tau) \\ \theta^{(j)}(\tau+1) \end{bmatrix}$, and
$$E\big[\varpi^{(j)}(\tau+1)\,(\varpi^{(j)})^T(\tau+1)\big] = \begin{bmatrix} \tilde{\lambda}(\tau+1|\tau) & 0 \\ 0 & \tilde{\eta}(\tau+1) \end{bmatrix}$$
According to Equation (20), the one-step prediction error covariance of the system state is obtained as follows:
$$\tilde{\lambda}(\tau+1|\tau) = W(\tau+1, \tau)\,\lambda(\tau|\tau)\,W^T(\tau+1, \tau) + \vartheta(\tau)$$
where $\vartheta(\tau) = \mathrm{diag}\{\vartheta^{(1)}(\tau), \vartheta^{(2)}(\tau), \ldots, \vartheta^{(m)}(\tau)\}$, in which $\vartheta^{(2)}(\tau), \ldots, \vartheta^{(m)}(\tau)$ are the covariance matrices of the random error vectors $\gamma^{(2)}(\tau), \ldots, \gamma^{(m)}(\tau)$ arising when the higher order hidden variables $\alpha^{(2)}(\tau), \ldots, \alpha^{(m)}(\tau)$ are dynamically modeled. $\vartheta^{(1)}(\tau)$ corresponds to the non-Gaussian modeling error $\gamma^{(1)}(\tau)$ of the original system state model (Equation (16)); its second-order statistic is calculated after a limited number of samples is obtained:
$$\tilde{\vartheta}^{(1)}(\tau) = \frac{1}{N} \sum_{j=1}^{N} \big[\gamma^{(1,j)}(\tau) - \bar{\gamma}(\tau)\big]\big[\gamma^{(1,j)}(\tau) - \bar{\gamma}(\tau)\big]^T$$
In Equation (23), $\tilde{\eta}(\tau+1)$ is the second-order statistic calculated after a limited number of samples of the non-Gaussian modeling error $\theta(\tau+1)$ in the original system measurement model (Equation (8)) is obtained:
$$\tilde{\eta}(\tau+1) = \frac{1}{N} \sum_{j=1}^{N} \big[\theta^{(j)}(\tau+1) - \bar{\theta}(\tau+1)\big]\big[\theta^{(j)}(\tau+1) - \bar{\theta}(\tau+1)\big]^T$$
where θ ( j ) ( τ + 1 ) is the jth realization vector of the non-Gaussian random noise vector θ ( τ + 1 ) .
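The two sample second-order statistics above share one computation: the empirical covariance of a finite set of noise realizations. A minimal sketch (the function name is ours):

```python
import numpy as np

def sample_noise_covariance(samples):
    """Second-order statistic of N realizations (one per row):
    (1/N) * sum_j (x_j - x_bar)(x_j - x_bar)^T."""
    X = np.asarray(samples, dtype=float)
    Xc = X - X.mean(axis=0)          # subtract the sample mean
    return Xc.T @ Xc / len(X)        # note: 1/N, matching the text, not 1/(N-1)
```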

5.2. The Statistical Independence Process of Each Component in the Non-Gaussian Modeling Error Vector ϖ ( τ + 1 ) in the Comprehensive Measurement Model

The vector $\varpi(\tau+1)$ in the comprehensive measurement model, Equation (22), is a non-Gaussian modeling error vector whose components are not statistically independent. In order to use the correlation entropy form for multidimensional vectors with independent components shown in Equation (19), the components of the non-Gaussian vector $\varpi(\tau+1)$ need to be transformed into statistically independent ones.
From $\lambda(\tau+1|\tau) = E\{\tilde{A}(\tau+1|\tau)\,\tilde{A}^T(\tau+1|\tau)\}$, $\lambda(\tau+1|\tau)$ is a positive definite matrix. Similarly, in Equation (26), $\tilde{\eta}(\tau+1)$ is also a positive definite matrix. For this reason, Equation (23) is further expressed as follows:
$$E\{\varpi^{(j)}(\tau+1)\,(\varpi^{(j)})^T(\tau+1)\} = \begin{bmatrix} \Lambda_\alpha(\tau+1|\tau)\,\Lambda_\alpha^T(\tau+1|\tau) & 0 \\ 0 & \Lambda_\Gamma(\tau+1)\,\Lambda_\Gamma^T(\tau+1) \end{bmatrix} = \Lambda(\tau+1)\,\Lambda^T(\tau+1)$$
where Λ α ( τ + 1 ) and Λ Γ ( τ + 1 ) are the Cholesky factor matrices of λ ˜ ( τ + 1 | τ ) and η ˜ ( τ + 1 ) , respectively.
Applying $\Lambda^{-1}(\tau+1)$ to both sides of Equation (22) yields:
$$\Lambda^{-1}(\tau+1)\begin{bmatrix} \hat{A}(\tau+1|\tau) \\ \Gamma(\tau+1) \end{bmatrix} = \Lambda^{-1}(\tau+1)\begin{bmatrix} I \\ \chi(\tau+1) \end{bmatrix} A(\tau+1) + \Lambda^{-1}(\tau+1)\,\varpi^{(j)}(\tau+1)$$
where
$$D(\tau+1) = \Lambda^{-1}(\tau+1)\begin{bmatrix} \hat{A}(\tau+1|\tau) \\ \Gamma(\tau+1) \end{bmatrix}, \quad S(\tau+1) = \Lambda^{-1}(\tau+1)\begin{bmatrix} I \\ \chi(\tau+1) \end{bmatrix}, \quad e(\tau+1) = \Lambda^{-1}(\tau+1)\,\varpi(\tau+1)$$
The above equation can be further simplified as follows:
$$D(\tau+1) = S(\tau+1)\,A(\tau+1) + e(\tau+1)$$
because
$$E\{e(\tau+1)\,e^T(\tau+1)\} = E\big\{[\Lambda^{-1}(\tau+1)\,\varpi(\tau+1)][\Lambda^{-1}(\tau+1)\,\varpi(\tau+1)]^T\big\} = \Lambda^{-1}(\tau+1)\,E\{\varpi(\tau+1)\,\varpi^T(\tau+1)\}\,(\Lambda^{-1}(\tau+1))^T = \Lambda^{-1}(\tau+1)\,\Lambda(\tau+1)\,\Lambda^T(\tau+1)\,(\Lambda^{-1}(\tau+1))^T = I$$
Therefore, after the non-Gaussian modeling error vector $\varpi(\tau+1)$ undergoes the equivalent transformation by the matrix $\Lambda^{-1}(\tau+1)$, the components of the random vector $e(\tau+1)$ are statistically independent.
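The whitening step in this subsection can be sketched as follows: stack the prior and the measurement, build the block-diagonal Cholesky factor, and apply its inverse. This is an illustrative sketch under the paper's block structure; the function name and return convention are ours.

```python
import numpy as np

def whiten_measurement_model(A_pred, Gamma, chi, P_pred, R):
    """Stack prior and measurement into D = S A + e, then apply the inverse
    block Cholesky factor so that Cov(e) = I (components decorrelated)."""
    n, m = len(A_pred), chi.shape[0]
    L_a = np.linalg.cholesky(P_pred)   # Lambda_alpha: factor of prediction covariance
    L_g = np.linalg.cholesky(R)        # Lambda_Gamma: factor of measurement covariance
    Lam = np.block([[L_a, np.zeros((n, m))],
                    [np.zeros((m, n)), L_g]])
    Lam_inv = np.linalg.inv(Lam)
    D = Lam_inv @ np.concatenate([A_pred, Gamma])
    S = Lam_inv @ np.vstack([np.eye(n), chi])
    return D, S, L_a, L_g
```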

5.3. Implementation Process of a Higher Order Extended Kalman Filter Based on Maximum Correlation Entropy

The filtering process of the higher order extended Kalman filter based on maximum correlation entropy (H-MCEKF) is as follows (see [20] for the specific derivation process):
  • The filter is initialized with the initial value $\hat{A}(0)$ and the covariance $\lambda(0)$, and a suitable kernel bandwidth $\sigma$ and a small positive number $\varepsilon$ are chosen;
  • Taylor networks are used for system identification to obtain the parameters in the equations, using the expanded item and the remainder as the new hidden variables. A pseudolinearization process is performed to obtain the pseudolinear form of the system;
  • Equations (20) and (21) are used to obtain $\hat{A}(\tau+1|\tau)$ and $\lambda(\tau+1|\tau)$, respectively, while Cholesky decomposition is used to obtain $\Lambda_\alpha(\tau+1|\tau)$;
  • t = 1 and A ^ ( τ + 1 | τ + 1 ) 0 = A ^ ( τ + 1 | τ ) are taken, where A ^ ( τ + 1 | τ + 1 ) t represents the estimated state of the fixed-point iteration t;
  • The fixed-point iterative algorithm is started as follows:
    $$\tilde{e}_i(\tau+1) = d_i(\tau+1) - s_i(\tau+1)\,\hat{A}(\tau+1|\tau+1)_{t-1}$$
    where $d_i(\tau+1)$ is the $i$-th element of $D(\tau+1)$ and $s_i(\tau+1)$ is the $i$-th row of $S(\tau+1)$:
    $$\tilde{C}_\alpha(\tau+1) = \mathrm{diag}\big(G_\sigma(\tilde{e}_1(\tau+1)), \ldots, G_\sigma(\tilde{e}_{n_A}(\tau+1))\big)$$
    $$\tilde{C}_\Gamma(\tau+1) = \mathrm{diag}\big(G_\sigma(\tilde{e}_{n_A+1}(\tau+1)), \ldots, G_\sigma(\tilde{e}_{n_A+n_\Gamma}(\tau+1))\big)$$
    where $n_A$ and $n_\Gamma$ are the dimensions of the expanded state $A$ and the measurement $\Gamma$, respectively;
    $$\tilde{H}(\tau+1) = \Lambda_\Gamma(\tau+1)\,\tilde{C}_\Gamma^{-1}(\tau+1)\,\Lambda_\Gamma^T(\tau+1)$$
    $$\tilde{\lambda}(\tau+1|\tau) = \Lambda_\alpha(\tau+1|\tau)\,\tilde{C}_\alpha^{-1}(\tau+1)\,\Lambda_\alpha^T(\tau+1|\tau)$$
    $$\tilde{K}(\tau+1) = \tilde{\lambda}(\tau+1|\tau)\,\chi^T(\tau+1)\,\big(\chi(\tau+1)\,\tilde{\lambda}(\tau+1|\tau)\,\chi^T(\tau+1) + \tilde{H}(\tau+1)\big)^{-1}$$
    $$\hat{A}(\tau+1|\tau+1)_t = \hat{A}(\tau+1|\tau) + \tilde{K}(\tau+1)\,\big(\Gamma(\tau+1) - \chi(\tau+1)\,\hat{A}(\tau+1|\tau)\big)$$
    The estimate of the current iteration step is compared with that of the previous iteration and, if it satisfies
    $$\frac{\big\|\hat{A}(\tau+1|\tau+1)_t - \hat{A}(\tau+1|\tau+1)_{t-1}\big\|}{\big\|\hat{A}(\tau+1|\tau+1)_{t-1}\big\|} \le \varepsilon$$
    then $\hat{A}(\tau+1|\tau+1) = \hat{A}(\tau+1|\tau+1)_t$ and $\lambda(\tau+1|\tau+1) = \big(I - \tilde{K}(\tau+1)\,\chi(\tau+1)\big)\,\tilde{\lambda}(\tau+1|\tau)$, and the values of the hidden variables can be updated; otherwise, $t = t + 1$ and the iteration is repeated;
  • τ = τ + 1 , and steps (3–5) are repeated until the end of filtering.
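The fixed-point loop in the steps above can be sketched end to end: whiten, compute kernel weights of the residuals, reweight the covariances, re-run the Kalman update, and stop when the estimate is stable. This is an illustrative sketch of the reweighting scheme, not the authors' implementation; names and the iteration cap are ours.

```python
import numpy as np

def fixed_point_update(A_pred, Gamma, chi, L_a, L_g, sigma, eps=1e-6, max_iter=50):
    """Fixed-point MCC update: Gaussian-kernel weights of the whitened
    residuals reweight the prior/measurement covariances, then the Kalman
    update is repeated until the state estimate stops changing."""
    n, m = len(A_pred), len(Gamma)
    Lam_inv = np.linalg.inv(np.block([[L_a, np.zeros((n, m))],
                                      [np.zeros((m, n)), L_g]]))
    D = Lam_inv @ np.concatenate([A_pred, Gamma])
    S = Lam_inv @ np.vstack([np.eye(n), chi])
    A_t = A_pred.copy()
    for _ in range(max_iter):
        e = D - S @ A_t                         # whitened residuals e_tilde
        w = np.exp(-e**2 / (2.0 * sigma**2))    # kernel weights G_sigma(e_i)
        C_a, C_g = np.diag(w[:n]), np.diag(w[n:])
        P_w = L_a @ np.linalg.inv(C_a) @ L_a.T  # reweighted prediction covariance
        R_w = L_g @ np.linalg.inv(C_g) @ L_g.T  # reweighted measurement covariance
        K = P_w @ chi.T @ np.linalg.inv(chi @ P_w @ chi.T + R_w)
        A_new = A_pred + K @ (Gamma - chi @ A_pred)
        if np.linalg.norm(A_new - A_t) <= eps * np.linalg.norm(A_t):
            return A_new
        A_t = A_new
    return A_t
```

With a very large bandwidth, all weights approach 1 and the update reduces to the standard Kalman update, which is a useful sanity check.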

6. Simulated Cases

This section verifies the validity of the proposed method by providing two cases: one in which the state equation is a nonlinear equation and the measurement equation is a linear equation, and one in which the state and measurement equations are both nonlinear.

6.1. Case 1

Consider a nonlinear system in which the state equation is a nonlinear model and the measurement equation is a linear model:
$$\begin{cases} x_1(k+1) = \Big(0.8 - 0.5\,e^{-x_1^2(k)}\big(1 + e^{-0.015k}\big)\Big)\,x_1(k) - \Big(0.3 + 0.9\,e^{-x_1^2(k)}\big(1 + 0.5\sin(\tfrac{\pi}{2}k)\big)\Big)\,x_2(k) + w_1(k) \\ x_2(k+1) = 1.2\,\big(1 - e^{-0.8k}\big)\,x_2(k) + 0.11\,x_1(k) + \cos\!\big(1 + x_2^2(k)\big) + e^{-0.8k}\,x_1^4(k) + w_2(k) \end{cases} \qquad \begin{cases} y_1(k+1) = x_1(k+1) + v_1(k+1) \\ y_2(k+1) = x_2(k+1) + v_2(k+1) \end{cases}$$
where the initial value $x(0)$ is a random value in $[0, 1]$, the initial estimation error covariance is $P(0|0) = 0.1 \times \mathrm{diag}(1, 1)$, and the process noise and measurement noise have the following characteristics:
$$w_1(k) \sim 0.9\,N(0, 0.01) + 0.1\,N(0, 0.2), \quad w_2(k) \sim 0.9\,N(0, 0.02) + 0.1\,N(0, 0.2)$$
$$v_1(k) \sim 0.9\,N(0, 0.01) + 0.1\,N(0, 2), \quad v_2(k) \sim 0.9\,N(0, 0.02) + 0.1\,N(0, 2)$$
Figure 2 shows a diagram of the MTN identification system, while Figure 3 shows the estimated values of the state variables $x_1$ and $x_2$ under the three filtering methods. From [21], we know that the influence of $\varepsilon$ is not significant compared with that of the kernel bandwidth $\sigma$; the parameter is therefore set at $\varepsilon = 10^{-6}$. Table 1 and Table 2 show the mean squared error and the mean relative error, respectively, of the estimated values under the three algorithms, computed as averages over 100 independent Monte Carlo runs, with each run containing 50 time steps. When $\sigma = 5$, all three algorithms obtain good filtering results. Figure 4 and Figure 5 show the probability densities of the estimation errors for the states $x_1$ and $x_2$, respectively, with the parameters $\varepsilon = 10^{-6}$ and $\sigma = 5$. All of the results confirm that the proposed H-MCKF (higher order extended Kalman filter based on maximum correlation entropy and a Taylor network system) significantly outperforms the MCEKF (maximum correntropy extended Kalman filter) when the system is disturbed by non-Gaussian process and measurement noise, and that the H-MCKF_R (H-MCKF with the remainder of the state equation) further improves the filtering performance of the H-MCKF.
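The heavy-tailed noise used in both simulation cases is a two-component Gaussian mixture. One way to draw such samples (the function name and sampling scheme are ours):

```python
import numpy as np

def mixture_noise(rng, n, p=0.9, var_main=0.01, var_outlier=0.2):
    """Draw n samples from p * N(0, var_main) + (1 - p) * N(0, var_outlier),
    e.g. the process noise w1(k) of case 1."""
    outlier = rng.random(n) >= p                 # ~10% of samples are outliers
    std = np.where(outlier, np.sqrt(var_outlier), np.sqrt(var_main))
    return rng.normal(0.0, 1.0, n) * std

rng = np.random.default_rng(0)
w1 = mixture_noise(rng, 10000)
```

The mixture variance is $0.9 \times 0.01 + 0.1 \times 0.2 = 0.029$, but the occasional large-variance draws give the distribution its impulsive, heavy-tailed character.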

6.2. Case 2

Consider a nonlinear system in which the state equation and the measurement equation are both nonlinear models:
$$\begin{cases} x_1(k+1) = \cos\!\Big(0.5\,x_1(k) + \dfrac{2.5\,x_2(k)}{1 + x_1^2(k)} + 8\cos(1.2k)\Big) + w_1(k) \\ x_2(k+1) = \sin\!\big(x_1^2(k)\big) + w_2(k) \end{cases} \qquad \begin{cases} y_1(k) = \cos\!\big(x_1(k) + \sin(x_1^3(k))\big) + v_1(k) \\ y_2(k) = \sin\!\big(x_2(k) - \sin(x_2^3(k))\big) + v_2(k) \end{cases}$$
where the initial value $x(0)$ is a random value in $[0, 1]$, the initial estimation error covariance is $P(0|0) = 0.1 \times \mathrm{diag}(1, 1)$, and the process noise and measurement noise have the following characteristics:
$$w_1(k) \sim 0.9\,N(0, 0.01) + 0.1\,N(0, 0.2), \quad w_2(k) \sim 0.9\,N(0, 0.02) + 0.1\,N(0, 0.2)$$
$$v_1(k) \sim 0.9\,N(0, 0.01) + 0.1\,N(0, 2), \quad v_2(k) \sim 0.9\,N(0, 0.02) + 0.1\,N(0, 2)$$
Figure 6 shows a diagram of the MTN identification system, while Figure 7 shows the estimated values of the state variables $x_1$ and $x_2$ under the three filtering methods. As in case 1, the parameter is set at $\varepsilon = 10^{-6}$. Table 3 and Table 4 show the mean squared error and the mean relative error, respectively, of the estimated values under the three algorithms, computed as averages over 100 independent Monte Carlo runs, with each run containing 50 time steps. When $\sigma = 5$, all three algorithms obtain good filtering results. Figure 8 and Figure 9 show the probability densities of the estimation errors for the states $x_1$ and $x_2$, respectively, with the parameters $\varepsilon = 10^{-6}$ and $\sigma = 5$. All of the results confirm that the proposed H-MCKF significantly outperforms the MCEKF when the system is disturbed by non-Gaussian process and measurement noise, and that the H-MCKF_R further improves the filtering performance of the H-MCKF when the state and measurement equations are both nonlinear.

7. Conclusions

This paper considered a wide range of filter design problems for the state estimation of multivariable dynamic systems, which consist of a strong nonlinear dynamic model and a strong nonlinear observation model. Firstly, we transformed those strong nonlinear models into a higher order polynomial series using a multidimensional Taylor network. Secondly, all higher order items in the polynomial series were defined as hidden variables. Those higher order series were then rewritten as their pseudolinear equivalents. Thirdly, dynamic relationships between all hidden variables and known variables were constructed using the multidimensional Taylor network. Combining the original model of pseudolinearization with the higher order hidden variable dynamic model, linear dynamic models fitted to a standard Kalman filter were presented. Finally, considering that a finite number of samples from modeling error can be obtained, we built the higher order extended Kalman filter based on maximum correlation entropy, and acquired better filter performance than offered by the existing MCEKF [22].
Outlook: Several challenges remain worthy of further research. Firstly, the proposed higher order extended Kalman filter based on maximum correlation entropy is an online iterative process that constantly updates the state estimate, but, as such, it loses one important property of the standard Kalman filter: the ability to operate in real time. Secondly, the linearized parameters of the original nonlinear model and of the hidden-variable dynamic model were identified from data in a local time period; they therefore need to be updated with data from new time periods in order to track the time-varying dynamics of the system. Thirdly, on the basis of defining all of the hidden variables, we established a linear form of the strongly nonlinear model in a state expanded with the original variables and all hidden variables, and obtained better estimation performance than a standard EKF; if the measurements can be expanded in the same manner as the state, we believe such a filter may offer better estimation performance than the one established in this paper.
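To illustrate the hidden-variable construction discussed above, here is a minimal sketch with invented coefficients: a scalar polynomial state equation becomes exactly linear in an augmented state once the monomial x² is treated as a hidden variable, and the hidden variable's own dynamics are identified from data (by ordinary least squares here, standing in for the paper's multidimensional Taylor network).

```python
import numpy as np

# Scalar polynomial dynamics: x_{k+1} = a*x_k + b*x_k**2 (coefficients invented).
a, b = 0.9, 0.05

def step_nonlinear(x):
    return a * x + b * x ** 2

# Define the hidden variable h_k = x_k**2, so the augmented state
# s_k = [x_k, h_k] evolves through a pseudolinear transition:
#   x_{k+1} = a*x_k + b*h_k      (exactly linear in s_k)
#   h_{k+1} ≈ c1*x_k + c2*h_k    (identified hidden-variable dynamics)
xs = [0.5]
for _ in range(200):
    xs.append(step_nonlinear(xs[-1]))
xs = np.array(xs)
S = np.stack([xs[:-1], xs[:-1] ** 2], axis=1)   # augmented states s_k
h_next = xs[1:] ** 2                            # hidden-variable targets h_{k+1}
c, *_ = np.linalg.lstsq(S, h_next, rcond=None)  # fit the hidden row

A = np.array([[a, b],                           # exact pseudolinear first row
              [c[0], c[1]]])                    # identified hidden-variable row
s = np.array([0.5, 0.25])                       # [x_0, x_0**2]
for _ in range(5):
    s = A @ s                                   # purely linear propagation
print(abs(s[0] - xs[5]))                        # small residual vs. true state
```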

Author Contributions

Conceptualization, Q.W., X.S. and C.W.; methodology, C.W.; software, Q.W.; writing—original draft preparation, Q.W.; writing—review and editing, Q.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under the following key projects: No. 61751304, extraction and deep learning of optimal decision rules in an uncertain small-sample environment; the NSFC–Zhejiang joint fund for the integration of informatization and industrialization, No. U1509203, life-cycle fault prediction and intelligent health management for large ship power system operation; and No. 61933013, intelligent diagnosis, prediction, and maintenance of abnormal working conditions of large petrochemical units.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kalman, R.E. A new approach to linear filtering and prediction problems. Trans. ASME Ser. D J. Basic Eng. 1960, 82, 35–45.
2. Wen, C.; Wang, Z.; Liu, Q.; Alsaadi, F.E. Recursive distributed filtering for a class of state-saturated systems with fading measurements and quantization effects. IEEE Trans. Syst. Man Cybern. Syst. 2016, 48, 930–941.
3. Wen, C.; Wang, Z.; Hu, J.; Liu, Q.; Alsaadi, F.E. Recursive filtering for state-saturated systems with randomly occurring nonlinearities and missing measurements. Int. J. Robust Nonlinear Control 2018, 28, 1715–1727.
4. Ge, Q.; Shao, T.; Duan, Z.; Wen, C. Performance analysis of the Kalman filter with mismatched noise covariances. IEEE Trans. Autom. Control 2016, 61, 4014–4019.
5. Wen, C.; Cheng, X.; Xu, D.; Wen, C. Filter design based on characteristic functions for one class of multi-dimensional nonlinear non-Gaussian systems. Automatica 2017, 82, 171–180.
6. Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188.
7. Feng, X.; Wen, C.; Park, J.H. Sequential fusion filtering for multi-rate multi-sensor time-varying systems—A Krein-space approach. IET Control Theory Appl. 2017, 11, 369–381.
8. Chen, B.; Liu, X.; Zhao, H.; Principe, J.C. Maximum correntropy Kalman filter. Automatica 2017, 76, 70–77.
9. Liu, X.; Qu, H.; Zhao, J.; Chen, B. Extended Kalman filter under maximum correntropy criterion. In Proceedings of the 2016 International Joint Conference on Neural Networks, Vancouver, BC, Canada, 24–29 July 2016; pp. 1733–1737.
10. Wang, G.; Li, N.; Zhang, Y. Maximum correntropy unscented Kalman and information filters for non-Gaussian measurement noise. J. Frankl. Inst. 2017, 354, 8659–8677.
11. Meinhold, R.J.; Singpurwalla, N.D. Robustification of Kalman filter models. J. Am. Stat. Assoc. 1989, 84, 479–486.
12. Wang, L.; Cheng, X.H.; Li, S.X. Gaussian sum high order unscented Kalman filtering algorithm. Chin. J. Electron. 2017, 45, 424–430.
13. Zhang, C.; Yan, H.S. Identification of nonlinear time-varying system with noise based on multi-dimensional Taylor network with optimal structure. J. Southeast Univ. 2017, 47, 1086–1093.
14. Wen, T.; Ge, Q.; Lyu, X.; Chen, L.; Constantinou, C.; Roberts, C.; Cai, B. A cost-effective wireless network migration planning method supporting high-security enabled railway data communication systems. J. Frankl. Inst. 2021, 358, 131–150.
15. Wen, T.; Wen, C.; Roberts, C.; Cai, B. Distributed filtering for a class of discrete-time systems over wireless sensor networks. J. Frankl. Inst. 2020, 357, 3038–3055.
16. Sun, X.; Wen, C.; Wen, T. A novel step-by-step high-order extended Kalman filter design for a class of complex systems with multiple basic multipliers. Chin. J. Electron. 2021, 30, 313–321.
17. Sun, X.; Wen, C.; Wen, T. High-order extended Kalman filter design for a class of complex dynamic systems with polynomial nonlinearities. Chin. J. Electron. 2021, 30, 508–515.
18. Feng, X.; You, B. Random attractors for the two-dimensional stochastic g-Navier-Stokes equations. Stochastics 2019, 92, 613–626.
19. Liu, W.; Chi, Y.; Zhang, G. Multiple resolvable group estimation based on the GLMB filter with graph structure. In Proceedings of the 2018 IEEE 8th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), Tianjin, China, 19–23 July 2018; pp. 960–964.
20. Wu, Z.; Shi, J.; Zhang, X.; Ma, W.; Chen, B. Kernel recursive maximum correntropy. Signal Process. 2015, 117, 11–16.
21. Anderson, B.; Moore, J. Optimal Filtering; Prentice-Hall: New York, NY, USA, 1979.
22. Julier, S.; Uhlmann, J.; Durrant-Whyte, H.F. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Trans. Autom. Control 2000, 45, 477–482.
Figure 2. Graph of the MTN identification system in case 1.
Figure 3. (a,b) Set parameters: σ = 5, ε = 10⁻⁶ in case 1; (c,d) set parameters: σ = 10, ε = 10⁻⁶ in case 1.
Figure 4. Probability densities of x 1 estimation errors with the three filters in case 1.
Figure 5. Probability densities of x 2 estimation errors with the three filters in case 1.
Figure 6. (a) Graph of the MTN identifying the state equation in case 2. (b) Graph of the MTN identifying the measurement equation in case 2.
Figure 7. (a,b) Set parameters: σ = 5, ε = 10⁻⁶ in case 2; (c,d) set parameters: σ = 10, ε = 10⁻⁶ in case 2.
Figure 8. Probability densities of x 1 estimation errors with the three filters in case 2.
Figure 9. Probability densities of x 2 estimation errors with the three filters in case 2.
Table 1. The mean squared error using the three methods in case 1.

σ        ε           MSE of x1                       MSE of x2
                     MCEKF    H-MCKF   H-MCKF_R      MCEKF    H-MCKF   H-MCKF_R
σ = 2    ε = 10⁻⁶    0.2073   0.1815   0.1694        0.1000   0.0774   0.0738
σ = 5    ε = 10⁻⁶    0.1974   0.1244   0.1225        0.1100   0.0964   0.0921
σ = 10   ε = 10⁻⁶    0.2282   0.1669   0.1636        0.1158   0.0925   0.0888
σ = 20   ε = 10⁻⁶    0.2244   0.1602   0.1572        0.1160   0.0916   0.0880
Table 2. The mean relative error using the three methods in case 1.

σ        ε           MRE of x1                       MRE of x2
                     MCEKF    H-MCKF   H-MCKF_R      MCEKF    H-MCKF   H-MCKF_R
σ = 2    ε = 10⁻⁶    0.3372   0.2405   0.2403        0.2354   0.2202   0.2084
σ = 5    ε = 10⁻⁶    0.3462   0.2953   0.2906        0.2679   0.2485   0.2448
σ = 10   ε = 10⁻⁶    0.3658   0.3052   0.2986        0.2745   0.2469   0.2426
σ = 20   ε = 10⁻⁶    0.3634   0.3009   0.2945        0.2753   0.2466   0.2419
Table 3. The mean squared error using the three methods in case 2.

σ        ε           MSE of x1                       MSE of x2
                     MCEKF    H-MCKF   H-MCKF_R      MCEKF    H-MCKF   H-MCKF_R
σ = 2    ε = 10⁻⁶    0.4017   0.1230   0.1219        0.2090   0.0907   0.0883
σ = 5    ε = 10⁻⁶    0.1148   0.1241   0.1233        0.2542   0.1200   0.1183
σ = 10   ε = 10⁻⁶    0.3221   0.1254   0.1248        0.2220   0.1207   0.1193
σ = 20   ε = 10⁻⁶    0.4040   0.1257   0.1251        0.2218   0.1208   0.1196
Table 4. The mean relative error using the three methods in case 2.

σ        ε           MRE of x1                       MRE of x2
                     MCEKF    H-MCKF   H-MCKF_R      MCEKF    H-MCKF   H-MCKF_R
σ = 2    ε = 10⁻⁶    0.5106   0.2337   0.2306        0.3742   0.2355   0.2316
σ = 5    ε = 10⁻⁶    0.2147   0.2551   0.2530        0.3070   0.2652   0.2645
σ = 10   ε = 10⁻⁶    0.4527   0.2570   0.2553        0.3824   0.2661   0.2659
σ = 20   ε = 10⁻⁶    0.4764   0.2575   0.2558        0.3789   0.2663   0.2662
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Wang, Q.; Sun, X.; Wen, C. Design Method for a Higher Order Extended Kalman Filter Based on Maximum Correlation Entropy and a Taylor Network System. Sensors 2021, 21, 5864. https://doi.org/10.3390/s21175864
