Article

Robust State Estimation Using the Maximum Correntropy Cubature Kalman Filter with Adaptive Cauchy-Kernel Size

Xiangzhou Ye, Siyu Lu, Jian Wang, Dongjie Wu and Yong Zhang
1 Key Laboratory of Infrared System Detection and Imaging Technology, Chinese Academy of Sciences, Shanghai 200083, China
2 Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
4 School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(1), 114; https://doi.org/10.3390/electronics13010114
Submission received: 16 November 2023 / Revised: 21 December 2023 / Accepted: 26 December 2023 / Published: 27 December 2023

Abstract

The maximum correntropy criterion (MCC), as an effective method for dealing with anomalous measurement noise, is widely applied in the design of filters. However, its performance largely depends on the proper setting of the kernel bandwidth, and currently, there is no efficient adaptive kernel adjustment mechanism. To deal with this issue, a new adaptive Cauchy-kernel maximum correntropy cubature Kalman filter (ACKMC-CKF) is proposed. This algorithm constructs adaptive factors for each dimension of the measurement system and establishes an entropy matrix with adaptive kernel sizes, enabling targeted handling of specific anomalies. Through simulation experiments in target tracking, the performance of the proposed algorithm was comprehensively validated. The results show that the ACKMC-CKF, through its flexible kernel adaptive mechanism, can effectively handle various types of anomalies. Not only does the algorithm demonstrate excellent reliability, but it also has low sensitivity to parameter settings, making it more broadly applicable in a variety of practical application scenarios.

1. Introduction

State estimation is the process of reconstructing the internal state of a system by using algorithms to address the uncertainty and noise in limited observational data. While the acquired data predominantly reflect the external characteristics of the system, its dynamic behavior is typically represented by internal state variables, which are often challenging to measure directly or carry significant measurement errors. Hence, state estimation plays a pivotal role in unveiling the internal structure and dynamics of the system. It finds extensive applications in areas such as attitude determination, power system monitoring, vehicle dynamics, and target tracking [1,2,3,4,5,6].
The Kalman filter (KF) employs the minimum mean square error (MMSE) as its optimization criterion and achieves optimal state estimation for linear systems via the Bayesian rule. Building upon the foundation of classical Kalman filtering theory, several Gaussian approximation filters have been proposed to handle state estimation in nonlinear systems [7,8], such as the unscented Kalman filter (UKF) and the cubature Kalman filter (CKF). These filters also adhere to the MMSE principle and, when system noise is Gaussian, can achieve satisfactory state estimation precision. However, in real-world scenarios with intricate noise environments, the MMSE criterion is sensitive to significant outliers, which can result in a notable performance degradation for conventional filters [9].
For certain non-Gaussian nonlinear processes, refs. [10,11] introduce multiple-model approaches, which represent non-Gaussian system behavior by combining parallel Gaussian sub-problems operating in different modes. Refs. [12,13] explore the Conditional Gaussian Observation Markov Switching Model (CGOMSM), which combines Markov switching models with conditionally Gaussian observations to handle nonlinear and non-Gaussian characteristics that are difficult to capture with traditional linear Gaussian models. These methods, however, impose higher modeling and computational requirements. In filter design, adopting non-MMSE criteria has also proven to be an effective way to enhance robustness against non-Gaussian noise disturbances. Notable examples include the H-infinity filter [14] and M-estimation filters [15,16]. Unlike the H-infinity filter, which primarily bounds the energy gain from disturbances to estimation errors, the Huber-based M-estimator addresses discrepancies between the Gaussian assumption and the actual error density. However, the tuning parameter plays a crucial role in shaping the Huber cost function: Huber M-estimators with fixed parameters constraining the score function can be overly conservative, since estimation precision is traded for robustness even under ideal conditions [17].
Over recent years, the optimization criteria in Information Theory Learning (ITL) have garnered increasing attention, leveraging information entropy estimated directly from data as the optimization cost. As a local similarity measure within ITL, the correntropy, endowed with higher-order moments of the probability density function, exhibits superior characteristics when addressing non-Gaussian noise assumptions [18,19,20]. These robust filters, designed based on the MCC, typically employ the Gaussian kernel function to define distances between distinct vectors. Nonetheless, it might not always be the optimal kernel choice [21]. On one hand, the kernel bandwidth substantially impacts the performance of the MCC. An improperly sized kernel under MCC may fail to enhance robustness against outliers and might even lead to filter divergence [22]. On the other hand, when the system is perturbed by multi-dimensional non-Gaussian noise, the aforementioned MCC algorithms grapple with numerical challenges due to the emergence of singular matrices [23,24].
The log-similarity measure serves as another pivotal learning criterion within information theoretic learning [25]. In comparison to the Gaussian loss based on local similarity measures, the Cauchy loss rooted in log similarity offers enhanced robustness to non-Gaussian noise and has been adeptly integrated into kernel adaptive filters [2,26]. Filters based on the Cauchy kernel, when confronted with multi-dimensional non-Gaussian noise, present a more stable structure, effectively mitigating the filtering collapse issues caused by singular matrices in MCC algorithms [27].
Although Cauchy kernel-based filters have shown good performance in terms of parameter sensitivity, choosing the right Cauchy kernel size remains crucial for ensuring their high efficiency. In traditional methods, the setting of kernel size is often based on experience or fixed rules, which may not be suitable for all situations, especially in dynamically changing noise environments. To address this issue, this study proposes an ACKMC-CKF method. This method calculates an adaptive factor for each dimension by analyzing the noise characteristics in the measurement data. These factors directly affect the size of the kernel, allowing the filter to automatically adjust its processing strategy based on the current data characteristics. This adaptive adjustment not only improves the accuracy of the filter in dealing with complex noise environments but also enhances its robustness to unexpected noise and dataset changes.
The remainder of this paper is structured as follows: Section 2 provides foundational knowledge on correntropy and derives the Cauchy kernel-based maximum correntropy cubature Kalman filter (CKMC-CKF). In Section 3, we introduce an adaptive approach for kernel size determination and design the ACKMC-CKF. Section 4 elucidates the performance of the ACKMC-CKF through simulation experiments. Finally, conclusions are drawn in Section 5.

2. Problem Formulation

2.1. Maximum Correntropy Criterion and Cauchy Kernel Function

Correntropy measures the nonlinear similarity between two random variables. Given two random variables X and Y with joint distribution function F_{X,Y}(x, y), their correntropy is defined as follows [28]:
V(X, Y) = \mathrm{E}[\kappa(X, Y)] = \iint \kappa(x, y)\,\mathrm{d}F_{X,Y}(x, y)   (1)
where \mathrm{E}[\cdot] is the expectation operator and \kappa(\cdot,\cdot) is the kernel function of the correntropy. In practice, only a limited number of samples is available, which prevents the precise calculation of F_{X,Y}(x, y); therefore, V(X, Y) is commonly approximated by the sample average
\hat{V}(X, Y) = \frac{1}{N}\sum_{k=1}^{N}\kappa(x_k, y_k)   (2)
where \{(x_k, y_k)\}_{k=1}^{N} are N samples drawn from F_{X,Y}(x, y).
The Gaussian kernel function is currently the most widely applied in ITL. This paper utilizes the Cauchy kernel defined by Equation (3) as the kernel function for the correntropy. Compared to the Gaussian kernel, it has the advantages of being less sensitive to kernel bandwidth and providing greater stability in the constructed filters [26]. The expression for the Cauchy kernel is as follows:
C_\sigma(x - y) = \dfrac{1}{1 + \|x - y\|^2/\sigma}   (3)
where \sigma > 0 is the Cauchy kernel bandwidth.
Based on Equations (2) and (3), the maximum correntropy cost function utilizing the Cauchy kernel is constructed.
J_{CKMC} = \sum_{i=1}^{N} C_\sigma(x_i - y_i)   (4)
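As an illustration of Equations (2)-(4), the following NumPy sketch evaluates the sample Cauchy-kernel correntropy for a toy pair of sequences; the function names and sample values are ours and not part of the original algorithm.
```python
import numpy as np

def cauchy_kernel(e, sigma):
    """Cauchy kernel C_sigma(e) = 1 / (1 + ||e||^2 / sigma), Eq. (3)."""
    return 1.0 / (1.0 + np.dot(e, e) / sigma)

def sample_correntropy(x, y, sigma):
    """Sample estimate of the Cauchy-kernel correntropy, Eq. (2)."""
    return np.mean([cauchy_kernel(np.atleast_1d(xi - yi), sigma)
                    for xi, yi in zip(x, y)])

# Example: samples that mostly agree, plus one large outlier in y
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 40.0])
print(sample_correntropy(x, y, sigma=2.0))
```
Because the Cauchy kernel decays only polynomially, the single outlier contributes almost nothing to the average, so the correntropy mainly reflects the samples that agree.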

2.2. Cauchy Kernel-Based Maximum Correntropy Cubature Kalman Filter

Consider a nonlinear system described by the following state and measurement equations:
X_k = f(X_{k-1}) + v_{k-1}, \qquad Z_k = h(X_k) + w_k   (5)
where X_k \in \mathbb{R}^n is the system state vector at time k, Z_k \in \mathbb{R}^m is the measurement vector at time k, and f(\cdot) and h(\cdot) are the nonlinear state transition and measurement functions, respectively. v_{k-1} and w_k denote the zero-mean Gaussian white noise of the dynamics and of the measurements, with covariances Q_{k-1} and R_k, respectively.
The traditional CKF algorithm experiences a significant decline in performance when confronted with non-Gaussian noise. This is primarily because the CKF’s MMSE criterion assumes that all observations are of equal importance. Filters based on the MCC take into account observations with varying levels of importance, allowing for an adaptive estimation of the system’s state. The CKMC-CKF method consists of two steps: the time update and the measurement update.

2.2.1. Time Update

First, based on the spherical-radial rule, cubature points are generated from the previous estimate \hat{x}_{k-1} and its mean square error matrix P_{k-1|k-1}:
P_{k-1|k-1} = S_{k-1|k-1} S_{k-1|k-1}^{\mathrm{T}}   (6)
X_{i,\mathrm{cub}} = S_{k-1|k-1}\,\xi_i + \hat{x}_{k-1}, \quad i = 1, 2, \dots, 2n   (7)
where S_{k-1|k-1} is obtained by the Cholesky decomposition of P_{k-1|k-1}, and \xi_i is defined as
\xi_i = \begin{cases} \sqrt{n}\,[1]_i, & i = 1, 2, \dots, n \\ -\sqrt{n}\,[1]_{i-n}, & i = n+1, \dots, 2n \end{cases}   (8)
where [1]_i denotes the i-th column vector of the n \times n identity matrix I.
The cubature points X_{i,\mathrm{cub}}, propagated through the nonlinear function f(\cdot), yield
X^{*}_{i,k|k-1} = f(X_{i,\mathrm{cub}})   (9)
By using the cubature points X^{*}_{i,k|k-1}, the prior state prediction \hat{x}_{k|k-1} and the prediction covariance matrix P_{k|k-1} are calculated with the cubature weight \omega = 1/(2n):
\hat{x}_{k|k-1} = \omega\sum_{i=1}^{2n} X^{*}_{i,k|k-1}   (10)
P_{k|k-1} = \omega\sum_{i=1}^{2n} X^{*}_{i,k|k-1}\,(X^{*}_{i,k|k-1})^{\mathrm{T}} - \hat{x}_{k|k-1}\hat{x}_{k|k-1}^{\mathrm{T}} + Q_{k-1}   (11)
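The time update of Equations (6)-(11) can be sketched in NumPy as follows; this is a minimal illustration under the standard third-degree spherical-radial rule (equal weights ω = 1/(2n)), not the authors' implementation.
```python
import numpy as np

def cubature_points(x, P):
    """Generate the 2n cubature points of Eqs. (6)-(8) from mean x and covariance P."""
    n = x.size
    S = np.linalg.cholesky(P)                              # P = S S^T, Eq. (6)
    xi = np.sqrt(n) * np.hstack((np.eye(n), -np.eye(n)))   # columns sqrt(n)[1]_i and -sqrt(n)[1]_i
    return x[:, None] + S @ xi                             # Eq. (7), shape (n, 2n)

def time_update(x_prev, P_prev, f, Q):
    """CKF time update, Eqs. (9)-(11), with equal cubature weights w = 1/(2n)."""
    n = x_prev.size
    w = 1.0 / (2 * n)
    X = cubature_points(x_prev, P_prev)
    X_prop = np.column_stack([f(X[:, i]) for i in range(2 * n)])      # Eq. (9)
    x_pred = w * X_prop.sum(axis=1)                                   # Eq. (10)
    P_pred = w * (X_prop @ X_prop.T) - np.outer(x_pred, x_pred) + Q   # Eq. (11)
    return x_pred, P_pred
```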

2.2.2. Measurement Update

Likewise, the cubature points are calculated again according to x ^ k | k 1 and P k | k 1 obtained in the previous step.
P_{k|k-1} = S_{k|k-1} S_{k|k-1}^{\mathrm{T}}   (12)
X^{*}_{i,\mathrm{cub}} = S_{k|k-1}\,\xi_i + \hat{x}_{k|k-1}, \quad i = 1, 2, \dots, 2n   (13)
The cubature points X^{*}_{i,\mathrm{cub}}, propagated through the measurement function h(\cdot), yield
Z^{*}_{i,k|k-1} = h(X^{*}_{i,\mathrm{cub}})   (14)
By using the cubature points Z i , k | k 1 * , the prior measurement z ^ k | k 1 is calculated.
\hat{z}_{k|k-1} = \omega\sum_{i=1}^{2n} Z^{*}_{i,k|k-1}   (15)
The innovation covariance matrix P z z and the cross-correlation covariance matrix P x z are as follows:
P_{zz} = \omega\sum_{i=1}^{2n} Z^{*}_{i,k|k-1}\,(Z^{*}_{i,k|k-1})^{\mathrm{T}} - \hat{z}_{k|k-1}\hat{z}_{k|k-1}^{\mathrm{T}} + R_k   (16)
P_{xz} = \omega\sum_{i=1}^{2n} X^{*}_{i,k|k-1}\,(Z^{*}_{i,k|k-1})^{\mathrm{T}} - \hat{x}_{k|k-1}\hat{z}_{k|k-1}^{\mathrm{T}}   (17)
For the convenience of matrix solving, we obtain the pseudo-measurement matrix through statistical linearization [29]:
H = P_{xz}^{\mathrm{T}}\,P_{k|k-1}^{-1}   (18)
An approximation of the nonlinear observation equation can be obtained through statistical linearization:
z_k = \hat{z}_{k|k-1} + H\,(x_k - \hat{x}_{k|k-1}) + e_k   (19)
To maximize the posterior probability of the state, consider using the weighted least squares method to handle process noise and the MCC method to handle measurement noise since the measurement noise is non-Gaussian. The defined cost function can be expressed as:
J_{CKMC}(x_k) = -\|x_k - \hat{x}_{k|k-1}\|^2_{P_{k|k-1}^{-1}} + C_\sigma\big(\|e_k\|^2_{R_k^{-1}}\big)   (20)
The notation \|x\|^2_A = x^{\mathrm{T}}Ax denotes the A-weighted squared Mahalanobis distance of a vector. The Cauchy kernel evaluated at a squared Mahalanobis distance is accordingly expressed as
C_\sigma(\|x\|^2_A) = \dfrac{1}{1 + \|x\|^2_A/\sigma}   (21)
To obtain the optimal state estimation, with the maximization of the objective function as the optimization criterion, the optimal solution can be expressed as:
\hat{x}_{k|k} = \arg\max_{x_k} J_{CKMC}(x_k)   (22)
The solution of this extremum problem is obtained by setting the derivative of the objective to zero, which yields the implicit stationarity condition:
P_{k|k-1}^{-1}(x_k - \hat{x}_{k|k-1}) - m_k H_k^{\mathrm{T}} R_k^{-1} e_k = 0   (23)
where
m_k = C_\sigma\big(\|e_k\|^2_{R_k^{-1}}\big)   (24)
Equation (23) is a fixed-point problem in x_k, since the weight m_k itself depends on x_k; it can therefore be solved by fixed-point iteration [30]. Repeated iterations only slightly improve the accuracy of the estimate, so, balancing computational efficiency against precision, this study adopts a single fixed-point iteration. Equation (23) then yields the following result:
\hat{x}_{k|k} = \hat{x}_{k|k-1} + \tilde{K}_k\,(z_k - \hat{z}_{k|k-1})   (25)
The Kalman gain K ~ k , based on the MCC, is defined as:
\tilde{K}_k = \big(P_{k|k-1}^{-1} + m_k H_k^{\mathrm{T}} R_k^{-1} H_k\big)^{-1} m_k H_k^{\mathrm{T}} R_k^{-1}   (26)
The corresponding estimation error covariance is denoted as
P_{k|k} = (I - \tilde{K}_k H_k)\,P_{k|k-1}\,(I - \tilde{K}_k H_k)^{\mathrm{T}} + \tilde{K}_k R_k \tilde{K}_k^{\mathrm{T}}   (27)
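For clarity, a minimal NumPy sketch of this measurement update with a single fixed-point iteration (Equations (18)-(27)) is given below; it assumes the predicted measurement and cross-covariance of Equations (15) and (17) have already been obtained from the cubature rule, and the function name is illustrative.
```python
import numpy as np

def ckmc_update(x_pred, P_pred, z, z_pred, Pxz, R, sigma):
    """CKMC-CKF measurement update with one fixed-point iteration, Eqs. (18)-(27).
    z_pred and Pxz are the cubature-propagated quantities of Eqs. (15) and (17)."""
    H = np.linalg.solve(P_pred.T, Pxz).T                    # pseudo-measurement matrix, Eq. (18)
    e = z - z_pred                                          # residual of the first (only) iteration
    m_k = 1.0 / (1.0 + e @ np.linalg.solve(R, e) / sigma)   # Cauchy weight m_k, Eq. (24)
    R_inv = np.linalg.inv(R)
    K = np.linalg.solve(np.linalg.inv(P_pred) + m_k * H.T @ R_inv @ H,
                        m_k * H.T @ R_inv)                  # gain of Eq. (26)
    x_upd = x_pred + K @ e                                  # Eq. (25)
    A = np.eye(x_pred.size) - K @ H
    P_upd = A @ P_pred @ A.T + K @ R @ K.T                  # Joseph-type form, Eq. (27)
    return x_upd, P_upd
```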
When abnormal measurements occur, the innovation term \tilde{z}_k = Z_k - \hat{z}_{k|k-1} deviates significantly from its expected value. The Gaussian kernel function decays exponentially to zero as \tilde{z}_k grows, which greatly increases the possibility of matrix singularity. In contrast, the Cauchy kernel function approaches zero much more slowly, effectively reducing the probability of singular values in the matrix [26].
Just like the Gaussian kernel, the choice of bandwidth for the Cauchy kernel significantly affects the filter’s ability to resist non-Gaussian noise. Narrowing the bandwidth can reduce the correlation coefficient, enhancing the system’s robustness to abnormal measurements. However, a smaller bandwidth also decreases the Kalman gain K ~ k , which may reduce the filter’s estimation accuracy in the presence of Gaussian noise. In existing methods, the bandwidth is often preset and fixed, which greatly limits the filter’s adaptability to different types of noise [26,31,32]. Therefore, it is urgent to develop an adaptive adjustment mechanism for the kernel bandwidth.

3. Adaptive Cauchy-Kernel Maximum Correntropy Cubature Kalman Filter

3.1. Adaptive Kernel Bandwidth Adjustment Strategy

The selection of the Cauchy kernel bandwidth should follow this principle: A narrower bandwidth is advisable for systems with pronounced non-Gaussian noise characteristics, whereas a wider bandwidth is suitable for systems where noise closely approximates a Gaussian distribution [26]. As for the adaptive strategy for adjusting the Cauchy kernel bandwidth, our aim is to preserve a wider bandwidth under most conditions, only reducing the bandwidth upon encountering non-Gaussian noise or outlier measurement noise. Consequently, the crux lies in how to effectively discern the comparative relationship between state estimation accuracy under the MCC and the MMSE criterion when atypical noise is present.
In the design of the filter, R_k represents the covariance matrix of the measurement noise. When anomalies occur in the measurement noise, R_k no longer accurately reflects its true statistical characteristics; in such cases, the actual measurement noise covariance matrix is \tilde{R}_k = \mathrm{E}[w_k w_k^{\mathrm{T}}] \neq R_k.
Theorem 1. 
When the following conditions are met, the mean square error of the filter based on the MCC criterion will be less than or equal to that of the filter based on the MMSE criterion:
\tilde{R}_k \succeq P_{zz} - R_k + 2R_k   (28)
\tilde{R}_k \preceq C_k^{-1}(P_{zz} - R_k)C_k + 2R_k C_k^{-1}   (29)
where
C_k = \mathrm{diag}\big(C_\sigma(\tilde{z}_{1,k}^2 R_{1,k}^{-1}), \dots, C_\sigma(\tilde{z}_{m,k}^2 R_{m,k}^{-1})\big)   (30)
represents the correlation coefficient matrix based on the Cauchy kernel, \tilde{z}_{i,k} is the i-th component of the innovation, and R_{i,k} is the i-th diagonal element of R_k.
The proof of Theorem 1 is omitted here; see [33] for the detailed steps. Under the conditions of Theorem 1, extracting the i-th diagonal element of Equation (29) gives:
\tilde{R}_{i,k} \leq P_{ii,zz} - R_{i,k} + 2R_{i,k}\,\big[C_\sigma(\tilde{z}_{i,k}^2 R_{i,k}^{-1})\big]^{-1}   (31)
where P_{ii,zz} and \tilde{R}_{i,k} denote the i-th diagonal elements of P_{zz} and \tilde{R}_k, respectively. Substituting Equation (21) into Equation (31) and rearranging yields:
\tilde{R}_{i,k} \leq P_{ii,zz} - R_{i,k} + 2R_{i,k}\big(1 + \tilde{z}_{i,k}^2 R_{i,k}^{-1}/\sigma_k\big)   (32)
Since \varphi = (\tilde{R}_{i,k} - P_{ii,zz} + R_{i,k})/(2R_{i,k}) \geq 1 according to Equation (28), Equation (32) can be rewritten as follows:
\sigma_k \leq \tilde{z}_{i,k}^2 R_{i,k}^{-1} / (\varphi - 1)   (33)
Therefore, to ensure that the performance of CKMC-CKF surpasses that of the traditional CKF, the setting of the kernel bandwidth must be constrained by an upper limit. Once the kernel bandwidth exceeds this threshold, the CKMC-CKF will gradually converge to the CKF, and its performance advantage over the CKF will no longer exist when encountering anomalous noise.
Based on the previous analysis, the size of the kernel bandwidth should be dynamically optimized to conform to the established upper limit constraints. To effectively deal with specific outliers in the measurement system, it is worth considering the independent adjustment of the kernel bandwidth for each dimension. The following is the defined adaptive parameter:
\sigma_{i,k} = \mu_{i,k}\,\sigma_{\max}, \quad i = 1, 2, \dots, m   (34)
where \mu_{i,k} is the adaptive factor of the i-th measurement component at time k, and \sigma_{\max} is the predetermined maximum bandwidth parameter of the kernel function.
To ensure that the adaptive factor \mu_{i,k} is properly adjusted for various types of noise, the first step is to analyze the relationship between the innovation term and its covariance matrix P_{zz} in order to detect potential measurement anomalies. For this purpose, the parameter \delta_{i,k} is defined as follows:
\delta_{i,k} = P_{ii,zz} / \tilde{z}_{i,k}^2   (35)
where P_{ii,zz} is the i-th diagonal element of the innovation covariance matrix P_{zz}, and \tilde{z}_{i,k} is the i-th component of the innovation sequence \tilde{z}_k, which under nominal conditions follows a zero-mean Gaussian distribution with variance P_{ii,zz}.
Further, the adaptive factor μ i , k can be defined as:
\mu_{i,k} = 1 - \exp(-\delta_{i,k})   (36)
The adaptive factor μ i , k , as determined by the formula, strictly falls within the range of 0 to 1. Under normal measurement conditions following a Gaussian distribution, the factor μ i , k tends towards 1, which maintains a relatively wide kernel bandwidth. In the case of detected anomalous measurements, that is, when the value of the innovation term z ~ i , k significantly deviates, the factor μ i , k is accordingly decreased. This rapid decrease in the correlation coefficient effectively mitigates the disturbance caused by the outliers. Such an adjustment mechanism ensures the robustness of the estimation process, preventing the excessive penetration of anomalous data into the final results.
The effective kernel bandwidth does not necessarily increase as \sigma_{\max} is raised. If \sigma_{\max} is set to a higher value and an outlier is not sufficiently suppressed, the innovation term \tilde{z}_{i,k} grows, which continually decreases the factor \mu_{i,k} and thereby restores the resistance to outliers. A larger \sigma_{\max} can therefore be adopted to improve estimation accuracy in the Gaussian case without impairing the robustness of the filter.
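The behaviour of the adaptive factor can be illustrated with a few lines of NumPy; the numerical values below are arbitrary examples chosen only to contrast a nominal innovation with a large outlier.
```python
import numpy as np

# Illustration of Eqs. (34)-(36): P_ii_zz and sigma_max are assumed example values.
P_ii_zz = 4.0
sigma_max = 100.0
for z_tilde in (1.5, 20.0):                 # nominal innovation vs. large outlier
    delta = P_ii_zz / z_tilde ** 2          # Eq. (35)
    mu = 1.0 - np.exp(-delta)               # Eq. (36), strictly in (0, 1)
    print(f"z~ = {z_tilde:5.1f}  mu = {mu:.3f}  sigma = {mu * sigma_max:.1f}")  # Eq. (34)
```
For the nominal innovation the factor stays close to 1 and the bandwidth remains near its upper limit, while the outlier collapses the bandwidth by roughly two orders of magnitude.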

3.2. Adaptive Cauchy-Kernel Maximum Correntropy Cubature Kalman Filter

Through the strategy presented in Section 3.1, we have redefined the correlation coefficient matrix for the adaptive multi-kernel method as follows:
\tilde{C}_k = \mathrm{diag}\big(C_{\sigma_{1,k}}(\tilde{z}_{1,k}^2 R_{1,k}^{-1}), \dots, C_{\sigma_{m,k}}(\tilde{z}_{m,k}^2 R_{m,k}^{-1})\big)   (37)
For the Kalman gain defined by Equation (26), we can derive an equivalent expression with lower computational complexity:
\tilde{K}_k = P_{k|k-1} H_k^{\mathrm{T}} \big(R_k (m_k I_m)^{-1} + H_k P_{k|k-1} H_k^{\mathrm{T}}\big)^{-1}   (38)
In the conventional CKMC-CKF, m k serves as a global scaling factor for R k and does not allow for adjustment of individual dimensions. Given that R k is a diagonal matrix, we can replace m k I m with C ~ k to achieve independent control over each diagonal element of R k . Similarly, we can continue to use the cubature rule to calculate the Kalman gain, thus avoiding the need for statistical linearization.
\tilde{K}_k = \tilde{P}_{xz}\,\tilde{P}_{zz}^{-1}   (39)
where
\tilde{P}_{xz} = P_{xz}\,\tilde{C}_k   (40)
\tilde{P}_{zz} = (P_{zz} - R_k)\,\tilde{C}_k + R_k   (41)
The calculation of the posterior state estimate and its error covariance matrix is as follows:
\hat{x}_k = \hat{x}_{k|k-1} + \tilde{K}_k\,(z_k - \hat{z}_{k|k-1})   (42)
P_{k|k} = P_{k|k-1} - \tilde{K}_k\,\tilde{P}_{zz}\,\tilde{K}_k^{\mathrm{T}}   (43)
In summary, the ACKMC-CKF algorithm proposed in this study adopts an adaptive multi-kernel strategy, effectively countering the impact of anomalous noise through the adjustment of the correlation coefficient matrix \tilde{C}_k. At the same time, the algorithm avoids the statistical linearization step, thereby reducing the potential errors it may introduce. The complete procedure of the algorithm is shown in Figure 1.
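Putting the pieces together, the following NumPy sketch outlines one ACKMC-CKF measurement update as we read Equations (34)-(43); variable names are ours, and a small constant is added to avoid division by zero for near-zero innovations.
```python
import numpy as np

def cubature_points(x, P):
    """Cubature points from mean x and covariance P, as in Eqs. (12)-(13)."""
    n = x.size
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack((np.eye(n), -np.eye(n)))
    return x[:, None] + S @ xi

def ackmc_measurement_update(x_pred, P_pred, z, h, R, sigma_max):
    """ACKMC-CKF measurement update with the adaptive matrix C~_k, Eqs. (34)-(43)."""
    n = x_pred.size
    w = 1.0 / (2 * n)
    X = cubature_points(x_pred, P_pred)
    Z = np.column_stack([h(X[:, i]) for i in range(2 * n)])
    z_pred = w * Z.sum(axis=1)
    Pzz = w * (Z @ Z.T) - np.outer(z_pred, z_pred) + R          # Eq. (16)
    Pxz = w * (X @ Z.T) - np.outer(x_pred, z_pred)              # Eq. (17)

    z_tilde = z - z_pred                                        # innovation
    delta = np.diag(Pzz) / (z_tilde ** 2 + 1e-12)               # Eq. (35)
    mu = 1.0 - np.exp(-delta)                                   # Eq. (36)
    sigma = mu * sigma_max                                      # Eq. (34)
    C = np.diag(1.0 / (1.0 + (z_tilde ** 2 / np.diag(R)) / sigma))  # Eq. (37)

    Pxz_t = Pxz @ C                                             # Eq. (40)
    Pzz_t = (Pzz - R) @ C + R                                   # Eq. (41)
    K = Pxz_t @ np.linalg.inv(Pzz_t)                            # Eq. (39)
    x_upd = x_pred + K @ z_tilde                                # Eq. (42)
    P_upd = P_pred - K @ Pzz_t @ K.T                            # Eq. (43)
    return x_upd, P_upd
```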

4. Illustrative Examples

This section validates the performance of the proposed algorithm through a series of target-tracking simulation experiments. Specifically, the proposed algorithm is compared against the traditional CKF, MC-CKF, and CKMC-CKF algorithms. Three experiments are designed to comprehensively evaluate the performance of these algorithms in their respective scenarios. The first subsection details the experimental setup and evaluation methods, laying the foundation for the subsequent performance analysis.

4.1. Simulation Scenarios and Performance Metrics

This experiment addresses a common problem in the field of target tracking: tracking the trajectory of an aircraft executing maneuvers at a nearly constant rate of turn. In this system configuration, the position and velocity of the aircraft are defined as the state variables of the system. Meanwhile, in an environment filled with clutter, the distance and azimuth information captured by radar are used as measurement data [27,34]. The specific expressions for the state and measurement equations of this system are as follows:
x_k = \begin{bmatrix} 1 & \frac{\sin\omega T}{\omega} & 0 & -\frac{1-\cos\omega T}{\omega} \\ 0 & \cos\omega T & 0 & -\sin\omega T \\ 0 & \frac{1-\cos\omega T}{\omega} & 1 & \frac{\sin\omega T}{\omega} \\ 0 & \sin\omega T & 0 & \cos\omega T \end{bmatrix} x_{k-1} + v_k   (44)
Z_k = \begin{bmatrix} r_k \\ \theta_k \end{bmatrix} = \begin{bmatrix} \sqrt{x^2 + y^2} \\ \arctan(y/x) \end{bmatrix} + w_k   (45)
where x_k = [x, v_x, y, v_y]^{\mathrm{T}} is the state vector, x and y represent the target's position in the X and Y directions, v_x and v_y denote the target's velocity in the X and Y directions, and \omega is the target's turning rate. The other relevant parameters for the simulation experiment are listed in Table 1.
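For reference, the state transition of Equation (44) and the radar measurement of Equation (45) can be written compactly as follows; this is a sketch using the parameters of Table 1, and the function names are illustrative.
```python
import numpy as np

def f_ct(x, omega, T):
    """Nearly-constant-turn dynamics of Eq. (44) for the state [x, vx, y, vy]."""
    s, c = np.sin(omega * T), np.cos(omega * T)
    F = np.array([[1, s / omega,       0, -(1 - c) / omega],
                  [0, c,               0, -s],
                  [0, (1 - c) / omega, 1,  s / omega],
                  [0, s,               0,  c]])
    return F @ x

def h_radar(x):
    """Radar range/azimuth measurement of Eq. (45); arctan2 is used for robustness."""
    px, py = x[0], x[2]
    return np.array([np.hypot(px, py), np.arctan2(py, px)])
```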
To ensure the reliability of the estimation, 200 independent Monte Carlo simulations were conducted. For evaluating the simulation results, root mean square error (RMSE) and average root mean square error (ARMSE) were chosen as the performance metrics. Their definitions are as follows:
\mathrm{RMSE}_k^{\mathrm{pos}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\big[(x_k^i - \hat{x}_k^i)^2 + (y_k^i - \hat{y}_k^i)^2\big]}, \qquad \mathrm{RMSE}_k^{\mathrm{vel}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\big[(v_{x,k}^i - \hat{v}_{x,k}^i)^2 + (v_{y,k}^i - \hat{v}_{y,k}^i)^2\big]}   (46)
\mathrm{ARMSE} = \frac{1}{T_s}\sum_{k=1}^{T_s}\mathrm{RMSE}_k   (47)
where N is the number of Monte Carlo simulations, i indexes the i-th run, k is the simulation time step, and T_s = 100 s is the total simulation time. (x_k^i, y_k^i) and (v_{x,k}^i, v_{y,k}^i) represent the true position and velocity of the target, and (\hat{x}_k^i, \hat{y}_k^i) and (\hat{v}_{x,k}^i, \hat{v}_{y,k}^i) represent the corresponding estimates of the filter.
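A compact way to evaluate Equations (46) and (47) over the Monte Carlo runs is sketched below; the array layout is an assumption made for illustration.
```python
import numpy as np

def rmse_curves(truth, estimates):
    """Position/velocity RMSE over Monte Carlo runs, Eq. (46), and their ARMSE, Eq. (47).
    truth and estimates are arrays of shape (N_runs, T_steps, 4) holding [x, vx, y, vy]."""
    err = truth - estimates
    rmse_pos = np.sqrt(np.mean(err[..., 0] ** 2 + err[..., 2] ** 2, axis=0))
    rmse_vel = np.sqrt(np.mean(err[..., 1] ** 2 + err[..., 3] ** 2, axis=0))
    return rmse_pos, rmse_vel, rmse_pos.mean(), rmse_vel.mean()
```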

4.2. Gaussian Noise Test

In the first set of experiments, the focus is on the filtering performance of various algorithms under Gaussian noise conditions, to verify the effectiveness and rationality of the proposed ACKMC-CKF algorithm in standard scenarios. The measurement noise used in this experiment is Gaussian white noise, generated from the measurement noise covariance matrix in Table 1. For the MC-CKF and CKMC-CKF algorithms, a comparative analysis was conducted with multiple kernel bandwidth settings. For the ACKMC-CKF algorithm, the kernel bandwidth upper limit is set to \sigma_{\max} = 100. This experimental setup facilitates a thorough exploration of the performance and characteristics of each algorithm when confronted with Gaussian noise.
Figure 2 and Figure 3 illustrate the performance of several algorithms under Gaussian noise conditions. It is observed that with the appropriate selection of kernel bandwidth, both MC-CKF and CKMC-CKF can achieve results close to those of CKF, and their performance increasingly aligns with CKF as the kernel bandwidth is enlarged. Compared to MC-CKF, CKMC-CKF demonstrates a higher tolerance for kernel bandwidth selection. The ARMSE data for various kernel bandwidth choices presented in Table 2 further corroborates this observation. However, when the kernel bandwidth is set too small, the performance of both algorithms significantly deteriorates, falling short of CKF. These experimental findings are consistent with the analysis of kernel bandwidth selection discussed in Section 2.2. The ACKMC-CKF algorithm proposed in this paper shows the closest performance to CKF. In Gaussian noise scenarios, the calculated kernel adaptive factor μ i , k is close to 1, leading to a final kernel bandwidth near the preset upper limit, thus achieving performance nearly identical to CKF.

4.3. Non-Gaussian Noise Test

In most real-world scenarios, measurement noise does not strictly adhere to a Gaussian distribution. This leads to a significant performance degradation in traditional CKF, and this degradation trend becomes even more pronounced as the degrees of freedom in the system increase [35]. Therefore, the second set of experiments focuses on Gaussian mixture noise, a typical form of non-Gaussian noise, to evaluate the filtering performance of different algorithms in such a noisy environment. The aim of this experiment is to validate the superior performance of the proposed ACKMC-CKF algorithm in handling non-Gaussian noise. The measurement noise used in these experiments follows the specific distribution described below:
w_k \sim (1-\lambda)\,\mathcal{N}(0, R_n) + \lambda\,\mathcal{N}(0, R_p)   (48)
where \lambda \in (0, 1) denotes the proportion of contaminated noise, R_n denotes the nominal measurement noise covariance matrix, and R_p denotes the contaminated noise covariance matrix. In the experimental setup, \lambda = 0.2, R_n = R_k, and R_p = 50R_n.
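The contaminated measurement noise of Equation (48) can be generated as in the following sketch, using the nominal covariance from Table 1; the sampling routine is illustrative rather than the authors' code.
```python
import numpy as np

def mixture_noise(rng, R_n, lam=0.2, scale=50.0, n_samples=1):
    """Draw measurement noise from (1 - lambda) N(0, R_n) + lambda N(0, 50 R_n), Eq. (48)."""
    m = R_n.shape[0]
    samples = np.empty((n_samples, m))
    for i in range(n_samples):
        cov = scale * R_n if rng.random() < lam else R_n   # contaminated with probability lambda
        samples[i] = rng.multivariate_normal(np.zeros(m), cov)
    return samples

rng = np.random.default_rng(0)
R_n = np.diag([30.0 ** 2, np.deg2rad(0.5) ** 2])   # sigma_r = 30 m, sigma_theta = 0.5 deg (Table 1)
w = mixture_noise(rng, R_n, n_samples=5)
```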
In this scenario, if the kernel bandwidth is set too small, MC-CKF and CKMC-CKF might encounter numerical singularities, leading to interruptions in the filtering process or divergence issues. Therefore, it is necessary to appropriately increase the kernel bandwidth to ensure the smooth progression of the experiment. Additionally, the experiment also compares ACKMC-CKF algorithms with two different upper limit values for kernel bandwidth, aiming to investigate the specific impact of this parameter on the filtering performance.
Figure 4 and Table 3 demonstrate the RMSE and ARMSE of several algorithms under non-Gaussian noise conditions. Under such conditions, the traditional CKF shows a trend of divergence, significantly reducing its filtering precision. When the MC-CKF’s kernel bandwidth δ is set to 1, a numerical singularity issue arises, causing the algorithm to halt execution. Although CKMC-CKF did not crash when the kernel bandwidth σ was set to 1, its numerical fluctuations were too severe to be displayed in the figure. With an appropriate increase in kernel bandwidth, both MC-CKF and CKMC-CKF surpass the filtering accuracy of CKF, exhibiting superior performance. This set of experiments further highlights the importance of kernel bandwidth settings for the filtering accuracy of MC-CKF. When the upper limit of kernel bandwidth is set to 50 and 100, ACKMC-CKF shows excellent filtering effects in both cases, with only minor differences between these two settings. Figure 5 reveals that under scenarios approximating Gaussian noise distribution, the kernel bandwidth of ACKMC-CKF approaches its set upper limit. Despite the significant differences in upper limit settings, the insensitivity of the Cauchy kernel results in no substantial change in filtering accuracy. In the presence of contaminated noise, the adaptive kernel bandwidth strategy in the algorithm quickly adjusts the kernel bandwidth to a smaller value, unaffected by the upper limit settings.
Furthermore, to validate the applicability of the algorithm in higher-dimensional systems, we expanded the simulation system to a three-dimensional configuration with 9-DoF for a more comprehensive algorithm comparison. Figure 6 clearly illustrates the RMSE performance of various algorithms in a 9-DoF system. As seen in the figure, all algorithms exhibit a trend of gradual performance degradation over time. However, compared to the CKF and the traditional correntropy-based CKF, the proposed ACKMC-CKF algorithm in this paper demonstrates the slowest decline in performance.

4.4. Observation Outliers Test

The main objective of the third set of experiments was to evaluate the performance of various algorithms in handling anomalous measurement values. By introducing both unidimensional and multi-dimensional measurement anomalies, the aim was to validate the necessity and adaptability of the ACKMC-CKF algorithm's use of multiple adaptive kernels for noise processing. The experiment was conducted under the Gaussian noise conditions established in the first set of experiments, and the anomalous measurement values were introduced as follows:
z_{20} = z_{20} + [500\ \mathrm{m},\ 0]^{\mathrm{T}}, \quad z_{30} = z_{30} + [0,\ 5^{\circ}]^{\mathrm{T}}, \quad z_{40} = z_{40} + [500\ \mathrm{m},\ 5^{\circ}]^{\mathrm{T}}   (49)
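For completeness, the outlier injection of Equation (49) can be expressed as a small lookup over the affected time steps; the structure below is purely illustrative.
```python
import numpy as np

# Illustrative outlier injection at t = 20 s, 30 s, and 40 s, Eq. (49);
# z_t is assumed to be the clean [range (m), azimuth (rad)] measurement at step t.
outliers = {20: np.array([500.0, 0.0]),
            30: np.array([0.0, np.deg2rad(5.0)]),
            40: np.array([500.0, np.deg2rad(5.0)])}

def corrupt(z_t, t):
    """Add the prescribed outlier to the measurement at the affected steps."""
    return z_t + outliers.get(t, 0.0)
```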
The analysis of Figure 7 reveals that the MC-CKF and CKMC-CKF algorithms can effectively counteract the disturbance caused by anomalous values by reducing the kernel bandwidth. Notably, CKMC-CKF shields against anomalies better than MC-CKF. However, since the system predominantly operates without anomalous interference, setting the kernel bandwidth too small decreases the precision of these algorithms, contradicting the purpose of employing the maximum correntropy criterion. In contrast, the ACKMC-CKF algorithm reduces the kernel bandwidth rapidly only when anomalies are detected, maintaining a larger bandwidth at other times. This approach not only suppresses disturbances from anomalies but also ensures the algorithm does not terminate due to singular values. The results indicate that ACKMC-CKF is almost unaffected by anomalous values.
Moreover, as illustrated in Figure 8, the ACKMC-CKF algorithm can effectively discern the dimensions where anomalies occur and make targeted adjustments. For instance, anomalies appear in the distance dimension at 20 s and in the azimuth dimension at 30 s. ACKMC-CKF manages to adjust the kernel bandwidth only for the affected dimensions without altering others. When both the distance and azimuth dimensions experience anomalies at 40 s, the algorithm can adjust both simultaneously, significantly enhancing its flexibility and accuracy.

5. Conclusions

This study introduces a novel adaptive Cauchy-kernel maximum correntropy CKF. By adjusting the kernel bandwidth online, it effectively resolves the challenge of setting the kernel bandwidth in MCC-based CKFs. The proposed algorithm utilizes the Cauchy kernel function and a multi-kernel adjustment strategy, reducing the sensitivity to the upper limit setting of the kernel bandwidth and enabling targeted, per-dimension adjustments. This significantly enhances the practical applicability of the algorithm.
This research is grounded in the hidden Markov model (HMM), characterized by independent process noise and measurement noise. However, in recent years, more complex state space models such as pairwise Markov models [36,37] and triplet Markov models [38,39] have been successfully applied in the realm of KF. These models demonstrate greater universality and flexibility compared to traditional HMMs, offering new possibilities for enhancing modeling capabilities. Thus, exploring how to effectively adapt the algorithm presented in this paper to these advanced models constitutes a crucial direction for our future research endeavors. This exploration is anticipated not only to potentially improve the performance of the algorithm but also to contribute new theoretical and practical insights into the field of state estimation for complex systems.

Author Contributions

Conceptualization, X.Y.; methodology, X.Y.; software, X.Y.; formal analysis, S.L.; writing—original draft preparation, X.Y.; writing—review and editing, X.Y., J.W. and D.W.; supervision, S.L.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Sang, X.Y.; Li, J.C.; Yuan, Z.H.; Yu, X.J.; Zhang, J.Q.; Zhang, J.R.; Yang, P.F. Invariant Cubature Kalman Filtering-Based Visual-Inertial Odometry for Robot Pose Estimation. IEEE Sens. J. 2022, 22, 23413–23422.
2. Wang, Y.; Yang, Z.W.; Wang, Y.Q.; Li, Z.W.; Dinavahi, V.; Liang, J. Resilient Dynamic State Estimation for Power System Using Cauchy-Kernel-Based Maximum Correntropy Cubature Kalman Filter. IEEE Trans. Instrum. Meas. 2023, 72, 3268445.
3. Dantas, D.T.; Neto, J.A.D.; Manassero, G., Jr. Transient current protection for transmission lines based on the Kalman filter measurement residual. Int. J. Electr. Power Energy Syst. 2023, 154, 109471.
4. Huang, C.Y.; Xu, Z.; Xue, Z.J.; Zhang, Z.H.; Liu, Z.Y.; Wang, X.Y.; Li, L. Transfer Case Clutch Modeling and EKF-UIO Based Torque Estimation Method for On-Demand 4WD Vehicles. IEEE Trans. Veh. Technol. 2023, 72, 458–468.
5. Montañez, O.J.; Suarez, M.J.; Fernandez, E.A. Application of Data Sensor Fusion Using Extended Kalman Filter Algorithm for Identification and Tracking of Moving Targets from LiDAR-Radar Data. Remote Sens. 2023, 15, 3396.
6. Ye, X.Z.; Wang, J.; Wu, D.J.; Zhang, Y.; Li, B. A Novel Adaptive Robust Cubature Kalman Filter for Maneuvering Target Tracking with Model Uncertainty and Abnormal Measurement Noises. Sensors 2023, 23, 6966.
7. Julier, S.J.; Uhlmann, J.K. Unscented filtering and nonlinear estimation. Proc. IEEE 2004, 92, 401–422.
8. Arasaratnam, I.; Haykin, S. Cubature Kalman Filters. IEEE Trans. Autom. Control 2009, 54, 1254–1269.
9. Li, L.Q.; Sun, Y.C.; Liu, Z.X. Maximum Fuzzy Correntropy Kalman Filter and Its Application to Bearings-Only Maneuvering Target Tracking. Int. J. Fuzzy Syst. 2021, 23, 405–418.
10. Bilik, I.; Tabrikian, J. MMSE-Based Filtering in Presence of Non-Gaussian System and Measurement Noise. IEEE Trans. Aerosp. Electron. Syst. 2010, 46, 1153–1170.
11. Shan, C.; Zhou, W.; Jiang, Z.; Shan, H. A new Gaussian approximate filter with colored non-stationary heavy-tailed measurement noise. Digit. Signal Process. 2022, 122, 103358.
12. Abbassi, N.; Benboudjema, D.; Derrode, S.; Pieczynski, W. Optimal Filter Approximations in Conditionally Gaussian Pairwise Markov Switching Models. IEEE Trans. Autom. Control 2015, 60, 1104–1109.
13. Yap, G.L.C. Optimal Filter Approximations for Latent Long Memory Stochastic Volatility. Comput. Econ. 2019, 56, 547–568.
14. Simon, D. A game theory approach to constrained minimax state estimation. IEEE Trans. Signal Process. 2006, 54, 405–412.
15. Wang, X.; Cui, N.; Guo, J. Huber-based unscented filtering and its application to vision-based relative navigation. IET Radar Sonar Navig. 2010, 4, 134–141.
16. Chang, L.; Hu, B.; Chang, G.; Li, A. Huber-based novel robust unscented Kalman filter. IET Sci. Meas. Technol. 2012, 6, 502–509.
17. Liu, J.; Cai, B.G.; Wang, J. Cooperative Localization of Connected Vehicles: Integrating GNSS With DSRC Using a Robust Cubature Kalman Filter. IEEE Trans. Intell. Transp. Syst. 2017, 18, 2111–2125.
18. Chen, B.D.; Liu, X.; Zhao, H.Q.; Principe, J.C. Maximum correntropy Kalman filter. Automatica 2017, 76, 70–77.
19. Liu, X.; Chen, B.D.; Xu, B.; Wu, Z.Z.; Honeine, P. Maximum correntropy unscented filter. Int. J. Syst. Sci. 2017, 48, 1607–1615.
20. Liu, X.; Qu, H.; Zhao, J.H.; Yue, P.C. Maximum correntropy square-root cubature Kalman filter with application to SINS/GPS integrated systems. ISA Trans. 2018, 80, 195–202.
21. Wang, H.W.; Zhang, W.; Zuo, J.Y.; Wang, H.P. Outlier-robust Kalman filters with mixture correntropy. J. Frankl. Inst.-Eng. Appl. Math. 2020, 357, 5058–5072.
22. Li, S.L.; Li, L.J.; Shi, D.W.; Zou, W.L.; Duan, P.; Shi, L. Multi-Kernel Maximum Correntropy Kalman Filter for Orientation Estimation. IEEE Robot. Autom. Lett. 2022, 7, 6693–6700.
23. Dang, L.J.; Chen, B.D.; Huang, Y.L.; Zhang, Y.G.; Zhao, H.Q. Cubature Kalman Filter Under Minimum Error Entropy With Fiducial Points for INS/GPS Integration. IEEE-CAA J. Autom. Sin. 2022, 9, 450–465.
24. Zhao, H.Q.; Tian, B.Y.; Chen, B.D. Robust stable iterated unscented Kalman filter based on maximum correntropy criterion. Automatica 2022, 142, 110410.
25. Shi, W.; Xiong, K.; Wang, S. The Kernel Recursive Generalized Cauchy Kernel Loss Algorithm. In Proceedings of the 2019 6th International Conference on Information, Cybernetics, and Computational Social Systems (ICCSS), Chongqing, China, 27–30 September 2019; pp. 253–257.
26. Wang, J.Q.; Lyu, D.H.; He, Z.M.; Zhou, H.Y.; Wang, D.Y. Cauchy kernel-based maximum correntropy Kalman filter. Int. J. Syst. Sci. 2020, 51, 3523–3538.
27. Meng, Q.W.; Li, X.Y. Minimum Cauchy Kernel Loss Based Robust Cubature Kalman Filter and Its Low Complexity Cost Version With Application on INS/OD Integrated Navigation System. IEEE Sens. J. 2022, 22, 9534–9542.
28. Wang, G.Q.; Zhang, Y.G.; Wang, X.D. Iterated maximum correntropy unscented Kalman filters for non-Gaussian systems. Signal Process. 2019, 163, 87–94.
29. Zhao, J.B.; Mili, L. A Robust Generalized-Maximum Likelihood Unscented Kalman Filter for Power System Dynamic State Estimation. IEEE J. Sel. Top. Signal Process. 2018, 12, 578–592.
30. Wang, G.Q.; Li, N.; Zhang, Y.G. Maximum correntropy unscented Kalman and information filters for non-Gaussian measurement noise. J. Frankl. Inst.-Eng. Appl. Math. 2017, 354, 8659–8677.
31. Song, H.F.; Ding, D.R.; Dong, H.L.; Yi, X.J. Distributed filtering based on Cauchy-kernel-based maximum correntropy subject to randomly occurring cyber-attacks. Automatica 2022, 135, 110004.
32. Shen, B.; Wang, X.L.; Zou, L. Maximum Correntropy Kalman Filtering for Non-Gaussian Systems With State Saturations and Stochastic Nonlinearities. IEEE-CAA J. Autom. Sin. 2023, 10, 1223–1233.
33. Fakoorian, S.; Mohammadi, A.; Azimi, V.; Simon, D. Robust Kalman-Type Filter for Non-Gaussian Noise: Performance Analysis With Unknown Noise Covariances. J. Dyn. Syst. Meas. Control 2019, 141, 091011.
34. Li, M.Z.; Jing, Z.L.; Zhu, H.Y.; Song, Y.R. Multi-sensor measurement fusion based on minimum mixture error entropy with non-Gaussian measurement noise. Digit. Signal Process. 2022, 123, 103377.
35. Ghorbani, E.; Cha, Y.-J. An iterated cubature unscented Kalman filter for large-DoF systems identification with noisy data. J. Sound Vib. 2018, 420, 21–34.
36. Kulikova, M.V. Gradient-Based Parameter Estimation in Pairwise Linear Gaussian System. IEEE Trans. Autom. Control 2017, 62, 1511–1517.
37. Zhang, G.; Lan, J.; Le, Z.; He, F.; Li, S. Filtering in Pairwise Markov Model With Student's t Non-Stationary Noise With Application to Target Tracking. IEEE Trans. Signal Process. 2021, 69, 1627–1641.
38. Lehmann, F.; Pieczynski, W. Reduced-Dimension Filtering in Triplet Markov Models. IEEE Trans. Autom. Control 2022, 67, 605–617.
39. Zhang, G.; Zhang, X.; Zeng, L.; Dai, S.; Zhang, M.; Lian, F. Filtering in Triplet Markov Chain Model in the Presence of Non-Gaussian Noise with Application to Target Tracking. Remote Sens. 2023, 15, 5543.
Figure 1. Flowchart of the proposed ACKMC-CKF algorithm.
Figure 2. The position RMSE of each filtering algorithm under Gaussian measurement noise.
Figure 3. The velocity RMSE of each filtering algorithm under Gaussian measurement noise.
Figure 4. The position RMSE of each filtering algorithm under non-Gaussian measurement noise.
Figure 5. The adaptive kernel size of ACKMC-CKF with different kernel bandwidth upper limits in Experiment 2.
Figure 6. The position RMSE of each filtering algorithm in a 9-DoF system.
Figure 7. The position RMSE of each filtering algorithm under observation outliers.
Figure 8. The adaptive kernel size of ACKMC-CKF with different kernel bandwidth upper limits in Experiment 3.
Table 1. Parameters for simulation.
Parameter | Corresponding Value
Discrete sampling period | T = 1 s
Turning rate | ω = 3° s^-1
Initial process noise covariance matrix | Q_{k-1} = diag(M, M), M = [T^3/3, T^2/2; T^2/2, T]
Initial measurement noise covariance matrix | R_k = diag(σ_r^2, σ_θ^2), σ_r = 30 m, σ_θ = 0.5°
Initial true state and estimate | x_0 = x̂_0 = [1000 m, 300 m/s, 1000 m, 0 m/s]^T
Initial state covariance matrix | P_0 = diag(100 m^2, 10 m^2/s^2, 100 m^2, 10 m^2/s^2)
Table 2. The ARMSE of different algorithms under Gaussian noise.
Filters | ARMSE of Position (m) | ARMSE of Velocity (m/s)
CKF | 33.43 | 4.91
MC-CKF (δ = 0.5) | 46.29 | 5.53
MC-CKF (δ = 1) | 39.50 | 5.18
MC-CKF (δ = 2) | 34.51 | 4.96
MC-CKF (δ = 3) | 33.78 | 4.92
MC-CKF (δ = 5) | 33.52 | 4.91
CKMC-CKF (σ = 1) | 36.56 | 5.09
CKMC-CKF (σ = 5) | 34.57 | 4.97
CKMC-CKF (σ = 20) | 33.68 | 4.92
CKMC-CKF (σ = 50) | 33.52 | 4.91
CKMC-CKF (σ = 100) | 33.44 | 4.91
ACKMC-CKF (σ_max = 100) | 33.45 | 4.91
Table 3. The ARMSE of different algorithms under non-Gaussian noise.
Filters | ARMSE of Position (m) | ARMSE of Velocity (m/s)
CKF | 90.19 | 8.45
MC-CKF (δ = 5) | 57.02 | 6.16
MC-CKF (δ = 8) | 68.75 | 6.92
MC-CKF (δ = 10) | 74.02 | 7.29
CKMC-CKF (σ = 10) | 57.11 | 6.15
CKMC-CKF (σ = 15) | 58.13 | 6.22
CKMC-CKF (σ = 30) | 61.58 | 6.45
ACKMC-CKF (σ_max = 50) | 41.42 | 5.33
ACKMC-CKF (σ_max = 100) | 40.33 | 5.27