Article

State Estimation for Quadruped Robots on Non-Stationary Terrain via Invariant Extended Kalman Filter and Disturbance Observer

1
College of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, China
2
Mianyang Zhongke Huinong Digital Intelligence Technology Co., Ltd., Mianyang 621010, China
3
Sichuan Engineering Technology Research Center of Industrial Self-Supporting and Artificial Intelligence, Mianyang 621010, China
4
School of Computer Science and Technology, Southwest University of Science and Technology, Mianyang 621010, China
5
School of Life Science and Engineering, Southwest University of Science and Technology, Mianyang 621010, China
*
Authors to whom correspondence should be addressed.
Sensors 2024, 24(22), 7290; https://doi.org/10.3390/s24227290
Submission received: 17 September 2024 / Revised: 12 November 2024 / Accepted: 12 November 2024 / Published: 14 November 2024
(This article belongs to the Section Sensors and Robotics)

Abstract
Quadruped robots possess significant mobility in complex and uneven terrains due to their outstanding stability and flexibility, making them highly suitable for rescue missions, environmental monitoring, and smart agriculture. With the increasing use of quadruped robots in more demanding scenarios, ensuring accurate and stable state estimation in complex environments has become particularly important. Existing state estimation algorithms relying on multi-sensor fusion, such as those using IMU, LiDAR, and visual data, often face challenges on non-stationary terrains due to issues like foot-end slippage or unstable contact, leading to significant state drift. To tackle this problem, this paper introduces a state estimation algorithm that integrates an invariant extended Kalman filter (InEKF) with a disturbance observer, aiming to estimate the motion state of quadruped robots on non-stationary terrains. Firstly, foot-end slippage is modeled as a deviation in body velocity and explicitly included in the state equations, allowing for a more precise representation of how slippage affects the state. Secondly, the state update process integrates both foot-end velocity and position observations to improve the overall accuracy and comprehensiveness of the estimation. Lastly, a foot-end contact probability model, coupled with an adaptive covariance adjustment strategy, is employed to dynamically modulate the influence of the observations. These enhancements significantly improve the filter's robustness and the accuracy of state estimation in non-stationary terrain scenarios. Experiments conducted with the Jueying Mini quadruped robot on various non-stationary terrains show that the enhanced InEKF method offers notable advantages over traditional filters in compensating for foot-end slippage and adapting to different terrains.

1. Introduction

Quadruped robots, which draw inspiration from four-legged animals, are advanced robotic systems. In contrast to other mobile robot types, quadruped robots possess outstanding stability and flexibility. They are able to traverse a wide variety of complex and uneven terrains, including sand, grass, gravel roads, and even slippery surfaces. These capabilities endow quadruped robots with great potential for applications in rescue missions [1], environmental monitoring [2], and smart agriculture [3]. As they are increasingly used in challenging scenarios such as mountainous areas, forests, and post-disaster rubble environments, precise state estimation becomes crucial for their stability and reliability.
Multi-sensor fusion-based state estimation algorithms are commonly used in quadruped robots. These algorithms incorporate external sensors like LiDAR, cameras, and GPS, along with proprioceptive sensors such as IMUs and joint encoders [4]. Although external sensors supply abundant environmental information, they do have limitations. For instance, cameras exhibit poor performance in low-light or low-texture environments, the accuracy of LiDAR deteriorates under smoky or dusty conditions, and GPS signals can be disrupted in indoor or urban canyon settings [5]. In contrast, proprioceptive sensors, which rely on joint kinematics and IMU data, are not affected by such environmental factors and provide higher measurement frequencies. This enables the fast and precise control of quadruped robots. Consequently, it is of vital importance to explore state estimation methods based on proprioceptive sensors like IMUs and joint encoders. IMUs provide high-frequency acceleration and angular velocity data, while joint encoders offer precise measurements of joint angles and speeds. The combination of these sensors allows for accurate motion state estimation in complex environments, especially when external sensors malfunction or are limited. This approach augments the autonomy, robustness, and reliability of quadruped robots in various scenarios.
State estimation methods for legged robots date back to 2005 when Lin et al. [6] developed an approach based on leg kinematic information for a hexapod robot. This method required the robot to maintain a tripod gait with three legs always in contact with a flat terrain. In 2006, Lin et al. [7] further proposed fusing leg odometry with IMU data to improve state estimation during tripod gait running. Since then, state estimation for legged robots has garnered significant attention. Bloesch et al. [8] introduced a quadruped robot state estimation method using an extended Kalman filter (EKF), incorporating leg contact points into the filter’s state variables and using forward kinematics to estimate leg contact positions and body pose. Camurri et al. [9] developed a legged odometry method without contact sensors, using ground reaction forces to determine the reliability of leg contact and weigh each leg’s contribution to body velocity. However, due to inconsistencies between EKF’s positive feedback and observation information, the filter may diverge. To address this, Hartley et al. [10] proposed a contact-aided invariant extended Kalman filter (InEKF) based on invariant observer theory, showing better performance compared to the traditional quaternion-based EKF.
The aforementioned methods assume stable, slip-free foot-end contact between legged robots and the ground. However, on slippery or soft terrains such as grass, mud, or sand, foot-end slippage occurs, rendering stable contact a challenge. This leads to the introduction of non-Gaussian errors that accumulate over time and cause unbounded drift. To tackle this issue, Ting et al. [11] regarded slippage in foot-end observations as an outlier and used a weighted least squares method to mitigate its impact on the EKF. Similarly, Bloesch et al. [12] replaced the EKF with an unscented Kalman filter (UKF) and applied an outlier rejection method to manage occasional slippage. Jenelten et al. [13] developed a probabilistic slip detector using a Hidden Markov Model, enabling robots to walk on slippery surfaces through impedance control and friction adjustment. However, this approach does not completely resolve drift in pose estimation. To reduce drift caused by slippage, some methods fuse external sensor observations. Wisth et al. [14] proposed a state estimation method based on factor graph optimization that tightly integrates visual features, inertial data, and kinematic constraints. This method was further extended to incorporate LiDAR observations and an online-estimated linear velocity deviation term to minimize drift in legged odometry [4]. Kim et al. [15] introduced a state estimator for quadruped robots based on a pre-integrated foot velocity factor, which does not rely on precise contact detection or fixed foot positions, showing strong performance on uneven and slippery terrains. However, factor graph-based methods use measurements along the entire trajectory for smoothing, which limits their update rates for real-time control. To address this limitation, Teng et al. [16] integrated camera observations into the invariant extended Kalman filter (InEKF) to enhance state estimation on slippery terrains. Fink et al. [17] combined a Global Exponential Stability (GES) nonlinear attitude observer with legged odometry, ensuring consistently bounded speed estimation and reducing drift in unobservable position estimates.
Recently, advancements in computational hardware (e.g., GPUs) have enabled the training and deployment of complex deep learning models, encouraging researchers to develop learning-based slip detection and state estimation methods. Rotella et al. [18] proposed a method using fuzzy clustering to learn contact probabilities for humanoid robots. When integrated into a basic state estimation framework, this method can significantly reduce estimation errors. Buchanan et al. [19] developed a deep neural network to learn motion patterns from IMU data. When combined with traditional legged odometry, it substantially reduces drift in proprioceptive state estimation. Yang et al. [20] applied neural networks to train weight functions for foot-end forces and legged odometry states, enhancing observation accuracy. However, these methods also face their own technical challenges. Supervised models require large amounts of labeled data, which is extremely difficult to obtain in practice. Additionally, unsupervised methods, which treat slip detection as a classification task, limit the ability to observe slip velocity. This severely affects the performance and generalization ability of the models and is a technical bottleneck that needs to be overcome when applying such methods to quadruped robot state estimation.
Actually, unstable foot-end contact can be regarded as a form of deviation within the filter framework, as it introduces non-Gaussian errors that accumulate over time and lead to unbounded drift in state estimation. In state estimation systems where biases affect the measurements or system dynamics, researchers have developed several advanced Kalman filter-based approaches to address these issues. Zhong et al. [21] demonstrated the effectiveness of a UKF in vehicle state estimation by handling environmental biases, such as those caused by slope and wind conditions. Bellés et al. [22] developed robust error-state Kalman filters that incorporate error states, significantly improving resilience against non-Gaussian disturbances. In another approach, Huang et al. [23] proposed a robust Kalman filter that utilizes an adaptive estimation of time-varying measurement bias specifically designed to handle unknown, non-Gaussian, heavy-tailed noise through a Student’s t–inverse-Wishart distribution. This method provides an important reference for addressing complex noise conditions in state estimation. Additionally, Zhang et al. [24] proposed a distributed bias-correcting estimator, which independently adjusts each sensor’s bias, providing scalability and robustness in complex environments with persistent biases. Inspired by those studies, we consider modeling slip velocity as a deviation term of speed and explicitly incorporate it into the state equations to reduce the pose drift caused by slip through slip velocity estimation. To meet real-time requirements, we opted for a filtering-based method. The invariant extended Kalman filter (InEKF), developed based on invariant observer theory, has been successfully applied in simultaneous localization and mapping (SLAM) [25] and has aided inertial navigation systems [26] in recent years. 
Its symmetry ensures that the estimation error satisfies the “log-linear” autonomous differential equation on the Lie algebra of the corresponding Lie group in the system dynamics. Thus, it is possible to design a nonlinear observer or state estimator that exhibits strong convergence within an attraction domain independent of the system’s trajectory. Yu et al. [27] presented a similar approach for wheeled platforms, mainly using speed observations from encoders. We have extended this to legged robot platforms. Unlike wheeled platforms, speed observations from encoders at the foot-end are less reliable than those from wheel encoders. To address this issue, we incorporate the contact point positions of each leg into the filter state variables and consider both foot-end position and velocity observations in the observation equation, making the filter more suitable for legged robots.
The main contributions of this paper are as follows:
  • Model foot-end slippage caused by legged robot motion on non-stationary terrain as a deviation term of body velocity, reducing the drift caused by foot-end slippage through velocity deviation estimation.
  • Develop a real-time RI-EKF state and slip estimator for quadruped robots by fusing foot-end velocity and position observations.
  • Validate the mathematical derivation and the proposed state estimator’s effectiveness through experimental results using the Jueying Mini robot on multiple non-stationary terrains.

2. Theoretical Background

We assume a matrix Lie group denoted as $\mathcal{G}$, with its corresponding Lie algebra denoted as $\mathfrak{g}$. Let $(\cdot)^\wedge : \mathbb{R}^{\dim \mathfrak{g}} \to \mathfrak{g}$ represent the isomorphism that maps elements of the tangent space at the identity element to their corresponding matrix representations. The exponential mapping of the Lie group is denoted by $\exp : \mathbb{R}^{\dim \mathfrak{g}} \to \mathcal{G}$ and is defined as $\exp(\xi) = \exp_m(\xi^\wedge)$, where $\exp_m(\cdot)$ is the usual exponential of $n \times n$ matrices. A system that evolves over time $t$ on a Lie group can be represented by the dynamics $\frac{d}{dt} X_t = f_{u_t}(X_t)$, where $X_t \in \mathcal{G}$ represents the system state and $\hat{X}_t$ is usually used to denote the estimated state.
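As a concrete illustration of this notation, the following sketch (plain NumPy, using $SO(3)$ for brevity) implements the hat map $(\cdot)^\wedge$ and the group exponential via the closed-form Rodrigues formula rather than a general matrix exponential:

```python
import numpy as np

def hat(xi):
    """Hat map: maps xi in R^3 to its matrix representation in so(3)."""
    return np.array([[0.0, -xi[2], xi[1]],
                     [xi[2], 0.0, -xi[0]],
                     [-xi[1], xi[0], 0.0]])

def exp_so3(xi):
    """Group exponential exp(xi) = exp_m(hat(xi)) on SO(3), closed form."""
    th = np.linalg.norm(xi)
    if th < 1e-9:                      # first-order fallback near zero
        return np.eye(3) + hat(xi)
    K = hat(xi / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

R = exp_so3(np.array([0.1, -0.2, 0.3]))
# R lies on the group: R @ R.T = I and det(R) = 1
```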
Definition 1.
(Right-invariant and left-invariant errors). The right-invariant and left-invariant errors between two trajectories $X_t$ and $\hat{X}_t$ are defined as follows:
$$\eta_t^r = \hat{X}_t X_t^{-1} = (\hat{X}_t L)(X_t L)^{-1} \quad \text{(right-invariant error)}$$
$$\eta_t^l = X_t^{-1} \hat{X}_t = (L X_t)^{-1}(L \hat{X}_t) \quad \text{(left-invariant error)}$$
where $L$ is an arbitrary element of the group $\mathcal{G}$.
Definition 2.
(Adjoint map). Let $\mathcal{G}$ be a matrix Lie group with Lie algebra $\mathfrak{g}$. For any $X \in \mathcal{G}$, the adjoint map $\mathrm{Ad}_X : \mathfrak{g} \to \mathfrak{g}$ is the linear map that satisfies $\mathrm{Ad}_X(\xi^\wedge) := X \xi^\wedge X^{-1}$. Furthermore, the matrix representation of the adjoint map is denoted by $\mathbf{Ad}_X$.
Theorem 1.
A system is group-affine if, for all $t > 0$ and $X_1, X_2 \in \mathcal{G}$, its dynamics $f_{u_t}(X_t)$ satisfy the following:
$$f_{u_t}(X_1 X_2) = f_{u_t}(X_1)\, X_2 + X_1\, f_{u_t}(X_2) - X_1\, f_{u_t}(I_d)\, X_2$$
where $I_d \in \mathcal{G}$ denotes the group identity element. If a system is group-affine, then its right-invariant and left-invariant error dynamics are independent of the trajectory and satisfy the following:
$$\frac{d}{dt}\eta_t^r = g_{u_t}^r(\eta_t^r), \quad \text{where } g_{u_t}^r(\eta) = f_{u_t}(\eta) - \eta\, f_{u_t}(I_d)$$
$$\frac{d}{dt}\eta_t^l = g_{u_t}^l(\eta_t^l), \quad \text{where } g_{u_t}^l(\eta) = f_{u_t}(\eta) - f_{u_t}(I_d)\, \eta$$
We define a matrix $A_t$ of size $\dim\mathfrak{g} \times \dim\mathfrak{g}$ such that, for any $t \geq 0$, $g_{u_t}(\exp(\xi)) := (A_t\, \xi)^\wedge + O(\|\xi\|^2)$ holds. Let $\xi_t$ be the solution to the following linear differential equation:
$$\frac{d}{dt}\xi_t = A_t\, \xi_t$$
Theorem 2.
Consider any two trajectories defined by (4) and (5) with a left-invariant and right-invariant error, respectively. For any initial error $\xi_0 \in \mathbb{R}^{\dim\mathfrak{g}}$, if $\eta_0 = \exp(\xi_0)$, then for all $t \geq 0$ the nonlinear estimation error $\eta_t$ can be accurately recovered from the linear differential equation (6) as $\eta_t = \exp(\xi_t)$.

3. System Model

3.1. State Equation

Our state estimator can estimate the state variables of the robot's motion in the world coordinate system $\{W\}$, which include the robot's orientation $R_{WB}$, position $p_B^W$, velocity $v_B^W$, and the positions of the four feet $p_{f_1}^W$, $p_{f_2}^W$, $p_{f_3}^W$, and $p_{f_4}^W$. To consider the impact of foot-end slippage on state estimation, we use the disturbance-observer idea, introducing a bias term $b_B^v$ modeled as a drift velocity and explicitly incorporating it into the state equations. These state variables form a group in $SE_7(3)$; hence, we represent the state variables of our InEKF model as follows:
$$X_t = \begin{bmatrix} R_{WB}(t) & v_B^W(t) & p_B^W(t) & p_{f_1}^W(t) & p_{f_2}^W(t) & p_{f_3}^W(t) & p_{f_4}^W(t) & b_B^v(t) \\ 0_{7\times3} & I_7 & & & & & & \end{bmatrix}$$
Similar to [27], we describe the slippage velocity using an autoregressive model, modeling it as an exponentially decaying variable:
$$\dot{b}_B^v(t) = -\alpha\, b_B^v(t) + n_t^{b_v}, \quad n_t^{b_v} \sim \mathcal{N}\big(0_3, \Sigma_{b_B^v}\big)$$
where $\alpha > 0$ denotes the decay rate of the slip velocity. The IMU provides the following measurements:
$$\tilde{\omega}_t = \omega_t + n_t^\omega, \quad \tilde{a}_t = a_t + n_t^a$$
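The slip-velocity bias model above can be checked with a minimal discrete-time simulation (Python/NumPy); the step size, noise level, and initial slip velocity are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Discretizing b_dot = -alpha * b + n over a step dt gives
# b_{k+1} = exp(-alpha * dt) * b_k + w_k, an exponentially decaying bias.
alpha, dt = 20.0, 1.0 / 166.0      # decay rate and sensor period
phi = np.exp(-alpha * dt)

rng = np.random.default_rng(0)
b = np.array([0.5, -0.3, 0.0])     # hypothetical initial slip velocity (m/s)
for _ in range(500):               # ~3 s of propagation
    b = phi * b + rng.normal(0.0, 1e-4, 3)
# the deterministic part has decayed by exp(-alpha * 500 * dt) ~ exp(-60),
# so only the small stationary process noise remains
```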
The measurement of the position of the $i$th foot-end primarily depends on the joint encoder measurements of the $i$th leg:
$$\tilde{\theta}_t^{f_i} = \theta_t^{f_i} + n_t^{f_i}$$
Based on the IMU measurement model and the foot-end contact model, the system's dynamic model is as follows:
$$\dot{R}_{WB}(t) = R_{WB}(t)\,\big[\tilde{\omega}_t - n_t^\omega\big]_\times$$
$$\dot{v}_B^W(t) = R_{WB}(t)\,\big(\tilde{a}_t - n_t^a\big) + g$$
$$\dot{p}_B^W(t) = v_B^W(t)$$
$$\dot{p}_{f_i}^W(t) = R_{WB}(t)\, J_{f_i}(\tilde{\theta}_t^{f_i})\, n_t^{f_i}$$
where $g$ is the gravity vector, and $h_{f_i}(\tilde{\theta}_t^{f_i})$ is the forward kinematics function, with Jacobian $J_{f_i}$, which maps the joint positions to the position of the $i$th foot relative to the body coordinate frame. We can rewrite the above model in matrix form:
$$\frac{d}{dt} X_t = \begin{bmatrix} R_{WB}(t)\big[\tilde{\omega}_t\big]_\times & R_{WB}(t)\tilde{a}_t + g & v_B^W(t) & 0_{3\times1} & 0_{3\times1} & 0_{3\times1} & 0_{3\times1} & -\alpha\, b_B^v(t) \\ 0_{7\times3} & 0_{7\times7} & & & & & & \end{bmatrix} - X_t\, n_t^\wedge \triangleq f_{u_t}(X_t) - X_t\, n_t^\wedge$$
where $n_t \triangleq \mathrm{vec}\big(n_t^\omega,\, n_t^a,\, 0_{3\times1},\, J_{f_1}(\tilde{\theta}_t^{f_1}) n_t^{f_1},\, J_{f_2}(\tilde{\theta}_t^{f_2}) n_t^{f_2},\, J_{f_3}(\tilde{\theta}_t^{f_3}) n_t^{f_3},\, J_{f_4}(\tilde{\theta}_t^{f_4}) n_t^{f_4}\big)$. It can be proven that the deterministic system dynamics $f_{u_t}(\cdot)$ satisfy the group-affine property (3). According to Theorem 1, the left-invariant and right-invariant error dynamics are autonomous and evolve independently of the system's state. The system's right-invariant error is as follows:
$$\frac{d}{dt}\eta_t^r = f_{u_t}(\eta_t^r) - \eta_t^r\, f_{u_t}(I_d) + \hat{X}_t\, n_t^\wedge\, \hat{X}_t^{-1}\, \eta_t^r := g_{u_t}^r(\eta_t^r) + \big(\mathrm{Ad}_{\hat{X}_t} n_t\big)^\wedge \eta_t^r$$
Theorem 2 further explains the log-linear property of invariant errors. If $A_t$ is defined as
$$g_{u_t}^r(\exp(\xi)) \approx \big(A_t\, \xi\big)^\wedge + O(\|\xi\|^2)$$
then the invariant error coordinates $\xi_t$ satisfy the following linear system:
$$\frac{d}{dt}\xi_t = A_t\, \xi_t + \mathrm{Ad}_{\hat{X}_t} n_t$$
Using a first-order approximation to linearize $g_{u_t}^r(\eta_t^r)$ yields the following:
$$\eta_t^r = \exp(\xi_t) \approx I_d + \xi_t^\wedge$$
Substituting this into the expression for $g_{u_t}^r(\eta_t^r)$, the following equation is obtained:
$$g_{u_t}^r(\eta_t^r) = \begin{bmatrix} 0_{3\times3} & 0_{3\times3} & 0_{3\times15} & 0_{3\times3} \\ [g]_\times & 0_{3\times3} & 0_{3\times15} & 0_{3\times3} \\ 0_{3\times3} & I_3 & 0_{3\times15} & 0_{3\times3} \\ 0_{12\times3} & 0_{12\times3} & 0_{12\times15} & 0_{12\times3} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times15} & -\alpha I_3 \end{bmatrix} \xi_t$$
Based on these computations, the prediction step of the right-invariant extended Kalman filter can be derived. The state estimate is denoted as $\hat{X}_t$, and the covariance matrix $P_t$ is propagated using the Riccati equation:
$$\frac{d}{dt}\hat{X}_t = f_{u_t}(\hat{X}_t)$$
$$\frac{d}{dt}P_t = A_t P_t + P_t A_t^T + \bar{Q}_t$$
where $\bar{Q}_t = \mathrm{Ad}_{\hat{X}_t}\, \mathrm{Cov}(n_t)\, \mathrm{Ad}_{\hat{X}_t}^T$.
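The prediction step can be sketched numerically as follows (Python/NumPy). The error-state ordering $(\xi_R, \xi_v, \xi_p, \xi_{f_1..f_4}, \xi_b)$ with three columns per block, and the process-noise magnitude, are illustrative assumptions:

```python
import numpy as np

# Euler integration of the Riccati equation dP/dt = A P + P A^T + Q_bar
# for the 24-dim right-invariant error state (R, v, p, f1..f4, b).
g = np.array([0.0, 0.0, -9.81])
alpha, dt = 20.0, 1.0 / 166.0

def hat(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

A = np.zeros((24, 24))
A[3:6, 0:3] = hat(g)                   # velocity error driven by [g]x xi_R
A[6:9, 3:6] = np.eye(3)                # position error integrates xi_v
A[21:24, 21:24] = -alpha * np.eye(3)   # slip-velocity bias decays

P = np.eye(24)                          # prior covariance, P_0 = I_24
Q_bar = 1e-4 * np.eye(24)               # placeholder for Ad Cov(n) Ad^T
P = P + dt * (A @ P + P @ A.T + Q_bar)  # one prediction step
```

Note that the Euler step preserves symmetry of P, since the right-hand side of the Riccati equation is symmetric whenever P is.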

3.2. Measurement Model

According to the forward kinematics of the quadruped robot, the position of the foot relative to the body coordinate system $\{B\}$ is given by the following:
$$p_{f_i}^B(t) = h_{f_i}(\theta_t^{f_i}) = R_{WB}^T(t)\,\big(p_{f_i}^W(t) - p_B^W(t)\big)$$
The velocity of the foot is composed of the velocity generated by joint rotation, $v_{bj,f_i}^B$, and the velocity induced by the body motion, $v_{be,f_i}^B$:
$$v_{bj,f_i}^B(t) = J_i\, \dot{\theta}_t^{f_i}$$
$$v_{be,f_i}^B(t) = v_B^B(t) + \omega_t \times p_{f_i}^B(t)$$
where $J_i$ is the Jacobian matrix of the $i$th leg. By transforming $v_{bj,f_i}^B$ and $v_{be,f_i}^B$ into the world coordinate system $\{W\}$, we can obtain the foot velocity in the world coordinate system:
$$v_{f_i}^W(t) = R_{WB}(t)\,\big(v_{bj,f_i}^B(t) + v_{be,f_i}^B(t)\big) = v_B^W(t) + R_{WB}(t)\,\big(J_i \dot{\theta}_t^{f_i} + \omega_t \times p_{f_i}^B(t)\big)$$
When the foot is in stable contact with the ground without slipping, $v_{f_i}^W(t) = 0$, that is,
$$v_B^W(t) = -R_{WB}(t)\,\big(J_i \dot{\theta}_t^{f_i} + \omega_t \times p_{f_i}^B(t)\big)$$
Assuming that the joint encoders are affected by Gaussian white noise with covariance matrix $R_\theta$, i.e., $\tilde{\theta}_t = \theta_t + n_t^\theta$, we can derive the following observation model:
$$h_{f_i}(\tilde{\theta}_t^{f_i}) = R_{WB}^T(t)\,\big(p_{f_i}^W(t) - p_B^W(t)\big) + J_i(\tilde{\theta}_t^{f_i})\, n_t^\theta$$
$$v_B^W(t) = -R_{WB}(t)\,\big(J_i \dot{\tilde{\theta}}_t^{f_i} + \omega_t \times h_{f_i}(\tilde{\theta}_t^{f_i})\big) + \dot{R}_{WB}(t)\, J_i(\tilde{\theta}_t^{f_i})\, n_t^\theta$$
Therefore, under stable contact, the following two observations exist for $i = 1, 2, \ldots, 4$:
$$\begin{bmatrix} h_{f_i}(\tilde{\theta}_t^{f_i}) \\ 0 \\ 1 \\ 0_{(i-1)\times1} \\ -1 \\ 0_{(5-i)\times1} \end{bmatrix} = X_t^{-1} \begin{bmatrix} 0_{3\times1} \\ 0 \\ 1 \\ 0_{(i-1)\times1} \\ -1 \\ 0_{(5-i)\times1} \end{bmatrix} + \begin{bmatrix} J_i(\tilde{\theta}_t^{f_i})\, n_t^\theta \\ 0_{7\times1} \end{bmatrix}, \qquad \begin{bmatrix} J_i \dot{\tilde{\theta}}_t^{f_i} + \tilde{\omega}_t \times h_{f_i}(\tilde{\theta}_t^{f_i}) \\ 1 \\ 0_{6\times1} \end{bmatrix} = X_t^{-1} \begin{bmatrix} 0_{3\times1} \\ 1 \\ 0_{6\times1} \end{bmatrix} + \begin{bmatrix} \dot{R}_{WB}^T(t)\, J_i(\tilde{\theta}_t^{f_i})\, n_t^\theta \\ 0_{7\times1} \end{bmatrix}$$
The first equation represents the observation of the foot position of the quadruped robot, denoted as $Y_t^{p_{f_i}} = X_t^{-1} c^{p_{f_i}} + N_t^{p_{f_i}}$, where $Y_t^{p_{f_i}} = [\,h_{f_i}^T(\tilde{\theta}_t^{f_i}),\ 0,\ 1,\ 0_{1\times(i-1)},\ -1,\ 0_{1\times(5-i)}\,]^T$, $c^{p_{f_i}} = [\,0_{1\times3},\ 0,\ 1,\ 0_{1\times(i-1)},\ -1,\ 0_{1\times(5-i)}\,]^T$, $N_t^{p_{f_i}} = [\,(J_i(\tilde{\theta}_t^{f_i})\, n_t^\theta)^T,\ 0_{1\times7}\,]^T$, and $i = 1, 2, \ldots, 4$. The second equation represents the observation of the body velocity of the quadruped robot, denoted as $Y_t^{v_i} = X_t^{-1} c^{v} + N_t^{v_{f_i}}$, where $Y_t^{v_i} = [\,(J_i \dot{\tilde{\theta}}_t^{f_i} + \tilde{\omega}_t \times h_{f_i}(\tilde{\theta}_t^{f_i}))^T,\ 1,\ 0_{1\times6}\,]^T$, $c^{v} = [\,0_{1\times3},\ 1,\ 0_{1\times6}\,]^T$, $N_t^{v_{f_i}} = [\,(\dot{R}_{WB}^T(t)\, J_i(\tilde{\theta}_t^{f_i})\, n_t^\theta)^T,\ 0_{1\times7}\,]^T$, and $i = 1, 2, \ldots, 4$. Hartley [10] used position as the observation in bipedal robots, while Teng [16] used velocity as the observation. In practice, these two observations can be considered together. Taking the foot position as an example, the observation error is defined as follows:
$$Z_t^{p_{f_i}} = \hat{X}_t\, Y_t^{p_{f_i}} - c^{p_{f_i}}$$
Substituting $Y_t^{p_{f_i}} = X_t^{-1} c^{p_{f_i}} + N_t^{p_{f_i}}$ into (31), it can be noted that
$$Z_t^{p_{f_i}} = \hat{X}_t X_t^{-1} c^{p_{f_i}} + \hat{X}_t N_t^{p_{f_i}} - c^{p_{f_i}} = \eta_t\, c^{p_{f_i}} - c^{p_{f_i}} + \hat{X}_t N_t^{p_{f_i}} \approx \begin{bmatrix} \xi_t^p - \xi_t^{p_{f_i}} \\ 0_{7\times1} \end{bmatrix} + \begin{bmatrix} \hat{R}_t\, J_i(\tilde{\theta}_t^{f_i})\, n_t^\theta \\ 0_{7\times1} \end{bmatrix}$$
Note that for each foot's observation, all elements other than the first three rows are zero. Let $z_t^{p_{f_i}}$ be the term $Z_t^{p_{f_i}}$ with all zero elements removed:
$$z_t^{p_{f_i}} = \begin{bmatrix} 0_{3\times3} & 0_{3\times3} & I_3 & 0_{3\times3(i-1)} & -I_3 & 0_{3\times3(5-i)} \end{bmatrix} \xi_t + \hat{R}_{WB}(t)\, J_i(\tilde{\theta}_t^{f_i})\, n_t^\theta$$
Combining all foot position observations, we obtain the following:
$$\begin{bmatrix} z_t^{p_{f_1}} \\ z_t^{p_{f_2}} \\ z_t^{p_{f_3}} \\ z_t^{p_{f_4}} \end{bmatrix} = \begin{bmatrix} 0_{3\times3} & 0_{3\times3} & I_3 & -I_3 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & I_3 & 0_{3\times3} & -I_3 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & I_3 & 0_{3\times3} & 0_{3\times3} & -I_3 & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & I_3 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & -I_3 & 0_{3\times3} \end{bmatrix} \xi_t + \begin{bmatrix} \hat{R}_t J_1(\tilde{\theta}_t^{f_1}) n_t^\theta \\ \hat{R}_t J_2(\tilde{\theta}_t^{f_2}) n_t^\theta \\ \hat{R}_t J_3(\tilde{\theta}_t^{f_3}) n_t^\theta \\ \hat{R}_t J_4(\tilde{\theta}_t^{f_4}) n_t^\theta \end{bmatrix}$$
Similarly, for velocity observations, we obtain the following:
$$\begin{bmatrix} z_t^{v_1} \\ z_t^{v_2} \\ z_t^{v_3} \\ z_t^{v_4} \end{bmatrix} = \begin{bmatrix} 0_{3\times3} & I_3 & 0_{3\times18} \\ 0_{3\times3} & I_3 & 0_{3\times18} \\ 0_{3\times3} & I_3 & 0_{3\times18} \\ 0_{3\times3} & I_3 & 0_{3\times18} \end{bmatrix} \xi_t + \begin{bmatrix} [\tilde{\omega}_t]_\times J_1(\tilde{\theta}_t^{f_1}) n_t^\theta \\ [\tilde{\omega}_t]_\times J_2(\tilde{\theta}_t^{f_2}) n_t^\theta \\ [\tilde{\omega}_t]_\times J_3(\tilde{\theta}_t^{f_3}) n_t^\theta \\ [\tilde{\omega}_t]_\times J_4(\tilde{\theta}_t^{f_4}) n_t^\theta \end{bmatrix}$$
By merging the velocity observations with the position observations, this leads to the following:
$$\begin{bmatrix} z_t^{p_{f_1}} \\ z_t^{p_{f_2}} \\ z_t^{p_{f_3}} \\ z_t^{p_{f_4}} \\ z_t^{v_1} \\ z_t^{v_2} \\ z_t^{v_3} \\ z_t^{v_4} \end{bmatrix} = \begin{bmatrix} 0_{3\times3} & 0_{3\times3} & I_3 & -I_3 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & I_3 & 0_{3\times3} & -I_3 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & I_3 & 0_{3\times3} & 0_{3\times3} & -I_3 & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & I_3 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & -I_3 & 0_{3\times3} \\ 0_{3\times3} & I_3 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & I_3 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & I_3 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & I_3 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \end{bmatrix} \xi_t + \begin{bmatrix} \hat{R}_t J_1(\tilde{\theta}_t^{f_1}) n_t^\theta \\ \hat{R}_t J_2(\tilde{\theta}_t^{f_2}) n_t^\theta \\ \hat{R}_t J_3(\tilde{\theta}_t^{f_3}) n_t^\theta \\ \hat{R}_t J_4(\tilde{\theta}_t^{f_4}) n_t^\theta \\ [\tilde{\omega}_t]_\times J_1(\tilde{\theta}_t^{f_1}) n_t^\theta \\ [\tilde{\omega}_t]_\times J_2(\tilde{\theta}_t^{f_2}) n_t^\theta \\ [\tilde{\omega}_t]_\times J_3(\tilde{\theta}_t^{f_3}) n_t^\theta \\ [\tilde{\omega}_t]_\times J_4(\tilde{\theta}_t^{f_4}) n_t^\theta \end{bmatrix}$$
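The stacked linearized observation matrix can be assembled programmatically. The following sketch assumes the block ordering used above (three columns per block: R, v, p, f1..f4, b; 24 columns total):

```python
import numpy as np

I3 = np.eye(3)

def H_pos(i):
    """Row block for the position observation of foot i (1-based):
    picks out xi_p - xi_{f_i}."""
    H = np.zeros((3, 24))
    H[:, 6:9] = I3                                  # xi_p block
    H[:, 9 + 3 * (i - 1): 12 + 3 * (i - 1)] = -I3   # xi_{f_i} block
    return H

def H_vel():
    """Row block for a velocity observation: picks out xi_v."""
    H = np.zeros((3, 24))
    H[:, 3:6] = I3
    return H

C = np.vstack([H_pos(i) for i in (1, 2, 3, 4)] + [H_vel()] * 4)
# C has shape (24, 24): four position blocks and four velocity blocks;
# the four identical velocity blocks add only 3 independent rows
```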

3.3. Considering Unstable Contact and Slipping

The above model assumes that all four feet are in stable contact with the ground. In practice, however, the quadruped robot’s feet may be airborne during walking, and on many uneven or slippery terrains, the feet may make unstable contact. When the feet are airborne or in unstable contact, the foot positions calculated by the state space model do not conform to this assumption. To address this issue, we use the foot contact probability model proposed in [9] to determine the probability of stable contact between the feet and the ground:
$$P_k\big(S_i = 1 \mid f_k^i\big) = \frac{1}{1 + \exp\!\big(-\beta_1 (f_{z,i}^k - \beta_2)\big)}$$
where $\beta_1$ and $\beta_2$ are constants, $f_k^i$ represents the reaction force exerted on the foot-end of the $i$th leg by the ground at time $k$, and $f_{z,i}^k$ represents the $z$-axis component of this reaction force. $S_i = 1$ indicates that the foot-end of the $i$th leg is in stable contact with the ground. Using this contact probability model, we can combine the base velocities generated by each leg to obtain the body's velocity and use it as an observation of the body's velocity:
$$v_B^B = \frac{\sum_{i \in C} P_k\big(S_i = 1 \mid f_k^i\big)\, \dot{p}_{f_i}^B}{\sum_{i \in C} P_k\big(S_i = 1 \mid f_k^i\big)}$$
where $C$ is the set of feet whose contact probability exceeds the 0.5 threshold of the logistic regression. Considering the velocity bias, the velocity expression of the quadruped robot in the world coordinate system at time $t$ is as follows:
$$v_B^W(t) = R_{WB}(t)\, v_B^B(t) + b_B^v(t) + n_t^v$$
where $n_t^v$ represents the error in the body's velocity, which includes the error generated by each foot's velocity observation. To correctly compute the covariance matrix $\Sigma_t^v$ of $n_t^v$, we consider the consistency between the individual foot velocities $\dot{p}_{f_i}^B$ and the impact of ground contact. For each axis $r \in \{x, y, z\}$, the variance at a given moment can be computed as follows:
$$\sigma_r^2(k) = \sigma_0^2 + \gamma_1\, \mathrm{std}\big(\{\dot{p}_{f_i}^B(k)\}_{i \in C}\big) + (1 - \gamma_1)\, \gamma_2\, \big(\Delta \bar{f}_z^k\big)^2$$
where $\Delta \bar{f}_z^k = \frac{1}{\dim(C)} \sum_{i \in C} \big| f_{z,i}^k - f_{z,i}^{k-1} \big|$ is the average absolute difference in the $z$-axis ground reaction force between the current and previous contact times, $\sigma_0^2$ is the baseline velocity variance, and $\mathrm{std}\big(\{\dot{p}_{f_i}^B\}_{i \in C}\big)$ is the standard deviation of the stance-phase foot velocity contributions. $\gamma_1$ is a factor that balances consistency against the impact of collision forces (we use 0.5), while $\gamma_2$ is a normalization factor that scales the typical force difference $\Delta \bar{f}_z^k$ to a comparable velocity error.
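A toy numerical sketch of this contact-probability weighting (Python/NumPy). The ground reaction forces and per-leg velocity contributions are made-up values, and the logistic parameterization $1/(1+\exp(-\beta_1(f_z-\beta_2)))$ with the $\beta$ values reported in Section 4.1 is an assumption for illustration:

```python
import numpy as np

beta1, beta2 = 1.3, 0.037   # values reported later in the experiments

def contact_prob(fz):
    """Logistic probability of stable contact from the vertical GRF."""
    return 1.0 / (1.0 + np.exp(-beta1 * (fz - beta2)))

fz = np.array([60.0, 55.0, 0.0, -0.1])     # hypothetical forces (N)
p_dot = np.array([[0.31, 0.02, 0.00],      # per-leg body-velocity
                  [0.29, 0.01, 0.01],      # contributions (m/s)
                  [1.50, 0.40, 0.30],      # swing legs: unreliable
                  [1.20, 0.50, 0.20]])

probs = contact_prob(fz)
C = probs > 0.5                            # stable-contact set
v_body = (probs[C, None] * p_dot[C]).sum(axis=0) / probs[C].sum()
# only the two loaded legs contribute, so v_body stays near [0.30, 0.015, 0.005]
```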
For the observation of foot position, since the foot may be in a hovering state or in unstable contact, the estimated foot position obtained through the state space model might be unreliable. By dynamically adjusting the noise covariance of the foot contact using the formula below, the impact of unstable or non-contact states on state estimation can be mitigated:
$$\Sigma_t^{p_{f_i}}(k) \leftarrow \Big(1 + L\,\big(1 - P_k\big(S_i = 1 \mid f_k^i\big)\big)\Big)\, \Sigma_t^{p_{f_i}}(k)$$
where $L$ is a sufficiently large scalar. The final observation model accounting for unstable contact and slipping is as follows:
$$\begin{bmatrix} z_t^{p_{f_1}} \\ z_t^{p_{f_2}} \\ z_t^{p_{f_3}} \\ z_t^{p_{f_4}} \\ z_t^{v} \end{bmatrix} = \begin{bmatrix} 0_{3\times3} & 0_{3\times3} & I_3 & -I_3 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & I_3 & 0_{3\times3} & -I_3 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & I_3 & 0_{3\times3} & 0_{3\times3} & -I_3 & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & I_3 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & -I_3 & 0_{3\times3} \\ 0_{3\times3} & I_3 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & -I_3 \end{bmatrix} \xi_t + \begin{bmatrix} n_t^{p_{f_1}} \\ n_t^{p_{f_2}} \\ n_t^{p_{f_3}} \\ n_t^{p_{f_4}} \\ n_t^{v} \end{bmatrix}$$
An observation equation of the form $z_t = C\,\xi_t + V_t$ is obtained. Based on the definition of the right-invariant error, the update steps for the InEKF are as follows:
$$K_t = P_t C_t^T \big(C_t P_t C_t^T + \bar{N}_t\big)^{-1}$$
$$\hat{X}_t^+ = \exp\!\big(K_t z_t\big)\, \hat{X}_t$$
$$P_t^+ = \big(I - K_t C_t\big)\, P_t$$
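The structure of this update, gain $K = PC^T(CPC^T+\bar{N})^{-1}$, state correction $\hat{X}^+ = \exp(Kz)\,\hat{X}$, and covariance update $P^+ = (I-KC)P$, can be sketched as follows (Python/NumPy). To keep the example short it uses only the rotational part of the state, with illustrative covariances and innovation:

```python
import numpy as np

def hat(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def exp_so3(xi):
    """Rodrigues formula for the SO(3) exponential."""
    th = np.linalg.norm(xi)
    if th < 1e-9:
        return np.eye(3) + hat(xi)
    K = hat(xi / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

P = 0.1 * np.eye(3)                    # prior error covariance
C = np.eye(3)                          # observation matrix (rotation block)
N = 0.01 * np.eye(3)                   # measurement-noise covariance
z = np.array([0.05, -0.02, 0.01])      # innovation
X = np.eye(3)                          # prior state estimate (rotation)

K = P @ C.T @ np.linalg.inv(C @ P @ C.T + N)   # Kalman gain
X_plus = exp_so3(K @ z) @ X                     # right-invariant correction
P_plus = (np.eye(3) - K @ C) @ P                # covariance update
```

Note the correction multiplies the estimate on the left, as the right-invariant error definition requires.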

4. Experimental Results and Analysis

To validate the effectiveness of the proposed algorithm, experiments were conducted on three different terrains using the Jueying Mini quadruped robot. The Jueying Mini is equipped with an IMU and 12 joint encoders, all with a measurement frequency of 166 Hz. The IMU provides measurements of angular velocity and linear acceleration, while the joint encoders provide measurements of joint angles, joint torques, and joint angular velocities. The initial values and parameters of the filter are as follows: $R_{WB}(0) = I_3$, $v_B^W(0) = [0\ 0\ 0]^T$, $p_B^W(0) = [0\ 0\ 0.2]^T$, $p_{f_i}^W(0) = p_B^W(0) + h_{f_i}(\theta_0^{f_i})$, $\alpha = 20$, $P_0 = I_{24}$, and $L = 10^4$. Additionally, Table 1 presents the corresponding noise parameters. The experimental terrains include a typical outdoor rugged slope, shallow grass, and deep grass, as shown in Figure 1.

4.1. Contact Probability Calculation

We use the phase information of the foot-end as the ground truth of the contact state between the foot-end and the ground to calibrate the parameters $\beta_1$ and $\beta_2$ in Equation (37). The ground reaction force on the foot-end is computed from the joint torques:
$$f_k^i = J_i^{-T}(\theta_t^{f_i})\,\big(\tau_i - h_i(\theta_t^{f_i}, \dot{\theta}_t^{f_i}, g^B)\big)$$
Considering that the foot-end position always undergoes some displacement as the foot touches down and as it prepares to leave the ground, an appropriate adjustment of the parameters $\beta_1$ and $\beta_2$ reduces the sensitivity of the model to input variation, thereby making the fluctuation in the foot-end contact probability smoother. In our experiment, $\beta_1 = 1.3$ and $\beta_2 = 0.037$. Figure 2 illustrates a scenario where the quadruped robot's foot slips during contact with the ground, and Figure 3 presents the final results, showing the probability model output for right-front-foot slipping.
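The torque-to-force mapping above can be sketched as follows (Python/NumPy); the Jacobian, torque, and dynamics-compensation values are illustrative and not taken from the Jueying Mini:

```python
import numpy as np

# Recovering the ground reaction force from joint torques, f = J^{-T}(tau - h),
# for a single 3-DoF leg.
J = np.array([[0.20, 0.05, 0.00],
              [0.00, 0.18, 0.04],
              [0.02, 0.00, 0.15]])    # leg Jacobian at the current configuration
tau = np.array([2.0, 9.5, 1.2])      # measured joint torques (N*m)
h = np.array([0.10, 0.30, 0.05])     # gravity/Coriolis torque compensation

f = np.linalg.solve(J.T, tau - h)    # ground reaction force at the foot (N)
fz = f[2]                            # vertical component fed to Equation (37)
```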

4.2. Algorithm Evaluation

Due to the absence of a motion capture system outdoors, we used a LiDAR SLAM method that performs relatively well in outdoor environments, with the pose information computed by LIO-SAM serving as the ground truth. The experimental results are shown in the figures. Figure 4 presents the position estimate results of the various methods for the quadruped robot across three different terrains, with the position information in each case provided for the X, Y, and Z directions. Figure 5 presents the attitude angle estimates; as the yaw is unobservable [10], only the pitch and roll angles are presented. We compared a total of five methods: the traditional quaternion-based Kalman filter (Q-EKF) [8], the InEKF with only velocity updates [16], the InEKF with only position updates [10], and the proposed method with and without velocity bias. For the methods of [10,16], a foot in the contact phase is assumed to be in stable contact with the ground, while the remaining methods employ the proposed contact probability judgment and covariance adjustment strategy. The results in the figures show that the proposed method is closer to the true trajectory.
In Figure 4, it is evident that the InEKF with only velocity updates exhibits significant deviations in position estimation. One possible reason is that the velocity measurement itself carries a relatively large bias; over repeated filter updates, the accumulated error progressively increases and eventually diverges. In the method of reference [16], the authors achieved relatively accurate results through fusion with camera observations. In this paper, however, only proprioceptive sensors are considered, so camera observations were not incorporated when reproducing the method of [16]. Consequently, a large deviation occurs, which further indicates that the bias in foot-end velocity measurements is real.
As depicted in Figure 5, it is apparent that the attitude angles estimated by the two methods, the Q-EKF and the InEKF with only velocity updates, deviate significantly from the true values. The inaccuracy of the InEKF with only velocity updates has been previously addressed, potentially stemming from the deviation in velocity measurement. In contrast, the other methods are relatively close to the actual values, indicating that the method based on the InEKF is better than the Q-EKF in the estimation of attitude angles.
The trajectories of different methods under various terrains were evaluated using the EVO tool [28], with the results shown in Table 2. The evaluation metrics include the Absolute Trajectory Error (ATE) and the Relative Pose Error (RPE), where ATE and RPE measure position error (in meters) and rotation error (in radians), respectively.
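For reference, the position ATE amounts to the RMSE of the per-pose translational error. A minimal sketch (Python/NumPy, assuming time-aligned trajectories and omitting the rigid-body alignment that EVO performs before scoring):

```python
import numpy as np

def ate_rmse(est, ref):
    """Absolute Trajectory Error: RMSE of per-pose position errors (m)."""
    err = np.linalg.norm(est - ref, axis=1)
    return float(np.sqrt((err ** 2).mean()))

# A synthetic straight-line trajectory with a constant 0.1 m lateral offset
ref = np.stack([np.linspace(0.0, 5.0, 100),
                np.zeros(100), np.zeros(100)], axis=1)
est = ref + np.array([0.0, 0.1, 0.0])
# ate_rmse(est, ref) -> 0.1
```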
Table 2 shows that the proposed method (PM with Vel Bias) outperforms the other methods in ATE and RPE across all terrains, especially on the rugged slope and shallow grass, where its errors are lowest: an ATE of 0.4572 m with an RPE of 0.0345 rad on the rugged slope, and an ATE of 0.3431 m with an RPE of 0.0367 rad on shallow grass. This indicates that velocity bias compensation yields higher accuracy and robustness. In contrast, the InEKF with Vel update shows larger errors on most terrains, particularly in deep grass, where its ATE reaches 3.3818 m and its RPE 1.3146 rad, demonstrating its sensitivity to errors in complex terrain. Nevertheless, that method maintains good stability and accuracy in RPE across various terrains, particularly on shallow grass and the rugged slope. These results indicate that velocity observation plays a crucial role in keeping rotational errors low, so combining velocity observations with other observations (such as position) holds promise for more robust attitude estimation. PM without Vel Bias also shows relatively stable performance on shallow grass and deep grass, especially in position error, suggesting that the proposed observation strategy benefits trajectory estimation accuracy under certain terrain conditions even without the velocity-bias term. The Q-EKF ranks between the other methods overall but attains the lowest rotation error (RPE) on shallow grass, 0.0367 rad, indicating an advantage in handling certain types of attitude change. Overall, the experimental results show that the proposed method provides the best performance in most cases.

5. Conclusions

To address the state estimation drift caused by unstable foot contact on non-stationary terrain, we propose an improved invariant extended Kalman filter (InEKF) for quadruped robots. The method models foot-end slippage as a bias term on the body velocity and incorporates both foot-end velocity and position observations into the observation equation. Combined with a foot-end contact probability assessment and an adaptive covariance adjustment strategy, it effectively improves the state estimation accuracy of quadruped robots in complex outdoor environments. The experimental results show that, compared with several existing methods (such as Q-EKF and InEKF with a single observation update), our method achieves a lower Root Mean Square Error (RMSE) across multiple non-stationary terrains and demonstrates significant advantages in position and attitude estimation. It thus offers a new approach to state estimation for quadruped robots on non-stationary terrain; further optimization of the filtering algorithm for application in more complex environments is left to future work.
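As a concrete illustration of the adaptive covariance strategy summarized above, the sketch below inflates a foot-end measurement covariance as the estimated contact probability drops, so uncertain contacts are automatically down-weighted in the filter update. The 1/p inflation rule and all names here are illustrative assumptions, not the paper's exact formulation:

```python
def adaptive_meas_cov(base_cov, contact_prob, eps=1e-3):
    """Scale a foot-end measurement variance by the inverse contact
    probability: confident contact (p ~ 1) keeps the nominal variance,
    while uncertain contact (p -> 0) inflates it, reducing the Kalman
    gain applied to that foot's observation."""
    p = max(contact_prob, eps)  # floor to avoid division by zero
    return base_cov / p

base = 0.01  # nominal foot-position measurement variance (m^2)
print(adaptive_meas_cov(base, 1.0))   # firm contact: nominal variance
print(adaptive_meas_cov(base, 0.25))  # likely slipping: inflated variance
```

Inflating the measurement covariance is equivalent to soft outlier rejection: rather than hard-switching a slipping foot's observation off, the filter blends it in proportionally to how trustworthy the contact estimate is.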
Future research could proceed along several directions: first, fusing the proposed estimator with exteroceptive sensors, such as cameras and LiDAR, to enhance the robot's localization and mapping capabilities; second, adapting the filtering algorithm to more extreme and unpredictable terrains, including surfaces with rapidly changing properties or dynamic obstacles; and third, optimizing the algorithm's computational efficiency to enable real-time state estimation with lower latency, especially in scenarios where the quadruped robot must make quick decisions based on its current state.

Author Contributions

Methodology, M.W.; Formal analysis, D.L.; Resources, L.L.; Writing—original draft, M.W.; Writing—review & editing, Z.P. and Z.L.; Project administration, J.W., L.L. and Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cruz Ulloa, C.; del Cerro, J.; Barrientos, A. Mixed-reality for quadruped-robotic guidance in SAR tasks. J. Comput. Des. Eng. 2023, 10, 1479–1489. [Google Scholar] [CrossRef]
  2. Halder, S.; Afsari, K.; Serdakowski, J.; DeVito, S.; Ensafi, M.; Thabet, W. Real-Time and Remote Construction Progress Monitoring with a Quadruped Robot Using Augmented Reality. Buildings 2022, 12, 2027. [Google Scholar] [CrossRef]
  3. Hansen, H.; Yubin, L.; Ryoichi, I.; Takeshi, O.; Yoshihiro, S. Quadruped robot platform for selective pesticide spraying. In Proceedings of the 2023 18th International Conference on Machine Vision and Applications (MVA), Hamamatsu, Japan, 23–25 July 2023; pp. 1–6. [Google Scholar]
  4. Wisth, D.; Marco, C.; Maurice, F. VILENS: Visual, Inertial, Lidar, and Leg Odometry for All-Terrain Legged Robots. IEEE Trans. Robot. 2023, 39, 309–326. [Google Scholar] [CrossRef]
  5. Junwoon, L.; Ren, K.; Mitsuru, S.; Toshihiro, K. Switch-SLAM: Switching-Based LiDAR-Inertial-Visual SLAM for Degenerate Environments. IEEE Robot. Autom. Lett. 2024, 9, 7270–7277. [Google Scholar] [CrossRef]
  6. Lin, P.-C.; Komsuoglu, H.; Koditschek, D. A leg configuration measurement system for full-body pose estimates in a hexapod robot. IEEE Trans. Robot. 2005, 21, 411–422. [Google Scholar] [CrossRef]
  7. Lin, P.-C.; Komsuoglu, H.; Koditschek, D. Sensor data fusion for body state estimation in a hexapod robot with dynamical gaits. IEEE Trans. Robot. 2006, 22, 932–943. [Google Scholar] [CrossRef]
  8. Bloesch, M.; Hutter, M.; Hoepflinger, M.A. State estimation for legged robots: Consistent fusion of leg kinematics and IMU. In Robotics: Science and Systems VIII; Nicholas, R., Paul, N., Eds.; MIT Press: Cambridge, MA, USA, 2013; pp. 17–24. [Google Scholar] [CrossRef]
  9. Camurri, M.; Fallon, M.; Bazeille, S.; Radulescu, A.; Barasuol, V.; Caldwell, D.G.; Semini, C. Probabilistic Contact Estimation and Impact Detection for State Estimation of Quadruped Robots. IEEE Robot. Autom. Lett. 2017, 2, 1023–1030. [Google Scholar] [CrossRef]
  10. Hartley, R.; Jadidi, M.G.; Grizzle, J.W.; Eustice, R.M. Contact-aided invariant extended Kalman filtering for legged robot state estimation. Int. J. Robot. Res. 2020, 39, 402–430. [Google Scholar] [CrossRef]
  11. Ting, J.; Theodorou, E.A. A Kalman filter for robust outlier detection. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 1514–1519. [Google Scholar]
  12. Bloesch, M.; Gehring, C.; Fankhauser, P.; Hutter, M.; Hoepflinger, M.A. State estimation for legged robots on unstable and slippery terrain. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 6058–6064. [Google Scholar]
  13. Jenelten, F.; Hwangbo, J.; Tresoldi, F.D.; Bellicoso, C.D.; Hutter, M. Dynamic Locomotion on Slippery Ground. IEEE Robot. Autom. Lett. 2019, 4, 4170–4176. [Google Scholar] [CrossRef]
  14. Wisth, D.; Camurri, M.; Fallon, M. Robust Legged Robot State Estimation Using Factor Graph Optimization. IEEE Robot. Autom. Lett. 2019, 4, 4507–4514. [Google Scholar] [CrossRef]
  15. Kim, Y.; Yu, B.; Lee, E.M.; Kim, J.-H.; Park, H.-W.; Myung, H. STEP: State Estimator for Legged Robots Using a Preintegrated Foot Velocity Factor. IEEE Robot. Autom. Lett. 2022, 7, 4456–4463. [Google Scholar] [CrossRef]
  16. Teng, S.; Mueller, M.W.; Sreenath, K. Legged robot state estimation in slippery environments using invariant extended Kalman filter with velocity update. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 3104–3110. [Google Scholar]
  17. Fink, G.; Semini, C. Proprioceptive sensor fusion for quadruped robot state estimation. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 10914–10920. [Google Scholar]
  18. Rotella, N.; Schaal, S.; Righetti, L. Unsupervised contact learning for humanoid estimation and control. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 411–417. [Google Scholar]
  19. Buchanan, R.; Camurri, M.; Dellaert, F. Learning inertial odometry for dynamic legged robot state estimation. In Proceedings of the 5th Annual Conference on Robot Learning, London, UK, 8–11 November 2021; pp. 1575–1584. [Google Scholar]
  20. Yang, S.; Yang, Q.; Zhu, R.; Zhang, Z.; Li, C.; Liu, H. State estimation of hydraulic quadruped robots using invariant-EKF and kinematics with neural networks. Neural Comput. Appl. 2023, 36, 2231–2244. [Google Scholar] [CrossRef]
  21. Zhong, S.; Zhao, Y.; Ge, L.; Shan, Z.; Ma, F. Vehicle State and Bias Estimation Based on Unscented Kalman Filter with Vehicle Hybrid Kinematics and Dynamics Models. Automot. Innov. 2023, 6, 571–585. [Google Scholar] [CrossRef]
  22. Bellés, A.; Medina, D.; Chauchat, P.; Labsir, S.; Vilà-Valls, J. Robust error-state Kalman-type filters for attitude estimation. EURASIP J. Adv. Signal Process. 2024, 2024, 1–19. [Google Scholar] [CrossRef]
  23. Huang, Y.; Jia, G.; Chen, B.; Zhang, Y. A New Robust Kalman Filter with Adaptive Estimate of Time-Varying Measurement Bias. IEEE Signal Process Lett. 2020, 27, 700–704. [Google Scholar] [CrossRef]
  24. Zhang, X.; Xue, W.; He, X.; Fang, H. Distributed Filter with Biased Measurements: A Scalable Bias-Correction Approach. IEEE Trans. Signal Inf. Process. Over Netw. 2022, 8, 844–854. [Google Scholar] [CrossRef]
  25. Zhang, T.; Wu, K.; Song, J.; Huang, S.; Dissanayake, G. Convergence and Consistency Analysis for a 3-D Invariant-EKF SLAM. IEEE Robot. Autom. Lett. 2017, 2, 733–740. [Google Scholar] [CrossRef]
  26. Zhang, H.; Xiao, R.; Li, J. A High-Precision LiDAR-Inertial Odometry via Invariant Extended Kalman Filtering and Efficient Surfel Mapping. IEEE Trans. Instrum. Meas. 2024, 73, 1–11. [Google Scholar] [CrossRef]
  27. Yu, X.; Teng, S.; Chakhachiro, T. Fully proprioceptive slip-velocity-aware state estimation for mobile robots via invariant Kalman filtering and disturbance observer. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; pp. 8096–8103. [Google Scholar]
  28. Grupp, M. evo: Python Package for the Evaluation of Odometry and SLAM. Available online: https://github.com/MichaelGrupp/evo (accessed on 17 September 2024).
Figure 1. Test environments.
Figure 2. Foot slipping scenarios of a quadruped robot during ground contact.
Figure 3. Estimation of foot contact probability during unstable contact events, with (a) representing right front leg and (b) left rear leg.
Figure 4. The position estimates of the quadruped robot in the X, Y, and Z directions on different terrains, with (ac) depicting the position estimate for rugged slope terrain, (df) for shallow grass terrain, and (gi) for deep grass terrain.
Figure 5. Pitch and roll angle estimation of the quadruped robot on different terrains, with (a,d) depicting the estimate for rugged slope terrain, (b,e) for shallow grass terrain, and (c,f) for deep grass terrain.
Table 1. Noise parameters and initial covariances.
| Measurement Type | Noise Std. Dev. | State Variable | Initial Covariance |
|---|---|---|---|
| Gyroscope | 0.1 rad/s | Robot Orientation | 0.03 rad |
| Accelerometer | 0.1 m/s² | Robot Velocity | 0.01 m/s |
| Foot Encoder Pos. | 0.01 m | Robot Position | 0.01 m |
| Foot Encoder Vel. | 0.1 m/s | Robot Slip Velocity | 0.01 m/s |
| Disturbance Process | 5 m/s | — | — |
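For illustration, the noise parameters in Table 1 can be assembled into diagonal covariance matrices by squaring each standard deviation. The state ordering and block sizes in the sketch below are assumptions made for the example, not taken from the paper's implementation:

```python
# Standard deviations from Table 1.
STD = {
    "gyro": 0.1,         # rad/s
    "accel": 0.1,        # m/s^2
    "foot_pos": 0.01,    # m
    "foot_vel": 0.1,     # m/s
    "disturbance": 5.0,  # m/s, slip-disturbance process noise
}

def diag_cov(stds):
    """Build a diagonal covariance matrix (nested lists): variance = std^2."""
    n = len(stds)
    return [[stds[i] ** 2 if i == j else 0.0 for j in range(n)]
            for i in range(n)]

# Process noise for a 3-axis gyroscope + 3-axis accelerometer block:
Q_imu = diag_cov([STD["gyro"]] * 3 + [STD["accel"]] * 3)
# Measurement noise for one foot's position + velocity observations:
R_foot = diag_cov([STD["foot_pos"]] * 3 + [STD["foot_vel"]] * 3)
print(len(Q_imu))  # 6x6 block
```

Keeping the covariances diagonal reflects the common assumption that the per-axis sensor noises are uncorrelated; cross-correlations, if known, would populate the off-diagonal entries.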
Table 2. RMSE evaluation of different state estimation methods for quadruped robots on various terrains.
| Terrain | Method | ATE Position (m) | ATE Rotation (rad) | RPE Position (m) | RPE Rotation (rad) |
|---|---|---|---|---|---|
| Rugged Slopes | QEKF | 1.3459 | 1.0541 | 0.0654 | 0.0385 |
| Rugged Slopes | InEKF with Vel update | 1.5898 | 2.8388 | 0.0743 | 0.0395 |
| Rugged Slopes | InEKF with Pos update | 0.4704 | 0.2185 | 0.0633 | 0.0358 |
| Rugged Slopes | PM without Vel Bias | 0.4666 | 0.1628 | 0.0601 | 0.0358 |
| Rugged Slopes | PM with Vel Bias | 0.4572 | 0.1626 | 0.0601 | 0.0345 |
| Shallow Grass | QEKF | 0.7893 | 0.4661 | 0.0697 | 0.0367 |
| Shallow Grass | InEKF with Vel update | 2.3747 | 3.0528 | 0.0665 | 0.0368 |
| Shallow Grass | InEKF with Pos update | 0.6525 | 0.3039 | 0.0626 | 0.0655 |
| Shallow Grass | PM without Vel Bias | 0.3501 | 0.1238 | 0.0393 | 0.0630 |
| Shallow Grass | PM with Vel Bias | 0.3431 | 0.1261 | 0.0391 | 0.0641 |
| Deep Grass | QEKF | 0.5993 | 0.2867 | 0.0628 | 0.0558 |
| Deep Grass | InEKF with Vel update | 3.3818 | 1.3146 | 0.0797 | 0.0452 |
| Deep Grass | InEKF with Pos update | 0.7552 | 0.3526 | 0.0653 | 0.0522 |
| Deep Grass | PM without Vel Bias | 0.6920 | 0.3228 | 0.0646 | 0.0506 |
| Deep Grass | PM with Vel Bias | 0.5289 | 0.2035 | 0.0601 | 0.0498 |
